Dcm4chee: Configuring automated backups to the cloud (or anywhere really)

In a production PACS, a common setup is to back up studies either in real-time or on a periodic basis (ie: nightly) to a more secure, longer-term storage location (aka “nearline” storage).

The studies will typically persist on the local ‘online’ storage system for some pre-specified time interval until they are eventually deleted, leaving the only copies in the nearline storage system.

The system that manages nearline storage, ie: how exactly the files are stored (tar.gz, some proprietary format, etc) and where (DVD, cloud, etc), is called a hierarchical storage management (HSM) system.

Luckily, dcm4chee is HSM aware and abstracts out the concept of an HSM pretty well. This means it can be interfaced with an HSM without too many restrictions on the structure of the HSM itself.

Here we’ll be examining some of the details of setting up an archiving system where the key HSM tasks are executed by dcm4chee via a set of scripts we make available to it. In other words, the HSM API is defined by a set of external scripts that dcm4chee calls through a standard interface it provides.

We’ll be writing scripts for the case where the backup destination is the cloud (AWS S3 to be precise); however, we could just as easily be doing this for a local RAID setup, an automated DVD burner, etc.

An overview of the relevant dcm4chee services

There are four services that are involved in performing the backup and checking its integrity. Each will need to be configured; their jmx-console object names are sketched after this list. The services are:

  • FileCopy – this service is responsible for initiating the copy of the original files to the backup destination (and for verifying their integrity).
  • FileCopyHSMModule – this module contains the details of how the originals are copied to the backup. It is something like an API to the backup system, exposing a set of general functions to FileCopy while hiding the backup-system-specific implementation inside. Note that we will be using the “type=Command” version of this service, which executes its basic functions via external scripts that we specify.
  • SyncFileStatus – this service checks the status of files on the destination (ie: backup system). This is needed, say, to check that an archiving task has indeed completed, or even to verify the integrity of the end result by refetching it and checking its md5 checksum (basically a “proof is in the pudding” philosophy).
  • TarRetriever – this service is technically optional, though highly recommended. It is only required if you want dcm4chee to tarball your series before initiating the archiving task. That makes it obligatory for us: uploading each of the 200 or so images in a thin-slice CT scan separately is, simply put, awfully inefficient – it is much better to tarball the series first.
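
For reference, here is a hedged sketch of how these four services appear as MBean object names in the jmx-console (the exact names may vary slightly between dcm4chee versions):

dcm4chee.archive:service=FileCopy
dcm4chee.archive:service=FileCopyHSMModule,type=Command
dcm4chee.archive:service=SyncFileStatus
dcm4chee.archive:service=TarRetriever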

Configuring the NEARLINE storage group and the HSM

The first step is to set up the nearline storage system and let dcm4chee know that we will be using an HSM.

To do so, we can follow the instructions here almost verbatim. I say almost, because these instructions are actually for setting up a Java-based HSM plugin that happens to be built specifically for AWS S3 but is unfortunately not available with the default dcm4chee install.

Although the instructions aren’t 100% transferable they are quite good, and rather than plagiarize them I point you there, noting that you will need to make the following two changes when applying them to our use case here:

  1. Wherever you see “dcm4chee.archive:service=FileCopyHSMModule,type=S3” you must substitute it with “dcm4chee.archive:service=FileCopyHSMModule,type=Command”.
  2. For configuring the HSM Module (step 4 in the link), follow our directions below instead, unless you plan to use the custom S3 plugin as opposed to the more general and flexible scripting solution we are examining here.

Configuring FileCopyHSMModule (type=Command)

In order to complete the configuration, we must tell the FileCopyHSMModule (of type=Command) where our scripts are.

To do this we open up the MBean for service=FileCopyHSMModule,type=Command from the jmx-console, and configure the following fields:

Snapshot of the FileCopyHSMModule config from the jmx-console.

CopyCommand

python $PATH_TO_SCRIPTS/hsm_copy.py --in-tar %p --dest %f

FetchCommand

python $PATH_TO_SCRIPTS/hsm_fetch.py --remote-path %f --dest %p

QueryCommand

python $PATH_TO_SCRIPTS/hsm_mmls.py --remote-path %f
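
A hedged note on the placeholders, inferred from the commands above and from the log output later in this article: dcm4chee substitutes %p with the local path of the tarball on online storage, and %f with the file path to use on the nearline system. The CopyCommand would then expand to something like this (local path hypothetical):

python $PATH_TO_SCRIPTS/hsm_copy.py --in-tar /var/local/dcm4chee/tmp/series.tar --dest 2016/7/28/10/5403D4F7/DC3524CD/3C807614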

Below are the descriptions for each script, including code. The latest version of the scripts can be downloaded in one go from GitHub.

hsm_copy.py

This script is in charge of copying the tarball containing a series to the backup location. It implements this by storing the tarball as an S3 object on AWS.

#!/usr/bin/env python

from boto.s3.key import Key
from boto.s3.connection import S3Connection

AWS_ACCESS_KEY_ID = "YOUR_AWS_ACCESS_KEY_ID"
AWS_SECRET_KEY = "YOUR_AWS_SECRET_KEY"
ORG_BUCKET = 'YOUR_BUCKET'

s3conn = S3Connection(
    AWS_ACCESS_KEY_ID, AWS_SECRET_KEY)


def copy_file_to_s3(inpath, remote_path):
    """
    Copies a local file to regular S3 storage.
    """
    # create_bucket returns the existing bucket if it already exists
    bucket = s3conn.create_bucket(ORG_BUCKET)
    key = Key(bucket)
    key.key = remote_path
    # Upload the tarball's contents under the given key
    key.set_contents_from_filename(inpath)
    return key


if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--in-tar", required=True, type=str)
    parser.add_argument("--dest", required=True, type=str)
    args = parser.parse_args()
    copy_file_to_s3(args.in_tar, args.dest)

hsm_mmls.py

This script checks the status of a backup. It does so by querying AWS S3 for the tarball and verifying that it indeed exists. It prints “Archived” if yes, and “Not_Found” if not (what exactly the latter is doesn’t actually matter, as long as it doesn’t match the regex of the “Pattern” field defined in the FileCopyHSMModule bean).

#!/usr/bin/env python
import sys
from boto.s3.key import Key
from boto.s3.connection import S3Connection

AWS_ACCESS_KEY_ID = "YOUR_AWS_ACCESS_KEY_ID"
AWS_SECRET_KEY = "YOUR_AWS_SECRET_KEY"
ORG_BUCKET = 'YOUR_BUCKET'

s3conn = S3Connection(
    AWS_ACCESS_KEY_ID, AWS_SECRET_KEY)


def get_s3_key(remote_path):
    """
    Returns a handle to the S3 key at remote_path (without fetching it).
    """
    bucket = s3conn.create_bucket(ORG_BUCKET)
    key = Key(bucket)
    key.key = remote_path
    return key


if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--remote-path", required=True, type=str)
    args = parser.parse_args()
    key = get_s3_key(args.remote_path)
    # dcm4chee matches this output against the regex in the HSM module's
    # "Pattern" field; only a match marks the file as successfully archived.
    if key.exists():
        print "Archived"
    else:
        print "Not_Found"
    sys.exit(0)

hsm_fetch.py

This script is in charge of fetching the archive from the backup location. It gets called when a study is requested and the only copy exists in nearline, or when the SyncFileStatus service is configured to check the integrity of the tar archive (ie: “VerifyTar” is set to true), which it does by actually fetching the archive and checking its MD5 checksum.

#!/usr/bin/env python
from boto.s3.key import Key
from boto.s3.connection import S3Connection

AWS_ACCESS_KEY_ID = "YOUR_AWS_ACCESS_KEY_ID"
AWS_SECRET_KEY = "YOUR_AWS_SECRET_KEY"
ORG_BUCKET = 'YOUR_BUCKET'

s3conn = S3Connection(
    AWS_ACCESS_KEY_ID, AWS_SECRET_KEY)


def fetch_from_s3(remote_path, dest_file):
    """
    Downloads the archive at remote_path from S3 into dest_file.
    """
    bucket = s3conn.create_bucket(ORG_BUCKET)
    key = Key(bucket)
    key.key = remote_path
    # Stream the S3 object's contents to the local destination file
    key.get_contents_to_filename(dest_file)
    return key


if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--remote-path", required=True, type=str)
    parser.add_argument("--dest", required=True, type=str)
    args = parser.parse_args()
    fetch_from_s3(args.remote_path, args.dest)
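
Before wiring these scripts into dcm4chee, it’s worth smoke-testing them from a shell as the user dcm4chee runs under. A hypothetical round trip (all paths and keys made up for illustration):

python hsm_copy.py --in-tar /tmp/test-series.tar --dest test/series.tar
python hsm_mmls.py --remote-path test/series.tar    # should print "Archived"
python hsm_fetch.py --remote-path test/series.tar --dest /tmp/roundtrip.tar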

How to restore studies from nearline

As stated in the opening, a typical setting is that with time, studies are deleted from local (“online”) storage, leaving the only remaining copies in nearline. What happens if you need to restore a study from nearline to online?

I know of a few options; they are:

  • dcm4chee will auto-restore a study if it is ever directly requested (ie: you try to open it in a DICOM viewer). It will subsequently persist in online storage for as long as a brand new study would.
  • You can go to the FileCopyHSMModule bean via the jmx-console and invoke the function fetchHSMFile() (or invoke it via twiddle.sh if you prefer). (This seems to only work for one file at a time, though I am not 100% sure.)
  • You can set up the dcm4chee Prefetch service, which will prefetch the study upon receipt of an appropriate HL7 message referencing the patient.
  • You can resend the archive from the backup location directly via a command-line utility such as dcmsnd (a sketch of this follows below).
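
Here is a minimal sketch of that last option. It assumes hsm_fetch.py from above is importable, that the dcm4che dcmsnd utility is on the PATH, and that the AE title/host/port and S3 key below (both made up here) are adapted to your install:

#!/usr/bin/env python
# Hypothetical manual restore: pull the series tarball from S3, unpack
# it, and resend the DICOM files to dcm4chee with dcmsnd.
import os
import subprocess
import tarfile
import tempfile

from hsm_fetch import fetch_from_s3  # the script shown earlier

REMOTE_PATH = "2016/7/28/10/5403D4F7/DC3524CD/3C807614"  # example S3 key
AE_TARGET = "DCM4CHEE@localhost:11112"  # your archive's AE title/host/port

workdir = tempfile.mkdtemp()
local_tar = os.path.join(workdir, "series.tar")
fetch_from_s3(REMOTE_PATH, local_tar)

# Unpack the tarball; dcm4chee tars may include a MD5SUM entry that
# dcmsnd would choke on, so extract only the DICOM payload.
tar = tarfile.open(local_tar)
members = [m for m in tar.getmembers() if m.name != "MD5SUM"]
tar.extractall(workdir, members=members)
tar.close()
os.remove(local_tar)

# dcmsnd accepts a directory argument and sends every file inside it
subprocess.check_call(["dcmsnd", AE_TARGET, workdir])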

That’s all that comes to mind right now. If there are additional alternatives I would love to hear about them via a comment or email!

Common problems and confusions

From my and others’ experience, a few common problems / confusions seem to be:

    1. What the heck is mmls?

In the default config of the HSM module there is an innocent reference to a script called ‘mmls’ in the QueryCommand field. Although it shares its name with a well-known Linux util, it has nothing to do with it as far as I can tell, which is why we replaced it with the script hsm_mmls.py, source code included above.

It turns out that dcm4chee expects the output of this script (whatever you decide to call it) to match the regex defined by the HSM module’s Pattern field in order to register a successful archive event.
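
To make the matching concrete, here is a minimal sketch; the Pattern value used is an assumption chosen to match hsm_mmls.py’s output, not a dcm4chee default:

import re

# Hypothetical value for the HSM module's "Pattern" field
PATTERN = re.compile(r"Archived")

print(bool(PATTERN.search("Archived")))   # True  -> file marked as archived
print(bool(PATTERN.search("Not_Found")))  # False -> not yet archived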

Before understanding this, I was faced with errors like these:

2016-08-03 11:44:59,727 ERROR -> (Thread-5089) [org.dcm4chex.archive.hsm.module.HSMCommandModule] Failed to execute mmls null/2016/7/28/10/5403D4F7/DC3524CD/3C807614
org.dcm4chex.archive.hsm.module.HSMException: queryStatus failed!
    at org.dcm4chex.archive.hsm.module.AbstractHSMModule.doCommand(AbstractHSMModule.java:128)
    at org.dcm4chex.archive.hsm.module.HSMCommandModule.queryStatus(HSMCommandModule.java:246)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155)
    at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94)
    at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:133)
    at org.jboss.mx.server.Invocation.invoke(Invocation.java:88)
    at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:142)
    at org.jboss.mx.server.Invocation.invoke(Invocation.java:88)
    at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264)
    at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659)
    at org.dcm4chex.archive.hsm.SyncFileStatusService.queryHSM(SyncFileStatusService.java:575)
    at org.dcm4chex.archive.hsm.SyncFileStatusService.check(SyncFileStatusService.java:446)
    at org.dcm4chex.archive.hsm.SyncFileStatusService.check(SyncFileStatusService.java:395)
    at org.dcm4chex.archive.hsm.SyncFileStatusService$1$1.run(SyncFileStatusService.java:134)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "mmls": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at java.lang.Runtime.exec(Runtime.java:617)
    at java.lang.Runtime.exec(Runtime.java:485)
    at org.dcm4che.util.Executer.<init>(Executer.java:111)
    at org.dcm4che.util.Executer.<init>(Executer.java:104)
    at org.dcm4chex.archive.hsm.module.AbstractHSMModule.doCommand(AbstractHSMModule.java:125)
    ... 18 more
Caused by: java.io.IOException: error=2, No such file or directory

This indicates there is no script ‘mmls’ on the system. After installing the Linux util mmls and getting nowhere, I stumbled upon this thread, which explained the nature of the script and its output requirements, and from this the above hsm_mmls.py script was born.

For me, figuring out mmls was actually the main stumbling block in getting the archiving service to work. The other parts were smooth sailing.

    2. Scripts don’t have the correct permissions to be executed by dcm4chee. This one is pretty self-explanatory; a quick fix is sketched below.
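
For example, assuming dcm4chee runs as a user named dcm4chee (adapt the user name and path to your install), something like this ensures the scripts are readable:

chown -R dcm4chee $PATH_TO_SCRIPTS
chmod -R a+r $PATH_TO_SCRIPTS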

Note: I am running dcm4chee v2.18.1.

