Software Compression

I know there are a couple of other threads on compression, but I couldn’t find anything about this in particular:

Is there a way that instances stored on disk by Orthanc with transparent compression enabled can be re-inflated manually (i.e., on the command line)? I keep dropping my Orthanc database and unfortunately frequently end up re-indexing the entire file structure (Docker can be great, but also terrible!). I would love to use the built-in compression, but I am worried that I won't have a recoverable/re-indexable system anymore if I switch over.

Also, is there a way to override the “already stored” response during import to force Orthanc to compress its files when I re-import the file system? Or could I compress the files on the command line using the same method Orthanc uses, and then reconfigure Orthanc so that it reads the existing data as compressed?


Hi Derek,

There’s a tool to recover the compressed files but you’ll have to build it yourself:
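If building the tool is inconvenient, the decompression itself is simple enough to script. Here is a minimal sketch in Python, assuming the “zlib with size” layout that Orthanc uses when "StorageCompression" is enabled (an 8-byte little-endian uncompressed size, followed by a raw zlib stream) — please verify this against your Orthanc version before relying on it:

```python
import struct
import zlib

def decompress_orthanc_file(path_in: str, path_out: str) -> None:
    """Re-inflate one file from Orthanc's storage area.

    Assumes the "zlib with size" layout: the first 8 bytes are the
    uncompressed size (little-endian uint64), the rest is a zlib
    stream containing the original DICOM file.
    """
    with open(path_in, "rb") as f:
        blob = f.read()
    (size,) = struct.unpack("<Q", blob[:8])
    dicom = zlib.decompress(blob[8:])
    # Sanity check against the size header before writing anything:
    assert len(dicom) == size, "size header mismatch"
    with open(path_out, "wb") as f:
        f.write(dicom)
```

Running this over every file in the storage directory should give you back plain DICOM files that any re-indexing tool can read.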

Instead of using zlib compression, we quite often change the compression of the pixel data inside the DICOM files (this way, the files stored by Orthanc remain standard DICOM files that you can read directly from the file system).
To do this, we usually put another Orthanc in front of the Orthanc used to store the data. This input Orthanc simply transcodes the DICOM files to, e.g., JPEG 2000. The drawback is that all files going out of the Orthanc storage are then in JPEG 2000, and some modalities/software might not accept this format.
There’s a sample of such a setup available here:
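Roughly, the relevant part of the ingest Orthanc's configuration looks like this (just a sketch, assuming a recent Orthanc where the "IngestTranscoding" option is available — double-check the option names and the transfer syntax UID against the Orthanc Book for your version):

```json
{
  // Hypothetical front-end ("ingest") Orthanc: transcode every
  // incoming instance to JPEG 2000 lossless before it is stored
  // or forwarded (Orthanc config files accept // comments).
  "Name" : "orthanc-ingest",
  "IngestTranscoding" : "1.2.840.10008.1.2.4.90"
}
```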

Concerning the “already stored” overriding, there’s currently no way to force it.
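Regarding your second idea (compressing the files on the command line yourself): the reverse direction of the script above would, under the same “zlib with size” assumption, look like the sketch below. But be warned that, if I remember the schema correctly, the Orthanc database also records the compression type and the compressed size/MD5 for each attachment, so compressing the files on disk and flipping "StorageCompression" in the configuration would not be enough on its own:

```python
import struct
import zlib

def compress_orthanc_file(path_in: str, path_out: str) -> None:
    """Compress a plain DICOM file into the (assumed) "zlib with
    size" layout: 8-byte little-endian uncompressed size, followed
    by a zlib stream."""
    with open(path_in, "rb") as f:
        dicom = f.read()
    blob = struct.pack("<Q", len(dicom)) + zlib.compress(dicom, 9)
    with open(path_out, "wb") as f:
        f.write(blob)
```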

BTW, I’d be interested to hear more about the problems you’ve encountered with Docker. We use it in all our production setups and have never encountered a database loss.

Thanks so much for the info, Alain – I am testing your example of JPEG 2000 encoding on our systems now to see how well it’s supported.

Basically I run a research PACS that keeps outgrowing its storage as we add more and more modalities and studies sending to it. We started with just a couple of TB using hardware-level compression and that scaled well for quite some time.

Then we moved up to a larger system, but I didn’t know how to deal with moving both the data and the Postgres database. So I copied the image data and re-indexed it in place, which worked pretty well. I actually had to do this a couple of times as I worked out how to make multiple Dockerized Orthanc instances work with the same database.

At some point my storage device got renamed (thanks, IT) and Docker could no longer reference any of the containers on it, so I had to rebuild the PG index again in the same way (I keep the files on the file system through a mapped volume).

Now I’ve just realized that the 15 TB storage volume doesn’t have hardware compression turned on (thanks again, IT), and I am 98% full. So I am going to have to figure out some kind of after-the-fact user-level compression, and potentially re-index the whole thing yet again regardless.

So it’s not so much Docker; it’s my lack of IT expertise, coupled with this ever-expanding monster data set, that is giving me trouble!

As I said, I really appreciate your pointer to the pixel-level compression trick. That will probably suit us much better than compressing each file into something that is no longer DICOM. Feel free to email me outside the group discussion if useful!