Disable compression during downloading/transferring

Hello! Huge thanks to Jodogne and his folks for the great orthanc project!

In our hospital, we’re doing a lot of functional MRI, where a study can have 10k+ instances and take up over 200MB. Right now, both downloading and sending (using the transfer plugin) a study/series are quite slow, and I guess one of the reasons could be that Orthanc is trying to compress all the instances using gzip (or some other compression algorithm) at a certain compression level before writing/streaming the zip file.

So my question is: is there any way to disable compression, or set the compression level to "store" (zip without compression), during download and send jobs?

NOTE: modifying the source code is doable for me.

Any help is appreciated!

Oops, I found out from the TransferScheduler.cpp file that the transfer plugin does support disabling compression. However, setting {"Compression": "none"} only improves the total transfer time slightly. So maybe the bottleneck of the transfer plugin for large studies (in my test case: 25k instances and 704MB TotalSize) is elsewhere.

Written on Monday, December 27, 2021 at 18:05:35 UTC+8:


Sorry but the description of your issue is too vague for us to investigate. Please, provide a minimal working example that would allow us to investigate.

Please also describe what you mean by “quite slow” and how transfers of small and large studies compare to each other.

Best regards,


Hello Alain, many thanks for your reply.

And sorry for my unclear description. I am not reporting a bug in Orthanc; rather, I am trying to find out how Orthanc handles study/series/instance compression internally when it receives a download or transfer job.

For a download job, Orthanc will compress the study/series into a zip file before it starts streaming. As I’ve stated, we have large studies in which one series can have over 10k instances and add up to 200MB (so a study can have 100k+ instances in total and over 1GB in file size). It is fairly obvious that Orthanc would take a pretty long time to compress them.

So back to my question: I am trying to find a way to tell (or recompile) Orthanc to just store the files in the zip instead of compressing them (compression level 0).
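For reference, the difference between "store" (level 0) and the default "deflate" can be illustrated with Python's standard zipfile module. This is just a generic illustration of the two zip modes, not Orthanc code; the payload is a made-up stand-in for a DICOM instance:

```python
import io
import zipfile

payload = b"\x00\x01" * 50_000  # hypothetical 100 kB stand-in for a DICOM instance


def zip_size(method):
    """Write the payload into an in-memory zip and return the archive size."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=method) as z:
        z.writestr("instance.dcm", payload)
    return len(buf.getvalue())


stored = zip_size(zipfile.ZIP_STORED)      # "store": no compression at all
deflated = zip_size(zipfile.ZIP_DEFLATED)  # default deflate compression

# Storing skips the CPU cost of deflate entirely: the archive is just a
# container around the original bytes, slightly larger than the input.
print(stored, deflated)
```

With "store", the archive is marginally larger than the raw data, but no CPU time is spent compressing, which is the trade-off being asked about here.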

I don’t know how to give you a working example; maybe I need to upload my example study as a zip?

Hope I made my point clear; if not, you are welcome to correct me.

On Tuesday, December 28, 2021 at 17:06:03 UTC+8, alain...@osimis.io wrote:

Sorry, I forgot to mention that for the transfer job, even though I’ve passed {"Compression": "none"}, Orthanc still needs over 2 minutes of preparation before the network transfer starts (i.e., before we observe Orthanc taking up network bandwidth).

I am not sure whether this results from compression; maybe the time is spent by Orthanc extracting the 100k+ instances one by one from the PostgreSQL database?

I wonder if anyone else is doing functional MRI and has encountered similar problems? How does your Orthanc perform for such a large study?

The following is the code snippet that I used to transfer the study to another peer named BMEC-Orthanc-F.

curl --location --request POST '' \
--header 'Content-Type: application/json' \
--data-raw '{
    "Resources": [
        {
            "Level": "Study",
            "ID": "cee64877-76259d2d-2c68851b-223363c0-45250fde"
        }
    ],
    "Compression": "none",
    "Peer": "BMEC-Orthanc-F"
}'

On Tuesday, December 28, 2021 at 17:06:03 UTC+8, alain...@osimis.io wrote:


Ok, thanks for clarifying.

I would not be surprised if Orthanc needed 2 min to retrieve the 100k instances (that’s around 800 files/sec, which is not that bad!). So compression is probably not the bottleneck here. Enabling verbose logs might give you some hints about where the time is being spent.
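The throughput estimate above works out as a quick back-of-the-envelope calculation (taking "2 minutes" literally):

```python
instances = 100_000
seconds = 2 * 60  # the reported ~2 minutes of preparation time

throughput = instances / seconds  # files retrieved per second
print(round(throughput))  # roughly 833, i.e. "around 800 files/sec"
```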

I’m afraid there’s not much we can do to improve that. You can still try to trigger transfers series by series, possibly with multiple series at a time, such that Orthanc starts transferring while still preparing other series…
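That series-by-series approach could be scripted along these lines. This is a sketch, not tested against a live server: it assumes an Orthanc at http://localhost:8042 with the transfers plugin enabled, and reuses the study ID and peer name from the curl example above; the helper names and the batch size of 4 are my own choices:

```python
import json
import urllib.request

ORTHANC = "http://localhost:8042"  # assumed local Orthanc with the transfers plugin
STUDY = "cee64877-76259d2d-2c68851b-223363c0-45250fde"
BATCH_SIZE = 4  # series per transfer job; a tuning knob, not an Orthanc setting


def batch(items, size):
    """Split a list into consecutive chunks of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def send_batches(series_ids):
    """Build one /transfers/send payload per batch of series."""
    payloads = []
    for chunk in batch(series_ids, BATCH_SIZE):
        payloads.append({
            "Resources": [{"Level": "Series", "ID": s} for s in chunk],
            "Compression": "none",
            "Peer": "BMEC-Orthanc-F",
        })
    return payloads


def run():
    # Look up the series belonging to the study, then post one job per batch,
    # so Orthanc can start transferring early batches while later ones queue up.
    with urllib.request.urlopen(f"{ORTHANC}/studies/{STUDY}") as r:
        series_ids = json.load(r)["Series"]
    for payload in send_batches(series_ids):
        req = urllib.request.Request(
            f"{ORTHANC}/transfers/send",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
```

Each POST to /transfers/send creates an independent job, so later batches are prepared while earlier ones are already on the wire.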



Thanks Alain! What about the download zip problem? Is there any way to set the compression level of the zip archive?

On Tuesday, December 28, 2021 at 20:06:14 UTC+8, alain...@osimis.io wrote:

No, there’s no such configuration (and I don’t know this part of the code, so I’ll let you check).

You could change the compression level by modifying class ArchiveJob:

Method “SetCompressionLevel()” of class “HierarchicalZipWriter” should be called from “ArchiveJob”.

Note however that the transfers accelerator plugin does not use ArchiveJob, the latter being used when creating ZIP archives using the REST API of Orthanc: