Incomplete zip downloads from GET /studies/{id}/media

We are using the orthancteam/orthanc:25.4.2 Docker image.

We noticed while using the GET /studies/{id}/media endpoint (synchronous zip archive download) with large (~1.3 GB) studies that, when we issue multiple such requests concurrently, we don't always get the full data back: the stream stops before the whole archive is transferred. It might be related to the client's network bandwidth, as some of us could reliably reproduce it while others couldn't.

Is this supposed to happen with weaker internet connections?

We are using the default value for the relevant config param:

"SynchronousZipStream" : true,

I’ll try to get more details but if any of you have any ideas in the meantime, it would be much appreciated.

Thank you!
András

I wrote a script that starts a synchronous download of the same study every 30 seconds (curl running in the background). Here is a detailed list of the files that were created. The file names contain the creation date of each file, and the "last update" column is shown as well, so we can calculate the running time of each download. This is not the same as the running time of the Orthanc jobs; those finish quite a bit sooner, so Orthanc must keep the results in memory for some time (maybe that is where the problem lies). Only 2 jobs could run in parallel on this Orthanc instance.

Note that the file sizes all differ: every one of these downloads ended prematurely for some reason.

asallai@thinkpad-e16:~$ ls -la /tmp/concurrent-downloads --time-style=full-iso | grep .zip
-rw-rw-r--  1 asallai asallai 1101163137 2025-07-17 14:56:40.420077883 +0200 study_1_25-07-17_14:53:28.514221087.zip
-rw-rw-r--  1 asallai asallai 1094745793 2025-07-17 14:58:41.358103258 +0200 study_2_25-07-17_14:53:58.518406479.zip
-rw-rw-r--  1 asallai asallai 1119070833 2025-07-17 15:00:29.435840808 +0200 study_3_25-07-17_14:54:28.522509389.zip
-rw-rw-r--  1 asallai asallai 1173406681 2025-07-17 15:01:14.485020556 +0200 study_4_25-07-17_14:54:58.526803034.zip
-rw-rw-r--  1 asallai asallai 1100810425 2025-07-17 15:01:50.834847638 +0200 study_5_25-07-17_14:55:28.530191965.zip
-rw-rw-r--  1 asallai asallai 1105248377 2025-07-17 15:01:59.116021359 +0200 study_6_25-07-17_14:55:58.533337712.zip
-rw-rw-r--  1 asallai asallai         93 2025-07-17 14:57:18.706136189 +0200 study_7_25-07-17_14:56:28.536719101.zip
-rw-rw-r--  1 asallai asallai 1099753465 2025-07-17 15:02:13.706314651 +0200 study_8_25-07-17_14:56:58.540239542.zip

Running times:

3'12"
4'43"
6'01"
6'16"
6'22"
6'01"
timeout (it was in the queue for too long; this is normal, I guess)
5'09"

Nothing conclusive yet, but it might be useful information.

Here is my repro script: repro script for https://discourse.orthanc-server.org/t/incomplete-zip-downloads-from-get-studies-id-media/6046 · GitHub
The downloaded .zip archive (when downloaded correctly) is 1.3 GB in our case; I guess any large study should do.

Some updates:
The results really depend on the download rate, or in other words, on how long Orthanc has to keep the finished archives in memory while clients are downloading.
It might not be reproducible at all with high bandwidth, so I added the --limit-rate flag to my repro script to emulate a slow connection; this way it can be reproduced even with a local setup.
Please use the updated version if you want to reproduce it: repro script for https://discourse.orthanc-server.org/t/incomplete-zip-downloads-from-get-studies-id-media/6046 · GitHub
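For reference, the gist of the updated script looks roughly like this (a sketch, not the exact script from the link; the base URL, study ID, rate, and output directory are placeholders, and a DRY_RUN switch is added here purely so the sketch can be exercised without a running server):

```shell
#!/usr/bin/env bash
# Sketch: start an archive download of the same study every 30 seconds,
# each one in the background, throttled with --limit-rate to emulate
# a slow client connection.
ORTHANC_URL="${ORTHANC_URL:-http://localhost:8042}"
STUDY_ID="${STUDY_ID:-your-study-id}"
OUT_DIR="${OUT_DIR:-/tmp/concurrent-downloads}"
RATE="${RATE:-1M}"          # emulated client bandwidth
COUNT="${COUNT:-8}"         # how many downloads to start
INTERVAL="${INTERVAL:-30}"  # seconds between starting downloads

run_downloads() {
  mkdir -p "$OUT_DIR"
  for i in $(seq 1 "$COUNT"); do
    out="$OUT_DIR/study_${i}_$(date +%y-%m-%d_%H:%M:%S.%N).zip"
    if [ -n "${DRY_RUN:-}" ]; then
      # Just print the command instead of running it.
      echo "curl --limit-rate $RATE -o $out $ORTHANC_URL/studies/$STUDY_ID/media"
    else
      curl --silent --limit-rate "$RATE" -o "$out" \
        "$ORTHANC_URL/studies/$STUDY_ID/media" &
      sleep "$INTERVAL"
    fi
  done
  wait  # wait for all background curls to finish
}
```

Source the script and call `run_downloads` (with the environment variables pointing at a real Orthanc instance) to start the concurrent downloads.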

So if my hypothesis is right, I'd humbly suggest a configuration option for the maximum amount of time/space Orthanc is allowed to use for storing finished archives after a job completes, so that even slow clients can download them.
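Something along these lines, to make the suggestion concrete (to be clear, this option does not exist in Orthanc today; the name and semantics below are purely hypothetical):

```json
{
  // Existing option (shown for context).
  "SynchronousZipStream" : true,

  // HYPOTHETICAL, not a real Orthanc option: how long (in seconds) a
  // finished synchronous archive may be kept available for slow clients
  // after its job completes.
  "ZipArchiveRetentionTime" : 3600
}
```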

Hi @a.sallai

Could you share the Orthanc verbose logs while your script is running?

I have tried it here with my dev setup but did not face any issues.

Best,

Alain.

Hi Alain,

Here you’ll find the Orthanc logs that I collected during the repro script run on my local setup (Orthanc container running locally): https://drive.google.com/file/d/1tg3uiK5j_hRsy-A6V4jOwSW7h18CHiKk/view?usp=sharing

Here is a summary of the files that were created:

$ ./running_times.sh  /tmp/concurrent-downloads
size in bytes: 1093405141, (1.1G), file was being written for: 1038 seconds (17:18) - study_10_25-07-23_14:35:38.993296773.zip
size in bytes: 1094412608, (1.1G), file was being written for: 1044 seconds (17:24) - study_11_25-07-23_14:36:08.997106750.zip
size in bytes: 1092160030, (1.1G), file was being written for: 1036 seconds (17:16) - study_12_25-07-23_14:36:39.000209393.zip
size in bytes: 1091537356, (1.1G), file was being written for: 1037 seconds (17:17) - study_1_25-07-23_14:31:08.961253327.zip
size in bytes: 1097917063, (1.1G), file was being written for: 1046 seconds (17:26) - study_13_25-07-23_14:37:09.003567684.zip
size in bytes: 1093279158, (1.1G), file was being written for: 1036 seconds (17:16) - study_14_25-07-23_14:37:39.006962217.zip
size in bytes: 1093838718, (1.1G), file was being written for: 1042 seconds (17:22) - study_15_25-07-23_14:38:09.009955836.zip
size in bytes: 1098978544, (1.1G), file was being written for: 1041 seconds (17:21) - study_16_25-07-23_14:38:39.015802718.zip
size in bytes: 1092901286, (1.1G), file was being written for: 1036 seconds (17:16) - study_17_25-07-23_14:39:09.019250824.zip
size in bytes: 1094845950, (1.1G), file was being written for: 1043 seconds (17:23) - study_18_25-07-23_14:39:39.026802115.zip
size in bytes: 1094799001, (1.1G), file was being written for: 1043 seconds (17:23) - study_19_25-07-23_14:40:09.030966729.zip
size in bytes: 1095659312, (1.1G), file was being written for: 1039 seconds (17:19) - study_20_25-07-23_14:40:39.034570917.zip
size in bytes: 1093836630, (1.1G), file was being written for: 1042 seconds (17:22) - study_21_25-07-23_14:41:09.038973925.zip
size in bytes: 1095870566, (1.1G), file was being written for: 1039 seconds (17:19) - study_22_25-07-23_14:41:39.043010187.zip
size in bytes: 1095193249, (1.1G), file was being written for: 1044 seconds (17:24) - study_2_25-07-23_14:31:38.965620272.zip
size in bytes: 1095035944, (1.1G), file was being written for: 1037 seconds (17:17) - study_23_25-07-23_14:42:09.046451276.zip
size in bytes: 1092206039, (1.1G), file was being written for: 1036 seconds (17:16) - study_24_25-07-23_14:42:39.050256772.zip
size in bytes: 1093979054, (1.1G), file was being written for: 1037 seconds (17:17) - study_3_25-07-23_14:32:08.969292706.zip
size in bytes: 1094919529, (1.1G), file was being written for: 1044 seconds (17:24) - study_4_25-07-23_14:32:38.972770122.zip
size in bytes: 1093837726, (1.1G), file was being written for: 1043 seconds (17:23) - study_5_25-07-23_14:33:08.976208636.zip
size in bytes: 1094844172, (1.1G), file was being written for: 1044 seconds (17:24) - study_6_25-07-23_14:33:38.979376639.zip
size in bytes: 1093860936, (1.1G), file was being written for: 1043 seconds (17:23) - study_7_25-07-23_14:34:08.983590228.zip
size in bytes: 1099997464, (1.1G), file was being written for: 1043 seconds (17:23) - study_8_25-07-23_14:34:38.986827126.zip
size in bytes: 1093847190, (1.1G), file was being written for: 1043 seconds (17:23) - study_9_25-07-23_14:35:08.990143604.zip

None of them are the same size, but they are all very similar: about 1.1 GB instead of the full 1.3 GB of this test study, so none of them was downloaded correctly.

However, the total time each file was being written for is strikingly similar: many of them took exactly 1043 seconds, and the others are all very close. These times are just the difference between the file creation time and the "last updated" time, so they include system time. If we exclude the context switches and so on, the user time might be around 1000, 1024, or even 900 (15*60) seconds, which sounds like some hidden default timeout at which Orthanc (or a library used by Orthanc) might decide to close the data stream.
It could be that the client (curl in this case) is responsible, but the curl documentation says that --max-time has no default, so it should wait indefinitely once the connection is established.
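For what it's worth, the durations above come from simple file-timestamp arithmetic; a minimal sketch of that calculation (assuming GNU stat and a filesystem that records birth time; `stat -c %W` reports 0 where it doesn't):

```shell
# Print the size and write duration (mtime minus birth time) of each
# .zip file in a directory.
running_times() {
  dir="$1"
  for f in "$dir"/*.zip; do
    [ -e "$f" ] || continue        # skip if the glob matched nothing
    size=$(stat -c %s "$f")        # size in bytes
    birth=$(stat -c %W "$f")       # creation (birth) time, epoch seconds
    mtime=$(stat -c %Y "$f")       # last modification, epoch seconds
    echo "size in bytes: $size, written for: $((mtime - birth)) seconds - $(basename "$f")"
  done
}
```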

Thanks a lot for looking into it!

András

@alainmazy, if you have any findings or just some work in progress, please keep us updated here, I’d like to help if I can!
Thank you!

No progress so far. That's the kind of issue that takes a lot of time to debug, and I can't find a slot for it in this period.

Hi @a.sallai

Thanks for providing the reproduction script.

This should now be fixed.

You’ll be able to test it with the mainline binaries before the next Orthanc version is released.

Alain


May we request a new Orthanc release? Our users really need this fixed.

Sorry, but we cannot make releases "on demand". Making a release takes time that we then cannot spend improving the software.

What prevents you from using the mainline binaries?

The docs say the mainline binaries are not necessarily safe, so I would only use a tested release.
No on-demand releases is understandable.