We are using the orthancteam/orthanc:25.4.2 Docker image.
While using the GET /studies/{id}/media endpoint (synchronous zip archive download) with large (~1.3 GB) studies, we noticed that when we issue multiple such requests concurrently, we don’t always get the full data back: the streaming stops before the complete response has been transferred. It might be related to the client’s network bandwidth, as some of us could reproduce it reliably while others couldn’t.
Is this supposed to happen with weaker internet connections?
We are using the default value for the relevant config param:
"SynchronousZipStream" : true,
I’ll try to get more details, but if any of you have ideas in the meantime, they would be much appreciated.
Thank you!
András
I wrote a script that starts a synchronous download of the same study every 30 seconds (curl running in the background). Here is a detailed listing of the files that were created. The file names contain the creation time of each file, and the “last modified” column is visible too, so we can calculate the running time of each download: for example, study_1 started at 14:53:28.5 and was last written at 14:56:40.4, i.e. it ran for 3'12". (This is not the same as the running time of the Orthanc jobs; those finish quite a bit sooner, so Orthanc must keep the results in memory for some time. Maybe there lies the problem.) Only 2 jobs could run in parallel on this Orthanc instance.
Note that the file sizes are always different: all of the downloads ended prematurely for some reason.
asallai@thinkpad-e16:~$ ls -la /tmp/concurrent-downloads --time-style=full-iso | grep .zip
-rw-rw-r-- 1 asallai asallai 1101163137 2025-07-17 14:56:40.420077883 +0200 study_1_25-07-17_14:53:28.514221087.zip
-rw-rw-r-- 1 asallai asallai 1094745793 2025-07-17 14:58:41.358103258 +0200 study_2_25-07-17_14:53:58.518406479.zip
-rw-rw-r-- 1 asallai asallai 1119070833 2025-07-17 15:00:29.435840808 +0200 study_3_25-07-17_14:54:28.522509389.zip
-rw-rw-r-- 1 asallai asallai 1173406681 2025-07-17 15:01:14.485020556 +0200 study_4_25-07-17_14:54:58.526803034.zip
-rw-rw-r-- 1 asallai asallai 1100810425 2025-07-17 15:01:50.834847638 +0200 study_5_25-07-17_14:55:28.530191965.zip
-rw-rw-r-- 1 asallai asallai 1105248377 2025-07-17 15:01:59.116021359 +0200 study_6_25-07-17_14:55:58.533337712.zip
-rw-rw-r-- 1 asallai asallai 93 2025-07-17 14:57:18.706136189 +0200 study_7_25-07-17_14:56:28.536719101.zip
-rw-rw-r-- 1 asallai asallai 1099753465 2025-07-17 15:02:13.706314651 +0200 study_8_25-07-17_14:56:58.540239542.zip
Running times:
3'12"
4'43"
6'01"
6'16"
6'22"
6'01"
timeout (it was in the queue for too long; this is normal, I guess)
5'09"
Nothing conclusive yet, but it might be useful information.
Here is my repro script: repro script for https://discourse.orthanc-server.org/t/incomplete-zip-downloads-from-get-studies-id-media/6046 · GitHub
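For context, the core of the script is roughly the following loop (a simplified sketch, not the exact gist contents; ORTHANC_URL and STUDY_ID are placeholders):

#!/bin/bash
# Simplified sketch of the repro: start a new synchronous download of the same
# study every 30 seconds, each curl running in the background.
mkdir -p /tmp/concurrent-downloads
for i in $(seq 1 8); do
  curl -s -o "/tmp/concurrent-downloads/study_${i}_$(date +%y-%m-%d_%H:%M:%S.%N).zip" \
    "$ORTHANC_URL/studies/$STUDY_ID/media" &
  sleep 30
done
wait   # let all background downloads finish (or fail)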
When downloaded correctly, the .zip archive is about 1.3 GB in our case; I guess any large study should do.
Some updates:
The results really depend on the download rate, or in other words, on how long Orthanc has to keep the finished archive in memory while the client is downloading it.
The issue might not be reproducible at all with high bandwidth, so I added curl’s --limit-rate flag to my repro script to emulate a slow connection; this way it can be reproduced even with a local setup.
Please use the updated version if you want to reproduce it: repro script for https://discourse.orthanc-server.org/t/incomplete-zip-downloads-from-get-studies-id-media/6046 · GitHub
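The only change is the rate limit on the curl call; the curl line inside the loop sketched above becomes something like this (2M is just an example value; pick something well below your actual bandwidth):

# Throttle the download to ~2 MB/s to emulate a slow client connection
curl -s --limit-rate 2M -o "/tmp/concurrent-downloads/study_${i}_$(date +%y-%m-%d_%H:%M:%S.%N).zip" \
  "$ORTHANC_URL/studies/$STUDY_ID/media" &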
So if my hypothesis is right, I’d humbly suggest a configuration option for the maximum amount of time and/or memory Orthanc is allowed to use for storing finished archives after a job completes, so that even slow clients can finish their downloads.
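To illustrate what I mean, something along these lines (these option names are purely hypothetical, not existing Orthanc settings):

{
  // Hypothetical settings, not part of Orthanc today: how long (in seconds) and
  // how much memory (in MB) finished synchronous archives may be retained, so
  // that slow clients can still complete their downloads
  "SynchronousZipRetentionTime" : 600,
  "SynchronousZipMaxRetainedSize" : 4096
}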