Very slow upload of large DICOM

Hi all,
I’m looking for some pointers after being stuck on this for four weeks now.

We’re building a service for storing DICOMs with Orthanc.
In a local Docker environment, the upload of a 1 GB DICOM takes
around 80 seconds.
In AWS, however, we’re not getting anything below roughly 15 minutes.

The infra looks like this:

  • Kong (nginx) → Orthanc → RDS & EFS
  • Kong and Orthanc both run on Fargate

We switched from S3 to EFS, which improved the upload from 20+ minutes to the
current time, but it is still unusable.

Interestingly, all metrics (CPU, memory) stay below 10%, so we have no
clue what the bottleneck might be.

Has anybody had a similar experience and knows what this hidden
bottleneck might be?

Kind regards,

I’ve seen similar upload speeds from my local machine to an EC2 instance in AWS.

My theory is that each of the (thousands of) instances is uploaded individually from local to AWS, so that’s thousands of individual connections.

A solution is to scp a zip file containing the DICOMs to another EC2 instance and then upload from that instance to your Orthanc. That way the thousands of connections are only AWS → AWS, which is likely much faster than your local → AWS.

Hi Matthew,
Thank you for your response. The problematic DICOM is sent to Orthanc as a zip. However, you might be on to something.
This DICOM contains some 6,000 instances. If Orthanc writes those sequentially to a remote service (S3, EFS) and
a new connection has to be made each time, that would be a possible cause.
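To see why per-instance connections could matter, here is a back-of-the-envelope cost model. All numbers are illustrative assumptions, not measurements from our setup:

```python
# Hypothetical cost model: if every instance pays a fixed connection
# overhead, that overhead multiplies by the instance count.
def upload_time_s(n_instances, per_conn_overhead_s, per_instance_write_s):
    """Total upload time when each instance is written over its own connection."""
    return n_instances * (per_conn_overhead_s + per_instance_write_s)

# Illustration: 6000 instances, 0.1 s connection overhead, 0.02 s write each.
total = upload_time_s(6000, 0.1, 0.02)
print(total / 60)  # total in minutes
```

Even a tenth of a second of overhead per instance adds up to minutes at this instance count, which would match the symptom of low CPU and memory usage while the upload crawls.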

Does anybody know if this could be the case? Does Orthanc store instances as described here?


Hi All,

I guess the answer lies in this test:
Extrapolating the ONCO test (1,635 instances) to our 1 GB file with 6,000 instances leads to a 10+ minute upload.
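For reference, the extrapolation is just linear scaling by instance count (assuming upload time is dominated by per-instance overhead):

```python
# Linear scaling by instance count; the ONCO test's measured duration
# (not repeated here) gets multiplied by this factor.
onco_instances = 1635
our_instances = 6000
scale = our_instances / onco_instances
print(round(scale, 2))  # roughly 3.67x the ONCO upload time
```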

In other words: it seems that for these kinds of files S3/EFS is not an option, and a viable solution will be more expensive and more complicated.

Does anybody disagree with this? Are there options I’ve overlooked?



I confirm that the current S3 plugin is uploading each file individually.

If you upload a zip to Orthanc, the zip is uncompressed and the files are pushed to S3 in a single thread. I agree this could clearly be optimized.

Another possible optimization is to implement the S3 Transfer Manager. I’ve heard it would also improve small-file uploads. This is on our roadmap, but we’re quite busy with other work right now. Note that this plugin implements it.

A third option is to upload your files with multiple HTTP clients in parallel. In this case, Orthanc will open a new connection to S3 for each HTTP request it receives.
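A minimal sketch of that third option, using a thread pool to POST instances to Orthanc's /instances REST endpoint. The URL and worker count are assumptions; adjust them for your deployment:

```python
# Sketch: upload DICOM instances to Orthanc with parallel HTTP clients,
# so Orthanc in turn opens parallel connections to its storage backend.
import concurrent.futures
import pathlib
import urllib.request

ORTHANC_URL = "http://localhost:8042/instances"  # assumed endpoint; adjust

def post_instance(path):
    """POST one DICOM file to Orthanc's /instances endpoint."""
    req = urllib.request.Request(
        ORTHANC_URL,
        data=pathlib.Path(path).read_bytes(),
        headers={"Content-Type": "application/dicom"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def upload_parallel(paths, upload_one=post_instance, max_workers=8):
    """Fan uploads out over a thread pool; each worker uses its own
    HTTP request, giving Orthanc concurrent requests to service."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(upload_one, paths))
```

The `upload_one` parameter is only there to make the fan-out logic easy to exercise without a running Orthanc; in practice you would call `upload_parallel(list_of_dcm_paths)`.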




Thanks for the update. I’m going to propose moving off Fargate to EC2 with EBS, which comes with a price tag.


I found that Fargate networking was badly throttled by AWS.