Tuning HTTP for slow connections

Hi Team,

I’m wondering if anyone has any experience or comments on how to tune Orthanc to send to remote peers (using Accelerated Transfers or Orthanc Peers) over a poor network connection.

I am currently having trouble getting reliable transfers across low-quality DSL and 4G networks. Normal internet browsing works (albeit slowly), but transferring instances from one Orthanc instance to another almost always fails.

When using Accelerated Transfers, the job reports:

ErrorCode: 1
ErrorDescription: Error encountered within the plugin engine

When using an Orthanc Peer, the job reports:

ErrorCode: 9
ErrorDescription: Error in the network protocol

At the moment, both Orthanc instances are using the default HTTP timeouts.

What is unusual is that some instances get through, but larger ones almost always fail.



Hi James,

There can be many possible causes, but I’ve seen this pattern of small transfers succeeding and large ones failing with VPNs and other setups where firewalls drop ICMP packets, causing Path MTU Discovery to fail (https://en.wikipedia.org/wiki/Path_MTU_Discovery).

A simple but not conclusive diagnostic is whether you are able to ping your destination system.


Thanks Walco!

In order to help others, here’s a bit more info on what I found out.

We have several installations with very slow upload links (~50 kB/s). We are using the accelerated transfers plugin to upload the instances. What I experienced was that uploads would fail with the error “Error encountered within the plugin engine”. After further reading (https://book.orthanc-server.com/plugins/transfers.html#advanced-options), I increased the HttpTimeout from the default of 60 seconds to 300 seconds.

What I understand is happening is that, over the slow networks, the default 4 MB bucket could not complete within the default 60-second timeout: at ~50 kB/s, a 4 MB (4096 kB) bucket needs roughly 80 seconds to transfer. As a result, the plugin would fail with a timeout error. Increasing the timeout to 300 seconds gives each 4 MB bucket enough time to complete. This has been successful.

For future reference, I understand there are two options: the first is to increase HttpTimeout; the second is to reduce the bucket size (BucketSize) to something smaller than the default of 4096 kB. It is also possible to increase the number of attempts made to complete each bucket (default is 0) by changing MaxHttpRetries, which is helpful on less reliable networks.
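The two plugin-side options live in the "Transfers" section of the configuration file. A sketch (the values 1024 and 3 are illustrative, not recommendations; see the advanced-options page linked above):

```json
{
  "Transfers" : {
    // Size of each bucket in kB; smaller buckets complete sooner,
    // so each one is less likely to hit HttpTimeout on a slow link
    // (default is 4096, i.e. 4 MB)
    "BucketSize" : 1024,

    // Number of retries for a failed HTTP request on a bucket
    // (default is 0, i.e. fail immediately)
    "MaxHttpRetries" : 3
  }
}
```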

Hope that helps someone.