Best configuration to accept heavy load in a short period of time

Hi,

Can you please suggest, based on your experience, a configuration or setup of one or more Orthanc servers that can handle heavy load in a short period of time? Let’s say approx. 10 GB or more in one hour.

The best option would be a single central server, but all opinions based on practical experience are welcome: perhaps a gateway for redirection in a distributed environment, or anything that works in real life for you.

Thanks a lot,
Vedran

10 GB in an hour shouldn’t be a problem. Are you running into issues? There are a lot of variables:

  1. Bandwidth to Orthanc and from the sender. One sender or multiple?
  2. Hardware of the Orthanc server.
  3. Image type. Are these a few large images, or lots of tiny instances?
  4. Processing of images. Is Orthanc doing any processing of the images, such as compression, tag editing, etc.?

Yes, I experience very slow uploads to the publicly available Orthanc:

  • Server is hosted on Azure in a Docker container on Linux.

  • It takes me 10 minutes to upload a 250 MB study.

  • My upload bandwidth from my testing location is about 10 Mbps. This is the standard bandwidth of the public providers in my area.

  • In the end I expect to have multiple senders. Should I require some minimum bandwidth?

  • I am uploading a study with 500 images; each image is about 500 KB.

  • I don’t process the images.

  • I will check my hardware configuration and reply later, but it should be fine.

NOTE: The upload is done over the Orthanc REST API, not the DICOM protocol.

Thanks,
Vedran

You will never get 10 GB through a 10 Mbps connection in an hour. If you were looking for multiple sites to send a total of 10 GB over an hour, that may be possible depending on what bandwidth Azure has.
You can look at compressing: either DICOM compression, or HTTP compression if you aren’t using DICOM compression. If possible, DICOM compression on the source side is best.
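As a rough sanity check, the bandwidth arithmetic can be sketched like this (assuming decimal units and an otherwise idle link; real throughput will be lower due to protocol and per-request overhead):

```python
# Back-of-the-envelope transfer-time estimates. These ignore HTTP/TCP
# overhead, so they are lower bounds on the real transfer time.

def transfer_time_seconds(size_gb: float, link_mbps: float) -> float:
    """Time to move `size_gb` gigabytes over a `link_mbps` link."""
    bits = size_gb * 8 * 1000**3        # decimal GB -> bits
    return bits / (link_mbps * 1000**2) # Mbps -> bits per second

# 10 GB over a 10 Mbps uplink:
print(transfer_time_seconds(10, 10) / 3600)   # ~2.2 hours

# The 250 MB test study over the same link:
print(transfer_time_seconds(0.25, 10) / 60)   # ~3.3 minutes
```

Note that 250 MB should need only about 3.3 minutes at 10 Mbps, so the observed 10 minutes also suggests per-file request overhead on top of the raw bandwidth limit.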

You are right! I didn’t pay enough attention to the bandwidth. It seems the calculator is my best friend now.

Regarding compression, I cannot always control the senders/sources, but let’s assume that I can. It seems logical that I only gain performance if the data are sent from the senders already compressed (what you said is a good option if possible; I would add that it is mandatory), and Orthanc can save them without additional processing. Is that the case if I am using Ambra Gateway or Nuance on the senders’ side?

Otherwise, if the data are sent uncompressed, I can only get worse performance, because Orthanc will spend some time compressing the data, right?

Over that slow an internet connection, compressing should be faster. I would suggest running three tests, uncompressed, JPEG 2000 lossless compression, and HTTP compression, and comparing their performance.
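For the HTTP-compression test, a minimal sketch of a gzip-compressed upload to Orthanc’s POST /instances endpoint might look like this (the base URL is an assumption; whether your Orthanc build accepts `Content-Encoding: gzip` on request bodies should be verified against its version and configuration):

```python
import gzip
import urllib.request

ORTHANC = "http://localhost:8042"   # assumed Orthanc base URL

def upload_instance_gzipped(dicom_bytes: bytes) -> urllib.request.Request:
    """Build a POST /instances request whose body is gzip-compressed.
    The Content-Encoding header tells the server to decompress the
    body before interpreting it as a DICOM file."""
    body = gzip.compress(dicom_bytes)
    return urllib.request.Request(
        ORTHANC + "/instances",
        data=body,
        headers={
            "Content-Type": "application/dicom",
            "Content-Encoding": "gzip",
        },
        method="POST",
    )

# req = upload_instance_gzipped(open("image.dcm", "rb").read())
# urllib.request.urlopen(req)   # send once a server is reachable
```

Timing the same study uploaded this way versus raw and versus JPEG 2000 lossless files should show which option wins on your link.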

Ok, I will test that asap.

One more question/clarification, please. I have one additional sender which is not a standard DICOM tool like Ambra or Nuance: it is my own piece of software, which takes a DICOM folder and uploads it file by file over the Orthanc REST API. What would be the proper handling of that case? I can’t upload a ZIP file; it has to be done file by file.

I already noticed that I can’t just upload all of the files asynchronously at once, because it totally blocks Orthanc if the directory is over 50 MB or so. My conclusion is that I need some strategy, for example uploading 100 files at a time and waiting until they are stored by checking the /statistics endpoint. Even then, a few parallel uploads of that kind will also not work, but I don’t have a better solution at the moment. Do you know of a better option?
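The batched strategy described above can be sketched as follows (`upload_one` is a placeholder for whatever callable posts a single file to POST /instances; the batch size and worker count are arbitrary assumptions to tune):

```python
import concurrent.futures
import pathlib

def upload_folder(folder, upload_one, batch_size=100, workers=4):
    """Upload every file under `folder` in batches of `batch_size`,
    with at most `workers` parallel requests per batch.

    Blocking until each batch is acknowledged bounds the number of
    in-flight requests, instead of firing them all at once and
    overwhelming the server."""
    files = sorted(p for p in pathlib.Path(folder).rglob("*") if p.is_file())
    done = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for start in range(0, len(files), batch_size):
            batch = files[start:start + batch_size]
            # pool.map blocks until the whole batch has been processed
            for _ in pool.map(upload_one, batch):
                done += 1
    return done
```

Bounding concurrency on the client side like this avoids needing to poll /statistics between batches, since each batch completes before the next one starts.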

What are you using to upload them? A script of some sort?
I would very much recommend checking out Mirth Connect. You should easily be able to monitor a folder and send either to the remote Orthanc, or to a local Orthanc instance and have it forward to the remote one, since Mirth can monitor a folder and do both REST API and DICOM transfers.

What are you using to upload them? Script of some sort?
I have my own custom solution written in ASP.NET Core which sends files to Orthanc over the REST API.

I am going to check Mirth Connect…

If your goal is to speed-up transfers between Orthanc peers, you may have an interest in the “transfers accelerator” plugin:
http://book.orthanc-server.com/plugins/transfers.html

Notably, this plugin transparently handles compression to reduce network bandwidth. There is no official release yet, but it should be out for Christmas.

HTH,
Sébastien-

I don’t understand your statement that Orthanc is blocked when uploading many files asynchronously through the REST API. Please could you be more explicit and provide a way for us to independently reproduce your issue?

Also note that it is possible to develop an Orthanc plugin that would receive a ZIP file through the REST API and uncompress it into the Orthanc store. Please learn how to contribute to Orthanc:
http://book.orthanc-server.com/contributing.html

http://book.orthanc-server.com/developers/creating-plugins.html

HTH,
Sébastien-

Some time ago we experienced a similar issue and it turned out that the Orthanc package in the Ubuntu repo was a debug build.

So you might just want to check your Orthanc to be a release build to be sure.

Dear Sébastien,

I haven’t posted to this thread for quite some time, as I was very busy improving my system’s memory consumption, performance, etc. I took all the comments here seriously in order to make my system better. I didn’t post the details because some of the issues I faced were not related to Orthanc itself, so it didn’t make sense to bother you or the other guys with them.

There is still one thing I wanted to check, and I did. I took up your suggestion to create an Orthanc plugin that receives a ZIP file through its REST API and uncompresses it into the Orthanc store. Here is the link you mentioned:

http://book.orthanc-server.com/developers/creating-plugins.html

I see one big problem here. It is mentioned that communication between a plugin and Orthanc cannot use more than 4 GB of memory. If this is true, then in my case I can’t rely on the plugin. I am working in an environment where multiple users can upload to the system; if a few of them decide to upload a 1 GB ZIP file at the same time, it will not work for me.

Do you agree with this conclusion, or maybe do you have some additional comment?

Thanks a lot,
Vedran

On Saturday, December 8, 2018 at 11:39:48 UTC+1, Sébastien Jodogne wrote:

Dear Vedran,

The 4 GB limit is just a current limitation of the Orthanc SDK header. It can obviously be lifted by introducing new primitives in the SDK that replace the “uint32_t” sizes with “uint64_t” sizes.

Please provide a list of the SDK primitives that should be extended to reach your objective: “If the current plugin SDK is insufficient for you to develop some feature as a plugin, do not hesitate to request an extension to the Orthanc SDK on the mailing list.”
http://book.orthanc-server.com/contributing.html#contributing

Regards,
Sébastien-