The comment that follows might not be worth much, but I’ll make it anyway.
Maybe you are in the same situation as I am, where the problem is not so much the peak memory usage as the sustained memory usage: the fact that this unused memory is not always given back. When uploading, zipping or retrieving images or archives, the big memory buffers allocated by Orthanc are not always returned to the OS, due to how heap management works (the allocator tends to keep freed pages around for reuse instead of releasing them).
Since some cloud services are billed based on the sustained memory usage (EKS in my case), keeping 2 GB around for a long time has a very nasty effect on the monthly AWS bill…
I have considered a solution where a single main Orthanc would run permanently but would only be used to perform find requests, retrieve the list of series for a patient, read or write metadata, etc. In short, all the “non-bulky” operations.
As soon as a bulky/costly operation must be performed (in my case, posting .zip files to /instances or retrieving archives from /archive), the plan is to spin up another, ephemeral Orthanc process on the same shared database (which means you need to use PostgreSQL), perform the operation, and then, once everything is done, shut it down. I rely on the fact that stopping a process fully reclaims its memory footprint.
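To make the idea concrete, here is a rough Python sketch of the lifecycle I have in mind. It is not my working implementation: the binary path, HTTP port, plugin directory, database credentials, study ID and file names are all placeholders to adapt to your setup, and security is intentionally left out.

```python
#!/usr/bin/env python3
"""Sketch: start an ephemeral Orthanc for a bulky operation, then stop it."""
import json
import subprocess
import tempfile
import time

import requests  # any HTTP client would do

EPHEMERAL_PORT = 8043  # must not clash with the permanent Orthanc
EPHEMERAL_URL = f"http://localhost:{EPHEMERAL_PORT}"

# Minimal configuration: same PostgreSQL index/storage as the main Orthanc.
config = {
    "Name": "orthanc-ephemeral",
    "HttpPort": EPHEMERAL_PORT,
    "DicomServerEnabled": False,         # REST only for this worker
    "AuthenticationEnabled": False,      # simplification for the sketch
    "Plugins": ["/usr/lib/orthanc/plugins"],  # adjust to your install
    "PostgreSQL": {
        "EnableIndex": True,
        "EnableStorage": True,
        "Host": "db.example.com",
        "Port": 5432,
        "Database": "orthanc",
        "Username": "orthanc",
        "Password": "change-me",
    },
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config, f)
    config_path = f.name

# 1. Start the ephemeral Orthanc on the shared database.
proc = subprocess.Popen(["Orthanc", config_path])

# 2. Wait until its REST API answers.
for _ in range(60):
    try:
        requests.get(f"{EPHEMERAL_URL}/system", timeout=1).raise_for_status()
        break
    except requests.RequestException:
        time.sleep(1)
else:
    proc.terminate()
    raise RuntimeError("Ephemeral Orthanc did not come up")

try:
    # 3a. Bulky upload: POST a ZIP of DICOM files to /instances.
    with open("upload.zip", "rb") as zip_file:
        requests.post(f"{EPHEMERAL_URL}/instances",
                      data=zip_file.read()).raise_for_status()

    # 3b. Bulky download: retrieve a study as a ZIP archive.
    study_id = "SOME-ORTHANC-STUDY-ID"  # placeholder
    archive = requests.get(f"{EPHEMERAL_URL}/studies/{study_id}/archive")
    archive.raise_for_status()
    with open("study.zip", "wb") as out:
        out.write(archive.content)
finally:
    # 4. Stop the process; the OS reclaims its whole memory footprint.
    proc.terminate()
    proc.wait(timeout=30)
```

In practice this could also be wrapped in a Kubernetes Job or a short-lived container instead of a local subprocess; the key point is just that the bulky buffers live in a process that goes away.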
Of course, the effect on request latency is brutal. This is also a complex solution, but for some use cases it could be a valid approach.
(Please note that I have started putting the parts and scripts of this project together, but I still don’t have a working solution: this is a side pet project… so I haven’t been able to measure the actual memory gains.)
In case this can be useful for you or someone else…