Resource problem with large instances

Hello,
I’ve been running an Orthanc server (now version 1.0) on a Linux Mint (Ubuntu-based) system for about 3 years, with approximately 9,200 studies, all of them ultrasound pictures or clips (PostgreSQL 9.5 for both index and storage).
The system has been running almost fine, but it frequently crashes when receiving large instances (long ultrasound clips of about 0.5 to 1.5 minutes in length).

For weeks I’ve been trying to perform a backup via Replicate.py (onto a new system with Debian testing, Orthanc 1.3 from the repository, PostgreSQL 10.1, and 4 GB RAM), but the script always crashes after successfully transmitting some hundreds or thousands of studies.

Meanwhile I have figured out that the problem occurs with very large instances in some studies, in almost the same manner as Orthanc crashes in regular operation when a modality sends large instances, so it must be the same problem. My colleague sometimes records examinations with clips of about 1 minute in length (more than 1,000 frames), and these instances make the replicate script crash. (On the new target server I changed to filesystem storage with PostgreSQL for indexing; the former server both indexes and stores in PostgreSQL.)

However, when using Replicate.py, Orthanc uses a vast amount of memory on both the target and the source system (see the “top” report below), so this could also be a performance/timeout issue. I had also tried PostgreSQL storage before, but the crashes always corrupted the database. Since switching to filesystem storage, at least the PostgreSQL index database survives the crashes of the replicate script, and the target system no longer reports anything special in the Orthanc or PostgreSQL logs. Finally, after manually deleting the large instances on the source system, I managed to complete the replicate script, and I am now running the new system.

In regular operation on the new system, when a modality sends such a large instance, system performance collapses dramatically (physical memory is flooded and the system swaps aggressively; see the “top” report with a 3 GB example, 2,070 frames). At least the system doesn’t crash anymore. (The modality reports a failure; the instance is stored in Orthanc but damaged.)

Is there a way to make Orthanc refuse large instances above a defined size?

Or could there be a way to store larger instances in Orthanc, such as longer clips, without overloading the system resources?

I hope someone can help me, thank you.
Best regards

Mark Sajthy

Hello,

According to the output of “top”, your Linux system only has “3872492” KB of available memory (i.e. about 4 GB).

If Orthanc receives a large file (say, above 1.5 GB), the RAM becomes insufficient to simultaneously read the file within Python, receive the file in Orthanc’s memory, and communicate with the PostgreSQL server. This results in the swap space being consumed (hence a dramatic slowdown of the system), then eventually in a crash (once both RAM and swap are exhausted).
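As a rough back-of-the-envelope illustration, assuming the file is held roughly once by each of the three parties at the peak (a simplification):

  1.5 GB (Python script) + 1.5 GB (Orthanc) + 1.5 GB (PostgreSQL transfer) ≈ 4.5 GB > ~3.7 GB of RAM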

As a consequence, please upgrade your system so that it has more RAM and/or swap space.

Also, make sure that your Orthanc binaries are compiled in 64-bit mode, and that you use a 64-bit version of Linux Mint. You can check this with the standard “file” command-line tool (e.g. “file /usr/local/sbin/Orthanc”).
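For instance, on a 64-bit build you should see something along these lines (the exact output depends on the distribution and the build):

$ file /usr/local/sbin/Orthanc
/usr/local/sbin/Orthanc: ELF 64-bit LSB executable, x86-64, ...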

HTH,
Sébastien-

Yes, the system has 4 GB, and Orthanc is a 64-bit build.
The problem occurs not only with Python scripts like Replicate.py; it also occurs in regular operation of Orthanc when a modality sends large instances. My system very often crashed due to running out of memory.

Of course, I could upgrade the system memory to 8 GB, but that would not solve the problem if a modality sends an instance of 9 GB or more. I also wouldn’t like that solution very much, because it contradicts the lightweight philosophy that I like so much in Orthanc: running a mini-PACS on systems with small resources.

Actually, I don’t really want to store those large instances at all, because they would soon fill my storage, and I can’t tell my colleague to stop recording long clips. I just need a way to protect the system from those “harmful instances” by making Orthanc refuse or ignore them.

I also tried to write a Lua script to refuse instances whose DICOM tag “NumberOfFrames” is above e.g. 150, using ‘function ReceivedInstanceFilter’ (see the sketch below). That works pretty well with smaller instances, but large instances lead to the same problem, because Orthanc obviously has to receive the complete instance in memory before invoking ‘ReceivedInstanceFilter’.
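For reference, a minimal sketch of such a filter, assuming the callback signature and the ‘dicom’ tag table documented in the Orthanc Book (the threshold of 150 frames is just an example):

function ReceivedInstanceFilter(dicom, origin)
   -- "NumberOfFrames" is only present in multi-frame instances,
   -- and is exposed as a string, so convert it first
   local frames = tonumber(dicom.NumberOfFrames)
   if frames ~= nil and frames > 150 then
      return false  -- discard the instance
   end
   return true      -- accept everything else
end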

Isn’t there any way to make Orthanc refuse large instances? Or any other mechanism that protects Orthanc from running out of memory? If not, I will upgrade the memory.

Thank you
Kind regards,
Mark

No, right now there’s really no way to make Orthanc refuse large instances. Indeed, Orthanc needs to receive the full file before being able to filter it out, so it needs an amount of RAM at least twice as large as the biggest instance you might receive.

Optimizing memory usage to the level you would expect would require a huge rewrite, and that is not planned for the near future.

As hinted by Sébastien, and as an immediate safety measure, you should try simply increasing your swap space to let the kernel dump parts of the working set of Orthanc to secondary memory (e.g. a hard drive). If you don't have room on any dedicated volume/partition/device, you can create swap files on any available filesystem to add to the pool. You'll also be interested in the kernel "vm.swappiness" runtime parameter, which you might want to tweak.
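For instance, a minimal sketch of creating and enabling a swap file (the 8 GB size and the /swapfile path are only examples; adjust them to your system):

sudo fallocate -l 8G /swapfile   # reserve the file (dd also works)
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile            # format the file as swap space
sudo swapon /swapfile            # add it to the swap pool immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it permanent

sysctl vm.swappiness             # inspect the current value (0-100)
sudo sysctl vm.swappiness=60     # higher values make the kernel swap more eagerly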

Of course, in theory it feels like only a small buffer would be necessary for most transfer and storage operations. If you think you can contribute a patchset, or know how to make one happen, I'm sure it will be welcomed.