Hello everyone,
I've been having memory problems with Orthanc, in particular during downloads.
I use the Docker image osimis/orthanc:20.9.5 and, after several problems on a dedicated VM with 4 GB of RAM, I decided to test on a more powerful server.
As expected, an OutOfMemory error (HTTP 500) appears with large studies. Here is what I observed with a 3.75 GB study (15 series, 7463 images):
- the ZIP (2 GB) is successfully created in about 10 minutes, with 280 MB average memory usage
- when the ZIP is ready for download → 500 OutOfMemory, because for a while Orthanc tries to use about 5 GB of RAM
I'm wondering:
- how Orthanc actually works when generating the ZIP and preparing the download;
- whether there is an option to control the jobs queue (not only for large studies, but also to manage multiple concurrent jobs, which produce the same issue).
Thank you so much
Jacopo
I'm continuing my tests and would like to share the following with you:
when Orthanc (same configuration as above) goes OutOfMemory, it automatically restarts the same job.
I hope this can be useful to help me.
Thank you
Jacopo
Orthanc uses the zlib library to generate the ZIP file. It zips the DICOM files in memory, so if the study is large, the available RAM must be large enough.
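To illustrate the consequence, here is a toy Python sketch of that in-memory pattern (this is not Orthanc's actual code, just an illustration): the whole compressed archive sits in a RAM buffer until it is sent, so memory usage grows with the size of the archive rather than staying constant.

```python
import io
import zipfile

# Build the whole archive inside a RAM buffer, as an in-memory zipper does
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED) as archive:
    for i in range(3):
        # Stand-ins for DICOM instances; real files would be megabytes each
        archive.writestr('instance-%d.dcm' % i, b'\x00' * 1024)

# Nothing has been written to disk: the full ZIP is held in memory
print('archive of %d bytes held entirely in RAM' % len(buffer.getvalue()))
```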
Thank you so much for your response.
OK, it's clear that the ZIP file is stored in RAM, thank you again.
I'm trying to use Orthanc's job management, in particular with this configuration (still in Docker):
"JobsHistorySize" : 10,
"LimitJobs" : 1
with asynchronous calls to the following endpoint:
http://ip:port/studies/study_id/archive
In this way Orthanc builds a queue of pending jobs: when one job completes, the next one can be processed.
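For clarity, this is a minimal sketch of the asynchronous call I mean, assuming the POST flavor of the archive route (which returns a job ID instead of the ZIP itself) and the Python requests library; host, port and study ID are placeholders:

```python
import time
import requests

BASE = 'http://ip:port'  # placeholder

# Submitting with "Asynchronous": true enqueues a job instead of
# streaming the ZIP back synchronously
job_id = requests.post(BASE + '/studies/study_id/archive',
                       json={'Asynchronous': True}).json()['ID']

# Poll the job until Orthanc has finished building the archive
while True:
    state = requests.get(BASE + '/jobs/' + job_id).json()['State']
    if state in ('Success', 'Failure'):
        break
    time.sleep(2)
```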
The problems are:
- In the jobs queue, Orthanc does not consider downloads, only archive generation: when a zipping operation is done, the completed ZIP file is kept in RAM while the next (zipping) job already starts. I want to avoid multiple simultaneous operations (zipping/downloading). Is there a solution to manage downloads like archiving jobs?
- When Orthanc fails due to OutOfMemory, the jobs being processed fail, but they resubmit themselves. Is there a solution to avoid this automatic retry/resubmission?
I read this happens because Orthanc saves its jobs in its database (even if I use filesystem indexing?), so once it is up again after the OutOfMemory, the failed jobs start again.
I also read that the SaveJobs option controls this behavior, but even with it set to false I can't avoid the automatic retry.
https://book.orthanc-server.com/users/advanced-rest.html
“By default, Orthanc saves the jobs into its database (check out the SaveJobs option). If Orthanc is stopped then relaunched, the jobs whose processing was not finished are automatically put into the queue of pending tasks. The command-line option --no-jobs can also be used to prevent the loading of jobs from the database upon the launch of Orthanc.”
Maybe I don't understand it well; I hope someone can help me.
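In the meantime, the workaround I'm considering is to cancel the reloaded jobs myself right after a restart, something like this sketch (it assumes the standard /jobs routes of the REST API; host and port are placeholders):

```python
import requests

BASE = 'http://ip:port'  # placeholder

# Right after Orthanc restarts, cancel every job that was reloaded from
# the database so that nothing is retried automatically
for job_id in requests.get(BASE + '/jobs').json():
    job = requests.get(BASE + '/jobs/' + job_id).json()
    if job['State'] in ('Pending', 'Retry'):
        requests.post(BASE + '/jobs/' + job_id + '/cancel', json={})
```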
Thank you so much
Jacopo
Hi Jacopo,
I would say:
1/ You can develop a plugin which handles the archive/ZIP jobs. In my case, I developed a plugin that zips JPEG files (Orthanc itself only zips the DICOM files).
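For instance, here is a skeleton of such a plugin written against the official Python plugin (the route name is arbitrary, the archive content is a placeholder, and note the ZIP is still built in memory):

```python
import io
import json
import zipfile

import orthanc  # module provided by the official Orthanc Python plugin

def OnCustomArchive(output, uri, **request):
    study_id = request['groups'][0]
    # Ask the Orthanc core for the instances of the study
    instances = json.loads(
        orthanc.RestApiGet('/studies/%s/instances' % study_id))

    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED) as archive:
        for instance in instances:
            # Here one could fetch rendered images instead, to zip
            # JPEG/PNG renderings rather than the raw DICOM files
            dicom = orthanc.RestApiGet('/instances/%s/file' % instance['ID'])
            archive.writestr('%s.dcm' % instance['ID'], dicom)

    output.AnswerBuffer(buffer.getvalue(), 'application/zip')

orthanc.RegisterRestCallback('/studies/(.*)/custom-archive', OnCustomArchive)
```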
2/ The book says:
“By default, Orthanc saves the jobs into its database (check out the SaveJobs option). If Orthanc is stopped then relaunched, the jobs whose processing was not finished are automatically put into the queue of pending tasks. The command-line option --no-jobs can also be used to prevent the loading of jobs from the database upon the launch of Orthanc.”
This means that if you set the option "SaveJobs" to false in the JSON configuration (orthanc.json), then the unfinished jobs are not automatically reloaded when Orthanc is restarted.
Hello!
Thank you so much for your reply.
Yes, you're right about handling jobs (ZIPs as well as other tasks) with my own script. I think I'll try to manage the different kinds of tasks myself.
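For the record, the kind of script I have in mind looks like this sketch: it processes the studies strictly one at a time, and only starts the next archive once the previous ZIP has been fully downloaded and is out of Orthanc's hands. Host, port and study IDs are placeholders, and it assumes that in asynchronous mode the resulting ZIP is exposed by the job itself on /jobs/{id}/archive:

```python
import time
import requests

BASE = 'http://ip:port'  # placeholder
STUDIES = ['study_id_1', 'study_id_2']  # hypothetical study IDs

for study in STUDIES:
    # One archive job at a time: submit, then wait for it to finish
    job_id = requests.post(BASE + '/studies/%s/archive' % study,
                           json={'Asynchronous': True}).json()['ID']

    while True:
        state = requests.get(BASE + '/jobs/' + job_id).json()['State']
        if state in ('Success', 'Failure'):
            break
        time.sleep(5)

    if state == 'Success':
        # Stream the finished ZIP to disk before moving to the next study
        r = requests.get(BASE + '/jobs/%s/archive' % job_id, stream=True)
        with open(study + '.zip', 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024 * 1024):
                f.write(chunk)
```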
About SaveJobs: it didn't work because I was using an older version of the dockerized Orthanc.
Here is another conversation which helped me: https://groups.google.com/g/orthanc-users/c/6xlxFqSbx_c
I'm monitoring a new installation with this configuration, and the RAM problems have not occurred so far.
Thank you so much
Jacopo