Hello there!
We encountered a strange issue recently.
We use Orthanc to manage a decentralized network of DICOM storage servers. We rely heavily on jobs to move DICOM files around the network: with the TransferAccelerator plugin between Orthanc instances, and with standard DICOM operations towards the other DICOM storage servers.
One of our users triggered an operation that led to the creation of 750 jobs on one Orthanc instance.
We thought this wouldn't be an issue, because our configuration only allows 4 parallel jobs: processing those 750 jobs would simply take some time, thanks to the queue system.
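For context, here is the relevant part of our configuration (as far as I understand, the 4 parallel jobs are governed by the standard ConcurrentJobs option; the excerpt below only shows that setting, the rest of the file is untouched):

```json
{
  "ConcurrentJobs" : 4
}
```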
The server was brought to its knees: about 48 GB of RAM were consumed (that's almost 100% on that machine…) and the CPU was very busy too. A lot of jobs failed, we think because of these performance problems…
One important note: we rely heavily on the jobs API to keep track of progress. We call this API for each job individually, so with that many jobs the number of requests grows quickly…
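To make this concrete, here is roughly what our progress tracking does (a simplified sketch, not our actual code; the URL, credentials and polling interval are placeholders):

```python
import time
import requests

ORTHANC = "http://localhost:8042"   # placeholder URL for the Orthanc REST API
AUTH = ("user", "password")         # placeholder credentials

def poll_jobs(job_ids, interval=2.0):
    """Poll every job individually until each one reaches a final state."""
    pending = set(job_ids)
    while pending:
        for job_id in list(pending):
            # One GET /jobs/{id} per job and per cycle: with ~750 jobs,
            # this quickly adds up to a lot of requests against Orthanc.
            job = requests.get(f"{ORTHANC}/jobs/{job_id}", auth=AUTH).json()
            if job["State"] in ("Success", "Failure"):
                pending.discard(job_id)
            else:
                print(job_id, job["State"], f'{job["Progress"]}%')
        time.sleep(interval)
```

If this per-job polling is part of the problem, we could switch to a single GET /jobs?expand request per cycle instead, but we would first like to understand where the memory goes.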
We are not sure what caused these performance issues. Is there any specific processing applied to a job as soon as it is posted to the server? Or could the jobs API itself be the root cause?
I know we are heavy Orthanc users and that our case is not a common one… If anything is unclear, we can give more details.
Thanks
Stéphane