Orthanc slow at saving DICOM files sent from a modality

Hi everyone,
I have a system with Orthanc + PostgreSQL. I currently have about 3 TB of DICOM files stored on a network NAS, while the DB holds about 22 GB of data.

I noticed that when I send exams from the mammography unit or the MRI, the transfer is very slow: the images arrive roughly one at a time.

The same exam sent to two other PACS (not Orthanc) is delivered very quickly. Both PACS reside on the same network, are connected to the same switch, and use the same storage.

What could be the problem? What tests could I run to identify the bottleneck?

It looks as if a separate send request is made for each file, as if something were preventing multiple images from being sent simultaneously.

Hi,

If the modality opens a single DICOM association to Orthanc, it is normal that the files are received and written one after the other.

Concerning the speed, NAS CPUs are usually quite slow, so if transcoding is involved, that might slow everything down.

On a NAS, if your DB is not stored on an SSD and your RAM is small, the DB can be very slow as well…

In general, check your logs in verbose mode; they show some timing info.

Best regards,

Alain.

The other DICOM nodes also use the same NAS, and the same modalities send to them without any problem.
The NAS is an enterprise QNAP model with 32 GB of RAM and an 8-core/16-thread Ryzen 7 CPU, so the NAS itself is unlikely to be the problem; otherwise the other DICOM nodes would show the same behaviour.

I’ll try a run in verbose mode to understand what is happening.

I looked at the logs while the MRI was sending the DICOM files, but there are no errors and I don’t see anything strange.
The DICOM images continue to arrive very slowly, unlike on the other nodes.
I also made a video showing how the Orthanc node (SMARTPACS) receives very slowly while the others (GEPACS, ASTER, OSIRIX) receive without problems.

I just can’t figure out where the problem is.

Below is the link to the log file and the video that shows the difference in speed.

Log and Video

If you look at your logs here, you’ll see that each file takes around 100 ms to store in Orthanc.

I0528 17:01:00.787850          DICOM-5 main.cpp:353] Incoming Store request from AET MR450W on IP 10.200.200.70, calling AET SMARTPACS
I0528 17:01:00.816299          DICOM-5 FilesystemStorage.cpp:126] Creating attachment "1c466982-319c-4e39-90d1-0bf9d3dff6d5" of "DICOM" type
I0528 17:01:00.826973          DICOM-5 FilesystemStorage.cpp:156] Created attachment "1c466982-319c-4e39-90d1-0bf9d3dff6d5" (52.36KB in 10.65ms = 40.27Mbps)
I0528 17:01:00.827789          DICOM-5 FilesystemStorage.cpp:126] Creating attachment "c8b0f63a-eea4-4076-bb5b-0892461b62b4" of "DICOM until pixel data" type
I0528 17:01:00.875178          DICOM-5 FilesystemStorage.cpp:156] Created attachment "c8b0f63a-eea4-4076-bb5b-0892461b62b4" (5.53KB in 47.35ms = 956.22kbps)
I0528 17:01:00.888712          DICOM-5 ServerContext.cpp:762] New instance stored (a47167fb-41c50411-e509add8-49ab05b4-1c0d40cd)

Once you send the same study again, Orthanc replaces the files; to do so, it needs to delete the old files, and you’ll observe that deletions are very slow on your system (I don’t know why).

I0528 18:03:51.957464         DICOM-13 main.cpp:353] Incoming Store request from AET MR450W on IP 10.200.200.70, calling AET SMARTPACS
I0528 18:03:51.978827         DICOM-13 FilesystemStorage.cpp:126] Creating attachment "f383e393-951d-474e-8867-fa95449a61f0" of "DICOM" type
I0528 18:03:51.998270         DICOM-13 FilesystemStorage.cpp:156] Created attachment "f383e393-951d-474e-8867-fa95449a61f0" (62.68KB in 19.41ms = 26.46Mbps)
I0528 18:03:51.999588         DICOM-13 FilesystemStorage.cpp:126] Creating attachment "39839d65-26ce-404e-a3a2-66faeaaad119" of "DICOM until pixel data" type
I0528 18:03:52.046099         DICOM-13 FilesystemStorage.cpp:156] Created attachment "39839d65-26ce-404e-a3a2-66faeaaad119" (5.63KB in 46.48ms = 992.00kbps)
I0528 18:03:52.048530         DICOM-13 FilesystemStorage.cpp:266] Deleting attachment "f383e393-951d-474e-8867-fa95449a61f0" of type 1
I0528 18:03:52.499831         DICOM-13 FilesystemStorage.cpp:266] Deleting attachment "39839d65-26ce-404e-a3a2-66faeaaad119" of type 3
I0528 18:03:53.053312         DICOM-13 ServerContext.cpp:766] Instance already stored (d65f9b16-d6f647aa-209cfe8e-29e16f19-7c435cca)

You may try to enable the delayed-deletion plugin and check whether that helps (if the storage is slow, it will help; if the DB is slow, it won’t, but at least we’ll know which one is the culprit).
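For reference, enabling it should look something like this in orthanc.json; the library path is a placeholder for wherever the plugin is installed on your system, and the exact option names should be double-checked against the plugin documentation:

{
  "Plugins" : [ "/path/to/the/DelayedDeletion/plugin" ],   // placeholder path to the plugin library
  "DelayedDeletion" : {
    "Enable" : true    // queue file deletions and process them asynchronously instead of blocking the store
  }
}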

HTH,

Alain

But I didn’t delete anything. As you say, each file takes about 100 ms, but from the video you can see that unfortunately this is not the case. The deletions come from another test, in which I sent the same images again.

But you agree that, at some points in your logs, you receive an instance every 100 ms and at other points you receive one instance per second, right?

I cannot analyze 10 hours of logs if I don’t know what you have been testing and when.
You should isolate a single test in each log file: a first transmission of a study that is not yet in Orthanc, and then a re-send.

Yes, sometimes I receive multiple instances quickly and sometimes one every second; most of the time it is one per second.

You’re right about the logs; I’ll run tests with more precise logs, both with a new send and with re-sends that overwrite existing data.

Thank you.

Greetings,
I managed to run a test with two exams: one is a breast MRI, the other an abdominal MRI.
I did not send any other data, and I split the logs for greater clarity.

Link New Log

Well, you’ve got the numbers:

There is about 200 ms between the first TCP packet and the writing of the file to disk, so:

  • either the network is slow
  • or storage compression is slow and you should try disabling it (see the snippet below).
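If you want to try the latter, it is a single option in orthanc.json (restart Orthanc after changing it):

{
  // Disable the transparent compression of the stored DICOM instances
  "StorageCompression" : false
}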

Best regards,

Alain.

Thank you first of all for your response.
The problem is definitely not the network, since other machines on the same network send without problems.

On the DICOM server I had activated compression at 90% using the default parameters.
I will try disabling compression, even though it seems strange, since the server machine has fairly good hardware.

I’ll do this test and let you know if everything is ok.
Thank you.

I disabled compression and now the system works much better, but it is still not as fast as the other DICOM nodes on the same network.

Thanks

Hello, did you manage to find a solution, or identify the cause of the problem? I’m having the same issue. In fact, I installed a different DICOM server on the same machine, and it receives faster than Orthanc, so it’s definitely not the network or the hardware.

By the way, how do I disable compression in orthanc.json? Which parameter is it?
This one?

// Enable the transparent compression of the DICOM instances
"StorageCompression" : false,

Hello,

Unfortunately, turning off the compression improved the performance slightly, but not significantly.

It is still slow compared to the other DICOM nodes.

I did a lot of testing, but without success.

You may analyze the logs again after enabling the PostgreSQL logs with this option: "EnableVerboseLogs": true in the "PostgreSQL" section, and check the timings (see the sketch below).
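For reference, the option goes into the existing "PostgreSQL" section of orthanc.json; the connection parameters below are placeholders, keep your own values:

"PostgreSQL" : {
  "EnableIndex" : true,
  "Host" : "localhost",        // placeholder: your actual DB host
  "Port" : 5432,
  "Database" : "orthanc",      // placeholder
  "Username" : "orthanc",      // placeholder
  "Password" : "orthanc",      // placeholder
  "EnableVerboseLogs" : true   // prints verbose logs of the DB operations so the timings can be checked
}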

Hi,

That’s how I would approach this issue: can you maybe try the same operation with a freshly instantiated Orthanc using the (default) DB and storage on a local SSD? You could first start with the default configuration and, if this new Orthanc works well, gradually modify it to bring its configuration closer to your actual production setup, in terms of DB and storage location (NAS vs SSD, SQLite vs PostgreSQL…) and in terms of configuration, Lua/Python callbacks, etc., and try to pinpoint what exactly is hurting the performance.
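As a starting point, here is a minimal orthanc.json sketch for such a test instance, with the default SQLite index and the storage on a local SSD; the name, AET, ports and paths are placeholders to adapt:

{
  "Name" : "ORTHANC-TEST",                              // placeholder
  "StorageDirectory" : "/ssd/orthanc-test/storage",     // local SSD path (placeholder)
  "IndexDirectory" : "/ssd/orthanc-test/index",         // default SQLite index on the same SSD (placeholder)
  "StorageCompression" : false,
  "DicomAet" : "ORTHANCTEST",                           // placeholder AET
  "DicomPort" : 4243,                                   // placeholder, avoid clashing with production
  "HttpPort" : 8043                                     // placeholder
}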

You can also turn on verbose logs, as indicated by Alain, and match the log entries between the fast and slow Orthanc instances to investigate which specific step causes the bottleneck.

I enabled PostgreSQL logging as directed, and I will send the log in the next message.

Furthermore, I am preparing a secondary machine to carry out tests in parallel with the current machine in production.
I’ll definitely do some testing next week.

Thank you.