Does DicomWeb Plugin support buffering response?

Hi Authors,

I have a scenario where the DICOM file is a DSA file, which is very large (~1 GB). If we retrieve the DICOM file via DicomWeb, the plugin reads the whole file before responding to the client. In this case the server may become overloaded (RAM and CPU usage increase). If DicomWeb had a mechanism to read the file in chunks and respond to the client immediately, that would reduce the RAM and CPU load.
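
For illustration, this is the kind of mechanism I mean: a generic, hypothetical handler that reads the file in fixed-size chunks and hands each one to the HTTP layer immediately, so memory stays bounded by the chunk size rather than the file size (a plain Python sketch, not Orthanc code):

```python
# Generic sketch of the requested behavior, not Orthanc code: memory usage
# stays bounded by CHUNK_SIZE instead of the full ~1 GB file size.
CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk

def stream_file(path):
    """Yield the file piece by piece instead of loading it all at once."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            yield chunk  # hand each chunk to the HTTP layer immediately
```

Most HTTP stacks can send such a generator to the client with `Transfer-Encoding: chunked`.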

Thanks,
Chris

Hi,

The WADO-RS /dicom/studies/{id} route implements chunked HTTP transfer.

However, each chunk contains a single instance, so if you have large instances, that won’t help and you will indeed still need a lot of memory.
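
If it helps, the client can at least avoid buffering the whole multipart response on its side. A minimal sketch with Python’s requests library (the base URL assumes the plugin’s default /dicom-web root, and the study UID is a placeholder; adjust both to your setup):

```python
import requests

# Placeholder study UID and assumed default DICOMweb root: adjust to your setup.
url = "http://localhost:8042/dicom-web/studies/<StudyInstanceUID>"
headers = {"Accept": 'multipart/related; type="application/dicom"'}

# stream=True lets requests consume the chunked transfer incrementally,
# so the client never holds the whole study in memory at once.
with requests.get(url, headers=headers, stream=True) as r:
    r.raise_for_status()
    with open("study.multipart", "wb") as out:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            out.write(chunk)
```

Note that this only bounds the client’s memory: the server still buffers each full instance before sending it.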

So, to summarize, Orthanc does not implement “streaming” from disk to HTTP response.

Best regards,

Alain.

Hi Alain,
Thanks a lot for your answer. Do you have any suggestions for implementing such a feature (streaming from disk to the HTTP response)? Or do you have a roadmap for implementing it?

Thanks,
Chris

Well, I don’t have any simple suggestions because that would involve a huge Orthanc refactoring.

This is on our long-term TODO list (aka wish list :wink: ), but it probably won’t be implemented for many years …

Thanks Alain,

If the Orthanc core exposed an API that returns the file location, I think a developer could implement a streaming response to the client. Do you think that would be a better approach?

Yes, that could work for reading from the Orthanc storage. You can actually already get the instance attachment id from the REST API: http://localhost:8042/instances/aa6a6975-015289cc-844407b3-ede43544-39e4d8a4/attachments/dicom/info will give you the file UUID.

From that, you can guess its location (the first 4 characters define the directories).
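
To make that concrete, here is a hedged sketch chaining the two steps: query the attachment info over the REST API, then derive the on-disk path from the UUID. It assumes the default filesystem storage area with StorageCompression disabled; the Orthanc URL, storage root and instance id below are placeholders:

```python
import os
import requests

ORTHANC = "http://localhost:8042"          # assumption: local Orthanc, no auth
STORAGE_ROOT = "/var/lib/orthanc/db"       # assumption: default storage area
INSTANCE_ID = "aa6a6975-015289cc-844407b3-ede43544-39e4d8a4"  # placeholder

# Step 1: ask the REST API for the attachment's file UUID
# ("Uuid" is the field returned by the attachment info route; verify
# against your Orthanc version).
info = requests.get(
    f"{ORTHANC}/instances/{INSTANCE_ID}/attachments/dicom/info"
).json()
uuid = info["Uuid"]

# Step 2: the first 4 characters of the UUID give the two directory levels.
path = os.path.join(STORAGE_ROOT, uuid[0:2], uuid[2:4], uuid)
print(path)
```

From there, a small sidecar service or reverse proxy could stream the file directly to the client.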

However, for writing to Orthanc storage, that would not work for now (we would need a way for Orthanc to “adopt” a file that another service would have written in its storage).

Hope this helps,

Alain.

The comment that follows might not be worth much, but I’ll make it anyway.

Maybe you are in the same situation as I am, where the problem is not so much the peak memory usage as the sustained memory usage: the fact that this unused memory is not always reclaimed. When uploading, zipping or retrieving images or archives, the big memory buffers allocated by Orthanc are not always returned to the OS (due to how heap management works).

Since some cloud services are billed based on sustained memory usage (e.g. EKS), keeping 2 GB around for a long time has a very nasty effect on the monthly AWS bill…

I have considered a solution where a single main Orthanc would run permanently but would only be used to perform find requests, retrieve the list of series for a patient, read or write metadata, etc. In short, all the “non-bulky” operations.

As soon as a bulky/costly operation must be performed (in my case, posting .zip files to /instances or retrieving from /archive), the plan is to spin up another ephemeral Orthanc process (on the same shared database, which means that you need to use Postgres), perform the operations, and then, once everything is done, shut it down. I rely on the fact that stopping a process fully reclaims its memory footprint.
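
For what it’s worth, the orchestration could look roughly like this. Everything here is hypothetical: the config path, port and study id are placeholders, and it assumes both Orthanc instances point to the same PostgreSQL index:

```python
import subprocess
import time
import requests

# Hypothetical second configuration file pointing the ephemeral Orthanc
# to the same PostgreSQL database as the main one, but binding another
# HTTP port (e.g. 8043).
EPHEMERAL_CONFIG = "/etc/orthanc/ephemeral.json"   # placeholder
EPHEMERAL_URL = "http://localhost:8043"            # placeholder

proc = subprocess.Popen(["Orthanc", EPHEMERAL_CONFIG])
try:
    # Wait until the ephemeral instance answers on its REST API.
    for _ in range(60):
        try:
            requests.get(f"{EPHEMERAL_URL}/system").raise_for_status()
            break
        except requests.ConnectionError:
            time.sleep(1)

    # Perform the bulky operation here, e.g. retrieve a study archive
    # ("some-study-id" is a placeholder).
    with requests.get(f"{EPHEMERAL_URL}/studies/some-study-id/archive",
                      stream=True) as r:
        r.raise_for_status()
        with open("study.zip", "wb") as out:
            for chunk in r.iter_content(chunk_size=1024 * 1024):
                out.write(chunk)
finally:
    # Stopping the process returns its whole memory footprint to the OS.
    proc.terminate()
    proc.wait()
```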

Of course, the effect on request latency is brutal. This is also a complex solution, but for some use cases it could be a valid approach.

(Please note that I have started putting the parts and scripts of this project together, but I still don’t have a working solution: this is a side pet project… so I haven’t been able to measure the actual memory gains.)

In case this can be useful for you or someone else…