Handling C-MOVE requests with a high-latency storage (REST/Lua/Plugin)

Hi all,

I am currently experimenting with the StorageArea plugin and the REST API to combine Orthanc with high-latency storages (such as tape or AWS Glacier).
Handling C-STORE doesn’t seem to be a major issue, as long as you keep the JSON header files locally and use a local cache before moving the DICOM files to the final storage.
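For illustration, the write path of my experiment looks roughly like this: a minimal sketch against OrthancCPlugin.h, where the staging directory is made up, error handling is simplified, and the read/remove callbacks plus the asynchronous mover are omitted:

  #include <orthanc/OrthancCPlugin.h>
  #include <stdio.h>

  /* Hypothetical hot cache on local disk; a background job later moves
     the DICOM files (type == OrthancPluginContentType_Dicom), but not
     the JSON summaries, to the cold storage (tape, Glacier, ...). */
  #define STAGING_DIR "/var/lib/orthanc/staging"

  static OrthancPluginErrorCode StorageCreate(const char* uuid,
                                              const void* content,
                                              int64_t size,
                                              OrthancPluginContentType type)
  {
    char path[1024];
    FILE* f;

    snprintf(path, sizeof(path), "%s/%s", STAGING_DIR, uuid);

    f = fopen(path, "wb");
    if (f == NULL ||
        (size > 0 && fwrite(content, (size_t) size, 1, f) != 1))
    {
      if (f != NULL) { fclose(f); }
      return OrthancPluginErrorCode_StorageAreaPlugin;
    }

    fclose(f);
    return OrthancPluginErrorCode_Success;
  }

  /* In OrthancPluginInitialize():
     OrthancPluginRegisterStorageArea(context, StorageCreate,
                                      StorageRead, StorageRemove); */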

The other direction is naturally trickier, since it may take hours before the requested files are available in a local cache to serve the StorageRead function of the storage plugin.
So, two questions:

1/ Is there a timeout when the Orthanc core calls the storage area plugin (StorageRead)? And could a long-running read even block the application as a whole?

2/ If yes, or alternatively: is there a general way (REST, plugin) to hook externally into the C-MOVE → C-STORE loop?

Concerning 2/, I am thinking about a workflow like this (see the sketch below):
On a C-MOVE request: an event provides the move originator and the original request, and triggers an external storage handler.
As soon as the requested files are available on local storage: trigger the final C-STORE via REST, including the move originator etc., to provide the correct context.
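Concretely, the external handler would then fire something like the following once the files are back in the local cache. The move-originator parameters on /modalities/{aet}/store are hypothetical on my side; they are exactly the context that would need to be passed through:

  POST /modalities/TARGET_AET/store

  {
    "Resources" : [ "<orthanc-study-id>" ],
    "MoveOriginatorAet" : "CALLING_AET",
    "MoveOriginatorID" : 42
  }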

Thanks for the help and the great software. I discovered Orthanc only about a month ago and already love it!

Maybe just an additional remark:

I know about OrthancPluginMoveCallback, but as far as I understand it, the driver returned by the callback is invoked by Orthanc directly after the callback has completed, and is responsible for the respective C-STOREs.
The scenario I describe in 2/ would mean stopping Orthanc’s involvement after the query part, and triggering the C-STORE (adding parameters for the move originator AET and message ID), e.g. via REST, once the DICOM files have been retrieved from the external storage to the local cache.
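For reference, this is the shape of that mechanism as I understand it from the SDK header, stubbed down to the bare minimum:

  #include <orthanc/OrthancCPlugin.h>
  #include <stddef.h>

  /* The callback merely allocates a "move driver"; Orthanc then drives
     the C-STOREs itself through GetMoveSize()/ApplyMove(), so there is
     no way to postpone ApplyMove() by hours. */
  static void* MoveCallback(OrthancPluginResourceType resourceType,
                            const char* patientId,
                            const char* accessionNumber,
                            const char* studyInstanceUid,
                            const char* seriesInstanceUid,
                            const char* sopInstanceUid,
                            const char* originatorAet,
                            const char* sourceAet,
                            const char* targetAet,
                            uint16_t originatorId)
  {
    return NULL;  /* would allocate a driver holding the request context */
  }

  static uint32_t GetMoveSize(void* moveDriver)
  {
    return 0;  /* number of C-STORE sub-operations */
  }

  static OrthancPluginErrorCode ApplyMove(void* moveDriver)
  {
    return OrthancPluginErrorCode_Success;  /* performs one C-STORE */
  }

  static void FreeMove(void* moveDriver)
  {
  }

  /* In OrthancPluginInitialize():
     OrthancPluginRegisterMoveCallback(context, MoveCallback,
                                       GetMoveSize, ApplyMove, FreeMove); */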

Hello,

Orthanc is built upon the assumption that it can read from its storage area in a “reasonable time” (i.e. below HTTP timeouts in the REST API, and below TCP timeouts if using the DICOM network protocol). This is fine for “regular” blob storage (“Standard” and “Standard-IA” on S3), but not for S3 Glacier.

In other words, you cannot combine the “real-time” requirements of a DICOM modality/PACS/VNA with the several hours that might be needed to read from Glacier. You are simply considering a use case for which Glacier is not designed:
https://aws.amazon.com/en/s3/storage-classes/

You must re-think your workflow, and carefully ask yourself: How could e.g. a DICOM viewer request the retrieval of an archived DICOM study from Glacier, then be notified when this study has been retrieved, possibly a few hours after the initial request? This is not possible with the DICOM protocol.

In any case, the solution will most probably be to use Orthanc as a local cache, combined with an Orthanc plugin that is in charge of processing a queue of studies to be retrieved from Glacier using background threads. The content of this queue could be specified by REST calls that extend the REST API of Orthanc.
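As a rough sketch of what such an extension of the REST API could look like (the URI is arbitrary, and the queue plus the background worker threads are left out):

  #include <orthanc/OrthancCPlugin.h>

  static OrthancPluginContext* context_ = NULL;

  /* Stub: a real plugin would push onto a thread-safe queue that is
     consumed by background threads restoring studies from Glacier. */
  static void EnqueueForRestore(const char* body, uint32_t size)
  {
  }

  static OrthancPluginErrorCode RestoreCallback(OrthancPluginRestOutput* output,
                                                const char* url,
                                                const OrthancPluginHttpRequest* request)
  {
    if (request->method != OrthancPluginHttpMethod_Post)
    {
      OrthancPluginSendMethodNotAllowed(context_, output, "POST");
      return OrthancPluginErrorCode_Success;
    }

    /* The body would identify the archived study to bring back */
    EnqueueForRestore(request->body, request->bodySize);

    OrthancPluginAnswerBuffer(context_, output, "{}", 2, "application/json");
    return OrthancPluginErrorCode_Success;
  }

  /* In OrthancPluginInitialize():
     OrthancPluginRegisterRestCallback(context_, "/glacier/restore",
                                       RestoreCallback); */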

HTH,
Sébastien-

A second thought about my previous answer: It could indeed be envisioned to have a workflow where the C-FIND/C-MOVE requests are transparently separated from the C-STORE answers emerging after the retrieval from Glacier (possibly a few hours after the C-MOVE).

This would however require exposing the C++ class “Orthanc::DicomUserConnection”, which is implemented by the Orthanc core, in the Orthanc plugin SDK.

We may consider this for inclusion in the long term, but this is clearly not one of our priorities.

Hi Sébastien,

Yup, that’s what I thought.
It would be quite interesting, since it would open the door to using Orthanc as a DICOM front end for object storage in an environment targeted at long-term archiving (mainly storing images, with only a few retrievals).

OK, this is added to our long-term roadmap:
https://bitbucket.org/sjodogne/orthanc/commits/706b60e7ee1e99df13c016f7151581ccfad63e59

We will consider this development as soon as we find funding from the industry.

Hi, I was wondering whether there has been any progress on, or an implementation of, these features in the meantime?
I’d also love to have hot and cold storage (not necessarily Glacier), e.g. with an SSD and a NAS.
Thanks a lot, Dominic

Hi,

Since then, we have implemented a hybrid mode that allows mixing standard disk storage with object storage.
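With the AWS S3 plugin, for instance, the configuration looks roughly like this (key names from memory of the object-storage plugin documentation, so double-check them against the version you install):

  {
    "AwsS3Storage" : {
      "BucketName" : "my-orthanc-bucket",
      "Region" : "eu-west-1",
      // In hybrid mode, both the file system and the object storage are
      // readable; "HybridMode" selects where new files are written
      // ("WriteToFileSystem" = hot disk, "WriteToObjectStorage" = cold)
      "HybridMode" : "WriteToFileSystem"
    }
  }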

HTH,

Alain

Hi Alain,
thank you for your message and the information about the Cloud Object Storage plugins.
I was wondering whether it is also possible to have local cold storage, e.g. if I wish to use a local NAS for that purpose? If I understand correctly, those plugins will only work with AWS S3, Azure, or Google, correct?
Thanks a lot, Dominic

Indeed, but you should be able to install the S3 API on top of a standard storage thanks to MinIO.
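Untested on my side, but pointing the S3 plugin at a local MinIO server should look something like this (adapt the endpoint and credentials; MinIO typically requires path-style addressing, hence disabling virtual addressing):

  {
    "AwsS3Storage" : {
      "BucketName" : "orthanc",
      "Region" : "us-east-1",
      // Point the plugin at the local MinIO server instead of AWS
      "Endpoint" : "http://my-nas:9000",
      "AccessKey" : "minioadmin",
      "SecretKey" : "minioadmin",
      // MinIO needs path-style (non-virtual-host) addressing
      "VirtualAddressing" : false
    }
  }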

Hi Alain,
thank you very much for your support! I will look into it.
BR, Dominic