autorouting with retry?

I have been using Hermes (https://hermes-router.github.io/) as our DICOM router. It works well but is apparently a dead project.

I was hoping to replace it with Orthanc. While it’s trivial to write a simple autorouting Lua or Python script - just a couple of lines - it’s not so trivial to handle problems. For instance, if one of the destinations is down, Hermes is smart enough to hold onto instances and keep retrying later.
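For context, a minimal version of such a script with the Python plugin could look like the sketch below (the destination modality "pacs" is just a placeholder); it is exactly this kind of fire-and-forget forwarding that falls over when a destination is down:

import orthanc

def OnStoredInstance(dicom, instanceId):
    # Forward every received instance to the "pacs" modality declared
    # in the "DicomModalities" configuration option
    orthanc.RestApiPost('/modalities/pacs/store', instanceId)

orthanc.RegisterOnStoredInstanceCallback(OnStoredInstance)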

(1) Has anyone already done the work of implementing a more reliable router with retries etc. on top of Orthanc?

(2) If not, is anyone aware of other open source DICOM routers?

Hi Daniel,

That is something I have in mind for OrthancToolsJS.
Currently we already have an authentication system, a backend that deals with the Orthanc APIs where we can inject custom logic, and a monitoring system for the DICOM instances received by Orthanc (which is used to trigger CD burning on Epson/Primera disc producers).
So technically, maybe half of the work needed to build a DICOM router is already done.

I have not yet settled on my strategy for OrthancToolsJS for next summer. These days I'm focusing on refactoring and debugging; this summer we should get back to building new features.

If you are in an academic environment, you should look into whether you can get a student developer to contribute to OrthancToolsJS. If you have some coding resources, we would be able to follow their work and guide them to build this feature so that it gets merged into the mainline; a 6-month internship with a student who has basic knowledge of JavaScript should be enough.

In my roadmap for OrthancToolsJS, the next two main features I want are DICOM resource tagging / DICOM resource-based access control, and autorouting.
As DICOM tagging is more needed for research purposes in my French community, I will prioritize that one for this summer (it will also depend on whether I am allowed to hire an apprentice developer to work with me).

Best regards,

Salim

Hello,

You have two approaches to deal with routing errors:

(1) Monitor the status of the C-STORE jobs using the REST API, the Lua SDK (using the OnJobFailure() callback) or the C++/Python plugin SDK (using OrthancPluginRegisterOnChangeCallback() with the OrthancPluginChangeType_JobFailure event), and possibly retry them in the case of an error, for instance along the lines of the sketch below:

https://book.orthanc-server.com/users/advanced-rest.html#monitoring-jobs
https://book.orthanc-server.com/plugins/python.html#listening-to-changes
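To illustrate approach (1), here is a minimal sketch of a Python plugin that listens to job failures and blindly resubmits the failed job through the /jobs/{id}/resubmit route (the absence of any retry limit or back-off is a simplification; a real router would cap the number of attempts):

import orthanc

def OnChange(changeType, level, resource):
    # For job-related events, "resource" contains the job identifier
    if changeType == orthanc.ChangeType.JOB_FAILURE:
        orthanc.LogWarning('Job %s has failed, resubmitting it' % resource)
        # Ask Orthanc to run the same job again (no retry limit here)
        orthanc.RestApiPost('/jobs/%s/resubmit' % resource, '')

orthanc.RegisterOnChangeCallback(OnChange)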

(2) Implement a short Python plugin that would use a small side database to store the routing status of the individual instances, periodically retrying the failed ones (this could probably fit in less than 100 lines of code):
https://book.orthanc-server.com/plugins/python.html
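To illustrate approach (2), here is a rough, untested sketch, assuming a single destination modality named "pacs" in the Orthanc configuration: every received instance is recorded in a small SQLite side database, and the instances that have not been routed yet are retried every minute:

import sqlite3
import threading
import orthanc

# Small side database holding the routing status of each instance
db = sqlite3.connect('/var/lib/orthanc/routing.db', check_same_thread=False)
db.execute('CREATE TABLE IF NOT EXISTS queue(instance TEXT PRIMARY KEY, sent INTEGER)')
lock = threading.Lock()

def OnStoredInstance(dicom, instanceId):
    # Queue every received instance for routing
    with lock:
        db.execute('INSERT OR IGNORE INTO queue VALUES(?, 0)', (instanceId,))
        db.commit()

def RetryPending():
    with lock:
        pending = [row[0] for row in
                   db.execute('SELECT instance FROM queue WHERE sent = 0')]
    for instanceId in pending:
        try:
            # "pacs" is an assumed modality from the Orthanc configuration
            orthanc.RestApiPost('/modalities/pacs/store', instanceId)
            with lock:
                db.execute('UPDATE queue SET sent = 1 WHERE instance = ?', (instanceId,))
                db.commit()
        except Exception:
            orthanc.LogWarning('Routing of %s failed, will retry later' % instanceId)
    # Schedule the next pass in 60 seconds
    timer = threading.Timer(60, RetryPending)
    timer.daemon = True
    timer.start()

orthanc.RegisterOnStoredInstanceCallback(OnStoredInstance)
RetryPending()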

Sébastien-

I have done a combination of what Sébastien recommends. I have configured Orthanc with two ways to communicate, for example standard C-MOVE and the Transfer Accelerator, or two different destination servers.
I install a copy of Mirth to manage the transmissions: if Orthanc fails to send the study via C-MOVE/C-STORE, a Lua script sends a message to Mirth, which can then raise an alert and re-submit the job as a Transfer Accelerator job, or as a job to a backup gateway.
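For what it's worth, a minimal sketch of that kind of setup with the Python plugin could look like this; the Mirth HTTP listener URL is a hypothetical example, and the actual alerting and re-submission logic would live on the Mirth side:

import urllib.request
import orthanc

# Hypothetical Mirth Connect HTTP Listener channel, adjust to your setup
MIRTH_URL = 'http://mirth:8080/orthanc-failures'

def OnChange(changeType, level, resource):
    if changeType == orthanc.ChangeType.JOB_FAILURE:
        # Retrieve the details of the failed job and forward them to Mirth,
        # which can then alert and/or trigger an alternate transfer
        job = orthanc.RestApiGet('/jobs/%s' % resource)
        if not isinstance(job, bytes):
            job = job.encode('utf-8')
        request = urllib.request.Request(MIRTH_URL, data=job,
                                         headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(request)

orthanc.RegisterOnChangeCallback(OnChange)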

Hello friends!

I implemented a solution similar to Brian's and Sébastien's a couple of years ago. The Transfer Accelerator was still in development at the time, so sadly it could not be used.

You could use an alternative approach based on the /changes URI, but AFAIK you'd have to deal with the DELETE method, at least for one of the more robust possibilities I can think of.
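For what it's worth, a rough sketch of that kind of external poller, assuming a local Orthanc on the default port with no authentication and a destination modality named "pacs"; it keeps its own cursor into the changes log and only advances it once an instance has been sent successfully, so failed instances are retried on the next pass (clearing processed entries with DELETE /changes would be another option):

import time
import requests

ORTHANC = 'http://localhost:8042'   # assumed local Orthanc, default port, no authentication
MODALITY = 'pacs'                   # assumed destination modality

since = 0  # persist this cursor somewhere in a real deployment

while True:
    r = requests.get(ORTHANC + '/changes', params={'since': since, 'limit': 100})
    r.raise_for_status()
    changes = r.json()

    ok = True
    for change in changes['Changes']:
        if change['ChangeType'] == 'NewInstance':
            try:
                # Forward the new instance to the destination modality
                requests.post('%s/modalities/%s/store' % (ORTHANC, MODALITY),
                              data=change['ID']).raise_for_status()
            except requests.RequestException:
                # Do not advance the cursor, so this instance is retried later
                ok = False
                break
        since = change['Seq']

    if not ok or changes['Done']:
        time.sleep(10)  # wait before polling or retrying again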

Cheers