Is there specific processing after the results of a FindQuery are moved to a specific host?

Hello everyone,

We are using Orthanc intensively to move DICOM files between multiple points of our network.

Today we have implemented the following mechanism:

  • We perform a FindQuery (API /modalities/{modality}/query) to check whether some data is accessible on the given modality,
  • Then we trigger a retrieve (API /queries/{query}/retrieve) to move the data to the Orthanc instance from which we performed the find call (a rough sketch of these two calls is given below).
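
For reference, here is a minimal sketch of those two calls from Python with the requests library; the base URL, credentials, modality alias, query filter and target AET are placeholder assumptions, not our actual configuration:

import requests

ORTHANC = "http://localhost:8042"   # assumed Orthanc base URL
AUTH = ("orthanc", "orthanc")       # assumed credentials

# Step 1: C-FIND on the remote modality (alias and filter are placeholders)
query = requests.post(
    ORTHANC + "/modalities/REMOTE/query",
    json={"Level": "Study", "Query": {"PatientID": "12345"}},
    auth=AUTH).json()

# Step 2: asynchronous retrieve of all answers towards this Orthanc instance
retrieve = requests.post(
    ORTHANC + "/queries/" + query["ID"] + "/retrieve",
    json={"TargetAet": "ORTHANC", "Asynchronous": True},
    auth=AUTH).json()

job_id = retrieve["ID"]   # Orthanc job identifier used to monitor progress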

This works perfectly as far as the transfer is concerned, but after all those calls succeed (we monitor the Orthanc job progression), we perform a final verification using the lookup API.
For each element we transferred, we trigger a lookup to verify that Orthanc has correctly aggregated the data.

Sometimes those lookup calls do not report the data as present for quite a long time. We retry the call 10 times, waiting 1 second between attempts.
So by long I mean more than 10 seconds here. If I come back later and trigger the API call manually, the data finally appears on the Orthanc server.
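
Concretely, the verification we retry is essentially the following polling loop (a simplified sketch; the URL and credentials are placeholders):

import time
import requests

ORTHANC = "http://localhost:8042"   # assumed base URL
AUTH = ("orthanc", "orthanc")       # assumed credentials

def wait_for_resource(dicom_uid, retries=10, delay=1.0):
    # POST /tools/lookup takes a DICOM identifier as plain-text body and
    # returns the list of matching resources known to Orthanc
    for _ in range(retries):
        answer = requests.post(ORTHANC + "/tools/lookup",
                               data=dicom_uid, auth=AUTH).json()
        if answer:                # non-empty list: the resource is indexed
            return answer
        time.sleep(delay)
    return None                   # still not visible after ~10 seconds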

Is there a specific delay or processing that is performed after the retrieve job has successfully finished?

My goal here is to better understand the internal behavior so we can adapt our verifications.

Thanks
Stéphane

I presume you are doing the retrieve asynchronously ?

If you are using the Python Plug-in, there are events for Jobs like:

import orthanc

def OnChange(changeType, level, resource):
    if changeType == orthanc.ChangeType.JOB_SUCCESS:
        # "resource" contains the identifier of the job that succeeded
        ...

    elif changeType == orthanc.ChangeType.JOB_FAILURE:
        ...

orthanc.RegisterOnChangeCallback(OnChange)

There is a StableStudy event as well, and others.

If you don’t necessarily need to know the job’s progress, but just whether it has completed and whether it succeeded or failed, maybe you could use that to verify the data.

Also, I’m not sure how the StableAge setting might affect that, if at all.

/sds

Hello !

Yes, you’re right: we trigger the different endpoints asynchronously, so we get an Orthanc job identifier, and then we keep track of the progress using that identifier.
When the Orthanc job reaches 100%, we perform some verification calls to ensure that all the data has been correctly ingested.
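
In case it helps clarify the question, our progress tracking is roughly the following (a sketch with a placeholder URL and credentials):

import time
import requests

ORTHANC = "http://localhost:8042"   # assumed base URL
AUTH = ("orthanc", "orthanc")       # assumed credentials

def wait_for_job(job_id, poll_interval=1.0):
    # GET /jobs/{id} exposes "State" (Pending, Running, Success, Failure, ...)
    # and "Progress" (0 to 100)
    while True:
        job = requests.get(ORTHANC + "/jobs/" + job_id, auth=AUTH).json()
        if job["State"] in ("Success", "Failure"):
            return job
        time.sleep(poll_interval)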

We are not using the Python plugin, but if that can help we can certainly take a look. Is there any difference between orthanc.ChangeType.JOB_SUCCESS and a job at 100% progress in the API?

Thanks
Stéphane

I highly recommend that you read up on the capabilities of the Python Plug-in and learn how to use it, since it gives you access to most of the SDK (as of release 3.2 of the plugin, the coverage of the C SDK is about 87%: 138 functions are automatically wrapped in Python out of a total of 158 functions in the Orthanc SDK 1.8.1). Even if you know C++, Python is probably much quicker for development and testing.

Python Plug-In / Orthanc Book

There are a number of examples in the book to get you started. If you are using Docker, it is quite easy to set up. If you are running directly on the host, I should say that I initially had some issues picking the build matching my system’s minor Python version, since the pre-built .so modules are tied to a particular minor version, but you can always compile your own: Compile Python Plug-in

The LSB builds are here: Python LSB’s

Regarding that last question, I’m not sure, and maybe others will have some suggestions. It sounds like you are reporting that the resources that were moved are not actually at their destination, or at least are reported as not being completely there, even though the job is reported as 100% complete? Like I said, I’m not sure how the “Stability” status affects the API call results, and whether 100% means the instances have all been sent and acknowledged but not necessarily completely processed.

Depending upon the scenario there, you might also be interested in this thread: Storage Commitment

and also Lua scripting if you have not explored that: Lua Scripting

/sds

Hi Stéphane,

Can you check your logs to verify that the “Job has completed” message appears after all the “New instance stored” messages (see the example below, in which there are no logs after “Job has completed”)?
If that is not the case, I guess that the modality performing the C-Move is responding with a “complete” status although it has not sent all the instances yet.

Best regards,

Alain.

I1202 14:59:33.836649 StatelessDatabaseOperations.cpp:3093] Overwriting instance: cb129b1e-beb762f3-c16b0f57-4ab4e1bb-7f14ad73
I1202 14:59:33.837207 FilesystemStorage.cpp:259] Deleting attachment “7319e09d-5817-4308-b5f1-e98678152701” of type 1
I1202 14:59:33.837345 ServerContext.cpp:650] New instance stored
I1202 14:59:33.838195 CommandDispatcher.cpp:931] (dicom) Finishing association with AET ORTHANC on IP 127.0.0.1: DUL Peer Requested Release
I1202 14:59:33.838269 CommandDispatcher.cpp:939] (dicom) Association Release with AET ORTHANC on IP 127.0.0.1
I1202 14:59:33.838916 JobsRegistry.cpp:504] Job has completed with success: cb246da9-8e9c-4904-a432-6ffa8cbb61b9

Hello Alain,

Thank you, we’ll take a look at the logs whenever the issue pops up again. It’s not an easy issue to reproduce, because on the same Orthanc, if we retry the job we don’t get the same delay afterwards.

Also, thanks Stephen for pointing out the Python plugin. We already use that plugin for some details; however, we manage a network of Orthanc servers and prefer using the HTTP API, since it allows performing the same operation everywhere without installing specific software alongside Orthanc. Introducing specific Python code would require deploying a new version of that code whenever it changes, and it’s easier on our side to deal with the API.

If we still encounter this issue, we’ll dig into more complex solutions.

Thanks
Stéphane