Job history combined with auto forwarding

Hey, I’ve got an issue arising from the combination of heavy use of asynchronous jobs (monitored by an external application) and a simple Lua script that auto-forwards instances.

My external application uses the Orthanc REST API to poll the status of those jobs, and only moves on to other things once a job is known to have succeeded.

We have recently introduced auto-forwarding for some studies, and some of these are large CTs with thousands of instances. Each of those instances creates its own forwarding job.

The result is that the other jobs my external application is tracking get dropped from memory before they are observed to finish (I am polling the API every 20 seconds for each job I care about).

I’ve got around this for the moment by asking Orthanc to keep 5000 jobs in memory.

Has anyone else come across this issue, or do you have any advice on how best to work with it? I don’t mind having 5000 jobs in memory, but I believe I may still encounter the issue if, for example, two large studies are auto-forwarding at once. I am also aware that 5000 jobs in memory is way more than the default provided in the config, so I am unsure whether this is good practice either.

Thanks very much!

Hi Tim,

I think your approach is correct. A few workarounds:

  • perform auto-forwarding at the series or study level to minimize the number of jobs (see the sketch after this list). However, that would introduce some delay because you would have to wait for a “STABLE_STUDY” or “STABLE_SERIES” event.
  • perform auto-forwarding with another Orthanc instance that would be dedicated to that task. Possibly, if you are using PostgreSQL, they could be connected to the same DB. Each one will have its own job list.
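
As an untested sketch of the first option, study-level forwarding could be as simple as the following (“forward-target” is a placeholder for a modality declared in the “DicomModalities” section of your configuration, and the JSON form of the /store body assumes a reasonably recent Orthanc):

function OnStableStudy(studyId, tags, metadata)
    -- Submit a single asynchronous C-Store job for the whole study,
    -- instead of one job per stored instance
    RestApiPost('/modalities/forward-target/store',
                '{ "Resources" : [ "' .. studyId .. '" ], "Synchronous" : false }')
end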

On our side (Orthanc), it could make sense to add a DELETE route for jobs to remove them once you have acknowledged them. You could then even add a Lua script to auto-delete instance-forwarding jobs as soon as they complete, to keep only the jobs of interest.

HTH,

Alain.

Thanks Alain, it’s interesting that you suggest using the STABLE_SERIES event; I actually explored that path earlier today. My issue is that at the point of STABLE_SERIES I no longer appear to have the AET that I called from, which is important for me as it’s part of my logic when deciding whether to forward it on.

I have an urgent AET and a non-urgent AET, both calling the same source PACS. If the images are related to the urgent AET path, I want to auto-forward them; if they’re not, I don’t. This information only seems to survive as far as OnStoredInstance? Or perhaps I’m not looking in the right place?

A DELETE certainly sounds like it would solve my problem here. I’ll keep an eye out for whether it makes it into your backlog!

Yes, this info does not exist at the series level, but you can get it from the /metadata of the first instance of the series (check for RemoteAET and/or CalledAET).
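
For instance, an untested sketch of that lookup (“URGENT_AET” and “forward-target” are placeholders for your own AET and destination modality):

function OnStableSeries(seriesId, tags, metadata)
    -- Retrieve the list of instances belonging to the series
    local series = ParseJson(RestApiGet('/series/' .. seriesId))
    local firstInstance = series['Instances'][1]

    -- The origin AET is stored as metadata of each individual instance
    local calledAet = RestApiGet('/instances/' .. firstInstance .. '/metadata/CalledAET')

    if calledAet == 'URGENT_AET' then
        -- Forward the whole series through a single asynchronous job
        RestApiPost('/modalities/forward-target/store',
                    '{ "Resources" : [ "' .. seriesId .. '" ], "Synchronous" : false }')
    end
end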

Note that I have just added the DELETE /jobs/{id} route in this commit.

Thanks Alain! Really useful information above, and amazing that we now have a delete job route.

If I have a study that triggers STABLE_SERIES because no new instances have arrived in the allotted time, but then some more instances do arrive, maybe due to a system rebooting, does it trigger STABLE_SERIES again?

Hi Tim,

Consider that you have a StableAge of 60 seconds:
t0 → t0 + 5s: Orthanc receives 100 instances
t0 + 65s: Orthanc will generate the STABLE_SERIES (and STABLE_STUDY) event
t0 + 66s: Orthanc receives an extra instance because e.g. the sender has rebooted
t0 + 126s: Orthanc will generate the STABLE_SERIES (and STABLE_STUDY) event again
t0 + 130s: Orthanc receives yet another instance
t0 + 132s: Orthanc switches off (for a reboot)
t0 + 135s: Orthanc restarts → you’ll never get the STABLE_SERIES/STABLE_STUDY events because Orthanc will consider all the studies as stable at startup

So, for a 100% robust scenario, you should:

  • either continue to use the OnStoredInstance Lua callback
  • or “browse” the Orthanc content when it starts (this works when Orthanc is used as a simple forwarder and is supposed to be always empty); see the sketch after this list
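
As a rough, untested sketch of that second option, a helper that forwards everything currently stored could look like this (“ForwardEverything” and “forward-target” are made-up names; you could for instance trigger it through the /tools/execute-script route once Orthanc is up):

function ForwardEverything()
    -- Forward everything currently stored in Orthanc; useful at startup,
    -- since the stable events are not replayed after a restart
    local studies = ParseJson(RestApiGet('/studies'))
    for _, studyId in ipairs(studies) do
        RestApiPost('/modalities/forward-target/store',
                    '{ "Resources" : [ "' .. studyId .. '" ], "Synchronous" : false }')
    end
end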

FYI, here’s a sample of a robust forwarder.

HTH,

Alain

Hi Alain,
Is it possible to configure jobs to auto-delete when a job finishes properly (i.e. without error)? Since jobs are asynchronous, it’s hard to predict when a job will complete without continuously monitoring it.

Thank you for all your advice Alain, extremely helpful.

For anyone who comes across this, a word of advice from my experience: keeping 5k jobs in memory worked OK on my setup, but wasn’t OK when combined with saving jobs to the database.

I’m running Orthanc with an AWS Aurora RDS Postgres instance as a backend; my disk write IO went through the roof and the database really started to struggle. Not a surprise when you think about it, but I wanted to share the lesson.

Hi Christophe,

That is currently not possible, but that sounds like a good idea! I’m adding it to our TODO.
Until it is implemented, the workaround is to implement a Lua callback like this one (disclaimer: not tested):

function OnJobSuccess(jobId)
    -- Retrieve the details of the job that has just succeeded
    local jobDetails = ParseJson(RestApiGet('/jobs/' .. jobId))

    -- Only auto-delete DICOM forwarding jobs, so that the other
    -- jobs of interest remain in the history
    if jobDetails['Type'] == 'DicomModalityStore' then
        RestApiDelete('/jobs/' .. jobId)
    end
end
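
Note that OnJobSuccess() only receives the job ID, hence the extra call to /jobs/{jobId} to retrieve the job type before deleting: only the DicomModalityStore jobs (the DICOM C-Store jobs created by the forwarding) are removed, and any other job type is kept in the history.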