Orthanc dying on receiving DICOM - how to troubleshoot?

I have Orthanc receiving DICOM instances from an MRI scanner, using a modified version of AutoClassify.py to write each instance out to disk.

For several small MRI studies (1,600-3,000 images) it works fine; however, when I do a test with 51,627 images it fails:

Once this happens, Orthanc seems to be automatically restarted every 14-20 minutes, judging from the logs (which contain no useful hints).

From AutoClassify I get the following message:

Writing new DICOM file: \\storage.hcs-p01.otago.ac.nz\its-pacs\DICOMExport\Unknown\QA Phantom NiCl - test\DMHDS_PHANTOM\MR - ep2d_fid 55 mins\1.3.12.2.1107.5.2.19.46231.2016071209113992632029081.dcm
Unable to write instance 614b7d22-8e421c89-8c5d2453-91d77e05-bcbe361d to the disk
Traceback (most recent call last):
  File "C:\admin\DICOM-export\DICOM-export.py", line 161, in <module>
    'limit' : 4   # Retrieve at most 4 changes at once
  File "C:\admin\DICOM-export\RestToolbox.py", line 58, in DoGet
    resp, content = h.request(uri + d, 'GET')
  File "C:\Admin\Python\lib\site-packages\httplib2\__init__.py", line 1314, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "C:\Admin\Python\lib\site-packages\httplib2\__init__.py", line 1064, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "C:\Admin\Python\lib\site-packages\httplib2\__init__.py", line 987, in _conn_request
    conn.connect()
  File "C:\Admin\Python\lib\http\client.py", line 826, in connect
    (self.host, self.port), self.timeout, self.source_address)
  File "C:\Admin\Python\lib\socket.py", line 711, in create_connection
    raise err
  File "C:\Admin\Python\lib\socket.py", line 702, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
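The ConnectionRefusedError means the script tried to reach Orthanc's REST API while the server was down or restarting. One thing I am considering is wrapping the REST call in a retry helper so the export loop survives restarts. A sketch (the helper name is mine; the callable passed in would be DoGet from the RestToolbox module of the Orthanc samples):

```python
import time

def do_get_with_retry(do_get, uri, data, retries=5, delay=10):
    # Call a RestToolbox-style DoGet, retrying if Orthanc's REST API
    # is temporarily unreachable (e.g. while the service restarts).
    for attempt in range(retries):
        try:
            return do_get(uri, data)
        except ConnectionRefusedError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
```

This would not fix whatever is killing Orthanc, but it would stop the export script from dying alongside it.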

Does anyone have any advice on how to troubleshoot what might be going on? Should I increase or decrease the 'limit' value, or try something else?

Cheers,
Mark

Hello,

Have you enabled any plugins in Orthanc (notably the PostgreSQL plugin)? If so, please disable all of them to check whether the problem lies within the core of Orthanc.

If the problem still appears using the default database engine (SQLite), make sure the directory containing the “OrthancStorage” folder is large enough to hold all the files.

Finally, please post here the log file of Orthanc in "--verbose" mode. Debugging on our side will not be possible unless you provide us with a set of problematic files and a way to reproduce the issue (e.g. with storescu).
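To pinpoint exactly when Orthanc stops answering, you could also run a small liveness probe against its REST API while collecting the logs. A sketch using only the Python standard library (it assumes the default REST port 8042 and the built-in /system endpoint; the helper names are mine):

```python
import time
import urllib.request
import urllib.error

ORTHANC_SYSTEM = 'http://localhost:8042/system'  # default Orthanc REST port

def is_up(url, timeout=3):
    # Return True if the Orthanc REST API answers at the given URL.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

def watch(url=ORTHANC_SYSTEM, interval=5):
    # Print a timestamped line each time Orthanc goes up or down,
    # so the transitions can be correlated with the verbose log.
    was_up = None
    while True:
        up = is_up(url)
        if up != was_up:
            print(time.strftime('%Y-%m-%d %H:%M:%S'), 'UP' if up else 'DOWN')
            was_up = up
        time.sleep(interval)
```

Leaving `watch()` running during a large transfer would give you timestamps for each crash and restart.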

HTH,
Sébastien-

Thanks Sébastien.

I’m still very much a novice with Orthanc or DICOM so please bear with me.

Can you or anyone else tell me how to start Orthanc successfully from the command line in Windows with verbose mode enabled? (I have been running it as a service, so this isn't something I am familiar with.)

I tried using the following and a window flicks open then closes and nothing appears in the log file. Redirecting the output to a file with > also shows nothing apart from the enter password prompt text:

C:\Users\sstmarkh>runas /user:registry\orthanc-svc "c:\Program Files (x86)\Orthanc\Orthanc Server 1.1.0\Orthanc-1.1.0-Release.exe "--verbose""
Enter the password for registry\orthanc-svc:
Attempting to start c:\Program Files (x86)\Orthanc\Orthanc Server 1.1.0\Orthanc-1.1.0-Release.exe "--verbose" as user "registry\orthanc-svc" ...

I have not enabled any plugins, and the storage is on a file server with 1 TB currently assigned, so the storage limit is not the problem. I have attached my Configuration.json, but it is almost unchanged from the original.

I have tried starting the service with --verbose in the Start parameters, but I don't know whether that has done anything: nothing new appears in the log file, and when I upload a few known-good instances they appear in the Orthanc GUI but nothing is logged. How can I test whether verbose logging is enabled? Should the log file state that verbose logging is turned on? I have attached the log created when I started Orthanc with --verbose in the Start parameters field of the Orthanc Properties window in the services manager.

Cheers,
Mark

Configuration.json (11.7 KB)

Orthanc-1.1.0-Release.log.20160713-092118.5200 (1.5 KB)

Hello,

To ease things (especially wrt. user permissions), I suggest you stop the Orthanc service and manually start the command-line version of Orthanc, which is available for download at:
http://www.orthanc-server.com/download-windows.php

Then, put your “Configuration.json” and the just-downloaded “Orthanc-1.1.0-Release.exe” into the same folder (e.g. “C:\Temp”). Finally, type in a command-line shell:

cd C:\Temp

Orthanc-1.1.0-Release.exe --verbose Configuration.json > Orthanc.log 2>&1

HTH,
Sébastien-

Thank you very much for that Sébastien. I have been trying this out today.

Since moving Orthanc's storage to local disk and modifying the "Construct a target path" section of AutoClassify.py to use a "tidy" function that removes invalid characters from filenames, all of our other small test scans have gone through successfully, and the expected number of instances have been exported by AutoClassify.py. The big test is tonight, when a scan that has previously broken Orthanc is transferred to us.

Here is the code I am using to replace characters that are invalid on Windows in the 'a' to 'd' variables used in the export path construction (also using PatientID instead of Patient Name):

a = tidy('%s' % (GetTag(patient, 'PatientID')))
b = tidy(GetTag(study, 'StudyDescription'))
c = tidy('%s_%s' % (GetTag(series, 'Modality'), GetTag(series, 'SeriesDescription')))
d = tidy('%s.dcm' % GetTag(instance, 'SOPInstanceUID'))

The function is as below - nice and simple, and just replaces spaces and invalid characters with the underscore character:

def tidy(value):
    # Replace these characters with underscores, to ensure a valid file name and path
    for c in '/:*?"<>|! ':
        value = value.replace(c, '_')
    return value
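For reference, an equivalent version using a precompiled regular expression (same character set as the loop; "tidy_re" is just an illustrative name) would be:

```python
import re

# Same character set as the loop version: Windows-invalid
# filename characters, plus '!' and space
_INVALID = re.compile(r'[/:*?"<>|! ]')

def tidy_re(value):
    # Replace each invalid character with an underscore
    return _INVALID.sub('_', value)
```
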

It may be useful to add some of your excellent advice here to the Troubleshooting DICOM communications article at https://orthanc.chu.ulg.ac.be/book/faq/dicom.html

Will let you know how we get on with the large test scan.

Cheers,
Mark

Hello,

Thanks for your suggestions!

I have updated the sample scripts so as to better handle special characters (in the spirit of your “tidy()” function):
https://bitbucket.org/sjodogne/orthanc/commits/7e6afa0beaf6df44914f76f67659b6dd425d18b4

I have also added a FAQ entry to explain how to generate meaningful Orthanc logs (with the associated links in the Troubleshooting DICOM communications entry):
https://orthanc.chu.ulg.ac.be/book/faq/log.html

Kind regards,
Sébastien-

Hi Sébastien.

I have attached the logs from last night’s large copy (from both Orthanc and the AutoClassify script).

The relevant section is at the end of each log file. I'm wondering if this is being caused by a race condition between Orthanc and AutoClassify.py, since it seems to happen at a random instance.

We are retrying a large scan today; if that works, I will suggest they try re-sending the large scan without AutoClassify active.

If you see anything else that suggests elsewhere to look in the logs please let me know.

Cheers,
Mark

Orthanc 707pm 15-07-2016 NZ.zip (2.63 MB)

log from AutoClassify.zip (92.4 KB)

Hi Sébastien.

A scan has failed with the following snippet in the log (full log attached). It looks like it failed and then (I guess) a retry was manually started (I'm remote from the scanner, which is connected to us via a fiber link).


I0715 16:49:16.356463 ServerContext.cpp:260] New instance stored
E0715 16:49:16.372097 StoreScp.cpp:300] Store SCP Failed: DUL Peer Requested Release
E0715 16:49:46.544027 CommandDispatcher.cpp:877] DIMSE failure (aborting association): DIMSE No data available (timeout in non-blocking mode)
I0715 16:50:23.528476 CommandDispatcher.cpp:491] Association Received from AET ORGKSMR on IP 10.92.1.242
I0715 16:50:23.544095 CommandDispatcher.cpp:689] Association Acknowledged (Max Send PDV: 131060)
I0715 16:50:23.575351 ServerContext.cpp:264] Already stored

In another recent thread with the same error (https://groups.google.com/forum/#!topic/orthanc-users/6_n47MjHnas) you mentioned the following:

I have just added two new options to fine-tune the DICOM timeouts:
https://bitbucket.org/sjodogne/orthanc/commits/fabf7820d1f194149f15f0d40123dda498db5e08

// Set the timeout (in seconds) after which the DICOM associations
// are closed by the Orthanc SCP (server) if no further DIMSE
// command is received from the SCU (client).
"DicomScpTimeout" : 30,

// The timeout (in seconds) after which the DICOM associations are
// considered as closed by the Orthanc SCU (client) if the remote
// DICOM SCP (server) does not answer.
"DicomScuTimeout" : 10,

In your case, you would need to increase “DicomScpTimeout”.

Please could you tell me whether these options solve your issue?

I wonder whether this is what I need to try as well? If so, is there a separate download of the command-line version of Orthanc that will recognize these configuration options? I have Orthanc-1.1.0-Release.exe, which I downloaded yesterday as below (times in New Zealand local time):

And can you confirm what would be a reasonable DicomScpTimeout to use?

Cheers,
Mark

Orthanc.log (731 KB)

Hello,

Thanks Sébastien.

It may be a problem at the scanner or workstation: the technician found corrupt data at their end, and since he rebuilt it we have now received four 11,700-image series without incident.

Hopefully we are sorted 🙂

Cheers,
Mark

Great, this is nice news!

Regards,
Sébastien-

It is looking fairly promising; we've received the expected number of instances twice now.

There is a problem with AutoClassify.py leaving a few instances unexported, but I will discuss that in a separate thread.

Cheers,
Mark