I would like to use Orthanc as a DICOM router, but I'm missing some functionality (or it is available but not documented, or my searching skills are lacking).
If I send DICOM images to Orthanc, I would like to use the Called AE title in a forwarding Lua script (I think it is possible to use the Calling AE in a Lua script, but not the Called AE).
I would like to change the transfer syntax in a forwarding script, or have an option under DicomModalities to configure the desired outgoing compression for that modality.
Is Orthanc able to send a DICOM study with multiple threads to speed up the transfer?
I am already using Orthanc to auto-route images, but I'm missing the ability to use the Called AE instead of the Calling AE in a Lua script. If the Called AE could also be made available in the metadata, that would be great.
I tried to build from source, but my C++ skills are lacking.
> I would like to use Orthanc as a DICOM router, but I'm missing some functionality (or it is available but not documented, or my searching skills are lacking).
> If I send DICOM images to Orthanc, I would like to use the Called AE title in a forwarding Lua script (I think it is possible to use the Calling AE in a Lua script, but not the Called AE).
Couldn’t you simply deploy 2 separate instances of Orthanc on the same host? In theory, the AET must be unique to any modality.
> I would like to change the transfer syntax in a forwarding script, or have an option under DicomModalities to configure the desired outgoing compression for that modality.
Orthanc never modifies/transcodes/compresses its incoming images: it is a DICOM store (or, in other words, a vendor neutral archive), meaning that its outputs are always the same as its inputs.
If you want to carry out tasks such as on-the-fly JPEG 2000 compression, you should use either the REST API or the plugin SDK of Orthanc.
> Is Orthanc able to send a DICOM study with multiple threads to speed up the transfer?
No, Orthanc routing is single-threaded. Given the scenario you describe, I think you should deploy separate instances of Orthanc, one for each imaging flow: this will effectively speed up the transfer.
Yes, I could deploy multiple instances, but it would be neat to have all the configuration in one instance. Also, the Called AE will be unique, so this way I can configure multiple AE titles on a modality that, when chosen, will route to different places. I know I could do the same with multiple AE titles and different port numbers if I deploy multiple Orthanc instances, but it would be extra work, and Orthanc already has the ability to not check the Called AE (DicomCheckCalledAet).
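To make the idea concrete, such a single-instance setup might use a configuration fragment like the one below. DicomCheckCalledAet and DicomModalities are real Orthanc configuration options; the modality aliases, AE titles, and hostnames are hypothetical placeholders:

```json
{
  "DicomAet": "ORTHANC",
  "DicomCheckCalledAet": false,
  "DicomModalities": {
    "pacs_a": ["PACS_A", "pacs-a.example.com", 104],
    "pacs_b": ["PACS_B", "pacs-b.example.com", 104]
  }
}
```

With DicomCheckCalledAet disabled, the modality can target the same Orthanc port with different Called AE titles, and a routing script could then pick "pacs_a" or "pacs_b" based on the Called AE, once that value is exposed to scripts.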
I will take a look at the plugin SDK.
I need to speed up the transfer of a single big study (more than 10,000 CT images) over a VPN connection. The VPN connection is fast, but the latency is killing the speed, with a DICOM association for every CT image.
Once again, sorry for the delay: This is quite a busy period for Orthanc.
> Yes, I could deploy multiple instances, but it would be neat to have all the configuration in one instance. Also, the Called AE will be unique, so this way I can configure multiple AE titles on a modality that, when chosen, will route to different places. I know I could do the same with multiple AE titles and different port numbers if I deploy multiple Orthanc instances, but it would be extra work, and Orthanc already has the ability to not check the Called AE (DicomCheckCalledAet).
> I need to speed up the transfer of a single big study (more than 10,000 CT images) over a VPN connection. The VPN connection is fast, but the latency is killing the speed, with a DICOM association for every CT image.
To avoid opening a new DICOM association for every CT image, you have two possibilities:
Wait for a series to become stable (by inspecting the “IsStable” field returned by the URI “/series/{series_id}”), then send it at once with “curl -X POST http://localhost:8042/modalities/sample/store -d {series_id}”. This will result in a single DICOM association for the entire series.
Increase the time before DICOM associations are automatically closed (defaults to 5 seconds). To close after e.g. 1 minute, it is sufficient to write “scu_.SetMillisecondsBeforeClose(60 * 1000)” at the following code location: http://goo.gl/qbJVLq . I have just added a task to introduce a parameter to modify this option from the configuration file (https://trello.com/c/sVeBX1tX).
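The first possibility can be sketched in Python against the REST API (a minimal sketch, assuming Orthanc on its default HTTP port 8042 and a modality alias “sample” as in the curl example above; the HTTP helpers are injectable only so the logic can be exercised without a server):

```python
import json
import time
import urllib.request

ORTHANC = "http://localhost:8042"   # assumption: default Orthanc HTTP port

def forward_when_stable(series_id, modality="sample",
                        get=None, post=None, poll_seconds=2):
    """Poll a series until Orthanc marks it stable, then push it to the
    remote modality in one shot (a single DICOM association)."""
    # Default helpers perform real HTTP calls against the Orthanc REST API.
    get = get or (lambda path: json.loads(
        urllib.request.urlopen(ORTHANC + path).read()))
    post = post or (lambda path, body: urllib.request.urlopen(
        urllib.request.Request(ORTHANC + path, data=body.encode())).read())

    while not get("/series/" + series_id)["IsStable"]:
        time.sleep(poll_seconds)    # the series is still receiving instances

    # Equivalent to: curl -X POST .../modalities/sample/store -d {series_id}
    return post("/modalities/" + modality + "/store", series_id)
```

The stability delay itself is controlled by the “StableAge” configuration option, so the polling interval should be of the same order of magnitude.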
HTH,
function OnStoredInstance(instanceId, tags, metadata, remoteAet, calledAet)
I missed this last year and had to use multiple instances.
Sébastien, thanks for your work.
I do not know, but maybe somebody else needs a way to “concentrate” multiple MWLs?
Are there any MWL functions planned?
For example:
Orthanc queries multiple MWL SCPs, building a new MWL on the Orthanc instance.
Orthanc acts as MWL SCP for the DICOM modalities.
The modalities send DICOM studies to Orthanc.
Studies are routed to the destination PACS based on the original MWL source.
Is there any way to access the content of CalledAet and RemoteAet from within a Python script?
I would like to monitor /changes within a Python script, and also get access to the AETs.
Basically, I would like to have the same routing possibility as described here via a Python script instead of a Lua script.
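One possible shape for such a script is sketched below. The routing decision is kept as a pure function so it can be tested outside Orthanc; the wiring uses the Python plugin's RegisterOnChangeCallback and RestApiGet/RestApiPost, while the routing table and the “CalledAET” metadata name are assumptions to verify against your Orthanc version:

```python
import json

# Hypothetical routing table (assumption): Called AET -> "DicomModalities" alias
ROUTES = {"CT_TO_A": "pacs_a", "CT_TO_B": "pacs_b"}

def pick_destination(called_aet, default=None):
    """Pure routing decision, separated so it can be tested without Orthanc."""
    return ROUTES.get(called_aet, default)

try:
    import orthanc  # provided by the Orthanc Python plugin at runtime

    def OnChange(changeType, level, resourceId):
        if changeType == orthanc.ChangeType.STABLE_STUDY:
            # Assumption: the Called AET is exposed as instance metadata named
            # "CalledAET"; check this against your Orthanc version.
            instances = json.loads(
                orthanc.RestApiGet("/studies/%s/instances" % resourceId))
            called = orthanc.RestApiGet(
                "/instances/%s/metadata/CalledAET" % instances[0]["ID"]).decode()
            target = pick_destination(called)
            if target is not None:
                # Forward the whole study through one DICOM association
                orthanc.RestApiPost("/modalities/%s/store" % target, resourceId)

    orthanc.RegisterOnChangeCallback(OnChange)
except ImportError:
    pass  # running outside Orthanc, e.g. when testing pick_destination()
```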