Good Day All,
I am working on a project to upload around 400k studies (approximately 14 TB) to Orthanc for processing before they are sent to the destination PACS. These studies come from a different archive; each study is individually zipped, and each zip file sits in its own folder. Does anyone have ideas on the best way to get these into Orthanc?
Thoughts right now are:
Option 1 - Use a script to scan through and unzip all the files, then use the new plugin to attach them to Orthanc. (This doubles the storage space, since the zipped and unzipped files coexist.)
Option 2 - Use a script to HTTP POST each zip into Orthanc; a minimal sketch follows below. (Could take some time, and also doubles the space.)
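For option 2, something like the sketch below should work, assuming a recent Orthanc build that accepts ZIP archives on POST /instances and unzips them server-side. The URL, credentials, and file name are placeholders for illustration; adjust them to your deployment.

```python
import requests

# Placeholder values: adjust for your Orthanc deployment.
ORTHANC_URL = "http://localhost:8042"
AUTH = ("orthanc", "orthanc")

def upload_zip(path):
    """POST one zipped study to Orthanc and return the parsed response.

    For a ZIP upload, Orthanc should answer with JSON describing the
    instances it extracted (one entry per DICOM file in the archive).
    """
    with open(path, "rb") as f:
        r = requests.post(f"{ORTHANC_URL}/instances",
                          data=f,
                          auth=AUTH,
                          headers={"Content-Type": "application/zip"},
                          timeout=300)
    r.raise_for_status()
    return r.json()

print(upload_zip("study_0001.zip"))  # hypothetical file name
```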
Any other thoughts on a good way? Thanks!
First question: is timing a constraint? That is, must this be very fast, or can it take a longer time?
Thanks. Time isn’t a huge issue. What I have done is use Mirth Connect to:
- Pull the file locations of all the zip files from the database
- Run curl from the Windows server to upload each zip file and capture the Orthanc response
- Use the response to update the database with study details.
I am able to do around 150/minute so far (9CR/US), so it shouldn’t take too long to finish, and I will also have a good inventory of what was completed afterwards. A rough scripted sketch of this loop follows. Thanks for the input!
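For anyone scripting this outside Mirth Connect, here is a rough Python equivalent of the loop described above: read the zip locations from an inventory table, upload each one, and write the result back. The database file, table, and column names (inventory.db, zip_inventory, file_path, status, orthanc_response) are made up for illustration; swap in your own schema and Orthanc credentials.

```python
import sqlite3
import requests

ORTHANC_URL = "http://localhost:8042"   # placeholder: adjust for your server
AUTH = ("orthanc", "orthanc")           # placeholder credentials

# Hypothetical inventory table:
#   zip_inventory(file_path TEXT, status TEXT, orthanc_response TEXT)
db = sqlite3.connect("inventory.db")

rows = db.execute(
    "SELECT file_path FROM zip_inventory WHERE status IS NULL").fetchall()

for (path,) in rows:
    try:
        with open(path, "rb") as f:
            r = requests.post(f"{ORTHANC_URL}/instances", data=f, auth=AUTH,
                              headers={"Content-Type": "application/zip"},
                              timeout=600)
        r.raise_for_status()
        # Keep Orthanc's raw JSON response as the inventory record.
        db.execute("UPDATE zip_inventory "
                   "SET status = 'done', orthanc_response = ? "
                   "WHERE file_path = ?", (r.text, path))
    except Exception as exc:
        # Record failures so they can be retried later.
        db.execute("UPDATE zip_inventory SET status = ? WHERE file_path = ?",
                   (f"error: {exc}", path))
    db.commit()
```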
Hi,
Re-opening this thread since I worked on measuring ingest performance today. Full details here: https://bitbucket.org/osimis/orthanc-setup-samples/src/master/docker/ingest-performance/
Quick summary:
- storescu is quite slow, but the bottleneck seems to be storescu itself. If you launch 16 storescu processes in parallel, you reach decent performance (145 files/sec in my case)
- HTTP is usually faster. Top performance was reached with 4-8 clients in parallel (165 files/sec); see the sketch after this list
- The fastest way to transfer to Orthanc is to use … another Orthanc as the source (you know, these chicken-and-egg problems) and issue 4-8 C-Move requests in parallel (213 files/sec)
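To reproduce the "4-8 HTTP clients in parallel" setup, a thread pool of uploaders is enough; a minimal sketch is below. The directory layout, worker count, and credentials are assumptions, not part of the benchmark above.

```python
import glob
import requests
from concurrent.futures import ThreadPoolExecutor

ORTHANC_URL = "http://localhost:8042"   # assumed default Orthanc REST port
AUTH = ("orthanc", "orthanc")           # placeholder credentials

def upload(path):
    # One POST per DICOM file; each thread acts as one HTTP client.
    with open(path, "rb") as f:
        r = requests.post(f"{ORTHANC_URL}/instances", data=f,
                          auth=AUTH, timeout=60)
    return r.status_code

# Hypothetical layout: DICOM files somewhere under ./dicom/
files = glob.glob("dicom/**/*.dcm", recursive=True)

# 4-8 parallel clients was the sweet spot in the benchmark above.
with ThreadPoolExecutor(max_workers=8) as pool:
    codes = list(pool.map(upload, files))

print(f"uploaded {codes.count(200)} / {len(files)} files")
```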
HTH,
Alain
Thanks for the info. I have already completed this (very) successfully, but this will be very useful for next time.