Performance as a storage server for 15 TB of data

Dear friends,

Please share your thoughts on this:
Can I use Orthanc as a storage server for 15 TB (and growing) of medical image data?

Our plan is to migrate 15 TB of data to an Orthanc server and to use query/retrieve to access the images on the client side.

Our current daily data volume is close to 60 GB.

Has anybody tested or benchmarked Orthanc with more than 15 TB of data?

I would also like to know how the storage and query/retrieve SCPs perform with such a volume of data (our LAN speed is 1 Gbps).

Thanks and Regards,
Mathew Shaan

Orthanc itself does fine. At that magnitude of data, however, the underlying infrastructure becomes very important.

At those levels you have to think about RAM, CPU, and I/O; in my experience, Orthanc has always scaled well with improvements in those areas.

Dear Chico,

Many thanks for sharing your insights!

Dear Mathew,

I personally receive few reports (if any) about performance issues from users of Orthanc.

A user privately reported to me in December 2015 that his Orthanc instance contained 3.6 TB of data using the built-in SQLite engine. Another user reported 1 TB for more than 1.6 million DICOM files, also using SQLite.

That being said, the built-in index of Orthanc (which uses SQLite) is not designed to efficiently handle such a large amount of data: “When the size of the content [i.e. the Orthanc index] looks like it might creep into the terabyte range, it would be good to consider a centralized client/server database.” [1] We use SQLite because it can be deployed immediately while providing good performance for small-scale to medium-scale databases.

This is why Orthanc provides a PostgreSQL plugin, which allows you to replace the lightweight SQLite database of Orthanc with a large-scale, enterprise-ready database system [2]. With the PostgreSQL back-end installed, you should be able to store a virtually unlimited number of images. Note that you can choose to keep the DICOM files on a standard filesystem (by simply setting the “EnableStorage” configuration option to “false”), in order to avoid any unnecessary load on the PostgreSQL server.
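
As a rough sketch, the relevant section of the Orthanc configuration file might look like the snippet below. Please double-check the exact option names, the plugin filename, and the defaults against the plugin documentation [2]; the connection parameters are placeholders to be adapted to your setup:

{
  "Plugins" : [ "libOrthancPostgreSQLIndex.so" ],

  "PostgreSQL" : {
    // Use PostgreSQL to store the Orthanc index
    "EnableIndex" : true,
    // Keep the DICOM files themselves on the standard filesystem
    "EnableStorage" : false,
    // Placeholder connection parameters
    "Host" : "localhost",
    "Port" : 5432,
    "Database" : "orthanc",
    "Username" : "orthanc",
    "Password" : "orthanc"
  }
}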

With the PostgreSQL plugin enabled, the performance of the Orthanc core should become independent of the number of stored DICOM files, provided that the PostgreSQL server is properly scaled (as Chico mentioned). Note that if you use the built-in Web interface Orthanc Explorer, you might find it slow to access the list of patients. This is because Orthanc Explorer is primarily designed for low-level, administrative purposes [3], and thus does not implement features such as paging. If this is a problem for you, a more advanced Web user interface could easily be developed, as sketched below.
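
As an illustration, such an interface would simply be built on top of the REST API of Orthanc. Here is a minimal Python sketch; it assumes the default REST endpoint at http://localhost:8042 without authentication, the third-party "requests" library, and the "since"/"limit" GET arguments that Orthanc accepts on the "/patients" route to implement paging:

import requests  # third-party HTTP client

ORTHANC = 'http://localhost:8042'   # assumed default Orthanc REST endpoint
PAGE_SIZE = 20                      # number of patients per page

def list_patients(page):
    # Ask Orthanc for one page of patient identifiers:
    # "since" and "limit" avoid downloading the full list at once
    params = { 'since': page * PAGE_SIZE, 'limit': PAGE_SIZE }
    ids = requests.get('%s/patients' % ORTHANC, params=params).json()

    # Retrieve the main DICOM tags of each patient on this page
    patients = []
    for patient_id in ids:
        info = requests.get('%s/patients/%s' % (ORTHANC, patient_id)).json()
        patients.append(info['MainDicomTags'])
    return patients

for tags in list_patients(page=0):
    print(tags.get('PatientID'), tags.get('PatientName'))

A real Web interface would obviously add caching and a proper front-end, but the point is that paging only requires a couple of calls to the REST API.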

To conclude, I would really love to hear about benchmarks of Orthanc. Please share any relevant experience with the Orthanc community.

Regards,
Sébastien-

[1] https://www.sqlite.org/whentouse.html
[2] http://www.orthanc-server.com/static.php?page=postgresql
[3] https://orthanc.chu.ulg.ac.be/book/faq/improving-interface.html

Hello,

As a complement to my previous answer, the FAQ has just been updated with a trick to improve the performance of Orthanc by disabling run-time debug assertions:
https://orthanc.chu.ulg.ac.be/book/faq/troubleshooting.html#performance-issues

HTH,
Sébastien-