What is the biggest image archive tested on Orthanc?

What is the biggest image archive anyone here has seen tested on Orthanc? In terms of Size, Number of patients amd/or Studies.

Hi Rana,

The biggest Orthanc we manage is used as the main PACS in a small hospital in France. It holds 6 years of data:

  • around 25,000 patients
  • around 120,000 studies (mainly CTs)
  • around 40,000,000 instances
  • around 13 TB of disk space used (“recent” images are compressed in JPEG, while “older” images are compressed in JPEG 2000, which is smaller but a little slower to visualize)

Wow… and so far no performance issues with the 13 TB of data?

No.
Note that the database is handled by a dedicated PostgreSQL server, and we have 8 Orthanc instances using the same DB and storage (2 for the web viewers, 1 for receiving DICOM data from the modalities, and 5 acting as DICOM servers for the radiology workstations).
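
As an illustration only, here is a minimal sketch of how the shared-database part of such a setup could be generated; the hosts, credentials, ports, and instance names are hypothetical, and the option names of the PostgreSQL plugin (in particular “Lock”, which is assumed to be disabled so that several instances can share the same index) should be checked against the plugin documentation for your version:

  import json

  # Settings shared by every instance: one PostgreSQL index, one storage area.
  SHARED = {
      "PostgreSQL": {
          "EnableIndex": True,
          "Host": "db.example.local",   # hypothetical dedicated PostgreSQL server
          "Port": 5432,
          "Database": "orthanc",
          "Username": "orthanc",
          "Password": "********",
          "Lock": False                 # assumed: allow several instances on one DB
      },
      "StorageDirectory": "/mnt/orthanc-storage"  # hypothetical shared storage mount
  }

  # One configuration file per Orthanc instance, each with its own name and ports.
  for name, dicom_port, http_port in [
      ("viewer-1", 4242, 8042),
      ("ingest",   4243, 8043),
      ("qr-1",     4244, 8044),
  ]:
      config = dict(SHARED, Name=name, DicomPort=dicom_port, HttpPort=http_port)
      with open(f"orthanc-{name}.json", "w") as f:
          json.dump(config, f, indent=2)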

Would performance be adversely affected if just one instance of Orthanc were used?

In this case, we have deployed multiple Orthanc instances because we know that the system is used concurrently. There are still some “locks” in Orthanc, so the DICOM server might be slowed down while someone is accessing the HTTP server heavily (which is the case when using a viewer)… When multiplying Orthanc instances, the bottleneck is the DB, not Orthanc itself.

However, it’s very hard to tell you the real impact of using multiple instances.

So a single instance of Orthanc running on fairly powerful hardware would have trouble keeping up in a multi-user environment where it’s likely to be accessed by several users at the same time?

There’s no easy answer to that question. It really depends on how many users you have, how concurrently they work, and what they do. For instance, there’s a huge difference between browsing the Orthanc interface (which just sends a few requests to the REST API) and using a web viewer (which sends thousands of requests to the REST API).
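
To give an idea of the first case, here is a minimal sketch of “a few requests” against the REST API (the URL and credentials are hypothetical), which merely lists some studies and prints their descriptions:

  import requests

  ORTHANC = "http://localhost:8042"   # hypothetical Orthanc URL
  AUTH = ("orthanc", "orthanc")       # hypothetical credentials

  # Browsing study-level metadata only takes a handful of REST calls...
  for study_id in requests.get(f"{ORTHANC}/studies", auth=AUTH).json()[:10]:
      study = requests.get(f"{ORTHANC}/studies/{study_id}", auth=AUTH).json()
      print(study["MainDicomTags"].get("StudyDescription", "(no description)"))

  # ...whereas a web viewer would additionally fetch and decode every instance
  # of the studies it displays, multiplying the number of HTTP requests.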

As another example –

RIH has about the same – 30-45M instances, somewhere between 5 and 15 TB depending on compression and improperly duplicated studies (don’t ask). Files are on disk, metadata is in PostgreSQL.

A single Orthanc couldn’t keep up with the amount of data we produce when I set this up a couple of years ago, so I use a receiving queue (one Orthanc that receives on DICOM and forwards over HTTP) and 3-5 HTTP peers that accept incoming instances round-robin behind an nginx reverse proxy. I also have one more separate Orthanc acting as a DICOM Q/R node for the main database. Very recent Orthancs are multi-threaded, though, so running parallel instances may not be as important anymore (although I generally disagree with the choice to complexify Orthanc).
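
For the record, the round-robin in the setup above is done by nginx; as a rough, hypothetical sketch of the same forwarding idea done in a script instead, one could poll the receiving Orthanc’s changes log and push every new instance to one of the HTTP peers declared in its configuration (the URL, credentials, and peer names below are made up):

  import itertools
  import time
  import requests

  RECEIVER = "http://localhost:8042"                        # hypothetical receiving Orthanc
  AUTH = ("orthanc", "orthanc")                             # hypothetical credentials
  peers = itertools.cycle(["peer-1", "peer-2", "peer-3"])   # peers declared in its config

  last = 0
  while True:
      changes = requests.get(f"{RECEIVER}/changes",
                             params={"since": last, "limit": 100}, auth=AUTH).json()
      for change in changes["Changes"]:
          if change["ChangeType"] == "NewInstance":
              # Push the new instance to the next peer in the rotation.
              requests.post(f"{RECEIVER}/peers/{next(peers)}/store",
                            data=change["ID"], auth=AUTH)
      last = changes["Last"]
      if changes["Done"]:
          time.sleep(1)   # no pending changes; wait for more instances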

When data arrives, I also convert all of the tags to JSON files and index those separately in Splunk, which is awesome for complex queries, dashboards, visualizations, alerting, and various other non-DICOM-friendly workflows. We use it heavily for dose-monitoring, for example.
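
As a minimal sketch of that kind of export (the output directory and the Orthanc URL are hypothetical), the simplified tags of each instance can be pulled through the REST API and written as one JSON file per instance for the indexer to pick up:

  import json
  import requests

  ORTHANC = "http://localhost:8042"   # hypothetical Orthanc URL
  AUTH = ("orthanc", "orthanc")       # hypothetical credentials
  OUTPUT_DIR = "/data/splunk-inbox"   # hypothetical directory watched by the indexer

  for instance_id in requests.get(f"{ORTHANC}/instances", auth=AUTH).json():
      tags = requests.get(f"{ORTHANC}/instances/{instance_id}/simplified-tags",
                          auth=AUTH).json()
      with open(f"{OUTPUT_DIR}/{instance_id}.json", "w") as f:
          json.dump(tags, f)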

Moreover, this is a non-clinical “research PACS”, and once a year or so someone in the machine room does something stupid, like installing unnecessary anti-virus software on my servers or changing IDs on the mapped network drives, and brings the whole thing down, sometimes for several days. So while I have very high confidence in Orthanc, I also don’t recommend doing something like this for production unless you really know what you are doing, you are staffed appropriately for 24/7 troubleshooting, and you have end-to-end control of your hardware.

Derek-

Also, Alain, I don’t think we can even browse with the Orthanc UI on our system, because the default view lists every subject, and it takes an hour to populate that first page!

Speaking of which, how can I put in a feature request for a config flag to show no subjects by default on the front page? Or maybe the 20 most recent studies, or something similarly sensible…

Derek-

This issue tracker would be the best place:

  https://bitbucket.org/sjodogne/orthanc/issues/

However, take note of this:

  http://book.orthanc-server.com/faq/improving-interface.html

Hello Derek,

Regarding this part of the discussion: