I’m interested in learning whether and how Orthanc can be made “scalable”, and in particular how it can scale out its data store.
The following questions are not restrictive; they are just meant to outline my area of concern about the Orthanc ecosystem (the DICOM server and its plugins):
Does Orthanc extract information (metadata) from the DICOM files and store it in order to handle requests about those files?
Can it store the metadata in relational databases, clustered RDBMSs, or NoSQL databases?
Can Orthanc handle partitioning/sharding of the metadata across multiple nodes?
Can Orthanc be optimized for big-data paradigms?
Is adding storage capacity for the metadata as simple as adding a node, with a proportional performance gain?
best regards,
Nikos
There are already many threads on this forum about the scalability of Orthanc; please search for them.
An optimization related to C-FIND querying is currently in progress, but handling C-STORE and C-MOVE already performs well with large amounts of data, as long as you use the PostgreSQL plugin.
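As a concrete illustration, the PostgreSQL plugin is enabled through the Orthanc configuration file. A minimal sketch (the hostnames, credentials, and plugin paths are placeholders to adapt to your deployment; Orthanc’s configuration parser accepts the // comments):

```json
{
  "Plugins" : [
    "libOrthancPostgreSQLIndex.so",
    "libOrthancPostgreSQLStorage.so"
  ],
  "PostgreSQL" : {
    "EnableIndex" : true,     // keep the metadata index in PostgreSQL
    "EnableStorage" : true,   // optionally store the DICOM files there as well
    "Host" : "localhost",
    "Port" : 5432,
    "Database" : "orthanc",
    "Username" : "orthanc",
    "Password" : "orthanc"
  }
}
```

With "EnableIndex" set, the metadata index lives in PostgreSQL instead of the default embedded SQLite database, which is what makes larger deployments practical.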
Yes, Orthanc extracts a subset of so-called “main DICOM tags” (such as Patient Name or Study Description) and indexes them into its database, in order to speed up querying.
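To make the idea concrete, here is an illustrative sketch (not Orthanc’s actual schema; the table and column names are made up) of why indexing a few main tags speeds up queries: lookups are answered from a small relational table instead of re-parsing every DICOM file on disk.

```python
# Illustrative only: index a handful of "main DICOM tags" in a
# relational table so queries never need to reopen the DICOM files.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE main_tags (
        sop_instance_uid  TEXT PRIMARY KEY,
        patient_name      TEXT,
        study_description TEXT,
        file_path         TEXT   -- where the full DICOM file lives
    )
""")

# Pretend these tags were extracted from files received over C-STORE.
conn.executemany(
    "INSERT INTO main_tags VALUES (?, ?, ?, ?)",
    [
        ("1.2.3.1", "DOE^JOHN", "CT CHEST", "/store/1.2.3.1.dcm"),
        ("1.2.3.2", "ROE^JANE", "MR BRAIN", "/store/1.2.3.2.dcm"),
    ],
)

# A C-FIND-style lookup answered entirely from the index.
rows = conn.execute(
    "SELECT sop_instance_uid, study_description FROM main_tags "
    "WHERE patient_name = ?", ("DOE^JOHN",)
).fetchall()
print(rows)  # -> [('1.2.3.1', 'CT CHEST')]
```

Tags that are not part of the indexed subset still require reading the file itself, which is why the choice of main tags matters for query performance.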
I don’t understand your last question, about adding storage capacity for the metadata by adding a node and gaining a proportional performance gain; could you clarify what you mean?
HTH,