Hello,
Thanks for sharing your ideas here. They are all valuable, but you are missing the point about Orthanc.
Our initial goal was not to provide a full-featured RIS/PACS system, but to provide the easiest-to-setup, cross-platform, standards-compliant, open-source DICOM store. Orthanc is designed for one-click install-and-enjoy. I am convinced it is the most accessible software of this kind, and this is made possible by its SQLite back-end: SQLite brings zero-configuration to the DICOM world!
Besides its ease of setup, do not forget other major features that are unique to Orthanc: one-line auto-routing with Lua, REST API to automate imaging flows or to build rich Web applications, and plugins to build complex low-level applications dealing with DICOM.
We always claimed that Orthanc is a very valuable COMPLEMENT to any proprietary, closed-source RIS/PACS.
Do not use it in a mission-critical environment.
You CAN use Orthanc in clinical setups, for instance for auto-routing or to ensure proper QA of DICOM modalities.
> Orthanc assumes that the patient ID is unique to a single patient across all interactions. Imagine a situation where the patient ID in DICOM is being recycled or duplicated. For instance, two hospitals each use an internal ID generator and there is a collision. In this case, the first patient assigned an ID owns the record, regardless of whether they are in fact the correct patient. We actually have a pregnant man on record because of this.
The DICOM standard requires that any two studies be associated with different StudyInstanceUID values. If two separate modalities create two studies with the same StudyInstanceUID, they do not fulfill the most basic DICOM requirements. Because of this, Orthanc will always assign different identifiers to any pair of DICOM studies (except if there is a SHA-1 collision). The same holds at the series and instance levels. In your scenario, even if the two patients are merged, their studies will remain separate.
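To illustrate why a collision is not a practical concern: Orthanc derives its internal identifiers from SHA-1 hashes of the DICOM identifiers, so two studies with different StudyInstanceUID values always get different internal IDs. A minimal Python sketch — the exact concatenation and dash formatting below are illustrative assumptions, not the actual Orthanc source:

```python
import hashlib

def orthanc_like_id(*dicom_identifiers):
    """Illustration: hash the DICOM identifiers with SHA-1 and format the
    digest in dash-separated groups, the way Orthanc identifiers look.
    The concatenation scheme here is an assumption for demonstration."""
    digest = hashlib.sha1("|".join(dicom_identifiers).encode()).hexdigest()
    return "-".join(digest[i:i + 8] for i in range(0, 40, 8))

# Two studies with distinct StudyInstanceUID values get distinct internal
# identifiers, even when they share the same PatientID.
a = orthanc_like_id("PATIENT-1", "1.2.840.113619.2.55.1")
b = orthanc_like_id("PATIENT-1", "1.2.840.113619.2.55.2")
assert a != b
```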
You are only right about the patient level, because PatientID is not required to be globally unique. In this case, you have to use the “/studies” URI instead of the “/patients” URI in the REST API of Orthanc. It is almost trivial to create a version of Orthanc Explorer that queries “/studies” instead of “/patients”… certainly not a 400-hour job.
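As a minimal sketch of that approach — assuming a default Orthanc instance listening on http://localhost:8042 with authentication disabled (adapt the URL and credentials to your setup):

```python
import json
from urllib.request import urlopen

# Assumption: default Orthanc HTTP port, no authentication.
ORTHANC = "http://localhost:8042"

def list_studies(base_url=ORTHANC):
    """Enumerate studies directly through /studies, bypassing the
    patient level (and thus any PatientID collision) entirely."""
    with urlopen(base_url + "/studies") as r:
        study_ids = json.load(r)
    studies = []
    for study_id in study_ids:
        with urlopen("%s/studies/%s" % (base_url, study_id)) as r:
            studies.append(json.load(r))
    return studies

# Example usage (requires a running Orthanc):
#   for s in list_studies():
#       tags = s.get("MainDicomTags", {})
#       print(s["ID"], tags.get("StudyDate"), tags.get("StudyDescription"))
```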
BTW, remember that “Orthanc Explorer” (the built-in Web interface of Orthanc) is an administrative, low-level interface that is fully implemented in a few hundred lines of code. If your setup requires more advanced abilities, just implement another interface on top of the REST API. And share it with the Orthanc community: this is the most basic philosophy behind any open-source project.
> Secondly, Orthanc uses SQLite for its DB; this decision was a HUGE mistake. SQLite uses a single index file and no server front end. It cannot be accessed safely by multiple instances.
No, this is NOT a mistake, but a FEATURE to make Orthanc the easiest-to-setup DICOM server (see above).
If you need PostgreSQL (for a large-scale VNA), just wait a couple of weeks, as I wrote earlier today:
https://groups.google.com/d/msg/orthanc-users/XoFMV0XezJg/vfZKmHqA_tMJ
> If you are somehow able to share the directory, e.g. over NFS, Samba, or s3fs, that file will eventually become corrupt. SQLite is not designed for multiprocess concurrency.
The experimental PostgreSQL back-end stores its files as large objects.
> Also, it is a file sitting on the same filesystem as the rest of the DICOM data.
You can put the SQLite index on another filesystem than the DICOM data (cf. the “StorageDirectory” and “IndexDirectory” configuration options). In particular, the SQLite index can be put on a RAID device to prevent the loss of the index.
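For instance, a fragment of the Orthanc configuration file separating the two — the paths below are placeholders for your own mount points:

```json
{
  "StorageDirectory" : "/mnt/dicom/storage",
  "IndexDirectory" : "/mnt/raid/orthanc-index"
}
```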
> If this file becomes corrupted or damaged, you have now lost all information on all images and must re-import the entire image set. That might work for a single instance sitting in a small clinic. However, we generate 50 GB of images DAILY; re-importing terabytes of data is not realistic at all. Having Orthanc crawl the DICOM directory when it can’t open its SQLite index would probably be a much better idea.
Wait and give PostgreSQL a try once it is available.
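In the meantime, note that a re-import can be scripted against the REST API, since Orthanc accepts raw DICOM files through POST /instances. A minimal Python sketch, assuming an Orthanc on localhost:8042 without authentication:

```python
import os
from urllib.request import Request, urlopen

# Assumption: default Orthanc HTTP port, no authentication.
ORTHANC = "http://localhost:8042"

def reimport_directory(root, base_url=ORTHANC):
    """Walk a directory of DICOM files and re-upload each one through
    POST /instances, rebuilding the index from the files themselves."""
    count = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                body = f.read()
            req = Request(base_url + "/instances", data=body,
                          headers={"Content-Type": "application/dicom"})
            with urlopen(req) as r:
                r.read()
            count += 1
    return count
```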
> Finally, user access controls are a joke.
Please be respectful. Many people have worked hard on Orthanc; it is very easy to damage three man-years of software development with a single mail such as yours. We have always been open to external ideas, and we always answer messages from the community as quickly as possible. If user access is a problem for you, please be constructive and tell us so. We can help (see below).
> I have no idea what use case the creator was imagining, but having each and every user in the same config file as the rest of the system is just bad design. A better solution would have been an LDAP connector, or failing that, a DB-based solution.
Do you know about Apache reverse proxying? Do you realize that the point of Orthanc is to provide a RESTful API that should not be directly exposed on the Internet?
If HTTP Basic authentication is insufficient for you, just put the REST API of Orthanc behind Apache, and protect it with a proper “.htaccess”.
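As a sketch of such a setup — assuming Apache 2.4 with mod_proxy and mod_proxy_http enabled, and a password file created with htpasswd (the paths are placeholders):

```apache
<Location "/orthanc/">
    # Forward requests to the Orthanc REST API listening on localhost.
    ProxyPass "http://localhost:8042/"
    ProxyPassReverse "http://localhost:8042/"
    # Protect the endpoint with HTTP Basic authentication.
    AuthType Basic
    AuthName "Orthanc"
    AuthUserFile "/etc/apache2/orthanc.htpasswd"
    Require valid-user
</Location>
```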
So you need LDAP? Just implement it with a PHP/Django/nginx wrapper around Orthanc. Bottom line.
> Orthanc is nice: it has a slick UI and great licensing terms. I wanted this to work, and we invested 400+ man-hours in our setup (mostly importing extant data and trying to normalize it), only to find that there is no way to make it scale beyond a single instance.
Why didn’t you get in touch with the Orthanc community earlier to ask for assistance?
> It’s odd, because what really breaks this is a couple of early design decisions from the original devs: SQLite instead of MySQL or PostgreSQL (ODBC would have been best IMHO), and managing all admin settings, including user accounts, in config files. We are looking at the possibility of ripping out SQLite in favor of ODBC, and of using LDAP for user auth.
To summarize:
- We are working on a PostgreSQL back-end to replace SQLite.
- LDAP can be implemented at a higher level.
> However, this looks like a pretty fundamental change, and it may be better to simply start over.
Just wait a couple of weeks for PostgreSQL, and share your PHP/node/Django/nginx/whatever wrapper with the open-source community. Contribute to the project instead of recommending that other people trash it!
Sébastien-