Hi there,
I wanted to bring up again an old issue that I thought was solved, but which is now coming back through the window.
The issue is about viewer/DICOMweb communication, and especially the /studies/{uid}/metadata and /series/{uid}/metadata routes.
The root of the issue is a fundamental difference between DICOM archives like dcm4chee and Orthanc: dcm4chee stores all metadata in its relational database, while Orthanc extracts only part of them into its database and then reads the other tags from the filesystem.
In the DICOMweb standard, two routes are affected by this design: the /metadata routes at the study and series levels, which list the metadata of all child instances. As Orthanc needs to read the metadata of every instance from storage to generate these responses, it is significantly slower than dcm4chee.
However, several patches have been attempted to mitigate this issue:
- Enhancements in the DICOMweb plugin to retrieve data from the database (main DICOM tags) or to extrapolate data (reminder: DicomWeb performance seems slow on the metadata query - #11 by jodogne)
- More recently, custom storage of metadata to avoid reading from the storage backend as much as possible
These two improvements have really sped up several DICOMweb APIs, in particular the QIDO routes listing the child series of a study, which no longer need to read from storage.
However, the studies/metadata and series/metadata routes, which list all the metadata of a study/series, cannot rely on the database (unless the full DICOM dictionary were added to the extra tags stored in the database, which would be a killer).
Three years ago, I also asked for an improvement on the OHIF side: initially OHIF was calling /studies/{uid}/metadata, which was a performance killer, since a DICOM study may have 2000 to 8000 instances, and it was putting huge stress on the storage backend.
At that time, OHIF responded and changed its data-fetching method: it now first requests the child series of a study using a QIDO request, and then calls the /series/{uid}/metadata route to get each series' metadata when the user asks to see the images (/series/metadata performance is acceptable, as a series contains only a few hundred instances at this level). (reminder: Change approach for loading study metadata from WADO to QIDO+WADO · Issue #836 · OHIF/Viewers · GitHub)
The combination of these two improvements in OHIF and Orthanc made the OHIF viewer much faster, and I thought we were done with this issue (until today).
We are back to the problem because of a change in OHIF's design and the implementation of hanging protocols.
OHIF built a very nice hanging protocol mechanism, in which a user can choose which studies/series they want to be loaded and displayed.
This relies on constraint definitions; most of them are based on DICOM tag values (e.g. open the series with Modality == 'PT') and could be evaluated by simply loading one instance of each series.
But some of them are computed, especially the isReconstructable rule, which is meant to determine whether a series can be volume-reconstructed in the viewer and opened in a volume viewport (as opposed to a non-reconstructable series, such as a conventional X-ray, which will only be opened in a stack viewport).
Big problem: the reconstructable status cannot be read from a static DICOM tag; it requires knowing the position of each instance in order to figure out whether the slice spacing is constant and therefore whether the volume can be reconstructed (and the logic is even more complicated for some multiframe scintigraphy studies).
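To make the problem concrete, here is a rough sketch of the kind of check this implies for single-frame series (my own simplification, not OHIF's actual implementation; the function name and tolerance are arbitrary):

```python
import numpy as np

def is_reconstructable(instances, tolerance=1e-2):
    """instances: one dict per single-frame instance, with
    'ImageOrientationPatient' (6 floats) and 'ImagePositionPatient' (3 floats)."""
    if len(instances) < 2:
        return False

    # All slices must share the same orientation.
    orientation = np.asarray(instances[0]['ImageOrientationPatient'], dtype=float)
    for inst in instances:
        if not np.allclose(inst['ImageOrientationPatient'], orientation, atol=tolerance):
            return False

    # Project each slice position onto the normal of the image plane...
    normal = np.cross(orientation[:3], orientation[3:])
    distances = sorted(float(np.dot(inst['ImagePositionPatient'], normal))
                       for inst in instances)

    # ...and require a constant spacing between consecutive slices.
    spacings = np.diff(distances)
    return bool(np.allclose(spacings, spacings[0], atol=tolerance))
```

This is exactly why per-instance tags such as ImagePositionPatient are needed for every instance of the series before the hanging protocol can be evaluated.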
So, to evaluate the requested hanging protocols, OHIF is back to a situation where it requests all the metadata of each series: it no longer uses /studies/{uid}/metadata, but concurrently calls the /series/{uid}/metadata route for every series.
So we are back to the situation where OHIF needs the metadata of all instances of the opened study before it can start displaying anything (because of the hanging protocol evaluation), and therefore puts huge stress on the backend by asking for the metadata of thousands of instances.
Now I think we won't be able to get rid of this problem at the viewer level: the /studies/{uid}/metadata route exists in the DICOMweb standard, it will be hard to ask viewers not to use it, and OHIF now has a good reason to ask for all the metadata, as it is needed for an extensive hanging protocol evaluation.
So we are back to the 2020 issue: how to improve the performance of the /studies/metadata and /series/metadata DICOMweb routes in Orthanc.
The hypotheses I have could be:
- Ask OHIF to request the specific SliceLocation tag using a QIDO request, and store SliceLocation as an extra metadata tag in Orthanc's database (a sketch of such a query is given after this list).
=> Problem: this logic only works for single-frame images like CT and PT; for multiframe images the logic is much more complicated and relies on the position sequence tags.
=> Second problem: it would only cover the isReconstructable status, which is today's issue, but not future needs that might require yet another tag, and every viewer editor would have to take this performance issue into account (it does not fix the underlying performance of the /metadata routes)…
- Maybe implement some sort of caching in Orthanc for these routes (a plugin sketch is given after this list)? The nice thing about DICOM is that data are immutable, which makes them favorable for caching.
When a series is created in Orthanc, I guess a JSON cache of the series/{uid}/metadata answer could be generated and stored statically somewhere in the filesystem.
This could rely on the "StableSeries" event to generate the cache, and the cache would be invalidated when a new instance arrives.
Any DICOMweb request to /series/{uid}/metadata would then check whether the cache is available and simply return it, instead of generating the answer from the DICOM storage (only one file read from storage, rather than opening every instance).
The studies/{uid}/metadata route would concatenate all the series caches of the requested study (one file to read per series).
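For the first hypothesis, here is a hedged sketch of what the QIDO query could look like, written with the Python requests library (the server URL is a placeholder; for Orthanc to answer this from its database, the requested tags would presumably have to be declared in the "ExtraMainDicomTags" configuration option):

```python
import requests

QIDO_ROOT = 'http://localhost:8042/dicom-web'  # placeholder DICOMweb root

def get_instance_positions(study_uid, series_uid):
    # Instance-level QIDO-RS query restricted to the positional tags needed
    # for the reconstructability check, instead of the full /metadata answer.
    r = requests.get(
        f'{QIDO_ROOT}/studies/{study_uid}/series/{series_uid}/instances',
        params={'includefield': ['00200032', '00201041']},  # ImagePositionPatient, SliceLocation
        headers={'Accept': 'application/dicom+json'},
    )
    r.raise_for_status()
    return r.json()
```

But, as said above, this only covers the single-frame case and this particular rule.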
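For the second hypothesis, here is a minimal sketch of the caching idea written against the Orthanc Python plugin (it assumes the DICOMweb plugin is mounted under its default /dicom-web root; the /cached-dicom-web route name and the in-memory dictionary, instead of a file stored on the storage backend, are simplifications of mine):

```python
import json
import orthanc

_cache = {}  # SeriesInstanceUID -> DICOMweb JSON metadata (bytes)

def OnChange(changeType, level, resourceId):
    if changeType == orthanc.ChangeType.STABLE_SERIES:
        # Build the cache entry once the series is stable, by calling the
        # DICOMweb route through the internal REST API.
        series = json.loads(orthanc.RestApiGet('/series/%s' % resourceId))
        study = json.loads(orthanc.RestApiGet('/studies/%s' % series['ParentStudy']))
        studyUid = study['MainDicomTags']['StudyInstanceUID']
        seriesUid = series['MainDicomTags']['SeriesInstanceUID']
        _cache[seriesUid] = orthanc.RestApiGetAfterPlugins(
            '/dicom-web/studies/%s/series/%s/metadata' % (studyUid, seriesUid))
    elif changeType == orthanc.ChangeType.NEW_INSTANCE:
        # Invalidate the entry when a new instance arrives; it will be
        # rebuilt at the next StableSeries event.
        instance = json.loads(orthanc.RestApiGet('/instances/%s' % resourceId))
        series = json.loads(orthanc.RestApiGet('/series/%s' % instance['ParentSeries']))
        _cache.pop(series['MainDicomTags']['SeriesInstanceUID'], None)

def GetSeriesMetadata(output, uri, **request):
    # Serve the cached answer when available, fall back to the DICOMweb
    # plugin otherwise.
    studyUid, seriesUid = request['groups']
    answer = _cache.get(seriesUid)
    if answer is None:
        answer = orthanc.RestApiGetAfterPlugins(
            '/dicom-web/studies/%s/series/%s/metadata' % (studyUid, seriesUid))
    output.AnswerBuffer(answer, 'application/dicom+json')

orthanc.RegisterOnChangeCallback(OnChange)
orthanc.RegisterRestCallback(
    '/cached-dicom-web/studies/(.*)/series/(.*)/metadata', GetSeriesMetadata)
```

The study-level route would then simply concatenate the cached per-series arrays. A real implementation would rather live inside the DICOMweb plugin itself and persist the cache on disk, but the sketch shows the event-driven part of the idea.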
I'm posting this message on the Orthanc forum because I think this issue is no longer to be solved at the viewer level, but more likely on the server side (but I will show this message to OHIF anyway).
Of course, this is wide open for discussion; all of this is just my understanding of a complicated problem (and I might be wrong in several places).
Best regards,
Salim