I am facing a tough situation.
I have Orthanc storing many CT exams from October 2020 to today, and we decided to remove the studies from October to December, about 3,500 studies.
For the deletion I wrote a synchronous script that calls Orthanc's REST API (DELETE on /studies/{uid}). Orthanc does its part, but it takes a long time to delete a single study (about 10 minutes or more).
Worse, users who are viewing images are blocked until the study is deleted.
Is there a way to make deletion faster?
Thank you very much!
Sorry I can’t be of any help on this topic, but 10 minutes seems like a very long time - how many series/instances are in a typical study?
Best of luck
This may seem counterintuitive, but deleting files from a filesystem is an expensive operation that takes much more time than expected. You have 3 possibilities:
1- Instead of deleting by studies, create a script that removes DICOM resources at a finer granularity. You could delete by series. You could also delete by batches of, say, 20 instances. This would release the lock on the SQLite database more frequently. You could even consider implementing this logic server-side using a C++ or Python plugin.
2- Implement a C/C++ storage area plugin whose "OrthancPluginStorageRemove" operation merely adds the paths of the files to be removed to a message queue. A separate thread would then do the actual deletion. Ideally, this message queue should be stored persistently (e.g. in a separate SQLite database), so that the deletion process can be restarted after a shutdown. This is a very effective solution that would be totally transparent to the calling scripts.
3- If you use the PostgreSQL/MySQL plugins to store your data, wait for the Orthanc 1.9.2 and PostgreSQL/MySQL 4.0 releases. These important releases will soon allow concurrent access to the database.
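To illustrate option 1, here is a minimal sketch of a client script that deletes one study instance by instance, in small batches. It assumes Orthanc listens on http://localhost:8042 without authentication (adjust the base URL and add credentials as needed); it only relies on the documented /studies/{id}/instances and /instances/{id} REST endpoints:

```python
# Sketch: finer-grained deletion through the Orthanc REST API.
# Assumption: Orthanc on http://localhost:8042, no authentication.
import json
import urllib.request

BASE = 'http://localhost:8042'

def get_json(uri):
    """GET a JSON resource from the Orthanc REST API."""
    with urllib.request.urlopen(BASE + uri) as r:
        return json.loads(r.read())

def delete(uri):
    """Send a DELETE request to the Orthanc REST API."""
    req = urllib.request.Request(BASE + uri, method='DELETE')
    urllib.request.urlopen(req).close()

def batches(items, size):
    """Split a list into chunks of at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def delete_study_in_batches(study_id, batch_size=20):
    # Collect the Orthanc identifiers of all instances in the study
    instances = [i['ID'] for i in get_json('/studies/%s/instances' % study_id)]
    for batch in batches(instances, batch_size):
        for instance_id in batch:
            delete('/instances/' + instance_id)
        # Between batches, the database lock is released, so other
        # requests (e.g. from viewers) can be interleaved
```

Deleting instance by instance is slower overall than one DELETE on the study, but it keeps each individual database transaction short, which is what unblocks the viewers.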
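The deferred-deletion idea behind option 2 can be sketched in a few lines of Python (an actual storage area plugin would of course be written in C/C++ against the Orthanc plugin SDK). The `DeletionQueue` class below is hypothetical; only the concept of a persistent, SQLite-backed queue with a background worker comes from the description above:

```python
# Sketch: persistent deferred-deletion queue. The "remove" callback
# only records the path (fast); a worker does the slow deletion later.
import os
import sqlite3

class DeletionQueue:
    def __init__(self, db_path):
        # The queue lives in its own SQLite database, so pending
        # deletions survive a shutdown of the server
        self.db = sqlite3.connect(db_path)
        self.db.execute('CREATE TABLE IF NOT EXISTS pending(path TEXT)')
        self.db.commit()

    def schedule(self, path):
        """Fast operation, called in place of the actual file removal."""
        self.db.execute('INSERT INTO pending VALUES(?)', (path,))
        self.db.commit()

    def process(self):
        """Run by a background worker: performs the slow deletions."""
        rows = self.db.execute('SELECT rowid, path FROM pending').fetchall()
        for rowid, path in rows:
            if os.path.exists(path):
                os.remove(path)
            self.db.execute('DELETE FROM pending WHERE rowid = ?', (rowid,))
            self.db.commit()
```

Because the queue is stored on disk, any deletions scheduled before a shutdown are simply resumed by the worker at the next startup, which is what makes the scheme transparent to the calling scripts.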