If I run ImportDicomFiles.py to rebuild the database on an existing storage directory, will it just build an index, or will it copy every DICOM file into a new StorageDirectory?

I want to migrate from an SQLite index to a PostgreSQL index, but keep using a filesystem storage backend.

The docs are very clear on how to do this, which is great.

What I am unsure about is this: if I follow https://book.orthanc-server.com/users/replication.html and run ImportDicomFiles.py on the current DICOM StorageDirectory, which contains several TB of data, will it create a “new” DICOM file in the StorageDirectory every time a file is imported?

I fear it might, as that’s probably what you’d want it to do if you were importing from, say, a CD or USB stick.

Would there be any way around this behaviour? If not, will I need to briefly store every file twice?

You could definitely use the Folder Indexer plugin, which maintains the structure of the DICOM source. One downside is that it uses SQLite for its index, so the performance of querying old studies is not good.

Hi, thanks, but the whole reason I want to migrate is to move away from SQLite and use PostgreSQL indexing, because SQLite is too slow for our dataset size.

Hi James,
Another solution is to write a plugin that creates a symlink to the original DICOM location. You need to register a StorageCreate callback (see the source code of the Indexer plugin). In StorageCreate, each time a DICOM file is about to be saved, you create a symlink instead of writing a copy to disk.
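A rough sketch of what that could look like in Python (untested; it assumes the Python plugin’s orthanc.RegisterStorageArea() API, so check the exact callback signatures against the official Python plugin samples; the Indexer plugin itself is written in C++). The paths are placeholders, and the SHA-1 lookup table is just one hypothetical way of mapping incoming content back to its original file:

```python
# symlink_storage.py -- a minimal, untested sketch, not a production plugin.
# Assumes the Python plugin exposes orthanc.RegisterStorageArea(); verify
# the callback signatures against the official Python plugin samples.
import hashlib
import os

import orthanc

OLD_STORAGE = '/var/lib/orthanc/old-storage'   # hypothetical path
NEW_STORAGE = '/var/lib/orthanc/new-storage'   # hypothetical path

# Hypothetical index: SHA-1 of file content -> original path,
# built once by walking the old StorageDirectory.
SOURCE_BY_SHA1 = {}

def BuildSourceIndex():
    for root, dirs, files in os.walk(OLD_STORAGE):
        for f in files:
            path = os.path.join(root, f)
            with open(path, 'rb') as fp:
                SOURCE_BY_SHA1[hashlib.sha1(fp.read()).hexdigest()] = path

def OnCreate(uuid, contentType, data):
    target = os.path.join(NEW_STORAGE, uuid)
    source = SOURCE_BY_SHA1.get(hashlib.sha1(data).hexdigest())
    if source is not None:
        # The incoming content matches an existing file: link, don't copy.
        os.symlink(source, target)
    else:
        # Non-DICOM attachments (or unknown content) get a normal write.
        with open(target, 'wb') as fp:
            fp.write(data)

def OnRead(uuid, contentType):
    # open() follows symlinks, so this works for both cases.
    with open(os.path.join(NEW_STORAGE, uuid), 'rb') as fp:
        return fp.read()

def OnRemove(uuid, contentType):
    # Removes the symlink itself, never the original file.
    os.remove(os.path.join(NEW_STORAGE, uuid))

BuildSourceIndex()
orthanc.RegisterStorageArea(OnCreate, OnRead, OnRemove)
```

Note that hashing several TB up front is itself expensive, so a real plugin would want a smarter lookup (for instance keyed on SOP Instance UID).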

Hi James,

Indeed, the new Orthanc will store files at a new location and, therefore, you’ll have to store all files twice!

I would advise you to use two different folders for the old and new storage (just to make things clearer).
You can certainly modify the Python script to delete each file from the source storage once it has been uploaded to the new Orthanc; see the sketch below.
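For instance, something along these lines (a minimal sketch of the idea, not the actual ImportDicomFiles.py; the URL, credentials and source path are placeholders for your own setup):

```python
# Sketch: walk the old storage, POST each file to the new Orthanc's
# REST API, and delete the source only after a successful upload.
import os

import requests

ORTHANC_URL = 'http://localhost:8042'      # hypothetical new Orthanc
AUTH = ('orthanc', 'orthanc')              # hypothetical credentials
SOURCE = '/var/lib/orthanc/old-storage'    # hypothetical old storage

for root, dirs, files in os.walk(SOURCE):
    for name in files:
        path = os.path.join(root, name)
        with open(path, 'rb') as fp:
            r = requests.post(f'{ORTHANC_URL}/instances',
                              data=fp.read(), auth=AUTH)
        if r.status_code == 200:
            # Upload confirmed: free the space immediately.
            os.remove(path)
        else:
            print(f'Skipping non-DICOM or failed file: {path}')
```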

HTH

Alain

Another option is to set up a temporary server (any machine with enough hard drive space will work) with your preferred settings and just do a normal replication. Once you are sure your data is all there, you erase the old data on your main server, change its configuration to the new one, and do the reverse replication. This way you can maintain uptime during the migration; you only need to notify the users of a “slight slowdown” during maintenance while you redirect them to the temporary server. After the second replication is done, you point everything back to the original server, and you have done it all with no downtime.

In case other people find it helpful: in the end I reworked the ImportDicomFiles.py script so that it is better suited to a database migration and an order of magnitude faster: https://gist.github.com/jphdotam/21581fc4a205072ecf30d2c0c846f117
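For anyone who just wants the general flavour without opening the gist: a classic way to speed up this kind of bulk import is to parallelise the uploads. A generic, heavily simplified sketch (illustration only, not the gist itself; URL, credentials and paths are placeholders):

```python
# Generic illustration: upload with a thread pool instead of one
# file at a time. Not a copy of the gist linked above.
import os
from concurrent.futures import ThreadPoolExecutor

import requests

ORTHANC_URL = 'http://localhost:8042'      # hypothetical
AUTH = ('orthanc', 'orthanc')              # hypothetical
SOURCE = '/var/lib/orthanc/old-storage'    # hypothetical

def Upload(path):
    with open(path, 'rb') as fp:
        r = requests.post(f'{ORTHANC_URL}/instances',
                          data=fp.read(), auth=AUTH)
    return path, r.status_code

paths = [os.path.join(root, f)
         for root, dirs, files in os.walk(SOURCE) for f in files]

# Several concurrent uploads keep both the disk and Orthanc busy.
with ThreadPoolExecutor(max_workers=8) as pool:
    for path, status in pool.map(Upload, paths):
        if status != 200:
            print(f'Failed: {path} (HTTP {status})')
```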
