Help with Index Migration from SQLite to PostgreSQL using python-orthanc-tools

Hello everyone,

I’m working on migrating from a SQLite-backed Orthanc instance to a PostgreSQL-based setup using the official python-orthanc-tools (GitHub - orthanc-team/python-orthanc-tools), specifically the OrthancCloner class.

Source Orthanc:

  • SQLite backend (default install)
  • Running on localhost:8042
  • Uses storage folder E:\Orthanc

Destination Orthanc:

  • PostgreSQL backend enabled
  • Running on localhost:8052
  • Uses the same StorageDirectory (E:\Orthanc)
  • Configured with EnableIndex=true and EnableStorage=false
  • Both Orthancs are installed via the Osimis Windows package (Orthanc v1.12.1, PostgreSQL plugin v5.1)

PostgreSQL Config:

```json
"PostgreSQL": {
  "EnableIndex": true,
  "EnableStorage": false,
  "Host": "localhost",
  "Port": 5432,
  "Database": "",
  "Username": "",
  "Password": "",
  "Lock": true,
  "EnableSsl": false,
  "MaximumConnectionRetries": 10,
  "ConnectionRetryInterval": 5,
  "IndexConnectionsCount": 50,
  "TransactionMode": "ReadCommitted",
  "EnableVerboseLogs": false,
  "HousekeepingInterval": 1,
  "AllowInconsistentChildCounts": false
}
```

Python Migration Script:

```python
from orthanc_tools import OrthancCloner
from orthanc_api_client import OrthancApiClient

source = OrthancApiClient('http://localhost:8042', user='', pwd='')
destination = OrthancApiClient('http://localhost:8052', user='', pwd='')

cloner = OrthancCloner(source=source, destination=destination, worker_threads_count=1)

cloner.execute(existing_changes_only=False)
```

Issue:

While the source Orthanc works fine, I see the following error in the destination Orthanc (PostgreSQL) CMD window when I start migrating:

```
I0611 08:08:39.272332 OrthancRestApi.cpp:163] (http) Receiving a DICOM file of 528298 bytes through HTTP
E0611 08:08:39.301976 PluginsManager.cpp:153] SQLite error code 5
E0611 08:08:39.310073 PluginsManager.cpp:153] SQLite: Cannot run a cached statement
```

This is confusing because the PostgreSQL index is enabled and storage is on disk — not in SQLite. Could it be due to both Orthanc instances accessing the same StorageDirectory? Or is there something wrong with the cloner setup?

Any advice from others who have used OrthancCloner, or who have migrated index backends while preserving disk storage, would be greatly appreciated!

Thanks in advance!

Hi @HenrikPLind

This error comes from a plugin, most likely the old WebViewer plugin, which uses SQLite for its cache; that SQLite DB cannot be accessed from two different processes.

One option is to disable the plugin (e.g. by renaming its extension from .dll to .dll_ or by removing it from the plugins folder). Or, if you want to keep the plugin, you can configure a different CachePath for each Orthanc instance.
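For example, each instance could get its own cache location; a sketch, where "CachePath" is the Web Viewer plugin's cache option and the path is purely illustrative:

```json
{
  "WebViewer": {
    // Give each Orthanc process its own cache so the two instances
    // never open the same SQLite cache file (path is an example)
    "CachePath": "E:/OrthancSourceWebViewerCache"
  }
}
```

The second instance would use the same key with a different folder.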

Note that this error should not happen if you use a different IndexDirectory in each Orthanc instance.

Hope this helps,

Alain.

Hi @alainmazy,

Thank you very much for your helpful response — I’ll definitely look into the plugin/cache configuration you suggested.

I had a follow-up question regarding the migration itself using the Python script from python-orthanc-tools and what I was hoping to achieve.

My goal is to migrate only the index and metadata from an existing SQLite-based Orthanc instance to a new PostgreSQL setup — without migrating or duplicating the DICOM file storage. As such, I used:

```python
cloner.execute(existing_changes_only=True)
```

The migration process completed, although I encountered some cache flushing issues during the operation that forced me to restart the source Orthanc instance. Despite that, the destination (PostgreSQL-based) Orthanc instance now lists all the expected studies and instances in Orthanc Explorer 2.

However, I’ve noticed that I cannot access the metadata of the instances. When inspecting the command prompt logs, I’m seeing errors like:

```
E0611 12:26:44.237501 PluginsManager.cpp:153] The specified path does not point to a regular file: E:\Orthanc\50\db\50db074c-4b6d-49da-850f-f5f98aaae265
```

I confirmed that none of the indexed instance paths exist in the shared StorageDirectory.

This makes me wonder:

  • Could restarting the source Orthanc during migration cause inconsistencies in how the files are indexed?
  • Does the destination Orthanc (with PostgreSQL indexing and EnableStorage=false) perform any rehashing or reinterpretation of the storage layout that could break these references?

Any insight into this behavior would be greatly appreciated — and thanks again for all your work on Orthanc!

Best regards,
Henrik

Hi Henrik,

When you clone an Orthanc into another, you are actually re-ingesting the data into the destination. This is, by the way, the only way to transition from SQLite to PostgreSQL.
This creates new SQL DB entries AND stores the DICOM files at new paths. (Paths are effectively random, so one instance ingested into two Orthancs will be saved at two different places.)

So, there is no way to migrate without duplicating the DICOM file storage.
If you are limited in space, you can migrate studies one by one and delete them from the source (of course, you can script that but I don’t think we have a tool that works that way out of the box).
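A rough sketch of that study-by-study approach (not an out-of-the-box tool; it assumes only the standard Orthanc REST endpoints and the URLs from this thread, with no authentication; you would want credentials, retries and error handling in practice):

```python
# Sketch: migrate studies one by one and delete each from the source once it
# has been copied, so that disk usage never doubles.  Assumes the standard
# Orthanc REST API (GET /studies, GET /studies/{id}/instances,
# GET /instances/{id}/file, POST /instances, DELETE /studies/{id}).
import json
import urllib.request

SOURCE = "http://localhost:8042"       # SQLite-backed Orthanc
DESTINATION = "http://localhost:8052"  # PostgreSQL-backed Orthanc

def _request(method, url, body=None):
    """Minimal HTTP helper; add authentication for a real setup."""
    req = urllib.request.Request(url, data=body, method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def batched(ids, size):
    """Yield IDs in small batches so progress can be logged and resumed."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def migrate_study(study_id):
    """Copy every instance of one study, then delete the study at the source."""
    instances = json.loads(_request("GET", f"{SOURCE}/studies/{study_id}/instances"))
    for instance in instances:
        dicom = _request("GET", f"{SOURCE}/instances/{instance['ID']}/file")
        _request("POST", f"{DESTINATION}/instances", body=dicom)
    # Delete only after every instance has been accepted by the destination.
    _request("DELETE", f"{SOURCE}/studies/{study_id}")

def migrate_all(batch_size=10):
    study_ids = json.loads(_request("GET", f"{SOURCE}/studies"))
    for batch in batched(study_ids, batch_size):
        for study_id in batch:
            migrate_study(study_id)
```

Calling migrate_all() would then drain the source instance study by study.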

HTH,

Alain

Hi @alainmazy,

Thanks again for the clarification!

From what you wrote, it sounds like there is no way to migrate the index alone without also duplicating the DICOM file storage. This raises a concern for me:
If I point the PostgreSQL-based Orthanc instance to the same StorageDirectory (e.g., E:/Orthanc) that’s currently being used by the SQLite-based instance, would that risk overwriting or corrupting the original index references created by the SQLite setup, since the new paths are random?

If that’s the case, then would the correct and safest approach be to:

  1. Create an empty folder, say E:/OrthancPG,
  2. Point the PostgreSQL Orthanc to this folder, and
  3. Let it populate that folder during migration?
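In other words, I would expect the destination configuration to look roughly like this (an illustrative fragment combining the new empty folder with my PostgreSQL settings from above):

```json
{
  "StorageDirectory": "E:/OrthancPG",
  "PostgreSQL": {
    "EnableIndex": true,
    "EnableStorage": false
  }
}
```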

Additionally, I’m exploring the /tools/reconstruct API as a possible way to avoid duplicating the DICOM files and instead re-index what’s already in E:/Orthanc directly into PostgreSQL.
Would using /tools/reconstruct on the PG Orthanc instance be the only supported method if the goal is to reuse the existing storage folder without copying data again?

For context, I’m looking to migrate around 10 million DICOM instances, so I’m also interested in identifying the fastest and most robust approach to perform this migration. If you have any advice on optimizing the performance of the reconstruction or the indexing process at this scale, that would be hugely appreciated.

Thanks so much again for your support - I really appreciate it!

Best regards,
Henrik

Given the entropy, such a path collision should not happen before the sun dies :wink:

Yes

No, /tools/reconstruct cannot be used to “adopt” new files from another storage.

Spinning up a new Orthanc with a separate storage and using the OrthancCloner with existing_changes_only set to False is our preferred way. Configuring around 20 worker threads seems optimal.
This way, you don’t really care about how long it takes: your primary Orthanc still receives data, and it is also copied to the target one. Once the Cloner has copied all the files, you can just stop the primary Orthanc and change the ports of the target one so that it becomes the primary. You can then safely delete the first Orthanc’s data.

But, of course, you need twice the storage during the migration process …

Best,

Alain

Hi @alainmazy,

Thank you for the detailed response!

I have one final follow-up question regarding the setup and migration process.

At the moment, I’ve pointed the StorageDirectory in my PostgreSQL Orthanc setup to a new location that’s indexed by the OrthancIndexer.dll plugin via a configured indexer.json file. This symlinked folder is currently being processed, and based on the logs and performance, it looks like the indexing will take approximately a week to complete.
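For context, my indexer.json looks roughly like this (a sketch; the "Indexer" keys follow my understanding of the Indexer plugin’s configuration, and the folder path is a placeholder):

```json
{
  "Indexer": {
    // Folders the plugin scans for DICOM files (the symlinked folder here)
    "Folders": [ "E:/SymlinkedDicom" ],
    // Seconds between two scans of the folders
    "Interval": 10
  }
}
```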

In parallel, I also have DICOM data that has arrived through C-STORE (not part of the symlink path), which Orthanc is also handling.

What I’m unclear about is how these two data sources (symlinked folder vs. C-STORE received files) are separated or treated internally. My main concern is this:

Once the symlink folder has been fully indexed, if I then use OrthancCloner with existing_changes_only=False, will it start over and reprocess the entire symlink folder all over again? Or will it only transfer the “new” instances - such as those that arrived via C-STORE or were not already handled by the indexer?

If there’s a way to avoid reprocessing already-indexed files via OrthancCloner, and instead just migrate the remaining data, that would really help optimize time, as I’m already expecting around a full week of indexing.

Best regards,
Henrik

Hi @HenrikPLind

Indexing your own StorageDirectory looks very weird to me and the behaviour might be unpredictable.

As you might have seen, we have just released a new advanced storage plugin that should facilitate this type of migration (full documentation).

Your migration scenario has been described in this Docker setup (you can just reuse the configurations on Windows).

If I understand correctly, you would now have 2 “old storages”, the initial SQLite one and the one for PG that has already received some data → you should therefore start another Orthanc instance with the new plugin and yet another empty StorageDirectory and index these 2 “old” folders.

Hope this helps,

Alain.

Hi @alainmazy,

Thanks again - I realize I’m probably nearing the end of your Orthanc IT support rope :sweat_smile:, but I truly appreciate your patience and guidance.

I’d like to clarify the setup, as I think there’s been a slight misunderstanding.

I currently have:

  1. A symlinked folder containing historical DICOMs, which is being indexed by a PostgreSQL-based Orthanc instance using the OrthancIndexer.dll plugin. This folder is shared and was never managed by Orthanc before - as I understand it, this is the intended usage of the Indexer plugin.
  2. A separate set of DICOM instances previously received via C-STORE by an SQLite-based Orthanc instance. These are not part of the symlinked folder, but were stored in the default StorageDirectory managed by Orthanc itself.

Now that the PostgreSQL Orthanc is indexing the symlinked folder, I’d like to migrate the remaining C-STORE-received instances as well.

Am I understanding the storage separation correctly?
And would using OrthancCloner with existing_changes_only=False allow me to migrate just those remaining C-STORE-received instances?

Lastly, since the indexing of the symlinked folder already takes about a week, I’d like to avoid another full week-long migration. Does using OrthancCloner in this way re-transfer everything from scratch, or will it efficiently handle only what’s not already indexed?

Best regards,
Henrik

:slightly_smiling_face:

If the symlinked folder was already indexed by your SQLite instance then, yes, the Cloner will re-import it because it is unable to identify the source of the DICOMs.

If you absolutely don’t want to re-index everything and stick with the old indexer plugin, you should index the symlinked folder (which you are already doing) AND the previous SQLite storage directory. This way, you don’t need the cloner at all.
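In Indexer terms, that would mean listing both folders; an illustrative sketch, where E:/Orthanc is the old SQLite instance’s StorageDirectory from earlier in the thread and the symlinked folder path is a placeholder:

```json
{
  "Indexer": {
    "Folders": [
      "E:/SymlinkedDicom",
      "E:/Orthanc"
    ]
  }
}
```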

But I strongly recommend using the latest advanced-storage :wink:

It’s quite easy to spin up a few Orthancs, so I would strongly recommend rehearsing your scenarios on a test setup with a few files: that way you can analyze the final storage consumption and make sure all studies appear in the final destination…

Best,

Alain.