Postgres - Saving Modalities to DB

Hi Orthanc Team,

We use Orthanc with the Postgres plugin and DicomModalitiesInDatabase set to true, running the osimis/orthanc Docker image.

Recently, I have noticed that when I restart our systems for an update via docker-compose down && docker-compose up, they lose the stored modalities. I tested without the Postgres database and the modalities persist across restarts, so the problem appears to be restricted to the Postgres plugin.

Attached is a docker-compose.yml file that replicates the issue. Note that the Postgres database is persisted using a named docker volume.

To replicate:

  1. Using the attached docker-compose.yml, run docker-compose up -d
  2. Create a modality:
  3. curl --location --request PUT 'http://localhost:8042/modalities/test' --header 'Content-Type: application/json' --data-raw '{"AET":"test","Host":"localhost","Port":"4242"}'
  4. Verify the modality was created correctly: curl --location --request GET 'http://localhost:8042/modalities/test'
  5. Stop Orthanc & Postgres: docker-compose down
  6. Restart the stack: docker-compose up -d
  7. Check the modality configuration: curl --location --request GET 'http://localhost:8042/modalities/test', which now returns a 404.

Thanks again for your work!


docker-compose.yml (809 Bytes)


If I start Orthanc using Docker, and if I use the PostgreSQL database that is running as a service on my computer (without Docker), I cannot reproduce your issue. The modality is properly restored if I stop, then restart Orthanc.

For reference, here is how I start Orthanc:


So I guess the issue lies in how Docker Compose starts the “postgres:13.2-alpine” image: apparently, you should map more volumes than just “/var/lib/postgresql/data”.
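For reference, this is the kind of mapping being discussed; the volume name below is hypothetical, and /var/lib/postgresql/data is the image’s default PGDATA location:

```yaml
services:
  postgres:
    image: postgres:13.2-alpine
    volumes:
      # Default PGDATA location; if PGDATA is overridden in the
      # environment, the mapped path must match it.
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```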


Hi Sébastien,

Thanks very much for the quick response.

I should have added that, as a test, I uploaded an instance to the stack using the same docker-compose.yml and restarted the stack: the instance was persisted, but strangely the modality wasn’t.

I will review the volume mapping for Postgres, but I was intrigued by the difference in persistence between modalities and instances.


Hi Sébastien,

I spent some more time investigating this.

The problem only occurs after stopping and restarting both Orthanc & Postgres.

If I use the osimis/orthanc:21.2.0 container, the problem does not occur.

If I use the 21.4.0 container, which is the first with the PostgreSQL v4 plugin, the problem occurs.

What I have noticed is that with the 21.2.0 image, the modalities are stored in the ServerProperties table. With this version, the server UUID remains the same after both containers are restarted.

However, with 21.4.0, the server UUID is different after the restart. I can see that the old modalities are still stored in the table, but Orthanc is using a new UUID, so when the modalities are read from the database, they appear not to exist.

It appears that with 21.4.0 & the latest Postgres Plugin, the server UUID changes after restarting both the Orthanc container and the database container.
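The difference is visible directly in the database. A hypothetical inspection query (the ServerProperties table and its server/property/value columns are my reading of the v4 schema, so the exact names may differ):

```sql
-- List the per-server properties; each Orthanc instance is keyed by
-- its server identifier, so a new UUID "hides" the old rows.
SELECT server, property, value
  FROM ServerProperties
 ORDER BY server, property;
```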

I hope that helps. Let me know if there is anything more I can do to help you replicate it.



Hello James,

Oh, ok, I got it: You’re looking for the “DatabaseServerIdentifier” global configuration option that was introduced in Orthanc 1.9.2:

Quoting the documentation:

// Arbitrary identifier of this Orthanc server when storing its
// global properties if a custom index plugin is used. This
// identifier is only useful in the case of multiple
// readers/writers, in order to avoid collisions between multiple
// Orthanc servers. If unset, this identifier is taken as a SHA-1
// hash derived from the MAC addresses of the network interfaces,
// and from the AET and TCP ports used by Orthanc. Manually setting
// this option is needed in Docker/Kubernetes environments. (new in
// Orthanc 1.9.2)
“DatabaseServerIdentifier” : “Orthanc1”,
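With the osimis/orthanc image, the option can also be set through an environment variable in docker-compose.yml; a sketch, assuming the image’s ORTHANC__ environment-variable convention (the value itself is arbitrary):

```yaml
services:
  orthanc:
    image: osimis/orthanc
    environment:
      # Pin the identifier so it no longer depends on the container's
      # MAC address; any stable, unique string will do.
      ORTHANC__DATABASE_SERVER_IDENTIFIER: "Orthanc1"
```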

In my previous answer using Docker (and not Docker Compose), Docker was evidently reusing the same MAC address for its network interface, which explains why I hadn’t been able to reproduce the issue.


Ah ha! Thank you for that. I missed that config option.

Thanks again very much for helping out so quickly.