Multiple Orthanc instances connected to the same DB and FS

Hi,

I have an Orthanc instance installed with the PostgreSQL plugin and its storage configured on a shared filesystem (NFS).
Is it possible to connect a second instance with the same configuration?
Can both write simultaneously to the same database and filesystem?

Thanks
Diego

Hi Diego,

We have experience with this kind of setup (several Orthanc instances with only one DB), but in that case there is only one Orthanc writing; the other instances are read-only.
So we do not advise having several Orthanc instances writing to the same DB.
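
If it helps, here is a minimal sketch of what that split can look like with the PostgreSQL plugin (the hostname below is a placeholder). The plugin's "Lock" option must be disabled so that several instances can open the same index; to our knowledge there is no built-in read-only switch, so keeping the readers read-only is a matter of routing all incoming DICOM to the single writer:

"PostgreSQL" : {
  "EnableIndex" : true,
  "EnableStorage" : false,        // DICOM files stay on the shared filesystem
  "Host" : "db.example.local",    // placeholder: the shared PostgreSQL server
  "Port" : 5432,
  "Database" : "orthanc",
  "Lock" : false                  // otherwise the first instance locks out the others
}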

HTH,

Thanks for the reply.
But why wouldn’t you advise writing simultaneously to the DB?
Any serious DB engine should handle this without problems.
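
To illustrate the point, here is a quick sketch (Python with psycopg2; the connection string is a placeholder) that writes to one table from several connections at once, as two Orthanc instances would, and runs without errors:

import threading
import psycopg2

DSN = "dbname=test user=postgres"  # placeholder connection string

def writer(n):
    # One connection per thread, like separate Orthanc instances would have
    conn = psycopg2.connect(DSN)
    with conn, conn.cursor() as cur:  # commits the transaction on success
        cur.execute("INSERT INTO demo(value) VALUES (%s)", (n,))
    conn.close()

setup = psycopg2.connect(DSN)
with setup, setup.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS demo(value INTEGER)")
setup.close()

threads = [threading.Thread(target=writer, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All inserts succeed: plain concurrent INSERTs do not conflict at the engine level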

Hi Diego,

Indeed, the DB engine would support it.

Orthanc was first designed to work with SQLite, which is a local DB that does not support concurrent access. The ServerIndex class, which is the central point of interaction with a DB, is still heavily influenced by this first SQLite implementation.

In order to really support multiple Orthanc instances writing to the same DB, we would need to clearly identify whether a transaction is reading, writing, or both, and we would need to support re-issuing transactions that fail because of a conflict. This is partially explained here: https://bitbucket.org/sjodogne/orthanc/issues/83/serverindex-shall-implement-retries-for-db
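
To give an idea, the missing retry logic would amount to something like the sketch below (illustrative Python with psycopg2, not actual Orthanc code, which is C++; the function name is hypothetical):

import psycopg2
from psycopg2 import errors

def run_with_retry(conn, work, max_attempts=5):
    """Re-issue a transaction that failed because of a concurrency conflict."""
    for attempt in range(max_attempts):
        try:
            with conn:                      # one transaction: commits on success,
                with conn.cursor() as cur:  # rolls back automatically on error
                    work(cur)               # the statements of the transaction
            return
        except (errors.SerializationFailure, errors.DeadlockDetected):
            continue                        # conflict with another writer: retry
    raise RuntimeError("transaction still conflicting after %d attempts" % max_attempts)

For instance: run_with_retry(conn, lambda cur: cur.execute("UPDATE ...")). Replaying is only safe if the transaction is known to be self-contained, which is presumably why reading and writing transactions have to be told apart first.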

Hi Diego.

I tried with two servers doing read/write tasks against the same Postgres DB.

The servers crash with these errors:

I am now trying with only one server writing and the other read-only, but in this case I will not have load balancing or fault tolerance.

E0219 13:02:33.325155 ServerIndex.cpp:968] EXCEPTION [Error with the database engine]
E0219 13:02:33.340782 ServerContext.cpp:420] Store failure
E0219 13:02:34.012763 ServerIndex.cpp:968] EXCEPTION [Error with the database engine]
E0219 13:02:34.059644 ServerContext.cpp:420] Store failure
E0219 13:02:34.372191 ServerIndex.cpp:968] EXCEPTION [Error with the database engine]
E0219 13:02:34.387821 ServerContext.cpp:420] Store failure
E0219 13:02:46.405305 PluginsManager.cpp:164] PostgreSQL error: ERROR: current transaction is aborted, commands ignored until end of transaction block
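
That last message is standard PostgreSQL behaviour: once one statement inside an open transaction fails, every subsequent statement is refused until the transaction is rolled back. A minimal reproduction (Python with psycopg2, placeholder connection string):

import psycopg2
from psycopg2 import errors

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder
cur = conn.cursor()
try:
    cur.execute("SELECT 1/0")  # any failing statement aborts the open transaction
except errors.DivisionByZero:
    pass
try:
    cur.execute("SELECT 1")    # refused: the transaction is in the aborted state
except errors.InFailedSqlTransaction as e:
    print(e)  # current transaction is aborted, commands ignored until end of transaction block
conn.rollback()                # only a rollback clears the aborted state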

In my test configuration I point both servers to the same Windows share and the same DB, but each server uses its own local cache for the viewer:

Server 10.2.0.38 with PostgreSQL and the "Orthanc" Windows share (with a local cache for the viewer)
Server 10.2.0.35 with the Orthanc service (with a local cache for the viewer)

orthanc.json

"StorageDirectory" : "\\\\10.2.0.38\\Orthanc",   // backslashes must be escaped in JSON

postgresql.json

{
  /**
   * Configuration to use PostgreSQL instead of the default SQLite
   * back-end of Orthanc. You will have to install the
   * "orthanc-postgresql" package to take advantage of this feature.
   **/
  "PostgreSQL" : {
    // Enable the use of PostgreSQL to store the Orthanc index?
    "EnableIndex" : true,

    // Enable the use of PostgreSQL to store the DICOM files?
    "EnableStorage" : false,

    // Option 1: Specify explicit authentication parameters
    "Host" : "10.2.0.38",
    "Port" : 5432,
    "Database" : "orthanc",
    "Username" : "*******",
    "Password" : "********",

    // Option 2: Authenticate using PostgreSQL connection URI
    // "ConnectionUri" : "postgresql://orthanc_user:my_password@localhost:5432/orthanc_db",

    // Optional: Disable the locking of the PostgreSQL database
    "Lock" : false
  }
}

webviewer.json (both servers use a local path for the cache)

{
  /**
   * The following options control the configuration of the Orthanc
   * Web Viewer plugin.
   **/
  "WebViewer" : {
    /**
     * The location of the cache of the Web viewer. By default, the
     * cache is located inside the storage directory of Orthanc.
     **/
    "CachePath" : "E:\\OrthancWebCache"

    /**
     * The maximum size for the cached images, in megabytes. By
     * default, a cache of 100 MB is used.
     **/
    // "CacheSize" : 10,

    /**
     * The number of threads that are used by the plugin to decode
     * the DICOM images.
     **/
    // "Threads" : 16
  }
}

Great test! Thanks for the example.
I will follow a one-writer/multiple-readers scheme.