Disable metadata storage on disk


For a few days I have been trying to figure out how metadata is used and stored.
I tried a few configurations, as follows:

  1. Orthanc using POSTGRES with options EnableIndex:true and EnableStorage:true.
    This configuration stores metadata, indexes, and DICOM files in the POSTGRES database. This approach is not very clean for me, since I need to maintain Docker volumes for the DICOM files (a few TB of data).
    Using this configuration, Orthanc metadata is stored in the POSTGRES database and NO FILES ARE CREATED in my Orthanc Docker volume.

  2. Orthanc using POSTGRES with options EnableIndex:true and EnableStorage:false.
    I use a third-party plugin to store the DICOM files in S3 (https://github.com/radpointhq/orthanc-s3-storage). This approach is very interesting for me because the DICOM files are stored in the AWS cloud, where I can configure a storage class, retention, or replication.
    If the storage is handled by the S3 plugin and the PostgreSQL plugin option EnableIndex is set to true, why does Orthanc continue to store metadata as JSON files on the disk (Docker volume)?
    Is there a way to store the metadata in PostgreSQL with the current config? I don’t understand why this is happening, since PostgreSQL keeps the index.
    It is hard to maintain 3 different storages (S3, the PostgreSQL database, and Orthanc metadata), and I think the metadata could be kept in PostgreSQL, since the storage is not handled by Orthanc.
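
For reference, this is roughly the relevant part of my configuration for case 2 (a sketch only; connection details omitted and values are placeholders):

```json
{
  "PostgreSQL": {
    "EnableIndex": true,
    "EnableStorage": false,
    "Host": "postgres",
    "Database": "orthanc"
  }
}
```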

Have a nice day!

This looks like a problem in how that S3 plugin is implemented:


if (type == OrthancPluginContentType_Dicom) {
  path = GetPathStorage(uuid);
  ok = s3->UploadFileToS3(path, content, size);
} else if (type == OrthancPluginContentType_DicomAsJson) {
  path = GetPathInstance(uuid);
  Utils::writeFile(content, size, path);
  ok = true;
} else {
  // ...
}


Orthanc passes the JSON file to be stored, but for some reason the plugin writes it to disk instead of uploading it to S3 as you would expect. It would not be hard to fix, but you would need to tweak the storage path for the JSON, because the DICOM branch uses the bare uuid as the file name.


Thanks for your response.

The result is the same (JSON files are stored on the disk) using the Google Cloud Platform plugin.
Is that plugin's implementation the same as the S3 plugin's?


On Tuesday, June 23, 2020, at 21:50:38 UTC+3, jake...@gmail.com wrote:

I’m not sure, do you have a link to its source code? If you mean this one, https://book.orthanc-server.com/plugins/google-cloud-platform.html, then it does not look like a storage plugin. You might still just be seeing the behavior of the other storage plugin, or the default behavior if you did not have a storage plugin.

The storage plugins are typically not a lot of code, you are looking for a function with this signature:


static OrthancPluginErrorCode StorageCreate(const char* uuid,
                                            const void* content,
                                            int64_t size,
                                            OrthancPluginContentType type)


Then see what they do based on the type variable. In the S3 one, I found it here:


In case you don’t read code, I will explain it briefly (ignore this if you do). Orthanc calls a function in the plugin with the instance uuid, the data, and a variable telling the plugin whether this is the DICOM data, the JSON data, or something else. The S3 plugin is saying: if it is DICOM, upload it to S3 (“s3->UploadFileToS3”), but if it is JSON, write it to disk (“Utils::writeFile”). Orthanc only hands the data over to the plugin; the plugin controls where it gets stored.

The GCP plugin I found only implements OnChangeCallback (https://hg.orthanc-server.com/orthanc-gcp/file/tip/Plugin/Plugin.cpp), so it is not controlling the storage location as far as I can tell.


Hmm, I will try to dig deeper into the implementation of these plugins.
Thanks for your time 🙂

On Wednesday, June 24, 2020, at 01:05:35 UTC+3, jake...@gmail.com wrote: