Storing DICOM Images from Orthanc in My S3 Bucket

Hi Everyone,
I am facing some trouble storing the DICOM data uploaded to my Orthanc server in my S3 bucket.
So far this is what I’ve done :

  1. I have connected two Orthanc servers by peering them together (each one is added to the other's configuration file). One runs locally and the other is deployed in the cloud using Docker behind an nginx configuration.
  2. In the Docker configuration file (docker-compose.yml) I have set the access credentials for my S3 bucket, which I can even ping from my cloud instance, and I have also changed Orthanc's database.
  3. Along with that, I have added a Lua script to my cloud Orthanc: whenever it encounters a new study in its explorer, it stores some metadata in the configured database and, per the expected behaviour of my configuration, the images should be stored in my S3 bucket.

Question: In the normal docker-orthanc setup the images are saved inside the local container, but I have changed the storage to point at my S3 bucket. I have already checked my bucket settings and given my AWS ID full access (including full access to S3), yet the images are unexpectedly not being saved in the bucket.
There is no entry in my S3 bucket; the images are still being saved locally in the cloud Docker container.

If anyone is familiar with this issue, please help me out.

Thanks in advance.
I can share both the configuration files if needed.

Himanshu.

Hi,

As always, the first thing to do is check your startup logs in verbose mode to make sure the S3 plugin is enabled and configured correctly.
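For instance, a quick way to see which plugins actually registered is to scan the startup log for the registration lines. A minimal sketch (the line format is the usual PluginsManager output; adapt the pattern if your build logs differently):

```python
import re

# Minimal sketch: list the plugins that registered during Orthanc startup.
# Matches PluginsManager lines such as:
#   W1112 12:47:44.060637 MAIN PluginsManager.cpp:274] Registering plugin 'postgresql-storage' (version 6.2)
PLUGIN_RE = re.compile(r"Registering plugin '([^']+)' \(version ([^)]+)\)")

def registered_plugins(log_text):
    """Return a {plugin_name: version} dict for every registration in the log."""
    return dict(PLUGIN_RE.findall(log_text))

# Usage:
# print(registered_plugins(open("orthanc.log").read()))
```

If the S3 plugin does not appear in that dict, it was never loaded in the first place.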

Alain.

Hey alain,

I have already checked every line in the logs and cannot find any error in the S3 storage lines; on the contrary, they confirm that the S3 plugin is successfully enabled and configured correctly.
I guess the issue is with this configuration (from the orthanc.json file in my Docker container):
"AwsS3Storage": {
  "EnableAwsSdkLogs": true,
  "Region": "ap-south-1",
  "RequestTimeout": 3000,
  "BucketName": "mybucket",
  "AccessKey": "xyz",
  "SecretKey": "xyz"
},
I had set the request timeout to 3 seconds, but even after changing it to 60 seconds nothing gets uploaded. I have also made sure that my IAM user has all the permissions needed for Orthanc to write/upload to S3.
If I am wrong somewhere, please let me know; I am completely new to this.

Also, I am sharing my complete orthanc.json file here:
{
  "StorageAccessOnFind": "Always",
  "Transfers": {
    "MaxHttpRetries": 5
  },
  "DicomWeb": {
    "Root": "/dicom-web/",
    "EnableWado": true,
    "Enable": true,
    "Ssl": false,
    "MetadataWorkerThreadsCount": 4,
    "StudiesMetadata": "Full",
    "EnableMetadataCache": true,
    "SeriesMetadata": "Full",
    "WadoRoot": "/wado"
  },
  "PostgreSQL": {
    "EnableSsl": false,
    "Database": "orthanc",
    "Username": "xyz",
    "EnableStorage": false,
    "Port": 5432,
    "TransactionMode": "ReadCommitted",
    "EnableIndex": true,
    "EnableVerboseLogs": true,
    "Host": "x.x.x.x",
    "Password": "xyz",
    "Lock": false
  },
  "AwsS3Storage": {
    "EnableAwsSdkLogs": true,
    "Region": "ap-south-1",
    "RequestTimeout": 3000,
    "BucketName": "mybucket",
    "AccessKey": "xyz",
    "SecretKey": "xyz"
  },
  "Housekeeper": {
    "Enable": true,
    "Schedule": {
      "Monday": ["1-6"],
      "Tuesday": ["1-6"],
      "Wednesday": ["1-6"],
      "Thursday": ["1-6"],
      "Friday": ["1-6"],
      "Saturday": ["1-6"],
      "Sunday": ["1-6"]
    }
  },
  "HttpRequestTimeout": 3600,
  "DelayedDeletion": {
    "Enable": true
  },
  "StableAge": 60,
  "AuthenticationEnabled": true,
  "DicomServerEnabled": true,
  "HttpTimeout": 3600,
  "DeIdentifyLogs": false,
  "RegisteredUsers": {
    "admin": "xyz"
  },
  "LuaScripts": [
    "/usr/share/orthanc/Scripts/myscript.lua"
  ],
  "StorageDirectory": "/var/lib/orthanc/db",
  "RemoteAccessAllowed": true,
  "HttpsCACertificates": "/etc/ssl/certs/ca-certificates.crt",
  "Plugins": [
    "/run/orthanc/plugins",
    "/usr/share/orthanc/plugins"
  ],
  "Gdcm": {
    "Throttling": 4,
    "RestrictTransferSyntaxes": [
      "1.2.840.10008.1.2.4.90",
      "1.2.840.10008.1.2.4.91",
      "1.2.840.10008.1.2.4.92",
      "1.2.840.10008.1.2.4.93"
    ]
  },
  "OrthancExplorer2": {
    "Enable": true,
    "IsDefaultOrthancUI": false
  }
}

If anyone can identify a problem here, please let me know.

Thanks.

Himanshu

Hi Himanshu,

There can be many kinds of underlying issues (missing IAM policy, bucket policy, Orthanc configuration, etc.), so it is not easy to debug your issue on this forum.

However, we have a working Orthanc AWS deployment sample, where you can check a working configuration of the S3 plugin:
https://github.com/aws-samples/orthanc-cdk-deployment

It is worth having a look and comparing it with your solution; maybe you can spot the difference!

Please let me know if this helps!

Kind regards,
Tamas

Hi tamassanta,

I truly value your cooperation.
First of all, let me describe my entire configuration one more time:

  1. I have used the orthanc-docker configuration and deployed that part on my EC2 instance.
  2. I have built my own image, following the configuration rules given in the documentation, rather than using one of the existing images such as jodogne or orthancteam.
  3. In the configuration I am trying to use three kinds of storage: PostgreSQL, an S3 bucket, and local (container) storage. I am using the local container storage as a fallback/backup.
  4. Important (the actual problem): the PostgreSQL database is on a different server and I can connect to it and write data perfectly, but the S3 bucket does not contain the DICOM images.

I have already checked various things; I am listing them one by one:

  1. IAM user permissions: I have checked the permissions and given the user full access via "AmazonS3FullAccess".
  2. IAM credentials in the configuration file: I have checked the access key and secret key multiple times and they are absolutely correct. I have also tested them by sending a test file from my EC2 instance and listing the S3 contents in the terminal with the AWS CLI, which confirms the connection. The container uses the plugin, which connects to the S3 bucket directly through the AWS SDK.
  3. I have not defined any bucket policy on my S3 bucket.

According to the Orthanc logs, there is a conflict between the plugins, related to multiple plugins trying to register the storage area, mainly the two storage plugins (PostgreSQL and S3):
Log:
E1112 12:47:44.085775 MAIN PluginsManager.cpp:201] Exception while invoking plugin service 1016: Another plugin has already registered a custom storage area
W1112 12:47:44.085786 MAIN PluginsManager.cpp:158] The storage area plugin will retry up to 10 time(s) in the case of a collision
I1112 12:47:44.059779 MAIN PluginsManager.cpp:316] (plugins) Found a shared library: "/usr/share/orthanc/plugins/libOrthancPostgreSQLStorage.so"
W1112 12:47:44.060637 MAIN PluginsManager.cpp:274] Registering plugin 'postgresql-storage' (version 6.2)

I am trying to configure things so that all the metadata is stored in PostgreSQL, all the media files (DICOM images) go to the S3 bucket, and the local container storage serves as a fallback option.

In the configuration I have tried both of the following:

  1. In PostgreSQL: "EnableStorage": false; in S3: the transfer method set to "transfer_manager". I know that I do not need to mention the transfer method explicitly, but I still tried it in case it should not be "direct". It gave the same result.
  2. This is my current configuration:

{
  "StorageAccessOnFind": "Always",
  "Transfers": {
    "MaxHttpRetries": 5
  },
  "DicomWeb": {
    "Root": "/dicom-web/",
    "EnableWado": true,
    "Enable": true,
    "Ssl": false,
    "MetadataWorkerThreadsCount": 4,
    "StudiesMetadata": "Full",
    "EnableMetadataCache": true,
    "SeriesMetadata": "Full",
    "WadoRoot": "/wado"
  },
  "PostgreSQL": {
    "EnableSsl": false,
    "Database": "mydb",
    "EnableStorage": true,
    "Port": 5432,
    "TransactionMode": "ReadCommitted",
    "EnableVerboseLogs": true,
    "Host": "10.12.0.92",
    "Password": "password",
    "EnableIndex": true,
    "Username": "username",
    "Lock": false
  },
  "AwsS3Storage": {
    "EnableAwsSdkLogs": true,
    "ServerSideEncryption": "AES256",
    "Region": "ap-south-1",
    "RequestTimeout": 60000,
    "BucketName": "my-s3-orthnac",
    "AccessKey": "zyz",
    "EnableStorage": true,
    "SecretKey": "xyz"
  },
  "Storage": {
    "EnableStorage": true,
    "StorageAreas": "PostgreSQL,AWS_S3_Storage,Local",
    "Local": {
      "EnableStorage": true,
      "Path": "/var/lib/orthanc/db"
    },
    "Type": "Composite"
  },
  "Housekeeper": {
    "Enable": true,
    "Schedule": {
      "Monday": ["1-6"],
      "Tuesday": ["1-6"],
      "Wednesday": ["1-6"],
      "Thursday": ["1-6"],
      "Friday": ["1-6"],
      "Saturday": ["1-6"],
      "Sunday": ["1-6"]
    }
  },
  "HttpRequestTimeout": 3600,
  "DelayedDeletion": {
    "Enable": true
  },
  "StableAge": 60,
  "AuthenticationEnabled": true,
  "DicomServerEnabled": true,
  "HttpTimeout": 3600,
  "Postgresql": {
    "User": "xyz"
  },
  "DeIdentifyLogs": false,
  "RegisteredUsers": {
    "admin": "xyz"
  },
  "LuaScripts": [
    "/usr/share/orthanc/Scripts/writeToDatabase.lua"
  ],
  "StorageDirectory": "/var/lib/orthanc/db",
  "RemoteAccessAllowed": true,
  "HttpsCACertificates": "/etc/ssl/certs/ca-certificates.crt",
  "Plugins": [
    "/run/orthanc/plugins",
    "/usr/share/orthanc/plugins"
  ],
  "Gdcm": {
    "Throttling": 4,
    "RestrictTransferSyntaxes": [
      "1.2.840.10008.1.2.4.90",
      "1.2.840.10008.1.2.4.91",
      "1.2.840.10008.1.2.4.92",
      "1.2.840.10008.1.2.4.93"
    ]
  },
  "OrthancExplorer2": {
    "Enable": true,
    "IsDefaultOrthancUI": false
  }
}

Could you please assist me with this?
Thanks in advance.

Himanshu.

"EnableStorage" should be set to false if you are using S3, hence the error message stating that you have two plugins registered for the storage area.
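In other words, something along these lines (a sketch showing only the relevant sections; the PostgreSQL plugin keeps only the index, while the S3 plugin is the single registered storage area):

```json
"PostgreSQL": {
  "EnableIndex": true,
  "EnableStorage": false
},
"AwsS3Storage": {
  "Region": "ap-south-1",
  "BucketName": "mybucket"
}
```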

Hey alain,

I tried setting "EnableStorage": false in all the places, but it still did not work. Updated configuration:
{
  "StorageAccessOnFind": "Always",
  "Transfers": {
    "MaxHttpRetries": 5
  },
  "DicomWeb": {
    "Root": "/dicom-web/",
    "EnableWado": true,
    "Enable": true,
    "Ssl": false,
    "MetadataWorkerThreadsCount": 4,
    "StudiesMetadata": "Full",
    "EnableMetadataCache": true,
    "SeriesMetadata": "Full",
    "WadoRoot": "/wado"
  },
  "PostgreSQL": {
    "EnableSsl": false,
    "Database": "mydb",
    "EnableStorage": false,
    "Port": 5432,
    "TransactionMode": "ReadCommitted",
    "EnableVerboseLogs": true,
    "Host": "10.12.0.92",
    "Password": "password",
    "EnableIndex": true,
    "Username": "username",
    "Lock": false
  },
  "AwsS3Storage": {
    "EnableAwsSdkLogs": true,
    "ServerSideEncryption": "AES256",
    "Region": "ap-south-1",
    "RequestTimeout": 60000,
    "BucketName": "my-s3-orthnac",
    "AccessKey": "zyz",
    "EnableStorage": false,
    "SecretKey": "xyz"
  },
  "Storage": {
    "EnableStorage": false,
    "StorageAreas": "PostgreSQL,AWS_S3_Storage,Local",
    "Local": {
      "EnableStorage": false,
      "Path": "/var/lib/orthanc/db"
    },
    "Type": "Composite"
  },
  "Housekeeper": {
    "Enable": true,
    "Schedule": {
      "Monday": ["1-6"],
      "Tuesday": ["1-6"],
      "Wednesday": ["1-6"],
      "Thursday": ["1-6"],
      "Friday": ["1-6"],
      "Saturday": ["1-6"],
      "Sunday": ["1-6"]
    }
  },
  "HttpRequestTimeout": 3600,
  "DelayedDeletion": {
    "Enable": true
  },
  "StableAge": 60,
  "AuthenticationEnabled": true,
  "DicomServerEnabled": true,
  "HttpTimeout": 3600,
  "Postgresql": {
    "User": "xyz"
  },
  "DeIdentifyLogs": false,
  "RegisteredUsers": {
    "admin": "xyz"
  },
  "LuaScripts": [
    "/usr/share/orthanc/Scripts/writeToDatabase.lua"
  ],
  "StorageDirectory": "/var/lib/orthanc/db",
  "RemoteAccessAllowed": true,
  "HttpsCACertificates": "/etc/ssl/certs/ca-certificates.crt",
  "Plugins": [
    "/run/orthanc/plugins",
    "/usr/share/orthanc/plugins"
  ],
  "Gdcm": {
    "Throttling": 4,
    "RestrictTransferSyntaxes": [
      "1.2.840.10008.1.2.4.90",
      "1.2.840.10008.1.2.4.91",
      "1.2.840.10008.1.2.4.92",
      "1.2.840.10008.1.2.4.93"
    ]
  },
  "OrthancExplorer2": {
    "Enable": true,
    "IsDefaultOrthancUI": false
  }
}

Then I also tried enabling it for S3 only, but that did not work either.
Could you clarify your point for me?

I am also sharing the logs with you, in case they help you understand the situation better and identify the problem.
14NovLog.txt (640.7 KB)

Thanks
Himanshu.

You should disable the DelayedDeletion plugin; it is also a storage plugin, and it is relevant only when working with local disk storage.
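Concretely, that would mean changing this section (sketch):

```json
"DelayedDeletion": {
  "Enable": false
}
```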

Okay.
And should I use the composite storage approach, or should I simply use only the PostgreSQL and S3 configuration, enabling storage for S3 only?

Also, could you please share an updated Orthanc configuration as you would write it? I shall try that too.

Hi Everyone,
I tried a number of things, but really, special thanks to alainmazy: after disabling the "DelayedDeletion" plugin, my data is finally arriving in the S3 bucket.

Now I have some doubts of my own, if someone can clear them up:

  1. I want to know whether disabling the DelayedDeletion plugin will affect any other functionality.
  2. Also, the data arrives as .dcm files, but I expected only image/media files there instead.
  3. Can I add the data to S3 in an organized way? Right now all the .dcm files arrive one by one with no clear indication of which patient each file belongs to.
  4. Also, I have the Housekeeper plugin enabled, and I do not yet know its role in my configuration.
  5. Also, I want to use the OHIF viewer and Orthanc viewer plugins (using environment variables if possible); could someone help me set those up in the docker-orthanc configuration?
  6. And lastly, I do not know which .dcm files belong to which patient; if I want to store the respective studies somewhere, how can I get them?

Also, here is my current configuration file. If anyone sees an issue that might create problems later, when I have more data, please point it out; it would be a great help:

{
  "StorageAccessOnFind": "Always",
  "Transfers": {
    "MaxHttpRetries": 5
  },
  "DicomWeb": {
    "Root": "/dicom-web/",
    "EnableWado": true,
    "Enable": true,
    "Ssl": false,
    "MetadataWorkerThreadsCount": 4,
    "StudiesMetadata": "Full",
    "EnableMetadataCache": true,
    "SeriesMetadata": "Full",
    "WadoRoot": "/wado"
  },
  "PostgreSQL": {
    "EnableSsl": false,
    "Database": "orthanc",
    "Username": "username",
    "EnableStorage": false,
    "Port": 5432,
    "TransactionMode": "ReadCommitted",
    "EnableIndex": true,
    "EnableVerboseLogs": true,
    "Host": "10.12.0.92",
    "Password": "password",
    "Lock": false
  },
  "AwsS3Storage": {
    "EnableAwsSdkLogs": true,
    "Region": "ap-south-1",
    "RequestTimeout": 60000,
    "BucketName": "my-s3-orthnac",
    "AccessKey": "myaccesskey",
    "SecretKey": "mysecretkey"
  },
  "Housekeeper": {
    "Enable": true,
    "Schedule": {
      "Monday": ["1-6"],
      "Tuesday": ["1-6"],
      "Wednesday": ["1-6"],
      "Thursday": ["1-6"],
      "Friday": ["1-6"],
      "Saturday": ["1-6"],
      "Sunday": ["1-6"]
    }
  },
  "HttpRequestTimeout": 3600,
  "StableAge": 90,
  "AuthenticationEnabled": true,
  "DicomServerEnabled": true,
  "HttpTimeout": 3600,
  "DeIdentifyLogs": false,
  "RegisteredUsers": {
    "admin": "xyz"
  },
  "LuaScripts": [
    "/usr/share/orthanc/Scripts/writeToDatabase.lua"
  ],
  "StorageDirectory": "/var/lib/orthanc/db",
  "RemoteAccessAllowed": true,
  "HttpsCACertificates": "/etc/ssl/certs/ca-certificates.crt",
  "Plugins": [
    "/run/orthanc/plugins",
    "/usr/share/orthanc/plugins"
  ],
  "Gdcm": {
    "Throttling": 4,
    "RestrictTransferSyntaxes": [
      "1.2.840.10008.1.2.4.90",
      "1.2.840.10008.1.2.4.91",
      "1.2.840.10008.1.2.4.92",
      "1.2.840.10008.1.2.4.93"
    ]
  },
  "OrthancExplorer2": {
    "Enable": true,
    "IsDefaultOrthancUI": false
  }
}

I truly value everyone's cooperation.

Himanshu

1. I want to know whether disabling the DelayedDeletion plugin will affect any other functionality.

This plugin will only, as its name indicates, delay delete operations; if you do not delete anything, it will have no impact. Furthermore, it is only relevant with filesystem storage, not S3.

2. Also, the data arrives as .dcm files, but I expected only image/media files there instead.

.dcm files are image files (among other things): they are DICOM files. Please read a little about DICOM; that will help you when working with Orthanc.
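If you are curious, a DICOM (Part 10) file can even be recognized programmatically: it starts with a 128-byte preamble followed by the magic bytes "DICM". A minimal sketch:

```python
def is_dicom_file(path):
    """Check the DICOM Part 10 signature: a 128-byte preamble, then b'DICM'."""
    with open(path, "rb") as f:
        f.seek(128)
        return f.read(4) == b"DICM"
```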

3. Can I add the data to S3 in an organized way? Right now all the .dcm files arrive one by one with no clear indication of which patient each file belongs to.

You are not supposed to inspect or use the .dcm files in your bucket directly; you need to go through Orthanc to interact with them.

You can use the Orthanc GUI to list all the patients, for instance, and through Orthanc you can export a patient's studies as a ZIP file or send them to another modality such as your PACS.

4. Also, I have the Housekeeper plugin enabled, and I do not yet know its role in my configuration.

Please disable this plugin until you have learned more about Orthanc. Leaving it enabled will not cause any trouble or data loss, though.

5. Also, I want to use the OHIF viewer and Orthanc viewer plugins (using environment variables if possible); could someone help me set those up in the docker-orthanc configuration?

You need to read the documentation before asking for help; everybody's time on this forum is valuable.

6. And lastly, I do not know which .dcm files belong to which patient; if I want to store the respective studies somewhere, how can I get them?

As mentioned above, you should not deal with the .dcm files directly. For this particular question, use the Orthanc GUI to locate your patient (for instance through the list of patients); clicking on the patient will display the list of studies. Once you have located the study you are interested in, you can use the Download ZIP button available in the GUI.
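The same export is also available through the REST API via the /studies/{id}/archive route. A minimal sketch, assuming Orthanc listens on localhost:8042 with the credentials from your RegisteredUsers section (the study identifier below is hypothetical):

```python
import base64
import urllib.request

def study_archive_url(base_url, study_id):
    """Build the REST URL that returns a whole study as a ZIP archive."""
    return f"{base_url.rstrip('/')}/studies/{study_id}/archive"

def download_study_zip(base_url, study_id, user, password, out_path):
    # HTTP basic auth, matching the RegisteredUsers entry in orthanc.json.
    request = urllib.request.Request(study_archive_url(base_url, study_id))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request) as response, open(out_path, "wb") as out:
        out.write(response.read())

# Usage (hypothetical study identifier):
# download_study_zip("http://localhost:8042", "<study-id>", "admin", "xyz", "study.zip")
```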

Good luck!

Thanks, Benjamin, for clarifying all these points, and thanks everyone for helping me out with the issues I faced.