Orthanc running inside AWS ECS Not Sending Messages to AWS SQS on Image Upload

I am running an Orthanc server on AWS ECS. When I upload an image using the Orthanc web interface, it is expected to send a message to AWS SQS, but this functionality is not working.

Details

Deployment: Orthanc running on AWS ECS
Expected Behavior: When an image is uploaded via the Orthanc web interface, a message should be sent to an AWS SQS queue.
Issue Observed: No messages are being sent to the SQS queue after an image upload.

Troubleshooting Steps Taken:

  1. Verified that the SQS queue exists and is accessible.
  2. Ensured that the IAM role assigned to the ECS task has the necessary permissions to send messages to SQS.
  3. Reviewed Orthanc configuration for SQS integration settings.
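
For reference, a minimal task-role statement of the kind checked in step 2 would look something like this (account ID and queue name illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage", "sqs:GetQueueUrl"],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:orthanc-events"
    }
  ]
}
```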

Request:

Could you help investigate why Orthanc is not sending messages to SQS? Please let me know if any additional logs or configurations are needed.

Thanks!

Hi,
I’m not aware of any official SQS integration with Orthanc. It would be possible through a custom plugin, but without additional information, I’m not sure anyone will be able to help.
James

Hello

You mention “When an image is uploaded via the Orthanc web interface, a message should be sent to an AWS SQS queue.”

I assume you are using a plugin that is supposed to do this?

What is this plugin? What is its configuration? What are the error log messages?

Hi James & Benjamin, yes, it’s a custom Python plugin that subscribes to ChangeType events from Orthanc. If the change type is Stable Study, it drops a message to AWS SQS.
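
Roughly, the callback does the following (a simplified sketch, not the exact code; the payload shape is illustrative, and QUEUE_URL / QUEUE_REGION come from the environment variables in the configuration below):

```python
import json
import os

def build_message(study_id):
    # Hypothetical payload shape -- adapt to whatever the consumer expects.
    return json.dumps({"event": "stable_study", "studyId": study_id})

def on_change(change_type, level, resource_id):
    # Only react once a study has become stable.
    if change_type == orthanc.ChangeType.STABLE_STUDY:
        import boto3  # assumed to be installed in the container image
        sqs = boto3.client("sqs", region_name=os.environ["QUEUE_REGION"])
        sqs.send_message(QueueUrl=os.environ["QUEUE_URL"],
                         MessageBody=build_message(resource_id))

try:
    # The "orthanc" module only exists inside Orthanc's embedded interpreter.
    import orthanc
    orthanc.RegisterOnChangeCallback(on_change)
except ImportError:
    pass
```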

Note: we are using AWS S3 to store the DICOM files.

The issue is that Orthanc shows the DICOM file uploaded successfully and creates a study for it, and we can see and access the study via Orthanc. However, it does not store anything in the AWS S3 bucket, and there are no success or failure logs on ECS.

It looks like Orthanc is not emitting the ChangeType event. I have the configuration below.

const container = orthancTaskDefinition.addContainer('MyContainer', {
  image: ecs.ContainerImage.fromRegistry(orthancECRuri), // Sample image
  logging: ecs.LogDriver.awsLogs({
    streamPrefix: `${instituteIdentifier}-${siteIdentifier}-orthanc-${stage}/container-name/ecs-task-id`,
    logGroup: dockerComposeLogGroup,
  }), // CloudWatch Logs
  environment: {
    ORTHANC__POSTGRESQL__HOST: rdsInstanceEndpoint,
    ORTHANC__POSTGRESQL__PORT: rdsPort.toString(),
    ORTHANC__POSTGRESQL__USERNAME: siteUsername,
    ORTHANC__POSTGRESQL__PASSWORD: sitePassword,
    ORTHANC__POSTGRESQL__DATABASE: rdsDBName,
    ORTHANC__AWS_S3_STORAGE__BUCKET_NAME: orthancStorageBucket.bucketName,
    ORTHANC__AWS_S3_STORAGE__REGION: env.region || 'us-east-1',
    ORTHANC__AWS_S3_STORAGE__ACCESS_KEY: orthancIamUserAccessKey.accessKeyId,
    ORTHANC__AWS_S3_STORAGE__SECRET_KEY: orthancIamUserAccessKey.secretAccessKey.unsafeUnwrap(),
    QUEUE_Arn: endpointQueueArn,
    QUEUE_REGION: env.region || 'us-east-1',
    QUEUE_URL: endpointQueueUrl,
    ORTHANC__NAME: `${siteIdentifier}-ultrasound.ai DICOMServer`,
    InstitutionName: siteIdentifier,
    FullDomainName: `${siteIdentifier}-${dicomServerHostSuffix}`,
    ConnectionInfo: pacsSecret.ref,
    LOCALDOMAIN: `${env.region}.compute.internal ${siteIdentifier}-orthanconaws.local`,
    ORTHANC__REGISTERED_USERS: `{"${siteUsername}": "${sitePassword}"}`,
    DICOM_WEB_PLUGIN_ENABLED: 'true',
    ORTHANC_WEB_VIEWER_PLUGIN_ENABLED: 'false',
    ORTHANC_STONE_VIEWER_PLUGIN_ENABLED: 'false',
  },
});

Hello,

I have never seen Orthanc fail to call the registered handlers.

If I were you, if your system is a bit complex and has a few moving parts, I would go back to basics:

  • Create a super simple Python plugin in a fresh Orthanc instance.
  • Add a print statement at the top of the file so that you can make sure, by checking in the Orthanc logs, that your Python script is loaded.
  • Then, use a very dumb orthanc.RegisterOnChangeCallback, with a function where you simply print the callback parameters
    (for instance, you could hook orthanc.ChangeType.ORTHANC_STARTED and orthanc.ChangeType.NEW_INSTANCE)
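
The checklist above boils down to something like this minimal sketch (the “orthanc” module only exists inside Orthanc’s embedded interpreter, so the import is guarded here to keep the file loadable elsewhere; the file and helper names are illustrative):

```python
# debug_plugin.py -- minimal sanity-check plugin (sketch).
try:
    import orthanc
except ImportError:
    orthanc = None  # not running inside Orthanc

print("debug_plugin.py loaded")  # must appear in the Orthanc startup logs;
                                 # if it doesn't, the script isn't loaded at all

def format_event(change_type, level, resource_id):
    # Render one change event as a single log line.
    return f"OnChange: type={change_type} level={level} id={resource_id}"

def on_change(change_type, level, resource_id):
    # Print every event so you can see whether callbacks fire at all
    # (e.g. orthanc.ChangeType.ORTHANC_STARTED, orthanc.ChangeType.NEW_INSTANCE).
    print(format_event(change_type, level, resource_id))

if orthanc is not None:
    orthanc.RegisterOnChangeCallback(on_change)
```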

You’ll probably find out that there’s something else not working with your plugin.

There could be something else, of course, but that would be my first check.

Regarding S3, I would do the same:

  • Create a simple Orthanc container (locally, with Docker on an EC2 box, for instance)
  • Use aws configure export-credentials to get a local access key and dump it in clear in your JSON configuration
  • Enable the AWS plugin logs with "EnableAwsSdkLogs": true
  • Drop a study, read the logs, and check the bucket.
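
That is, a minimal JSON configuration along these lines (bucket name illustrative, keys pasted from export-credentials; these are the same settings your ORTHANC__AWS_S3_STORAGE__* environment variables map to):

```json
{
  "AwsS3Storage": {
    "BucketName": "my-test-bucket",
    "Region": "us-east-1",
    "AccessKey": "<access-key-from-export-credentials>",
    "SecretKey": "<secret-key-from-export-credentials>",
    "EnableAwsSdkLogs": true
  }
}
```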

HTH