PostgreSQL & MySQL 5.1 + DicomWeb 1.14: performance improvements

Hi everyone,

Following the release of Orthanc 1.12.1, we have just released

  • PostgreSQL & MySQL 5.1, featuring a speed-up of C-Find/QIDO-RS and tools/find
  • DicomWeb 1.14, featuring, among other things, a speed-up of the famous metadata routes (full release notes). We are waiting for your feedback about these improvements. Hopefully, the "Full" mode performance now approaches that of the "MainDicomTags" mode, with the great advantage of not requiring any "ExtraMainDicomTags" configuration and/or reprocessing of the DB with the Housekeeper plugin (see the configuration sketch after this list).
  • minor updates for:
    • Orthanc Explorer 2 (v1.0.2)
    • Authorization plugin (v0.5.3)
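
For context, the "Full" vs "MainDicomTags" trade-off mentioned above is driven by the DicomWeb plugin and core Orthanc configuration. Here is a minimal sketch, assuming the "SeriesMetadata" option of the DicomWeb plugin and Orthanc's "ExtraMainDicomTags" option (the tag list is purely illustrative):

  {
    "DicomWeb" : {
      // "Full" builds the /metadata answers by reading the DICOM files
      // from storage; "MainDicomTags" answers from the database only
      "SeriesMetadata" : "Full"
    },
    // Only needed for the "MainDicomTags" mode: store additional tags in
    // the database (studies ingested before this change must then be
    // reprocessed, e.g. with the Housekeeper plugin)
    "ExtraMainDicomTags" : {
      "Instance" : [ "SOPClassUID", "SliceThickness" ]
    }
  }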

As usual, these updates are available:

Enjoy!

Alain

I’ve been off the forum for a bit. I’ll have to look at that.

Hello Alain,

These are not yet on https://www.orthanc-server.com/browse.php, no?
Cheers
Axel

Hi Axel,

I’ll fix that ASAP!

Best,

Alain

Hi Axel,

This is now done.

Alain

We upgraded our PACS to the newly available image osimis/orthanc:23.8.2. The metadata API is working awesomely: it is really fast after the update, whereas it was quite slow before. :slight_smile:

However, after the update to this new version, we found a few APIs that are quite slow, although they were working fine before.

/dicom-web/studies/26141504030405000200283019/series/1.2.276.0.7230010.3.1.3.2692228702.4164.1694812049.374/instances/1.2.276.0.7230010.3.1.4.2692228702.4164.1694812049.372/bulk/00480105/1/00282000

These APIs are taking quite some time to load. Check out the screenshot below: a response of only 3.1 KB takes 25 seconds to load, while the metadata API, whose response is 553 KB, takes 2.39 seconds.
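
For reference, this kind of comparison can be reproduced from the command line with curl's timing output (the base URL and the study/series/instance identifiers below are placeholders):

  # bulk route: small response, but slow in our case
  curl -s -o /dev/null -w "bulk: %{time_total}s, %{size_download} bytes\n" \
    "http://localhost:8042/dicom-web/studies/<study>/series/<series>/instances/<instance>/bulk/00480105/1/00282000"
  # series metadata route: much larger response, but fast
  curl -s -o /dev/null -w "metadata: %{time_total}s, %{size_download} bytes\n" \
    "http://localhost:8042/dicom-web/studies/<study>/series/<series>/metadata"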

Please have a look at this, @alainmazy.

Thank you,

Hi Selvesan,

Please provide sample files and a docker-compose setup with two Orthanc instances (old and new versions), plus a test scenario that shows a degradation of the performance of this route.

I just tried with an MG study on Orthanc 23.7.1 and 23.8.2 and could not observe any difference in timing, which is quite expected since this route is not supposed to be affected by the recent changes in the DicomWeb plugin.

Accessing the bulk route means that the DICOM file must be opened and parsed to extract the requested tag. Nothing can be cached there, so if the file is big and the storage is slow, this route cannot be miraculously fast.
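
For what it is worth, a simple way to see how much of that time is spent fetching the file from storage is to request the same bulk URL twice: if the storage cache is enabled, the second call is served from RAM and mostly the parsing remains (the URL and identifiers below are placeholders):

  # first call: Orthanc must fetch and parse the whole DICOM file
  time curl -s -o /dev/null -H "Accept: multipart/related;type=application/octet-stream" \
    "http://localhost:8042/dicom-web/studies/<study>/series/<series>/instances/<instance>/bulk/00480105/1/00282000"
  # second call: the file bytes may already be in the storage cache
  time curl -s -o /dev/null -H "Accept: multipart/related;type=application/octet-stream" \
    "http://localhost:8042/dicom-web/studies/<study>/series/<series>/instances/<instance>/bulk/00480105/1/00282000"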

HTH,

Alain.

Hi @alainmazy

Thank you for this information; it was really helpful for debugging this further.

Okay, we now have two different Orthanc servers set up with the same resources: one runs osimis/orthanc:21.11.0 and the other runs osimis/orthanc:23.8.2. Here is what we found out.

The screenshot above is from the server built with 21.11.0, where the bulk API takes no more than 3 seconds to respond.

However, check out the screenshot below.

One of the bulk API calls here takes 33 seconds.

I’m pretty sure this started to occur after I upgraded the Orthanc server to the latest version, 23.8.2.

Please let me know what else I can do to help investigate this.

Regards,
Selvesan

I am also seeing the problem with the loading times of the APIs @Selvesan mentioned. @alainmazy, could you help shed light on this issue?

As already mentioned above:
Please provide sample files and a docker-compose setup with two Orthanc instances (old and new versions), plus a test scenario that shows a degradation of the performance of this route.

First, I would also suggest that you play with this configuration option:

  // Maximum size of the storage cache in MB.  The storage cache
  // is stored in RAM and contains a copy of recently accessed
  // files (written or read).  A value of "0" indicates the cache
  // is disabled.  (new in Orthanc 1.10.0)
  "MaximumStorageCacheSize" : 128,

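With the osimis/orthanc Docker images, this option can also be set through an environment variable; here is a minimal docker-compose sketch (the 512 MB value is only an example, and the variable name assumes the usual ORTHANC__... naming convention of these images):

  environment:
      # expected to map to "MaximumStorageCacheSize" in the generated configuration
      ORTHANC__MAXIMUM_STORAGE_CACHE_SIZE: "512"
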
BR,

Alain

Hi @alainmazy

We have our Orthanc server deployed on Amazon ECS Fargate; to do this, we are using an orthanc-cdk-deployment setup.

Configs: https://github.com/selvesandev/orthanc-cdk/blob/main/infrastructure/lib/orthanc-stack.ts

Dockerfile: https://github.com/selvesandev/orthanc-cdk/blob/main/infrastructure/lib/local-image-official-s3/Dockerfile

@Selvesan, if you don’t want to be helped, you won’t be helped. This is my last message on this topic.

Hi @alainmazy

Here is the sample DICOM file:

Sorry for not providing the requested resources in my last message. I was going to edit my message once I had permission to share the video and DICOM file, but I had to take a day off and rush away due to a family emergency. I hope you will understand.

Please also find the demo video comparing request timings between the old and new Orthanc in my next comment.

Best Regards,
Selvesan

Here is the demo video of the requests made from our client app to the Orthanc PACS server.
It compares the request timings on both the new and old versions.

Regards,
Selvesan

And finally, the JavaScript-based orthanc-cdk setup that we use to set up our server on AWS does not seem to contain a docker-compose.yaml file. We switch versions in the Dockerfile here

and the configuration is set up in the config file.

@alainmazy, I’m not sure whether there have been any updates on a possible resolution for the performance issue in this release? I appreciate your help.

Hi @Thanhxle

I’m actually still waiting for a standalone way to reproduce and debug the issue, not an Amazon setup that I can only run on AWS after spending hours learning how to run it and spending money to run it.

Here is a standalone setup with test instructions; this is what I am expecting as a reproducible test setup.

Please modify it until you can demonstrate a degradation in performance and I’ll be happy to investigate.

# save the content of this file as 'docker-compose.yml'
#   docker compose up -d
# create the bucket in minio as explained here: 
# Before you upload any DICOM file in Orthanc, connect to the minio interface (http://localhost:9000/ with: minio/miniopwd) and create a my-sample-bucket bucket. 
# upload Selvesan file (https://drive.google.com/file/d/1UvW7bNTXg8GmGH1SjY-TOywnQupZUW73/view) to http://localhost:8045/
# restart the setup to invalidate the cache
#   docker compose up -d --force-recreate
# download from old server (run the command twice to see the effect of the cache)
#   time curl -H "Accept: multipart/related;type=application/octet-stream" "http://localhost:8044/dicom-web/studies/26141507010204090506321415/series/1.2.276.0.7230010.3.1.3.535902291.12004.1676438461.545/instances/1.2.276.0.7230010.3.1.4.535902291.12004.1676438461.543/bulk/00480105/1/00282000"
# -> 1.607 s
# -> 0.766 s

# restart the setup to invalidate the cache
#   docker compose up -d --force-recreate
# download from new server (run the command twice to see the effect of the cache)
#   time curl -H "Accept: multipart/related;type=application/octet-stream" "http://localhost:8045/dicom-web/studies/26141507010204090506321415/series/1.2.276.0.7230010.3.1.3.535902291.12004.1676438461.545/instances/1.2.276.0.7230010.3.1.4.535902291.12004.1676438461.543/bulk/00480105/1/00282000"
# -> 1.795 s
# -> 0.819 s

# so far, no real differences in terms of performance

version: "3.3"

services:
    orthanc-old:
        image: osimis/orthanc:22.2.0  # this is basically the same version as 21.11.0 but with the S3 plugin included
        ports: [8044:8042]
        environment:
            ORTHANC__AUTHENTICATION_ENABLED: "false"
            DICOM_WEB_PLUGIN_ENABLED: "true"
            VERBOSE_ENABLED: "true"
            VERBOSE_STARTUP: "true"
            ORTHANC__NAME: "old"
            ORTHANC__AWS_S3_STORAGE__BUCKET_NAME: "my-sample-bucket"
            ORTHANC__AWS_S3_STORAGE__REGION: "eu-west-1"
            ORTHANC__AWS_S3_STORAGE__ACCESS_KEY: "minio"
            ORTHANC__AWS_S3_STORAGE__SECRET_KEY: "miniopwd"
            ORTHANC__AWS_S3_STORAGE__ENDPOINT: "http://minio:9000"
            ORTHANC__AWS_S3_STORAGE__VIRTUAL_ADDRESSING: "false"
            ORTHANC__POSTGRESQL__HOST: "orthanc-index"

    orthanc-new:
        image: osimis/orthanc:23.8.2
        ports: [8045:8042]
        environment:
            ORTHANC__AUTHENTICATION_ENABLED: "false"
            DICOM_WEB_PLUGIN_ENABLED: "true"
            VERBOSE_ENABLED: "true"
            VERBOSE_STARTUP: "true"
            ORTHANC__NAME: "new"
            ORTHANC__AWS_S3_STORAGE__BUCKET_NAME: "my-sample-bucket"
            ORTHANC__AWS_S3_STORAGE__REGION: "eu-west-1"
            ORTHANC__AWS_S3_STORAGE__ACCESS_KEY: "minio"
            ORTHANC__AWS_S3_STORAGE__SECRET_KEY: "miniopwd"
            ORTHANC__AWS_S3_STORAGE__ENDPOINT: "http://minio:9000"
            ORTHANC__AWS_S3_STORAGE__VIRTUAL_ADDRESSING: "false"
            ORTHANC__POSTGRESQL__HOST: "orthanc-index"

    minio:
        command: server /data --console-address ":9001"
        image: minio/minio
        ports: [9000:9000, 9001:9001]
        environment:
            - MINIO_REGION=eu-west-1
            - MINIO_ROOT_USER=minio
            - MINIO_ROOT_PASSWORD=miniopwd
        volumes:
            - minio-storage:/data

    orthanc-index:
        image: postgres:15
        restart: unless-stopped
        volumes: ["orthanc-index:/var/lib/postgresql/data"]
        environment:
            POSTGRES_HOST_AUTH_METHOD: "trust"

volumes:
    orthanc-index:
    minio-storage:

@alainmazy, we will reproduce this and get back to you with more details. Thank you!

Hi! I’m the author of the orthanc-cdk-deploy AWS sample. I just wanted to let you know that I have pushed an update that moves away from the custom Docker image and uses the official image instead. This fixes a lot of teething issues with the sample; please get the latest version if you’re still using it.
