Orthanc 1.12.6 + PostgreSQL Plugin 7.1 Memory Leak

Hi,
We have 2 systems that run Orthanc with the following configuration:

  • orthancteam/orthanc:latest (1.12.6)
  • PostgreSQL DB (one uses Postgres v13, the other Postgres v17), configured with the PostgreSQL plugin 7.1 for indexing only
  • Application script that loops and checks every instance in Orthanc for processing

These 2 servers appear to have a memory leak. Our application script loops through every instance to check a variety of different things. It uses the REST API and checks the data in instances/:instance_id/metadata one instance at a time. Once finished, it waits and then starts again (it’s brute force, but it works).

We’re having reliability issues with these 2 servers: they regularly (1+ times a day) run out of memory and require Orthanc to be restarted to clear the memory.

From observing the servers through htop, it appears Orthanc’s memory demand increases by approximately 10 MB every ~10 minutes. Without restarting Orthanc, memory usage on our servers can exhaust the entire page file in 12-24 hours.
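To put numbers on that growth, a sketch like the following can log the Orthanc process RSS over time (this assumes psutil is installed on the monitoring host and that the container’s Orthanc process is visible there under the name 'Orthanc', as it appears in htop):

import time
import psutil  # assumes psutil is installed on the monitoring host

def find_orthanc():
  # Assumption: the container's Orthanc process is visible from the host
  # under the process name 'Orthanc', as seen in htop.
  for proc in psutil.process_iter(['name']):
    if proc.info['name'] == 'Orthanc':
      return proc
  return None

proc = find_orthanc()
while proc is not None:
  rss_mb = proc.memory_info().rss / (1024 * 1024)
  print(f"{time.strftime('%H:%M:%S')} RSS: {rss_mb:.1f} MB")
  time.sleep(600)  # one sample every ~10 minutes shows the ~10 MB steps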

If I stop our application script, the memory leak either slows to a manageable level or stops.

I’m wondering if there is any troubleshooting/debugging advice I can use to identify where this memory leak is coming from?

Cheers,

James

Hi James,

It’s quite difficult to investigate this kind of leak in a container.

If you could provide us with the simplest script calling the API that reproduces this leak, I should hopefully be able to quickly identify it with Valgrind on my system.

Best,

Alain

Hi Alain,

Thanks for the super quick response.

Here is a simple Python script that is the core of our application. I haven’t verified that this script causes the same memory leak, but will try it now (edited: I just verified that this does cause swap usage to keep growing).

import time
import requests

url = 'http://localhost:8042'

def get_instance_metadata(instance_id):
  # Fetch the expanded resource for a single instance.
  return requests.get(f'{url}/instances/{instance_id}', params={'expand': True}).json()

def get_all_instances():
  # Page through /instances using limit/since until a short page is returned.
  all_instances = []
  limit = 100
  count = 0
  while True:
    params = {'limit': limit, 'since': count * limit}
    instance_list = requests.get(f'{url}/instances', params=params).json()
    all_instances.extend(instance_list)
    if len(instance_list) < limit:
      break
    count += 1

  return all_instances


def loop():
  # Check every instance, then wait a minute and start over.
  while True:
    instances = get_all_instances()
    for instance_id in instances:
      metadata = get_instance_metadata(instance_id)
    time.sleep(60)

loop()

Here is a minimal docker-compose.yml file that describes our server setup.

services:
  orthanc:
    image: orthancteam/orthanc
    depends_on:
      - db
    ports:
      - "127.0.0.1:8042:8042"
    environment:
      TRANSFERS_PLUGIN_ENABLED: "true"
      ORTHANC__AUTHENTICATION_ENABLED: "false"
      ORTHANC__DICOM_MODALITIES_IN_DATABASE: "true"
      ORTHANC__POSTGRESQL__ENABLE_INDEX: "true"
      ORTHANC__POSTGRESQL__HOST: "db"
      ORTHANC__POSTGRESQL__DATABASE: "postgres"
      ORTHANC__POSTGRESQL__USERNAME: "postgres"
      ORTHANC__POSTGRESQL__PASSWORD: "password"
  db:
    image: postgres:17-alpine
    ports:
      - "127.0.0.1:5432:5432"
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: postgres

@James thanks for this test setup.

That was a tough one because it was actually not detected by Valgrind and other memory-checking tools!

We were actually storing one SQL prepared statement for each combination of since & limit, so it only happened if you had a lot of instances. I was finally able to trigger it quite rapidly by storing 10,000 instances and requesting them in pages of 2 → this was creating 5,000 different prepared SQL statements → and, after the fix, only 1!
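To make the arithmetic concrete, here is a small illustration of the request pattern (a hypothetical Python sketch, not the plugin’s code): paging 10,000 instances two at a time produces 5,000 distinct (limit, since) pairs, and before the fix each pair got its own cached prepared statement.

# Hypothetical sketch of the request pattern that triggered the leak.
# Before the fix, each distinct (limit, since) pair ended up with its own
# cached prepared statement; after the fix, a single statement is reused.
total_instances = 10_000
limit = 2

pairs = {(limit, since) for since in range(0, total_instances, limit)}
print(len(pairs))  # 5000 distinct pairs -> 5000 prepared statements before the fix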

I hope this is the right fix for you too… Mainline Docker builds are currently in progress; if all tests pass, the fix will be available in orthancteam/orthanc-pre-release:master-unstable.

Best regards,

Alain.

Thank you Alain! I will try the mainline build when it’s available. Knowing the cause also lets me work around it in the meantime.
James

Morning Alain,
I wasn’t able to test the build as I’m not sure it was successful. Looking at the GitHub Actions, I think the unstable build failed.
James

Yes, indeed, the build failed. I have fixed something and re-triggered it.

Note that orthancteam/orthanc-pre-release:master-full-unstable-arm64 is already available.

As an update, I ran orthancteam/orthanc-pre-release:master-unstable all day today and it is looking good. Memory usage is no longer growing.

I also updated our internal script to fetch all instances in a single call, which also works around the issue.
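In case it helps anyone else, the workaround amounts to replacing the paging loop with a single unpaged request (a sketch; it assumes the full list of instance IDs fits in memory, since GET /instances without limit/since returns every ID at once):

def get_all_instances():
  # Workaround sketch: one unpaged call to GET /instances returns every
  # instance ID, so no varying (limit, since) pairs ever reach the database.
  return requests.get(f'{url}/instances').json()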

I just wanted to confirm that it looks like you’ve got it fixed.

Thanks very much!

James
