/tools/find MetadataQuery

Hi. I need to get the “latest” studies, and LastUpdate looks like a good choice for this task. I saw the MetadataQuery parameter on /tools/find, but when I try to use it:

-d '{"Level": "Study", "MetadataQuery": {"LastUpdate": "20250314"}}'

I get:

"Details" : "Field \"Query\" is missing, or should be a JSON object"

It needs the Query parameter. I don’t have any query to give, but let’s add an empty one anyway.

-d '{"Level": "Study", "MetadataQuery": {"LastUpdate": "20250314"}, "Query": {}}'

This returns all our studies.

It looks like MetadataQuery is ignored. Or maybe I don’t know how to use it?

Your best bet might be to use the Changes API (Orthanc REST API) and page backwards from the last change to find the most recent new instances.
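For what it’s worth, a minimal sketch of that approach. It assumes the standard /changes endpoint and its response shape ({"Changes": [...], "Done": bool, "Last": int}, with each change carrying "ChangeType", "ID", etc.); the sample payload below is made up for illustration:

```python
# Sketch of processing an Orthanc Changes API response to find new studies.
# The actual fetch would be something like:
#   GET http://localhost:8042/changes?since=<last_seen>&limit=100

def new_study_ids(changes_payload):
    """Extract the Orthanc IDs of newly received studies from a
    parsed /changes response body."""
    return [c["ID"]
            for c in changes_payload.get("Changes", [])
            if c["ChangeType"] == "NewStudy"]

# Hypothetical payload, shaped like a real /changes response:
sample = {
    "Changes": [
        {"ChangeType": "NewInstance", "ID": "aaa", "ResourceType": "Instance", "Seq": 41},
        {"ChangeType": "NewStudy", "ID": "bbb", "ResourceType": "Study", "Seq": 42},
    ],
    "Done": True,
    "Last": 42,
}
print(new_study_ids(sample))  # → ['bbb']
```

You would then persist "Last" and pass it as `since` on the next poll.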

HTH,

James


Actually, LastUpdate is a date-time, so you should use a filter like

{"LastUpdate": "20250314T*"}

But, if it is ignored, that probably means that you don’t have “ExtendedFind”, which is available only with the PostgreSQL and SQLite backends as of Orthanc 1.12.5.
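Putting the two hints together, the full request body combines the (required, here empty) Query with the wildcard MetadataQuery. A sketch, assuming ExtendedFind is available:

```python
import json

def latest_studies_body(day):
    """Build a /tools/find request body matching studies whose
    LastUpdate metadata falls on the given day (YYYYMMDD), using
    the wildcard form suggested above. Query must be present,
    even when empty."""
    return {
        "Level": "Study",
        "Query": {},                                  # required field
        "MetadataQuery": {"LastUpdate": day + "T*"},  # needs ExtendedFind
    }

# The resulting JSON is what would go into:
#   curl -X POST http://localhost:8042/tools/find -d '<body>'
print(json.dumps(latest_studies_body("20250314")))
```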

HTH,

Alain


I wasn’t aware that this feature was only just released! It does work perfectly when I use the right version 🙂 Thank you.

@alainmazy Is it possible to add the date range, like we can do with "Query"?

"Query": {"StudyDate": "20250330-20250331"}

In fact, I tested it and it doesn’t seem to work, with or without T*. Should it work? Do you plan to add this feature?

I might as well explain my actual goal. On StableStudy, we tell an external server about a new study. For some reason, the server might be offline. StableStudy won’t run again when the server is online, so I coded a cron job to compare the studies on the server and on Orthanc. This looks like a good plan, but getting the studies on Orthanc intelligently is somewhat harder.

  • Filtering by StudyDate is wrong, because a client might send us an older study that is still new for us. We don’t really care when the study was acquired, only when we received it.
  • Using a range on LastUpdate (see my last reply) would do the job, but it doesn’t work. Would I need to make 2 requests?
  • A filter like “all studies updated in the last 10 minutes”, or “all studies, sort by LastUpdate, limit 20” would be perfect, but I can’t seem to find the required feature.
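Once a list of recently updated studies is in hand, the cron-job comparison itself can stay simple. A minimal sketch; the function and variable names are made up for illustration:

```python
def studies_to_resend(orthanc_study_ids, server_study_ids):
    """Return the Orthanc studies the external server has not
    acknowledged yet, preserving Orthanc's ordering (most recent
    first, if the input was sorted that way)."""
    known = set(server_study_ids)
    return [s for s in orthanc_study_ids if s not in known]

# Hypothetical example: 'st3' arrived while the server was offline.
recent_on_orthanc = ["st3", "st2", "st1"]
known_by_server = ["st1", "st2"]
print(studies_to_resend(recent_on_orthanc, known_by_server))  # → ['st3']
```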

When I open OE2, it looks like the studies are sorted by LastUpdate, but I’m not entirely sure. It says “Liste des examens les plus récents” (“List of the most recent studies”), which might instead mean that they are sorted by StudyDate. Can you please confirm?

This is actually exactly what OE2 is doing (provided that you are using Orthanc 1.12.5+ and SQLite or PostgreSQL).

Hope this helps (and works for you)

Alain


Right, this helped me find getMostRecentStudiesExtended in orthancApi.js where we can see a query like this:

{
    "Level": "Study",
    "Limit": 10,
    "Query": {},
    "OrderBy": [{
        "Type": "Metadata",
        "Key": "LastUpdate",
        "Direction": "DESC"
    }]
}

So: an empty Query, no MetadataQuery, only an OrderBy.

Thank you again Alain!

Hi all and @alainmazy,
Have you ever experienced Orthanc OE2 returning query results very slowly? With a simple search by date range, it takes more than 30 seconds (sometimes several minutes) to display the results.

This is statistics of my Orthanc

{
   "CountInstances" : 8046517,
   "CountPatients" : 29856,
   "CountSeries" : 168839,
   "CountStudies" : 53422,
   "TotalDiskSize" : "6443366622921",
   "TotalDiskSizeMB" : 6144873,
   "TotalUncompressedSize" : "6443366622921",
   "TotalUncompressedSizeMB" : 6144873
}

I do not think that is too many instances. In the verbose log, I only see:
I0509 15:50:02.130019 HTTP-27 HttpServer.cpp:1263] (http) POST /tools/find

I am using the Orthanc Team Docker version : orthancteam/orthanc:25.4.0-full

I have already followed the optimization guide for PostgreSQL in “Scalability of Orthanc” (Orthanc Book documentation), but the query is still slow.
This is my Orthanc configuration:

{
  "Name": "MyOrthanc",
  "StorageDirectory": "/var/lib/orthanc/db",
  "DelayedDeletion": {
    "Enable": true,
    "ThrottleDelayMs": 0,
    "Path": "/var/lib/orthanc/db/delayed-deletion-1.db"
  },
  "IndexDirectory": "/var/lib/orthanc/db",
  "StorageCompression": false,
  "MaximumStorageSize": 0,
  "MaximumPatientCount": 0,
  "MaximumStorageMode": "Recycle",
  "MaximumStorageCacheSize": 128,
  "LuaScripts": [
    "/lua-scripts/MyScript.lua"
  ],
  "LuaHeartBeatPeriod": 0,
  "PostgreSQL": {
    "EnableIndex": true,
    "EnableStorage": false,
    "Lock": false,
    "Host": "orthanc-ct-mr-2025-db",
    "Port": 5432,
    "Database": "orthanc",
    "Username": "postgres",
    "Password": "orthanc",
    "EnableSsl": false,
    "MaximumConnectionRetries": 10,
    "ConnectionRetryInterval": 10,
    "IndexConnectionsCount": 20,
    "EnableVerboseLogs": false,
    "TransactionMode": "ReadCommitted"
  },
  "Java": {
    "Enabled": true,
    "Classpath": "/java/OrthancJavaSDK.jar:/java/target/OrthancJavaPlugin-1.0.1-jar-with-dependencies.jar",
    "InitializationClass": "Main",
    "Enable": true
  },
    "DicomWeb": {
    "Enable": true,
    "Root": "/wado-rs/",
    "EnableWado": true,
    "WadoRoot": "/wado",
    "Ssl": false,
    "QidoCaseSensitive": true,
    "Host": "",
    "StudiesMetadata": "Full",
    "SeriesMetadata": "Full",
    "EnableMetadataCache": true,
    "MetadataWorkerThreadsCount": 4,
    "PublicRoot": "/dicom-web/"
  },
  "Transfers": {
    "Threads": 6,
    "BucketSize": 4096,
    "CacheSize": 512,
    "MaxPushTransactions": 20,
    "MaxHttpRetries": 10,
    "PeerConnectivityTimeout": 120
  },
  "OrthancExplorer2": {
    "Enable": true,
    "IsDefaultOrthancUI": true,
    "UiOptions": {
      "ViewersOrdering": [
        "ohif",
        "wsi"
      ],
      "OhifViewer3PublicRoot": "/saola/"
    }
  },
  "OHIF": {
    "DataSource": "wado-rs"
  },
  "OrthancPeers": {
    "peer": {
      "Url": "http://27.72.147.196:8042/",
      "Username": "orthanc",
      "Password": "orthanc",
      "Timeout": 1
    }
  },
  "ConcurrentJobs": 10,
  "JobsEngineThreadsCount": {
    "ResourceModification": 1
  },
  "HttpServerEnabled": true,
  "OrthancExplorerEnabled": true,
  "HttpPort": 8042,
  "HttpDescribeErrors": true,
  "HttpCompressionEnabled": false,
  "WebDavEnabled": true,
  "WebDavDeleteAllowed": false,
  "WebDavUploadAllowed": true,
  "DicomServerEnabled": true,
  "DicomAet": "ORTHANC",
  "DicomCheckCalledAet": false,
  "DicomPort": 4242,
  "DefaultEncoding": "Latin1",
  "AcceptedTransferSyntaxes": [
    "1.2.840.10008.1.*"
  ],
  "UnknownSopClassAccepted": false,
  "DicomScpTimeout": 30,
  "RemoteAccessAllowed": true,
  "SslEnabled": false,
  "SslCertificate": "certificate.pem",
  "SslMinimumProtocolVersion": 4,
  "SslVerifyPeers": false,
  "SslTrustedClientCertificates": "trustedClientCertificates.pem",
  "AuthenticationEnabled": true,
  "RegisteredUsers": {
    "orthanc": "orthanc"
  },
  "DicomTlsEnabled": false,
  "DicomTlsRemoteCertificateRequired": true,
  "DicomTlsMinimumProtocolVersion": 0,
  "DicomAlwaysAllowEcho": true,
  "DicomAlwaysAllowStore": true,
  "DicomAlwaysAllowFind": false,
  "DicomAlwaysAllowFindWorklist": false,
  "DicomAlwaysAllowGet": false,
  "DicomAlwaysAllowMove": false,
  "DicomCheckModalityHost": false,
  "DicomModalities": {
    "SELF": [
      "SELF",
      "localhost",
      4242
    ]
  },
  "DicomModalitiesInDatabase": false,
  "DicomEchoChecksFind": false,
  "DicomScuTimeout": 10,
  "DicomScuPreferredTransferSyntax": "1.2.840.10008.1.2.1",
  "DicomThreadsCount": 4,
  "OrthancPeersInDatabase": false,
  "HttpProxy": "",
  "HttpVerbose": false,
  "HttpTimeout": 60,
  "HttpsVerifyPeers": true,
  "HttpsCACertificates": "",
  "UserMetadata": {},
  "UserContentType": {},
  "StableAge": 2,
  "StrictAetComparison": false,
  "StoreMD5ForAttachments": true,
  "LimitFindResults": 100,
  "LimitFindInstances": 100,
  "LogExportedResources": false,
  "KeepAlive": true,
  "KeepAliveTimeout": 1,
  "TcpNoDelay": true,
  "HttpThreadsCount": 100,
  "StoreDicom": true,
  "DicomAssociationCloseDelay": 5,
  "QueryRetrieveSize": 100,
  "CaseSensitivePN": false,
  "LoadPrivateDictionary": true,
  "Dictionary": {},
  "SynchronousCMove": true,
  "JobsHistorySize": 1000,
  "SaveJobs": true,
  "OverwriteInstances": true,
  "MediaArchiveSize": 1,
  "StorageAccessOnFind": "Never",
  "MetricsEnabled": true,
  "ExecuteLuaEnabled": false,
  "RestApiWriteToFileSystemEnabled": false,
  "HttpRequestTimeout": 30,
  "DefaultPrivateCreator": "",
  "StorageCommitmentReportsSize": 100,
  "TranscodeDicomProtocol": true,
  "BuiltinDecoderTranscoderOrder": "After",
  "IngestTranscodingOfUncompressed": true,
  "IngestTranscodingOfCompressed": true,
  "DicomLossyTranscodingQuality": 90,
  "SyncStorageArea": true,
  "MallocArenaMax": 5,
  "DeidentifyLogs": true,
  "DeidentifyLogsDicomVersion": "2023b",
  "MaximumPduLength": 16384,
  "DatabaseServerIdentifier": "Orthanc1",
  "CheckRevisions": false,
  "SynchronousZipStream": true,
  "ZipLoaderThreads": 0,
  "Warnings": {
    "W001_TagsBeingReadFromStorage": true,
    "W002_InconsistentDicomTagsInDb": true
  },
  "Plugins": [
    "/run/orthanc/plugins",
    "/usr/share/orthanc/plugins"
  ],
  "Gdcm": {
    "Throttling": 4,
    "RestrictTransferSyntaxes": [
      "1.2.840.10008.1.2.4.90",
      "1.2.840.10008.1.2.4.91",
      "1.2.840.10008.1.2.4.92",
      "1.2.840.10008.1.2.4.93"
    ]
  }
}

This time, it took 7.5 minutes to search.
@alainmazy, do you have any idea how to accelerate the search query?

No.

I have actually already spent hours on this kind of topic with one of our customers.

They had a DB with 100M+ instances, and we had a test DB of around the same size, but we were not able to reproduce the problem on our side. It appeared that the execution plans generated by PostgreSQL were different on the two servers for the same query, probably because of some difference in index value distribution affecting the planning. On their side, PG was performing full scans, while it was using an index on ours.

That was happening with a specific beta version; we made other improvements to speed up the queries, not directly related to that specific topic, and I have never heard of this issue since then. Note that all these improvements have now been released.

Note that, on their side, range searches ("StudyDate": "FROM-TO") were faster than single-ended searches ("StudyDate": "FROM-").
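To make the two query shapes concrete, a sketch of the Query fragments in question (only the StudyDate value differs between them):

```python
def study_date_query(date_from, date_to=None):
    """Build the Query part of a /tools/find body for a StudyDate
    search. With both bounds this is a range search ("FROM-TO");
    with only the lower bound it is the single-ended form ("FROM-")
    that was slower in the case described above."""
    if date_to is not None:
        return {"StudyDate": date_from + "-" + date_to}
    return {"StudyDate": date_from + "-"}

print(study_date_query("20250330", "20250331"))  # range search
print(study_date_query("20250330"))              # single-ended search
```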

If you are using a managed PG database, you should check whether you are simply hitting a CPU or IOPS limit… their DB was 12x larger than yours, and the problematic queries were taking 30 seconds, not 7 minutes…

Hope this helps,

Alain.