Query taking too much time

Hi everyone,
I am using Orthanc with the PostgreSQL plugin, and I need to change the large SQL query that Orthanc uses for fetching study/series/instance data (the query that starts with WITH lookup AS (...) and contains many UNION ALL sections).

I want to know:

:one: Where exactly is this SQL defined?

Is it inside Statements.sql, inside the plugin source code, or somewhere else?

:two: What is the correct method to change it?

  • Should I edit Statements.sql and rebuild the plugin?

  • Or should I use "Advanced" : { "OverrideStatements" } in orthanc.json?

  • Is it safe to override only one statement?

:three: If I modify it, will Orthanc still work normally?

I want to avoid breaking indexing or resource browsing.

Any guidance, best practices, or official instructions would be very helpful.
Thank you!

Sorry, we don’t answer AI-generated garbage/hallucinations.

I believe that when a person does not know the answer to a question, they often dismiss it as a hallucination. In our case, images are fetched directly from Orthanc. There is no query involved that could fetch data from a hallucination.

The community would be more helpful if members focused on providing correct answers instead of making fun of people every time.

Hi Akash,

You can be sure that Alain isn’t making fun of anything.

For a couple of months now, there have been a lot of questions that make no sense and seem to be generated by ChatGPT (or other similar tools). It’s a bit tedious to spend time reading and answering this kind of question, and time is usually lacking.

Your question is not very clear; at least the goal is not clear: What are you trying to achieve? What is the context? What does your setup look like?

Given the lack of this information, Alain probably felt that your message was (at least partially) produced by ChatGPT. Maybe he was wrong, so feel free to give more context and the community will surely help.

Cheers,

WITH lookup AS (
SELECT
studies.publicid,
studies.internalid,
ROW_NUMBER() OVER (
ORDER BY order0.value DESC NULLS LAST
) AS rownumber
FROM
resources AS studies
LEFT JOIN metadata AS order0
ON order0.id = studies.internalid
AND order0.type = ?
WHERE
studies.resourcetype = ?
LIMIT ?
),

validgrandchildcounts AS (
SELECT
thislevel.internalid,
COALESCE(SUM(childlevel.childcount), ?) AS totalgrandchildcount
FROM
resources AS thislevel
LEFT JOIN resources AS childlevel
ON childlevel.parentid = thislevel.internalid
WHERE
NOT EXISTS (
SELECT ?
FROM invalidchildcounts
WHERE id = thislevel.internalid
)
AND NOT EXISTS (
SELECT ?
FROM resources AS child
JOIN invalidchildcounts
ON invalidchildcounts.id = child.internalid
WHERE child.parentid = thislevel.internalid
)
GROUP BY
thislevel.internalid
)

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
lookup.rownumber AS c2_rownumber,
lookup.publicid AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
NULL::int AS c7_int1,
NULL::int AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM lookup

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
value AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
taggroup AS c7_int1,
tagelement AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM
lookup
INNER JOIN maindicomtags
ON maindicomtags.id = lookup.internalid

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
value AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
type AS c7_int1,
revision AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM
lookup
INNER JOIN metadata
ON metadata.id = lookup.internalid

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
label AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
NULL::int AS c7_int1,
NULL::int AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM
lookup
INNER JOIN labels
ON labels.id = lookup.internalid

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
value AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
taggroup AS c7_int1,
tagelement AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM
lookup
INNER JOIN resources AS childlevel
ON childlevel.parentid = lookup.internalid
INNER JOIN maindicomtags
ON maindicomtags.id = childlevel.internalid
AND (taggroup, tagelement) IN ((?, ?))

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
childlevel.publicid AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
NULL::int AS c7_int1,
NULL::int AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM
lookup
INNER JOIN resources AS childlevel
ON lookup.internalid = childlevel.parentid

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
NULL::text AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
NULL::int AS c7_int1,
NULL::int AS c8_int2,
NULL::int AS c9_int3,
COALESCE(
(
SELECT totalgrandchildcount
FROM validgrandchildcounts
WHERE internalid = lookup.internalid
),
(
SELECT COUNT(grandchildlevel.internalid)
FROM resources AS childlevel
INNER JOIN resources AS grandchildlevel
ON childlevel.internalid = grandchildlevel.parentid
WHERE lookup.internalid = childlevel.parentid
)
) AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM lookup

UNION ALL

SELECT
? AS c0_queryid,
lookup.internalid AS c1_internalid,
NULL::bigint AS c2_rownumber,
parentlevel.publicid AS c3_string1,
NULL::text AS c4_string2,
NULL::text AS c5_string3,
NULL::bytea AS c6_string4,
NULL::int AS c7_int1,
NULL::int AS c8_int2,
NULL::int AS c9_int3,
NULL::bigint AS c10_big_int1,
NULL::bigint AS c11_big_int2
FROM
lookup
INNER JOIN resources AS currentlevel
ON currentlevel.internalid = lookup.internalid
INNER JOIN resources AS parentlevel
ON currentlevel.parentid = parentlevel.internalid

ORDER BY
c0_queryid,
c2_rownumber;

I am referring to the query above, which I believe is an internal query generated by Orthanc. My understanding is that it is responsible for fetching all the metadata and related information.

However, I am concerned about its performance, as it is taking more than 2 minutes to execute.

I would like to understand how this query can be optimized or whether there is a recommended approach in Orthanc to improve its execution time.
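
If it helps the discussion, I assume the usual way to investigate this on the PostgreSQL side is EXPLAIN (ANALYZE, BUFFERS), once the ? placeholders are replaced with the literal values that Orthanc binds at runtime. A rough sketch on the first CTE alone, where the literal values are invented for illustration and will differ in a real request:

EXPLAIN (ANALYZE, BUFFERS)
SELECT
  studies.publicid,
  studies.internalid,
  ROW_NUMBER() OVER (ORDER BY order0.value DESC NULLS LAST) AS rownumber
FROM resources AS studies
LEFT JOIN metadata AS order0
  ON order0.id = studies.internalid
  AND order0.type = 42          -- invented value, for illustration only
WHERE studies.resourcetype = 1  -- invented value, for illustration only
LIMIT 100;                      -- invented value, for illustration only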

Hi guy,

Have a look at the source code and you’ll see in seconds that there is no “Statements.sql” file and no “OverrideStatements” configuration option. So this is 100% AI hallucination.

Don’t try to make me the bad guy. Learn to ask questions the correct way. GIVE to the community (do not always TAKE from the community), and never forget that you are not my boss and that I have no obligation at all to answer any of your questions!

I guess you understand that you won’t get any answer from me on this topic :wink: but, hopefully, the community will answer you.

Alain.

Thanks, Alain. I think I may not have explained it clearly earlier. My main concern is that when I fetch images in the PACS viewer, it takes a very long time, around 2–3 minutes. However, when we checked this with the team, they identified a specific database query that seems to be responsible for the delay. That’s why I’m asking how we can optimize or improve this query to reduce the loading time. I hope you understand. Thanks.

Hello,

The notion of “very long time” is extremely context-dependent. For instance, downloading a 1 GB DICOM study over a 100 Mbps network will take about 1.5 minutes, regardless of the software used (1 GB is roughly 8,000 megabits, and 8,000 Mbit divided by 100 Mbps gives about 80 seconds of raw transfer time, before any protocol overhead).

Instead of trying to dig into the low-level SQL queries, benchmark your network bandwidth, check how your filesystem is mounted, analyze your use case, and try to run Orthanc locally (without enabling extensions such as the PostgreSQL or S3 plugins at first). Your problem might very well be totally unrelated to the SQL queries.
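
If you do want to check whether the database is really the bottleneck, and assuming the pg_stat_statements extension is enabled on your PostgreSQL server (this is plain PostgreSQL, nothing Orthanc-specific), the cumulative statistics show how much time is actually spent in each query. On PostgreSQL 13 or newer the columns are named as below; older versions call them total_time and mean_time:

-- Requires the pg_stat_statements extension to be enabled on the server
SELECT
  calls,
  round(total_exec_time) AS total_ms,
  round(mean_exec_time)  AS mean_ms,
  left(query, 80)        AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;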

Regards,
Sébastien-

Hi, thanks for your advice. I will work on it.