OHIF Viewer with Orthanc and PostgreSQL

Hello All,

I have been using Orthanc for over a year. I would like first to say thank you to Sébastien for sharing extraordinary work as open source. Many thanks Sébastien!!!!!

I work with a small imaging center. Last month I migrated Orthanc from SQLite to PostgreSQL due to an increasing number of images/studies and performance issues. The migration went smoothly and I saw some speed gain afterwards.

I have been using a modified version of the Cornerstone Viewer (http://chafey.github.io/cornerstoneDemo/) against PostgreSQL. Response time is not great but within an acceptable range :).

I would like to use OHIF Viewer (http://viewer.ohif.org/) as it seems to be a better solution than what I have been using.

I have two servers: a Windows server where Orthanc is running and the images are stored, and a Linux server where PostgreSQL is running.

Here is my issue. When I point the OHIF Viewer to our PostgreSQL server and try to pull a study with a URL like http://myPortal/viewer/studyInstanceID, it takes about 30 minutes to pull the study; very strange.

So, I decided to create another Orthanc instance with SQLite and inserted 500 studies. I pointed the OHIF Viewer at a study like http://myPortal/viewer/studyInstanceID against SQLite. The result was about 3-4 seconds to pull the study. Not bad!!

Then, I created an empty database under PostgreSQL and moved the 500 studies there. I pointed the OHIF Viewer at the newly created database with 500 studies. The result was about 6 minutes; very odd.

Following are my modifications to the PostgreSQL conf file for my environment:
shared_buffers = 512MB        # tried 1GB as well to see any difference, but none
effective_cache_size = 2GB    # tried 4GB as well with no improvement
work_mem = 4MB

Could you kindly let me know why SQLite is so much faster than PostgreSQL in my setup? Is there something I could try to gain some speed/performance in PostgreSQL? My main production database is PostgreSQL.

I would very much appreciate your kind assistance in this matter.

With the very best regards,

Do you use Orthanc 1.2?
There were performance issues with previous versions that were not built in release mode (http://book.orthanc-server.com/faq/troubleshooting.html#performance-issues). The main performance issues were clearly related to large databases.

I use Orthanc 1.2.

I have about 500 studies in Postgres running under Linux. The images are stored on the Windows server where Orthanc (1.2) is running. What would you suggest to gain speed? At the moment it takes about 6 minutes to open any study via the OHIF Viewer using a URL like http://myPortal/viewer/studyInstanceID. That amount of time is not practical.

Many thanks!

Have you tried to figure out where the bottleneck is? … because 6 minutes for a study retrieval makes no sense if you are on a local network with decent hardware.

You are so correct. This does not make any sense. Yes, I have a decent server and network.

Now, if I use SQLite with 500 studies, the response time is 3-4 seconds. This is not bad!

It's only Postgres where the issue is.

I'm already using Postgres for our database and that works fine.

What would you suggest to look into?


I'm not expecting the PostgreSQL server itself to introduce such slowness.

It would be good to validate that no latency is introduced by your network environment between your Windows PC and the Linux server (there might be a DNS issue that introduces a 1-second delay on each SQL query, or anything else; there are plenty of queries between Orthanc and the SQL server, so any latency will reduce the performance by a huge factor).

What latency do you have when pinging the PostgreSQL server from Windows?
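To see why per-query latency matters so much, here is a minimal back-of-the-envelope sketch in Python. The query count and latencies below are hypothetical, purely for illustration:

```python
# Back-of-the-envelope: with thousands of small sequential SQL queries per
# study load, total time is dominated by per-query round-trip latency.

def total_query_time(n_queries, per_query_latency_s):
    """Wall time spent purely on query round trips (queries run sequentially)."""
    return n_queries * per_query_latency_s

# Hypothetical numbers: suppose one study load issues ~6,000 queries.
lan = total_query_time(6000, 0.0003)   # 0.3 ms LAN round trip -> ~1.8 s total
slow = total_query_time(6000, 0.06)    # 60 ms per query -> 360 s, i.e. 6 min
print(round(lan, 1), round(slow))
```

Even a modest per-query delay (a DNS lookup, a slow handshake, etc.) multiplies into minutes when thousands of queries are issued one after another.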

Kindly see the attached for latency.


Please note I have attached a graph from PgAdmin showing Tuples Out and Block I/O.

This is when I pull a single study via the OHIF Viewer directly via URL.



3-4 seconds? … that's not bad … that's WORSE THAN BAD. I hope you are including download and decoding time on the client.

I have an Orthanc 1.1.0 server with an SQLite DB and more than 3 TB of DICOM studies (around 380,000 studies), and a single CR takes <0.5 s to show in the Radiant viewer.
A full CT with about 900 images takes about 40 s to fully load all the images on the client.


What would you suggest to look into?

The good news is that there are a lot of things you can look into. The
bad news is that, well, there are a lot of things you need to look into.

I suggest first looking at Postgres at a high level. pg_activity is
a tool that helped us in the past for that. Essentially, it shows the
currently executing queries and the resources they consume. Looking at
whether a SQL query consumes an abnormal amount of resources during
your test transfers would be very helpful.

Next, I would look into network connectivity. Beyond latency tests (as
Alain suggested), try tools like iperf (some versions of which have
builds for both Linux and Windows). These test throughput. Let it run
for a little while (5-10 minutes) so that it has a chance to trigger the
condition you're hitting (e.g. filling buffers that might be causing
issues). If you want to observe bandwidth during real activity, you can
use a tool like iptraf-ng.
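If installing iperf is awkward, a very rough throughput check can be improvised. This Python sketch measures TCP throughput over the loopback interface only, so it is just an illustration of the idea (to test the real link you would run a small receiver like this on the Linux server and point the client at it), not a substitute for iperf:

```python
import socket
import threading
import time

def measure_tcp_throughput(host="127.0.0.1", total_bytes=8 * 1024 * 1024):
    """Send total_bytes over a TCP connection; return (bytes, seconds, MB/s).

    With host="127.0.0.1" this only exercises the loopback interface;
    split the sender and receiver across two machines to test a real link.
    """
    chunk = 64 * 1024
    server = socket.socket()
    server.bind((host, 0))          # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    result = []

    def reader():
        conn, _ = server.accept()
        received = 0
        while True:
            data = conn.recv(chunk)
            if not data:            # EOF: sender closed the socket
                break
            received += len(data)
        result.append(received)
        conn.close()

    t = threading.Thread(target=reader)
    t.start()

    client = socket.create_connection((host, port))
    payload = b"x" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        client.sendall(payload)
        sent += len(payload)
    client.close()                  # reader sees EOF and stops counting
    t.join()
    server.close()
    elapsed = time.perf_counter() - start
    return result[0], elapsed, result[0] / elapsed / 1e6
```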

Tools like iotop on the Postgres machine (try iotop -a for accumulated
statistics) will show you some numbers on the I/O activity of the
system associated with the processes responsible for it. Try both test
transfers with the viewer and simple operations on the machine (like
file copies) to see if you get good enough performance from the
storage, and watch for any processes reading or writing large amounts
of data that you don't expect.

top/htop will give you CPU and memory statistics. These are unlikely to
be the bottleneck given the scenario, but it's important to check
anyway. Mind memory swap space usage (you can also see it with the
"free" command) which should be small and stable; if the machine is
thrashing it will be *very* slow.

In addition, it sounds like things might have been working better with
Cornerstone. I would recommend observing the systems while using it too
just to compare the activity patterns (having a baseline is important,
otherwise all these numbers sometimes seem meaningless).

I know you've sort-of ruled out the Windows machine where Orthanc runs
as it seems to have worked fine in the past, but I would observe it
just the same anyway. Sometimes there are problems so small they don't
seem to manifest but then get aggravated under specific conditions (I
think Raúl hinted at that).

If you can change a few variables like you did setting up another
Orthanc instance with SQLite, do it (e.g. set up Postgres on another
host, another operating system, use another Postgres version, another
filesystem, etc). The goal is to compare, the setups don't have to make
perfect sense.

All of these tools should only take a few minutes to try and will give
you high-level live feedback on what's happening (although in aggregate
these things do take some time to figure out unfortunately). Try
running them all at once during a test, there's a good chance you'll
get a feel for the bottleneck.

Hopefully you'll then know which direction to dig forward to with more
specific tools. For example, if the network doesn't look right, use
tcpdump and wireshark to look at network captures, you should have all
the answers there. If the storage doesn't look right, use I/O benchmark
tools on the relevant filesystems. If the CPU usage is oddly high,
tracing the processes to look at flamecharts will point you to the
offending code.

One last note: When looking at network traces, don't just focus on the traffic between Orthanc and Postgres, also look at the traffic between the viewer and Orthanc to know what requests cause the effects you're seeing (Wireshark will parse DICOM).

Good luck,

Many thanks Thibault for giving me detailed guidelines on how and where to start checking for the bottleneck. I will work through them and get back to you with my results.

Warm regards,

Okay, I did something different. I monitored all the SQL statements that go to PostgreSQL while the OHIF Viewer is launching/running. Please find my unbelievable result set.

Started Empty Postgresql database
Inserted 2 studies (MRI with 54 images and CT 89 images).
Created new Log file for Postgres...
Ran OHIF Viewer to display study list with only 2 studies. Log file attached "2StudyCount_3065Execute.log". It sent 3,065 execute statements to Postgres - WOW!
Inserted 2 more studies (US 35 and 38 images).
Created new Log file for Postgres...
Ran OHIF Viewer to display study list with only 4 studies. Log file attached "4StudyCount_4587Execute.log". It sent 4,587 execute statements to Postgres - Unbelievable!
Created new Log file for Postgres...
Ran OHIF Viewer to go directly to the study by supplying the studyInstanceID in the URL. Log file attached "4StudyCount-directURL_6651Execute.log". The study was an MRI. I inspected all the images. The number of execute statements in the log file was 6,651.
Created new Log file for Postgres...
Ran OHIF Viewer to go directly to the study by supplying the studyInstanceID in the URL. Log file attached "4StudyCount-directURL-withoutscroll_5776Execute.log". The study was an MRI. I did not inspect any image. The number of execute statements in the log file was 5,776.
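For reference, the counting itself is simple. A sketch like this gives the numbers above; the log line format below is a hypothetical example of a statement-logging line, so adjust the pattern to your own log_line_prefix:

```python
import re

# Count "execute" statements in a PostgreSQL statement log. Each prepared
# statement shows up as a line containing "execute <name>: ...".
EXECUTE_RE = re.compile(r"\bexecute\b")

def count_execute_statements(log_text):
    return sum(1 for line in log_text.splitlines() if EXECUTE_RE.search(line))

# Hypothetical sample in the spirit of the attached logs:
sample = """\
2017-10-02 10:00:01 LOG:  execute <unnamed>: SELECT publicId FROM Resources
2017-10-02 10:00:01 LOG:  execute <unnamed>: SELECT value FROM MainDicomTags
2017-10-02 10:00:02 LOG:  statement: BEGIN
"""
print(count_execute_statements(sample))  # -> 2
```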

If I had 10k studies, what kind of execute statement count do you think there would be? Could this be the performance issue?

Best regards,

4StudyCount-directURL_6651Execute.log (1.33 MB)

4StudyCount-directURL-withoutscroll_5776Execute.log (1.15 MB)

2StudyCount_3065Execute.log (620 KB)

4StudyCount_4587Execute.log (927 KB)

It looks like the OHIF viewer is trying to pre-load all studies that are listed. So, indeed, if you try to open 10k studies in the OHIF viewer, it might try to pre-load all instances => probably millions of instances => millions of requests to the PostgreSQL server, and this can take a while!
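A rough linear extrapolation from the two measurements above (3,065 executes for 2 studies, 4,587 for 4) illustrates how badly this scales; treat the result as an illustration only, since the real growth may not be linear:

```python
# Fit a line through the two measured points: (2, 3065) and (4, 4587).
per_study = (4587 - 3065) / (4 - 2)   # 761 extra execute statements per study
baseline = 3065 - 2 * per_study       # 1,543 fixed overhead per page load
estimate_10k = baseline + per_study * 10_000
print(int(estimate_10k))  # -> 7611543, i.e. millions of queries
```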

I would suggest that you check with the OHIF team what is the pre-loading policy of the OHIF viewer.

Thank you for the reply. I have created a post in the Cornerstone Google forum. I hope someone will reply.

I will keep you posted.

Many thanks!

Kindly note my post in the OHIF Google group.