High Availability Orthanc on Ubuntu Server 20.04 LTS (manual build)

Greetings,

I found out about Orthanc last Friday and discovered that it works as a solution to route tomo images from an old mammo modality to our PACS: the total process takes 8 minutes, as opposed to over an hour when sending directly over a WAN connection. My thanks to Sébastien Jodogne for creating this solution!

The version we tested was 1.7.2 on a Windows server. However, we want a robust solution using Linux. Ubuntu has 1.5.8 in its repository, and I was able to set up 2 proxy servers and 2 Orthanc servers to provide an HA solution.

My next step is to create another server and manually compile 1.7.2.

Is it typical to move the compiled files, or to leave them in the Build folder? For the 1.5.8 Ubuntu package, the main Orthanc program was in /usr/sbin/Orthanc. I have no idea what in the “Build” folder would need to be moved, or where.

Regarding updating to newer versions, I assume it would be a matter of doing the same steps over again after deleting the contents of the “Build” folder.
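In other words, I assume the compile/upgrade cycle would look something like the sketch below (the source path is illustrative, and I'm assuming the build tools such as cmake and a C++ compiler are already installed):

```shell
#!/bin/sh
# Hypothetical sketch of an out-of-source build; adjust SRC to wherever
# the Orthanc sources were unpacked. Nothing runs if the tree is absent.
SRC="$HOME/Orthanc-1.7.2"

if [ -d "$SRC" ]; then
    mkdir -p "$SRC/Build" && cd "$SRC/Build"
    # A static build bundles the third-party libraries, which simplifies
    # running the binary straight from the Build folder:
    cmake -DSTATIC_BUILD=ON -DCMAKE_BUILD_TYPE=Release ..
    make -j"$(nproc)"
fi
# Upgrading to a newer release would be the same cycle: unpack the new
# sources, empty the Build folder, and rerun cmake/make.
```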

Thanks,
LHammonds

You can probably ignore everything below if Sébastien or someone else already has a script that moves things to the standard locations after building from source, or at least a summary of where everything belongs, but here goes:

I’ve compiled from source quite a few times. I don’t think doing so automatically moves files to the standard locations the way the repo package does, and I don’t bother with that for now while developing. I’m actually using the Desktop version of Ubuntu 18.04 with GNOME, and I install all of the other tools and server apps afterwards. I’ve been putting all of the Orthanc stuff on my Desktop, with separate folders for:

  1. The Build folder, which contains the Orthanc executable (called “Orthanc”, uppercase) as well as some of the standard .so plugins with symlinks (e.g. MWL, ConnectivityChecks, ServeFolders). I think you can put the config JSON in there too, but you still might have to specify the location of the config file when you start Orthanc, since it uses a default configuration otherwise. To start up from the terminal, just use:

/path/to/Build/Orthanc /path/to/OrthancConfigs/Configuration.json --verbose

You can put the config in a different directory if you want.

  2. The plugins you can probably just leave in the Build directory. If you want to build MySQL and/or PostgreSQL, you can do that and either put those in the main Build folder as well, or just specify the path to them in the Configuration.json file while leaving them in their own build directories.

  3. You can pretty much put things where you want and specify the location of the Storage directory in the config file, as well as the MWL folder if you are using that.
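To make that concrete, here is a hypothetical excerpt of a Configuration.json; every path below is made up and would need to match your own layout (Orthanc accepts C++-style comments in its JSON config):

```json
{
  // Where the DICOM files and the SQLite index are stored:
  "StorageDirectory" : "/home/user/OrthancStorage",
  "IndexDirectory" : "/home/user/OrthancStorage",

  // Folders (or individual .so files) to scan for plugins; this can
  // simply point at the Build folder:
  "Plugins" : [ "/home/user/Desktop/Build" ],

  // Worklist folder, if the modality worklists plugin is enabled:
  "Worklists" : {
    "Enable" : true,
    "Database" : "/home/user/WorklistsDatabase"
  }
}
```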

I guess another option would be to map out where the repo puts everything by default and write a script to move everything there.

There is a link to UNIX daemon scripts here, which is very helpful:

https://book.orthanc-server.com/faq/debian-daemon.html?highlight=init%20script

That will work on Ubuntu, with some minor modifications, if you compile from source and do not move stuff to the standard locations. I actually have that set up on Ubuntu with some mods to make it work with my own build. I can provide that or explain it if you want. I would not doubt that there is a script that will just move stuff to the standard locations, but there are some advantages to keeping everything in the build folders.

-HTH

/sds

Hello,

Thanks for your positive feedback.

The easiest way to publish the results of a manual compilation is to run “sudo make install” after having called “make”.

This will populate the subfolders of “/usr/local” the same way as the official Debian/Ubuntu packages (without however the configuration files and the service, that are only part of Debian/Ubuntu).
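For instance, from inside the “Build” folder (a sketch; the prefix shown is CMake's default and can be overridden with “-DCMAKE_INSTALL_PREFIX” at configure time):

```shell
#!/bin/sh
# Publish a finished build under /usr/local; the guard keeps this from
# doing anything when no Makefile is present.
PREFIX=/usr/local

if [ -f Makefile ]; then
    sudo make install                  # binaries, plugins, resources -> $PREFIX
    "$PREFIX/sbin/Orthanc" --version   # quick sanity check
fi
```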

This is actually what is done in the Debian package, as can be seen at this link:
https://salsa.debian.org/med-team/orthanc/-/blob/master/debian/rules

You might also have an interest in the Docker images:
https://book.orthanc-server.com/users/docker.html

HTH,
Sébastien-

Thanks. I will do the install step and work on creating a systemd equivalent to the init script.
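In case it helps, a minimal sketch of such a unit might look like this (the file name, user, and paths are assumptions; the binary location matches a default “sudo make install”):

```ini
# Hypothetical /etc/systemd/system/orthanc.service
[Unit]
Description=Orthanc DICOM server
After=network.target

[Service]
User=orthanc
ExecStart=/usr/local/sbin/Orthanc /etc/orthanc/orthanc.json
Restart=always

[Install]
WantedBy=multi-user.target
```

After creating the file: sudo systemctl daemon-reload && sudo systemctl enable --now orthanc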

I will publish what I find at these locations:

How to setup dicom routers on Ubuntu 20.04 - https://hammondslegacy.com/forum/viewtopic.php?f=40&t=285
How to compile Orthanc from source code on Ubuntu 20.04 - https://hammondslegacy.com/forum/viewtopic.php?f=40&t=286

LHammonds

Is that an open wiki that you run? I might want to work with you a bit to contribute to that.

I thought there was a way to move the build files automatically. I’m attaching my cmake_install.cmake file; you should have one in your Build folder as well.

That is probably what runs when you run sudo make install.

cmake_install.cmake (6.36 KB)

Checked out your forum, https://hammondslegacy.com/forum/viewforum.php?f=40

Very nice.

I don’t know if you are available, but I’m interested in getting some help with Docker containers/images, and with configuring our setup in general. I would be willing to pay something if necessary, and I’m sure you would have general suggestions.

If you are interested in freelancing, just let me know. I found a profile on LinkedIn (LRHC); not sure that is you. This group also has a list of freelancers. Looks like you are kind of a guru with some things. I am using Ubuntu as a server for production, and we want to use an in-house server rather than a cloud service. Solo practice. Not even sure it is all going to work out, but we are moving from a dev environment to a production server. We’ll be getting dedicated hardware for the server soon.

Thanks.

@Stephen, thanks, glad you like the content. I looked at Docker briefly when it first came out but was not very interested in it for production-quality deployments. Sure, containers let you deploy exactly how the developer intended, but they tend to be self-contained units, whereas my systems are inter-connected…such as a MariaDB cluster for all my various apps rather than a separate MariaDB install for every application. That is also the main reason I do not use “apt install” for web apps that bundle a database, such as NextCloud, MediaWiki, etc. I simply download the PHP code and point the config files to my DB cluster.

@all, I have 1.7.2 compiled, installed, documented and running under systemd. Now I just need to figure out how to compile plugins…specifically the AutomatedJpeg2kCompression plugin. :wink:

Thanks,
LHammonds

Actually, the “AutomatedJpeg2kCompression” is a Lua sample that only requires the “gdcmconv” and “dcmodify” command-line tools to be installed (from the Ubuntu packages “libgdcm-tools” and “dcmtk”), so technically this is not a “plugin”:
https://hg.orthanc-server.com/orthanc/file/Orthanc-1.7.2/OrthancServer/Resources/Samples/Lua/AutomatedJpeg2kCompression.lua

You’d better have a look at the transcoding feature (new in Orthanc >= 1.7.0), which provides a built-in way to transcode to JPEG 2000, together with the GDCM plugin. See “Transcoding of DICOM files” in the Orthanc Book (check out the “IngestTranscoding” option):
https://book.orthanc-server.com/faq/transcoding.html

https://book.orthanc-server.com/plugins/gdcm.html
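A hypothetical configuration excerpt combining the two (the plugin path is illustrative; 1.2.840.10008.1.2.4.90 is the JPEG 2000 lossless transfer syntax):

```json
{
  // Load the GDCM plugin so Orthanc can decode/encode JPEG 2000:
  "Plugins" : [ "/usr/local/share/orthanc/plugins/libOrthancGdcm.so" ],

  // Transcode every incoming instance to JPEG 2000 lossless at ingest:
  "IngestTranscoding" : "1.2.840.10008.1.2.4.90"
}
```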

Sébastien-

@Sébastien, I was not aware of the Lua script in the resources area, thanks for pointing that out. I was looking at the 1.7.2 source code and found the Plugin.cpp here: OrthancServer/Plugins/Samples/AutomatedJpeg2kCompression/Plugin.cpp

Looking over the Lua script, it changes the SOP Instance UID. Would that cause issues if the modality tech were to push the same study a 2nd or 3rd time? Such as a 4-image study becoming a duplicated 8-image or 12-image study?

The transfer syntax UID for our source images (Implicit VR Little Endian) is 1.2.840.10008.1.2. If the transfer syntax is changed to JPEG 2000 Lossless in the Lua script, would it change to 1.2.840.10008.1.2.4.90?

This is sample code (both the plugin and the Lua script); you’ll have to adapt it to your scenario.

Regarding UIDs, the DICOM rule is the following: if using lossless (non-destructive) compression, the SOP Instance UID tag can stay the same after transcoding. On the other hand, if using lossy (destructive) compression, the SOP Instance UID tag must change (for medical traceability, as the information has changed).

Again, check out the transcoding support that is now built into Orthanc >= 1.7.0, as it takes care of this:
https://book.orthanc-server.com/faq/transcoding.html#transcoding-in-orthanc

OK, it seems like I don’t need to use any plugin. Since I don’t have access to the destination PACS server console, I’m going to set the destination to another Orthanc server and let the images collect there. That way I can see how the images are sent and stored. Once it’s all tested and working as expected, I can trash the test destination and configure the route to the PACS server.

I noticed that when I use the Delete(Send) commands to auto-route incoming DICOM, I do not have much visibility into what went through the router if I ever need to trace a problem.

It seems like it would be a good idea to have the router retain everything sent to it for a few days and then purge anything over a certain number of days old. Is there a good way of doing this besides just running the following Linux command via a crontab, which purges any image directories over 3 days old?

/usr/bin/find /var/lib/orthanc/db-v6 -maxdepth 1 -type d -mtime +3 -exec rm -rf {} \;

Thanks,
LHammonds

Such a workflow can be implemented by using the REST API of Orthanc from an external script:
https://book.orthanc-server.com/users/rest.html

Once you have the external script working, you can consider implementing it as a plugin to make it run server-side. Python plugins are particularly well suited to such automation:
https://book.orthanc-server.com/plugins/python.html
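As a starting point, here is a hedged sketch of such an external purge script using only the Python standard library; the URL, the 3-day threshold, and the absence of authentication are assumptions to adapt:

```python
#!/usr/bin/env python3
"""Sketch: purge studies older than a few days through Orthanc's REST API."""
import datetime
import json
import urllib.request

ORTHANC_URL = "http://localhost:8042"   # assumption: default HTTP port, no auth
MAX_AGE_DAYS = 3


def is_expired(last_update, now, max_age_days=MAX_AGE_DAYS):
    """True if a study's LastUpdate metadata ('YYYYMMDDTHHMMSS') is too old."""
    updated = datetime.datetime.strptime(last_update, "%Y%m%dT%H%M%S")
    return (now - updated) > datetime.timedelta(days=max_age_days)


def purge_old_studies():
    """Delete every expired study (requires a live Orthanc server)."""
    now = datetime.datetime.now()
    with urllib.request.urlopen(ORTHANC_URL + "/studies?expand") as response:
        studies = json.load(response)
    for study in studies:
        if is_expired(study["LastUpdate"], now):
            request = urllib.request.Request(
                ORTHANC_URL + "/studies/" + study["ID"], method="DELETE")
            urllib.request.urlopen(request)


# Against a live router, schedule this via cron:
#     purge_old_studies()
```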