I’ve been working with an older osimis/orthanc Docker image that was built on top of the Ubuntu Trusty base image.
I’ve developed a fairly sophisticated Lua server-side script and, in the process, have installed additional Lua modules via the Debian apt-get system.
Unfortunately, I’m running into problems with the latest Osimis-Orthanc (I’m looking at osimis/orthanc:18.1.4) that’s built on top of the Xenial Ubuntu image.
I can apt-get my additional lua modules (lua-sql-postgres and lua-socket) and they appear to install without problems.
However, at run time, when Orthanc attempts to load the modules (e.g. socket.smtp), it searches the wrong paths and generates a lot of errors:
no field package.preload['socket.smtp']
no file './socket/smtp.lua'
no file '/usr/local/share/lua/5.1/socket/smtp.lua'
no file '/usr/local/share/lua/5.1/socket/smtp/init.lua'
no file '/usr/local/lib/lua/5.1/socket/smtp.lua'
no file '/usr/local/lib/lua/5.1/socket/smtp/init.lua'
no file './socket/smtp.so'
no file '/usr/local/lib/lua/5.1/socket/smtp.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './socket.so'
no file '/usr/local/lib/lua/5.1/socket.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
Now, according to both the Trusty and Xenial package descriptions, those files are not installed in those locations. I can confirm that on working Orthanc servers built on Trusty Docker images, the Lua files are all installed in the locations the package descriptions indicate.
I’m left wondering whether something has changed in the environment search paths. The Orthanc Lua engine apparently cannot see the additional Lua modules I’ve installed under Xenial, although it had no problem seeing them under the Trusty system.
Any ideas? I can’t tell if this is a Xenial configuration issue or whether I’m misunderstanding something about running Orthanc on Xenial.
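One workaround I may try is to extend the search paths at the top of my server-side script. This is only a sketch; the Debian install locations below are my assumption of where the lua-socket package puts its files:

-- Untested sketch: point the embedded Lua at the Debian package locations
-- (verify the actual paths with "dpkg -L lua-socket")
package.path  = package.path  .. ';/usr/share/lua/5.1/?.lua;/usr/share/lua/5.1/?/init.lua'
package.cpath = package.cpath .. ';/usr/lib/x86_64-linux-gnu/lua/5.1/?.so'
local smtp = require('socket.smtp')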
After this, Orthanc finds the socket module but, unfortunately, refuses to load it:
E0219 16:44:36.176965 LuaContext.cpp:580] Error while executing Lua script: error loading module 'socket.core' from file '/usr/lib/x86_64-linux-gnu/lua/5.1/socket/core.so': dynamic libraries not enabled; check your Lua installation
I then realized that, in previous Orthanc images, Lua was linked dynamically, whereas it is now built statically. This probably explains the difference.
I’ll try to patch the docker images and keep you informed.
Thanks for checking that. Playing with the LUA path was going to be my next step.
Regarding the static linking, I began to suspect that was the case. I recall that in another thread we said the Windows version of Orthanc was statically linked, and that had me wondering whether the same CMake setup was used on Linux platforms.
I’ll wait to test moving my setup to Xenial when you have a chance to patch the build.
Sébastien has patched Orthanc so that the static builds can now call external Lua modules, and this has been included in the osimis/orthanc:orthanc-mainline images. LUA_PATH and LUA_CPATH are configured correctly in this image.
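As a quick sanity check (just a sketch), you can log the search paths and attempt to load a module from your Lua script at startup:

-- Sketch: confirm that the embedded Lua sees the paths configured in the image
print('package.path:  ' .. package.path)
print('package.cpath: ' .. package.cpath)
local ok, err = pcall(require, 'socket.smtp')
if not ok then
   print('Failed to load socket.smtp: ' .. tostring(err))
end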
I haven’t checked the description of those images. Do they include the plugins? I think at some point you switched to shipping all the plugins in the Docker image and leaving it up to the user to copy the ones they wanted into the working plugin directory path.
I’m still getting used to the new setup since I worked with Orthanc 1.2 for so long.
Thanks. Lua appears to be working and my use of socket is working as well.
Now that my Lua script is getting farther than before, I’ve run into an unrelated problem and am addressing it. It appears that the native DICOM dictionary no longer includes the human-readable tag name OtherPatientIDs, so I’m either going to modify the dictionary or switch to using the numeric tag descriptor for that field.
Either the dictionary is missing that tag, or the embedded DCMTK routines aren’t pointing to the tag dictionary. OtherPatientIDs is the first tag in my Remove list. I wouldn’t be surprised if the DCMTK libraries are missing the pointer to the DICOM dictionary that translates the human-readable tag names into numeric addresses.
As a follow-up, if I use the web interface to drill down to an individual DICOM instance, the DICOM tag OtherPatientIDs is displayed with its human-readable text.
So, at least for the display of existing tags, the DCMTK back end is working. For some reason, it is rejecting my human-readable tags during the anonymize/modify step.
Actually, when upgrading DCMTK, some tag names have changed. It seems that OtherPatientIDs is now considered obsolete by the DICOM standard and is therefore named "Retired_OtherPatientIDs" in the dictionary.
I’m a little confused that Orthanc displays “OtherPatientIDs” as the nickname in the browser, but that behind the scenes my Lua script generates an Unknown DICOM Tag error when using the same nickname.
In the Orthanc Explorer, I can see:
0010,1000 (OtherPatientIDs)
along with all the other tags embedded in a particular DICOM file.
If DCMTK has deprecated “OtherPatientIDs”, why doesn’t Orthanc’s browser also generate an error? Or at least display the tag as “Retired”?
I’m going to have to add this nickname into my local dictionary because I make extensive use of it in my anonymization Lua script.
Do you have a link to documentation describing how to add to the local dcmtk dictionary used by Orthanc?
Is this a DICOM 2008 vs 2017b problem? I did upgrade from Orthanc 1.2 to 1.3 and see that 2017b was introduced in Orthanc 1.3.
I tried setting the DicomVersion in my calls to anonymize/modify to 2008, but I still encounter the error:
Unknown DICOM tag: "OtherPatientIDs"
I also confirmed that the API call to instances/####/simplified-tags returns "OtherPatientIDs" in the same way that the Orthanc Explorer does. I suspect the Orthanc Explorer is calling the simplified-tags API anyway.
I’m going to run a test where I get rid of these nicknames that used to work in my old Orthanc 1.2 Lua setup.
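For reference, here is roughly how I’m checking the nickname from inside my Lua script (a sketch; the instance identifier is a placeholder):

-- Sketch: check which nickname the simplified-tags route reports for 0010,1000
local instanceId = 'your-instance-id'   -- placeholder for an Orthanc instance identifier
local tags = ParseJson(RestApiGet('/instances/' .. instanceId .. '/simplified-tags'))
print('OtherPatientIDs present: ' .. tostring(tags['OtherPatientIDs'] ~= nil))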
I have now switched to using numeric DICOM tags in the anonymize/modify steps and this seems to solve the "Unknown DICOM Tag" problem.
However, this seems to point to an asymmetry between the anonymize/modify API POST calls and the GET methods for retrieving metadata (e.g. instances/####/simplified-tags).
The latter GET methods still return human-readable DICOM tag names that are no longer recognized by the POST API calls.
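For reference, the working call now looks roughly like this (a sketch; the study identifier is a placeholder and the rest of my Remove list is omitted):

-- Sketch: anonymize using the numeric tag instead of the OtherPatientIDs nickname
local studyId = 'your-study-id'   -- placeholder for the Orthanc study identifier
local request = {
   Remove = { '0010,1000' },      -- OtherPatientIDs, addressed numerically
   DicomVersion = '2008'
}
local answer = RestApiPost('/studies/' .. studyId .. '/anonymize', DumpJson(request, true))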
When Orthanc receives a DICOM file, it pre-computes a JSON summary of its DICOM tags, and caches this JSON file as an attachment to the DICOM instance (accessible at the “/instances/{…}/attachments/dicom-as-json/” URI): http://book.orthanc-server.com/faq/features.html#metadata-attachments
When Orthanc Explorer displays some DICOM instance, it accesses this cached JSON file, in order to avoid parsing the source DICOM instance again. In your case, you have updated the DICOM dictionary that is used by Orthanc, which implies that the cached JSON file does not perfectly match the new dictionary.
Since Orthanc 1.2.0, you can force the re-generation of the cached JSON file by DELETE-ing it, for instance:
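From a Lua script, that deletion would look roughly like this (a sketch; the instance identifier is only an example):

-- Sketch: drop the cached JSON summary so that Orthanc regenerates it on the next access
local instanceId = '301896f2-1416807b-3e05dcce-ff4ce9bb-a6138832'   -- example identifier
RestApiDelete('/instances/' .. instanceId .. '/attachments/dicom-as-json')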
The next time you open this particular instance with Orthanc Explorer, you will see messages in the Orthanc logs (in verbose mode) stating that the Orthanc server has reconstructed the JSON summary, which will match the new content of the dictionary:
I0222 08:56:00.923070 FilesystemStorage.cpp:155] Reading attachment "2309c47b-1cbd-4601-89b5-1be1ad80382c" of "DICOM" content type
I0222 08:56:00.923394 ServerContext.cpp:401] Reconstructing the missing DICOM-as-JSON summary for instance: 301896f2-1416807b-3e05dcce-ff4ce9bb-a6138832
I0222 08:56:00.929117 ServerContext.cpp:540] Adding attachment dicom-as-json to resource 301896f2-1416807b-3e05dcce-ff4ce9bb-a6138832
I0222 08:56:00.929425 FilesystemStorage.cpp:118] Creating attachment "3c830b66-8a00-42f0-aa3a-5e37b4a8b5a4" of "JSON summary of DICOM" type (size: 1MB)
What would trigger a similar update of the metadata at the series and study level? Is that only possible by forcing all of the instances to regenerate their dicom-as-json attachments?
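The only approach I can think of is to loop over the instances myself, along these lines (an untested sketch; the study identifier is a placeholder):

-- Untested sketch: force every instance of a study to regenerate its cached JSON summary
local studyId = 'your-study-id'   -- placeholder for the Orthanc study identifier
local instances = ParseJson(RestApiGet('/studies/' .. studyId .. '/instances'))
for _, instance in pairs(instances) do
   RestApiDelete('/instances/' .. instance['ID'] .. '/attachments/dicom-as-json')
end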
Returning to an old topic, I recently rebuilt my Docker image based on the latest osimis/orthanc:orthanc-mainline image and ran into this same Lua dynamic library linking problem again.
I suspect things were set up statically in the latest image. I can see that the module is properly located, but the embedded Lua cannot dynamically link to it.
In addition to the socket module, I’ve added the luasql-postgres module, and it’s this one that is failing to load now. I have not checked whether the socket module will load, though I suspect that it would not, due to similar dynamic linking problems.
The CMake option “-DENABLE_LUA_MODULES=ON” was introduced in Orthanc 1.3.2, in order to allow the (statically-linked) Lua engine of Orthanc to be used in conjunction with system-wide Lua modules.
Here is a minimalist command-line session to call Lua modules installed on Ubuntu 16.04, from a statically-linked version of Orthanc compiled with “-DENABLE_LUA_MODULES=ON”:
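In essence (a sketch, assuming the lua-socket and lua-sql-postgres Debian packages have been installed through apt-get), a Lua script run by such a build can load the system-wide modules directly:

-- Sketch: load system-wide modules from the statically-linked Lua engine
-- (assumes the Ubuntu 16.04 packages lua-socket and lua-sql-postgres are installed)
local smtp = require('socket.smtp')
local driver = require('luasql.postgres')
print('socket.smtp and luasql.postgres loaded successfully')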
Your problem stems from the fact that the Docker images for Orthanc 1.4.x are now using the official LSB (Linux Standard Base) binaries, which were not compiled with this CMake option.
The “jodogne/orthanc” and “jodogne/orthanc-plugins” Docker images have just been updated with LSB binaries compiled with “-DENABLE_LUA_MODULES=ON”. Similar Osimis images should be automatically generated soon.