Web Viewer for Orthanc is available

Dear all,

We have just released an official Web viewer for Orthanc under the AGPL license:

This Web viewer takes the form of a plugin that can be loaded into Orthanc. Internally, the plugin uses GDCM by Mathieu Malaterre to decode the medical images. This notably enables the proper display of JPEG2k. The Web viewer uses Cornerstone by Chris Hafey on the client-side to display the images.

We hope that this development will ease access to medical images.


Congratulations, excellent work!

Today’s scenario solved:

I needed to present patient images at a clinico-radio-pathological conference in the pathology department, using a PC connected to the projector that has no access to the PACS and no DICOM viewer installed…

Initial impressions:
Load times are awesome, keeping in mind that my “server” is another desktop PC located 2 floors below, running Orthanc 0.8.6 and PostgreSQL, and requests had to pass through a proxy because the pathology and radiology department subnets can’t talk to each other… :stuck_out_tongue:
Response was great for the CT stack.
Response was poorer for the stack of “photographed hard copies”, possibly because each instance was about 9 megapixels of RGB data, but still manageable. (On a side note, is it possible to set a default WL 127 / WW 256 for RGB data? It currently defaults to 127/127. Those instances were created using tools/create-dicom, so they don’t have WindowCenter / WindowWidth. That being said, I’m going to upgrade my PHP script to include those tags in the future.)

  • Yes, we are still transferring images between hospitals using hard copies in my country.
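Since the poster plans to add WindowCenter / WindowWidth tags when creating instances, here is a hedged Python sketch of such a payload for Orthanc’s `/tools/create-dicom` endpoint. The payload shape (`Tags` plus a base64 `Content` data URI) follows recent Orthanc releases and may differ on 0.8.x; verify against your server before relying on it.

```python
# Sketch: create a DICOM instance from a JPEG with explicit windowing tags,
# so 8-bit RGB data defaults to the identity mapping (WL 127 / WW 256).
import base64
import json
import urllib.request


def build_create_dicom_payload(jpeg_bytes, patient_id="DEMO"):
    """Build a /tools/create-dicom payload carrying default windowing."""
    return {
        "Tags": {
            "PatientID": patient_id,
            "WindowCenter": "127",  # mid-gray center for 8-bit data
            "WindowWidth": "256",   # full 8-bit range: identity mapping
        },
        "Content": "data:image/jpeg;base64,"
                   + base64.b64encode(jpeg_bytes).decode("ascii"),
    }


def upload(payload, orthanc="http://localhost:8042"):
    """POST the payload to Orthanc (assumed endpoint; check your version)."""
    req = urllib.request.Request(
        orthanc + "/tools/create-dicom",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()
```

The same payload could be produced from PHP; only the JSON structure matters to Orthanc.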

Just sharing pics of the new web viewer in action!


Picture 003.jpg

Picture 001.jpg

Picture 002.jpg

Congratulations and great work, Sébastien Jodogne for Orthance & Chris Hafey for the Web Viewer. The YouTube link and email from Emsy Chan sound very promising to me. I am exploring whether there is an option to integrate a third-party web-based Radiology workflow Solution (RwS) into Orthance to call the Web Viewer. The use case is as follows.

  1. Orthance will be used as a DICOM archive, with images stored on a RAID iSCSI partition.
  2. Our RwS will sync the recently added studies via the RESTful API.
  3. We have already integrated OsiriX with Orthance via WADO.
  4. For web viewing, can we call the Web Viewer directly, like the Weasis PACS Connector, so that images stored in Orthance are pulled directly into the Web Viewer?

Any suggestion is highly appreciated.

with regards


The YouTube video is Sebastien’s, btw. :slight_smile:

I’ve worked on integrating Weasis into Orthanc, so I can tell you now that calling Orthanc’s webviewer is easier because you don’t have to build the XML needed by Weasis. That being said, you can get Weasis to load multiple series using 1 XML, but at the moment you can only call 1 series using Orthanc’s integrated viewer. Of course (correct me if I’m wrong), since Cornerstone does support multiple series, I assume that it’s only a matter of time before Orthanc’s webviewer can load multiple series. (Browser memory permitting… of course)

All you need to do is call the viewer with the Series UUID (http://orthanc:8042/web-viewer/app/viewer.html?series=563487d1-b4babab9-a5be4548-6ad95e9c-2f33643b).

Since your RwS has already synced with Orthanc and knows the UUIDs, I don’t think there’s gonna be a problem for you to call the webviewer directly.



p.s. Orthanc doesn’t have an “e” at the end… :wink:

Hi Emsy,

Thanks for sharing your experience with PostgreSQL and the Web viewer!

I also observe a reduction in speed when displaying color images (even at moderate resolutions such as 1000x1000), by contrast with grayscale images of similar size. As this slowdown can be seen when adjusting the windowing with the mouse, it is related to the way the Web viewer uses Cornerstone, and not to the decoding of the images by the C++ plugin. Chris, would you have any hint to speed up the “cornerstone.renderColorImage” function that is used by Orthanc to display color images?


PS: Regarding the window width, I have just set it to 256 by default:

Unfortunately there is no easy way to improve the W/L performance for color images at this time. The W/L transformation is currently about 9x slower for color images than grayscale images for the following reasons:

  1. Color Images have 3x as much input data to transform - R, G and B (3) vs grayscale (1).
  2. Cornerstone has a special optimization for grayscale where it writes the luminance to the alpha channel, requiring a single byte write per pixel. This cannot be done with color data because it has 3 channels (RGB (3) vs luminance (1)), so three bytes must be written for each color pixel.

3x the data to read times 3x the data to write = 9x performance slowdown compared to grayscale. I have heard of people using OpenGL shaders to apply the W/L transformation and that is something that may improve the performance for systems where WebGL is available. Note that using a W/L value of 256/127 for color images is an identity mapping and cornerstone is smart enough to detect that and skip the transformation for that case which improves the initial rendering performance. You should use 256/127 by default on all color images unless you have a good reason not to.
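The per-pixel work described above can be sketched in a toy model. This is illustrative Python, not Cornerstone’s actual implementation; the linear ramp here only approximates its LUT, but it shows why RGB touches three times the memory of grayscale, and why 256/127 is (nearly) an identity mapping for 8-bit data.

```python
# Toy model of a linear window/level transform clamped to [0, 255].
def apply_wl(value, center, width):
    """Map a stored value to a display intensity with a linear W/L ramp."""
    lo = center - width / 2.0
    out = (value - lo) / width * 255.0
    return max(0, min(255, int(out)))


def wl_grayscale(pixels, center, width):
    # 1 read + 1 write per pixel (Cornerstone writes to the alpha channel).
    return [apply_wl(p, center, width) for p in pixels]


def wl_rgb(pixels, center, width):
    # 3 reads + 3 writes per pixel: three times the memory traffic.
    return [tuple(apply_wl(c, center, width) for c in px) for px in pixels]
```

With width 256 and center 127, every 8-bit input maps back to (approximately) itself, so a renderer that detects this case can skip the transform entirely.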

On the plus side, the W/L tool is very rarely applied to color images by clinical users. In fact, many PACS systems do not even have the ability to adjust the W/L of color images at all! So while it is slower than grayscale, it may not be experienced in normal use (if someone has use cases that counter this, please let me know!)

Another thing to keep in mind is that I have made several optimizations specifically for displaying color images and cine loops (e.g. US) in the cornerstoneWADOImageLoader. I haven’t looked at how you integrated cornerstone with Orthanc, but if you are not using the WADO image loader, you may be missing out on these.


Thanks Chris for this explanation! I will have a look at cornerstoneWADOImageLoader.

Emsy, the “9x” performance slowdown should directly explain the “poorer performance” for the stack of photographed hard copies you mentioned.


Whoops, did my math wrong, I believe it is actually 3x slower, not 9x (3 reads + 3 writes vs 1 read + 1 write = 6:2 or 3x). I also think there may be a way to make color w/l performance match grayscale performance by using the alpha channel - not sure why I didn’t think of this before. I’ll add this to the backlog to look at the next time I am in that part of the code.


Hi all,

Thanks for the explanation!

Clinical opinion:
I agree with Chris that we don’t often adjust WL in RGB images. Nevertheless, images viewed on different screens (the US vs workstation vs mobile vs projector) tend to have different brightnesses. Even though I know that 256/127 shows all the possible data available in the image, when the image appears “a little dark”, the automatic thing I do (and all my clinical colleagues do) is to adjust the WL. I can adjust the screen brightness, etc (which is the proper method), but it’s “instinctual” to adjust the WL on an RGB image.

In my clinical use scenario, where I presented the images on a (crappy) projector yesterday, I wanted to emphasize certain areas of the image, which may be darker or lighter. Considering that the most “comfortable” value (output value) to look at is in the mid-grey range, I may use a WL of 120/60 to show darker structures and WL 120/170 to show brighter structures.
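The preset idea suggested by this scenario could look like the following hypothetical table, using the width/center convention from this thread (e.g. “256/127”). The names and values are illustrative only; nothing like this exists in the Orthanc viewer today.

```python
# Hypothetical W/L presets for RGB images: one click instead of a costly
# interactive W/L drag. Convention: {"width": WW, "center": WL}.
PRESETS = {
    "identity":      {"width": 256, "center": 127},  # show all 8-bit data
    "dark-detail":   {"width": 120, "center": 60},   # emphasize dark areas
    "bright-detail": {"width": 120, "center": 170},  # emphasize bright areas
}


def preset(name):
    """Look up a preset by name; raises KeyError for unknown names."""
    return PRESETS[name]
```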

Technical opinion:
I’ve also been having the same issues with the RGB images. Not sure how much this applies, but:

  • Just did a quick experiment where I skipped re-reading the input RGB data (for the same instance) and tied WL to the alpha channel - it rendered about 2x faster, but the visual results were not that great (it’s only adjusting transparency, right? :stuck_out_tongue: )

  • Another quick experiment: I skipped re-reading the input RGB data and just modified the output RGB channels directly. It also renders 2x faster, but too much image information is lost (it saturates or desaturates too fast and too much).

I’m sure that Chris will have better luck than me. :wink: Failing this, then maybe just 3 preset WLs or something like that.



  • On a side note, I just found out that my colleagues were using this pathology room for their breast conferences and they’ve been lugging a 5-year-old MacBook up there every other week… Time to start telling them the benefits of Orthanc & Cornerstone! :slight_smile:

Thank you Emsy for your input.

with regards


Are the color images you are displaying actually color, or are they scanned grayscale images encoded as color? If they are in fact color images, what resolution are they and what are they pictures of? Most color images that I have encountered are US, and they are small enough to easily adjust the W/L in real time. If you do need to adjust the W/L of large color images frequently, a preset like you are talking about might be the best option. If you actually have grayscale images encoded as color, we could add a mechanism to cornerstone to display color images as grayscale, and then we would have the faster W/L performance.

On further thought about using the alpha channel - I don’t think you can achieve a true W/L type effect through the alpha channel alone. You could use the alpha channel to decrease the brightness of the image, but that’s not really helpful since the main use case is to increase the brightness of color images, which requires modifying the original RGB values. There is a feature in the backlog to drop the resolution of images during times of high interactivity (like W/L) so it is more responsive - I think I will implement that later this year (unless someone contributes the feature before I get to it). WebGL is also an option, but I don’t know much about it, so I probably won’t be pursuing this myself. If someone else out there knows WebGL and wants to do the work, let me know and we can coordinate efforts (I readily accept external pull requests).


Hmm… Now that you mention it, I’m feeeeeeeling really sheeeeepish: Why didn’t I just convert the initial JPEG to 8-bit monochrome?!!


The hardcopies I deal with are all grayscale (because they’re printed on film). So there’s no need to include colour data in them! I will test uploading monochrome JPEGs using the create-dicom tool, check the response, and post an update here.

Regarding the alpha channel, one thing I didn’t have time to try was to add a white rectangle over the color image and control the alpha channel of the white rectangle.

But as you mentioned, US images are small enough to adjust the RGB channels in real-time. So, if Orthanc outputs a large monochrome JPEG as single-byte-per-pixel, then it should be as fast as CR windowing.
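The conversion discussed above (collapsing RGB scans of grayscale film to one byte per pixel before calling create-dicom) can be sketched like this. The Rec. 601 luma weights are standard; for real JPEGs you would first decode the file with an imaging library such as Pillow, as shown in the comment (hypothetical usage, not run here).

```python
# Sketch: convert RGB pixel data to 8-bit monochrome using Rec. 601 luma
# weights, so the viewer only has one byte per pixel to window.
def rgb_to_monochrome(pixels):
    """Convert a list of (R, G, B) tuples to 8-bit luminance values."""
    return [min(255, int(0.299 * r + 0.587 * g + 0.114 * b + 0.5))
            for r, g, b in pixels]

# With Pillow, the whole file conversion would be a one-liner, e.g.:
#   Image.open("hardcopy.jpg").convert("L").save("hardcopy-mono.jpg")
```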


Hi Sebastien,

Thanks for this amazing WebViewer. I checked it with samples from http://www.osirix-viewer.com/datasets/. But somehow the Web viewer does not show my DICOM file, which I can see very well with the OsiriX viewer. Is there any option or tip from your side to help me understand why the Web viewer does not show my file?

Hi Vladimir,

I am able to display the OsiriX files. This is demonstrated in the following video, whose sample files come from OsiriX:

Are you sure that you have properly installed the OrthancWebViewer plugin (check your logs to be sure)? And are you sure that you click on the “Orthanc Web Viewer” button instead of the “Preview” button (the latter does not support JPEG2k)?