I have a very basic question. What are the recommended hardware specs to set up an Orthanc server that will be accessed by at least 10 computers at the same time?
Also, I wanted to know if there is a way to access DICOM files stored in Orthanc via the web, with just a link.
Can you elaborate on the simultaneous usage? Also, will those instances be serving tons of images (a very rough estimate is welcome)? Last but not least, will you be running any plugins (custom or otherwise), and what will those plugins do (in case you can answer that as of now)?
To put it simply, Orthanc is an image server. Its basic needs for memory and CPU are minimal, unless, for example, it will be receiving lots of images or running many scripts and plugins.
I’ve run an instance under development conditions on a quad-core CPU and it needed less than 30 MB of RAM. Of course, your mileage will vary.
Overall, this is more an exercise in balancing estimates against requirements than arriving at a fixed number, so please bear with me.
Let’s take a basic use case: all 10 computers will be sending images on a constant basis, and you’ll use the AutomatedJpeg2kCompression.lua script from the official repositories, unmodified.
You’ll need a good amount of CPU. I’d say 4 cores is the minimum and 8 is good. You don’t need one core per client, but you certainly need more than one. Bear in mind that even though there’s bound to be tons of processing, there’s also lots of I/O: you’ll need to read and write about 1.5 times the size of the images (not counting the database/storage backend).
That raises the question: what is the average size of the images those computers will be sending? Multiply that value by your expected simultaneous client count (10), then by 1.5 (by line 27 of the aforementioned script, both the compressed and uncompressed versions of the image are loaded in memory; the actual compression program might need more memory, but we don’t need to go into that much detail).
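For illustration, here is a rough Python sketch of that kind of recompression round-trip, driven from the outside through Orthanc’s REST API (the actual sample is a Lua script running inside Orthanc; the localhost URL and the gdcmconv invocation are my own assumptions):

```python
# Rough illustration only: fetch an instance from Orthanc, recompress it to
# JPEG 2000 with an external tool, then replace the original. The Orthanc
# URL and the gdcmconv call are assumptions, not taken from the official
# Lua sample.
import subprocess
import tempfile

import requests

ORTHANC = "http://localhost:8042"  # assumed default REST port


def recompress_instance(instance_id: str) -> None:
    # The uncompressed and compressed copies coexist during this step,
    # which is where the ~1.5x memory/IO factor above comes from.
    dicom = requests.get(f"{ORTHANC}/instances/{instance_id}/file").content

    with tempfile.TemporaryDirectory() as tmp:
        src, dst = f"{tmp}/in.dcm", f"{tmp}/out.dcm"
        with open(src, "wb") as f:
            f.write(dicom)

        # Transcode to JPEG 2000 (gdcmconv from GDCM, assumed installed)
        subprocess.run(["gdcmconv", "--j2k", src, dst], check=True)

        with open(dst, "rb") as f:
            compressed = f.read()

    # Drop the original instance, then upload the compressed copy
    requests.delete(f"{ORTHANC}/instances/{instance_id}")
    requests.post(f"{ORTHANC}/instances", data=compressed)
```

The point is simply that, for a moment, both copies of each instance exist side by side, so the per-image cost scales with however many senders hit the server at once.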
So: 10 MB * 1.5 * 10 clients equals 150 MB for a heavy(ish) load.
Now factor in some 30% headroom for good measure and you get about 195 MB.
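If you want the arithmetic spelled out (the 10 MB average is, again, an assumed figure):

```python
# Back-of-the-envelope memory estimate for the scenario above
avg_image_mb = 10    # assumed average size of an incoming image
copies = 1.5         # uncompressed + compressed copies held at the same time
clients = 10         # simultaneous senders
headroom = 1.3       # ~30% safety margin

peak_mb = avg_image_mb * copies * clients   # 150 MB
budget_mb = peak_mb * headroom              # 195 MB
print(f"peak: {peak_mb:.0f} MB, with headroom: {budget_mb:.0f} MB")
```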
As explained by Mr. Luis, the Orthanc system does not need a powerful server to work. However, if you want good transfer speeds between the server and the 10 workstations, the most important factors are the network speed and the type of hard drives you use on the server. The ideal combination is 1 Gbit network cards and SSD drives.
In my personal case, I have an HP server with dual Xeon processors, 128 GB of RAM, four 1 Gbit network cards and 40 TB of SSD storage. That server is virtualized with VMware, and I work with four servers running Windows Server 2012. I have almost 40 workstations that query the information stored on the four servers using the RadiAnt program. I process roughly 1000 studies monthly with an average of 3000 images per study, that is to say about 3 million images monthly. The medical equipment is split into groups, each exporting its data to a specific server according to the type of study it performs. All this lets me pull up studies from 4 years ago with no more than 4-5 seconds of delay.
On that same machine I have created another four virtual servers where I run the rest of the clinic’s information (worklist, administration, accounting, personal data, etc.). In short, I have eight virtual servers within the same HP server, and believe me, it works fine.
It all comes down to good design and structuring of the data; the hardware should be the least of your concerns.