Exporting Cine Loops as MP4 Videos from Orthanc (API)

Hello Orthanc Community,

I’m building a service to export DICOM videos (cine loops) from Orthanc as MP4 videos.

Would this be the best approach, where I export the individual frames and then reassemble them into a video?

  1. Using the REST API to retrieve multi-frame DICOM instances (via /instances/{instanceID}/file or /frames/{frameNumber}/image-uint8).

  2. Extracting frames with Python (pydicom) or directly using the /frames/{frameNumber} endpoint.

  3. Compiling the frames into an MP4 using FFmpeg or opencv-python.

A few questions:

  1. Is there a preferred Orthanc workflow for efficiently exporting cine loops as MP4s, especially for large datasets?

  2. Should I rely on frame-level extraction (/frames/{frameNumber}), or download the full DICOM file and process it on my server (roughly sketched below)?

  3. Are there existing plugins, best practices, or alternatives to enhance performance and scalability for this task?
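
To make question 2 concrete, the "download the full DICOM file and process it on my server" route could look roughly like the sketch below, using pydicom and opencv-python. The Orthanc URL, the credentials and the CineRate fallback are assumptions for the example; it only handles 8-bit pixel data, and compressed transfer syntaxes would additionally need a pydicom pixel-data handler such as pylibjpeg.

import io

import cv2
import numpy as np
import pydicom
import requests

ORTHANC = "http://localhost:8042"   # assumed Orthanc URL
AUTH = ("orthanc", "orthanc")       # assumed credentials

def dicom_to_mp4(instance_id: str, out_path: str) -> None:
    # Download the full multi-frame DICOM instance from Orthanc
    r = requests.get(f"{ORTHANC}/instances/{instance_id}/file", auth=AUTH)
    r.raise_for_status()
    ds = pydicom.dcmread(io.BytesIO(r.content))

    frames = ds.pixel_array                   # (n_frames, rows, cols[, 3])
    fps = float(getattr(ds, "CineRate", 30))  # fall back to 30 fps
    height, width = frames.shape[1:3]

    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:
        frame = frame.astype(np.uint8)        # assumes 8-bit pixel data
        if frame.ndim == 2:                   # grayscale -> BGR
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:                                 # DICOM RGB -> OpenCV BGR
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        writer.write(frame)
    writer.release()

# dicom_to_mp4("some-instance-id", "/tmp/cine.mp4")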

Thank you!
Kyle

Hello,

Your approach sounds valid. Relying on Orthanc’s ability to extract the frames is probably a better option than re-implementing this in your Python script with pydicom, unless you find that doing it yourself is more efficient than what Orthanc does for some reason (optimization, …).

I would make sure not to write a purely sequential, single-threaded script for this, though, and instead rely either on threads or on asynchronous functions.

(If you are not used to async requests, a script posted in another, unrelated thread might give you some inspiration: it performs concurrent async upload requests.)
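
For instance, a minimal sketch of concurrent frame downloads with asyncio and aiohttp could look like this (the Orthanc URL, the credentials and the file naming are assumptions for the example):

import asyncio

import aiohttp

ORTHANC = "http://localhost:8042"               # assumed Orthanc URL
AUTH = aiohttp.BasicAuth("orthanc", "orthanc")  # assumed credentials

async def fetch_frame(session, instance_id, frame, out_dir):
    # Ask Orthanc to render the frame as a PNG
    url = f"{ORTHANC}/instances/{instance_id}/frames/{frame}/rendered"
    async with session.get(url, headers={"Accept": "image/png"}) as response:
        response.raise_for_status()
        data = await response.read()
    with open(f"{out_dir}/{frame:06d}.png", "wb") as f:
        f.write(data)

async def fetch_all_frames(instance_id, out_dir):
    async with aiohttp.ClientSession(auth=AUTH) as session:
        # List the frame indices, then download them concurrently
        async with session.get(f"{ORTHANC}/instances/{instance_id}/frames") as r:
            frames = await r.json()
        await asyncio.gather(*(fetch_frame(session, instance_id, f, out_dir)
                               for f in frames))

# asyncio.run(fetch_all_frames("some-instance-id", "/tmp/frames"))

A thread pool would achieve the same goal; the point is simply not to wait for each frame before requesting the next one.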

Other than that, the “extract frames” → “ffmpeg” → “movie export” workflow sounds good.

_1. Is there a preferred Orthanc workflow for efficiently exporting cine loops as MP4s, especially for large datasets?_

Maybe someone will be able to comment on this… I personally cannot see much more to it than what you described.

HTH


I prototyped this internally a few weeks ago.

The pseudocode I use is:

  1. Check whether the video has already been generated and cached. If it has, return the cached video; if not, continue.
  2. Call /instances/:id/frames to get the number of frames in the instance.
  3. Call /instances/:id/frames/:frameId/rendered to get a PNG of each frame (PNG keeps things as close to lossless as possible), and write the frames to a temporary directory.
  4. Call /instances/:id/tags to get the frame rate for the instance.
  5. Use ffmpeg to create an H.265 video from the frames, taking the frame rate into account.
  6. Cache the video as an attachment to the instance so that it is stored alongside the instance (steps 4 to 6 are sketched below).
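
As a rough illustration only, steps 4 to 6 could look like this in Python, assuming the frames have already been written to a temporary directory. The Orthanc URL, the credentials, the RecommendedDisplayFrameRate fallback and the "mp4" attachment name (which has to be declared through Orthanc's UserContentType configuration option) are assumptions, not part of the original prototype:

import subprocess
from pathlib import Path

import requests

ORTHANC = "http://localhost:8042"   # assumed Orthanc URL
AUTH = ("orthanc", "orthanc")       # assumed credentials

def frames_to_mp4(instance_id: str, frames_dir: Path, mp4_path: Path) -> None:
    # Step 4: read the frame rate from the instance tags
    # (RecommendedDisplayFrameRate is one possible source; adjust to your data)
    tags = requests.get(f"{ORTHANC}/instances/{instance_id}/simplified-tags",
                        auth=AUTH).json()
    frame_rate = str(tags.get("RecommendedDisplayFrameRate", 30))

    # Step 5: assemble the PNG sequence into an H.265 MP4
    subprocess.run(
        ["ffmpeg", "-nostdin", "-y",
         "-framerate", frame_rate,
         "-i", str(frames_dir / "%06d.png"),
         "-c:v", "libx265", "-pix_fmt", "yuv420p",
         "-tag:v", "hvc1", "-movflags", "+faststart",
         str(mp4_path)],
        check=True)

    # Step 6: cache the MP4 as an attachment of the instance
    # ("mp4" must be declared in the UserContentType configuration option)
    requests.put(f"{ORTHANC}/instances/{instance_id}/attachments/mp4",
                 data=mp4_path.read_bytes(), auth=AUTH).raise_for_status()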

The ffmpeg flags I use are below for info:

/**
 * Disable interaction with stdin, as ffmpeg will not be able to read
 * from it.
 */
"-nostdin",
/**
 * Stop and exit on error.
 */
"-xerror",
"-framerate",
frameRate || "30",
"-i",
pathToImages,
"-c:v",
"libx265",
// "-crf",
// "26", // default is 28 - lower is better quality
// "-x265-params",
// "profile=main10",
/**
 * Specifies the profile to be used for H.265 encoding. In this case, it's
 * set to the Main profile.
 */
"-profile:v",
"main",
/**
 * x265-specific options, combined into a single -x265-params argument
 * (repeated -x265-params flags override each other, so only the last one
 * would be taken into account):
 *  - level=5.1: sets the H.265 level to 5.1, as per the DICOM H.265 Main
 *    Profile specification;
 *  - lossless=1: lossless encoding;
 *  - log-level=warning: limits the x265 library to warning-level logging.
 */
"-x265-params",
"level=5.1:lossless=1:log-level=warning",
/**
 * Sets the pixel format to yuv420p, which represents a 4:2:0 chroma
 * subsampling format.
 */
"-pix_fmt",
"yuv420p",
/**
 * Sets the FourCC (Four Character Code) tag for the video stream. Required
 * so that QuickTime plays the video. The "hvc1" tag specifically indicates
 * that the video stream is encoded with the H.265/HEVC (High-Efficiency
 * Video Coding) standard.
 */
"-tag:v",
"hvc1",
/**
 * Rearranges the MP4 file to facilitate progressive download, allowing
 * the video to start playing before it is completely downloaded. This is
 * beneficial for web playback.
 */
"-movflags",
"+faststart",

HTH


Hello,

As a complement to Benjamin and James’ answers, note that you could invoke ffmpeg from a Python plugin in order to have a solution that is fully integrated within Orthanc. Using the ffmpeg-python library could be a possible starting point.
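
To make this concrete, a minimal, untested sketch of such a plugin could look like the code below; the /instances/{id}/mp4 route, the hard-coded frame rate and the ffmpeg-python options are illustrative assumptions, not an official recipe:

import json
import tempfile
from pathlib import Path

import ffmpeg   # ffmpeg-python
import orthanc  # provided by the Orthanc Python plugin

def export_mp4(output, uri, **request):
    instance_id = request["groups"][0]

    with tempfile.TemporaryDirectory() as tmp:
        # Download every frame of the instance as a PNG through the built-in REST API
        frames = json.loads(orthanc.RestApiGet(f"/instances/{instance_id}/frames"))
        for i in frames:
            png = orthanc.RestApiGet(f"/instances/{instance_id}/frames/{i}/rendered")
            Path(tmp, f"{i:06d}.png").write_bytes(png)

        # Assemble the PNG sequence into an H.265 MP4
        mp4_path = Path(tmp, "cine.mp4")
        (ffmpeg
         .input(f"{tmp}/%06d.png", framerate=30)
         .output(str(mp4_path), vcodec="libx265", pix_fmt="yuv420p",
                 movflags="+faststart")
         .run(quiet=True))

        output.AnswerBuffer(mp4_path.read_bytes(), "video/mp4")

# Expose the export as e.g. GET /instances/{id}/mp4 (hypothetical route)
orthanc.RegisterRestCallback("/instances/([^/]+)/mp4", export_mp4)

A real plugin would also read the frame rate from the instance tags and cache the resulting MP4, as described in the previous answers.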

Regards,
Sébastien-
