Your approach sounds valid, and I think that relying on Orthanc's ability to extract the frames is probably a better option than re-implementing this in your Python script with pydicom, unless you find that doing it yourself is more efficient than what Orthanc does for some reason (optimization, …).
I would make sure not to write a purely sequential, single-threaded script for this, though, and instead rely on either threads or asynchronous functions.
(If you're not used to async requests, a script posted in another, unrelated thread might be a good source of inspiration: it performs concurrent async upload requests.)
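For illustration, here is a minimal sketch of what concurrent frame downloads could look like with asyncio and httpx (the Orthanc URL, credentials, and instance ID are placeholders to adapt to your setup, and the script mentioned above may well use a different library):

```python
# Minimal sketch: download all rendered frames of one instance concurrently.
import asyncio
import httpx

ORTHANC = "http://localhost:8042"   # placeholder: adjust to your server
AUTH = ("orthanc", "orthanc")       # placeholder credentials
MAX_CONCURRENCY = 8                 # tune to your network and server

async def fetch_frame(client, semaphore, instance_id, frame, out_dir):
    """Download one rendered frame and save it as a PNG file."""
    async with semaphore:
        r = await client.get(f"{ORTHANC}/instances/{instance_id}/frames/{frame}/rendered")
        r.raise_for_status()
        with open(f"{out_dir}/{frame:04d}.png", "wb") as f:
            f.write(r.content)

async def download_all_frames(instance_id, out_dir):
    """Fetch the list of frames, then download them all concurrently."""
    semaphore = asyncio.Semaphore(MAX_CONCURRENCY)
    async with httpx.AsyncClient(auth=AUTH, timeout=60) as client:
        r = await client.get(f"{ORTHANC}/instances/{instance_id}/frames")
        r.raise_for_status()
        await asyncio.gather(
            *(fetch_frame(client, semaphore, instance_id, frame, out_dir)
              for frame in r.json())
        )

# asyncio.run(download_all_frames("some-instance-id", "/tmp/frames"))
```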
Other than that, the “extract frames” → “ffmpeg” → “movie export” workflow sounds good.
_1. Is there a preferred Orthanc workflow for efficiently exporting cine loops as MP4s, especially for large datasets?_
Maybe someone will be able to comment on this… I personally cannot see much more to it than what you described.
I just prototyped this up internally a few weeks ago.
The pseudocode I use is:

1. Check whether the video has already been generated and cached. If it has, return the cached video; if not, continue.
2. Call `/instances/:id/frames` to get the number of frames in the instance.
3. Call `/instances/:id/frames/:frameId/rendered` to get a PNG of each frame (I use PNG to keep it as close to lossless as possible), and write the frames to a temporary directory.
4. Call `/instances/:id/tags` to get the FrameRate for the instance.
5. Use ffmpeg to create an H.265 video from the frames, taking the frame rate into account.
6. Cache the video as an attachment to the instance so that it can be stored alongside the instance.
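In case it helps, here is a rough Python sketch of the caching and frame-rate parts of that pipeline (steps 1, 4 and 6), using the requests library; the Orthanc URL, credentials, the "mp4" attachment name and the tags consulted for the frame rate are my own assumptions (user-defined attachment names normally have to be declared under "UserContentType" in the Orthanc configuration):

```python
# Rough sketch of the cache-check / frame-rate / cache-store steps.
import requests

ORTHANC = "http://localhost:8042"   # placeholder: adjust to your server
AUTH = ("orthanc", "orthanc")       # placeholder credentials
ATTACHMENT = "mp4"                  # hypothetical user-defined attachment name

def get_cached_video(instance_id):
    """Step 1: return the cached MP4 bytes if present, else None."""
    r = requests.get(f"{ORTHANC}/instances/{instance_id}/attachments/{ATTACHMENT}/data",
                     auth=AUTH)
    return r.content if r.status_code == 200 else None

def get_frame_rate(instance_id, default=30.0):
    """Step 4: read the frame rate from the instance tags. The exact tag
    depends on the modality, so a few common ones are tried here."""
    tags = requests.get(f"{ORTHANC}/instances/{instance_id}/tags?simplify",
                        auth=AUTH).json()
    for tag in ("RecommendedDisplayFrameRate", "CineRate", "FrameRate"):
        if tags.get(tag):
            return float(tags[tag])
    if tags.get("FrameTime"):   # FrameTime is expressed in milliseconds per frame
        return 1000.0 / float(tags["FrameTime"])
    return default

def cache_video(instance_id, mp4_bytes):
    """Step 6: store the encoded MP4 as an attachment of the instance."""
    r = requests.put(f"{ORTHANC}/instances/{instance_id}/attachments/{ATTACHMENT}",
                     data=mp4_bytes, auth=AUTH)
    r.raise_for_status()
```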
The ffmpeg flags I use are below for info:
/**
* Disable interaction with stdin, as ffmpeg will not be able to read
* from it.
*/
"-nostdin",
/**
* Stop and exit on error
*/
"-xerror",
"-framerate",
frameRate || "30",
"-i",
pathToImages,
"-c:v",
"libx265",
// "-crf",
// "26", // default is 28 - lower is better quality
// "-x265-params",
// "profile=main10",
/**
* Specifies the profile to be used for H.265 encoding. In this case, it's
* set to Main profile.
*/
"-profile:v",
"main",
/**
* Sets the H.265 level to 5.1 (as per the DICOM H.265 Main Profile
* specification), enables lossless encoding, and limits the x265
* library to warning-level logging. ffmpeg only honors the last
* "-x265-params" option given for a stream, so the settings are
* combined into a single option here.
*/
"-x265-params",
"level=5.1:lossless=1:log-level=warning",
/**
* Sets the pixel format to yuv420p, which represents a 4:2:0 chroma
* subsampling format
*/
"-pix_fmt",
"yuv420p",
/**
* Sets the FourCC (Four Character Code) tag for the video stream. Required
* so Quicktime plays the video. The "hvc1" tag specifically indicates
* that the video stream is encoded using the H.265/HEVC (High-Efficiency
* Video Coding) standard.
**/
"-tag:v",
"hvc1",
/**
* Rearranges the MP4 file to facilitate progressive download, allowing
* the video to start playing before it is completely downloaded. This is
* beneficial for web playback.
*/
"-movflags",
"+faststart",
As a complement to Benjamin and James’ answers, note that you could invoke ffmpeg from a Python plugin in order to have a solution that is fully integrated within Orthanc. Using the ffmpeg-python library could be a possible starting point.
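Purely as an illustration of that idea, a plugin could register a custom REST route and drive ffmpeg through ffmpeg-python; the `/instances/{id}/mp4` route, the hard-coded frame rate, and the encoding options below are assumptions, not an existing Orthanc feature:

```python
# Sketch of an Orthanc Python plugin exposing a hypothetical
# "/instances/{id}/mp4" route: it extracts the frames through the
# built-in REST API and encodes them with ffmpeg-python.
import json
import os
import tempfile

import ffmpeg   # pip install ffmpeg-python
import orthanc

def export_mp4(output, uri, **request):
    instance_id = request["groups"][0]
    with tempfile.TemporaryDirectory() as tmp:
        # Dump every rendered frame to a temporary directory
        for frame in json.loads(orthanc.RestApiGet(f"/instances/{instance_id}/frames")):
            png = orthanc.RestApiGet(f"/instances/{instance_id}/frames/{frame}/rendered")
            with open(os.path.join(tmp, f"{frame:04d}.png"), "wb") as f:
                f.write(png)

        # Encode with ffmpeg (the frame rate is hard-coded here; it should
        # be read from the instance tags, as discussed above)
        mp4_path = os.path.join(tmp, "out.mp4")
        (
            ffmpeg
            .input(os.path.join(tmp, "%04d.png"), framerate=30)
            .output(mp4_path, vcodec="libx265", pix_fmt="yuv420p", movflags="+faststart")
            .run(quiet=True, overwrite_output=True)
        )

        with open(mp4_path, "rb") as f:
            output.AnswerBuffer(f.read(), "video/mp4")

orthanc.RegisterRestCallback("/instances/(.*)/mp4", export_mp4)
```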