AI Training Workflow: Rasterizing Stone GSPS vs. switching to OHIF (DICOM SEG)?

Hello Orthanc community,

I am developing a pipeline to train a Neural Network (segmentation model) using medical images stored in Orthanc.

The Context: Initially, we planned to use the Stone Web Viewer for clinicians to annotate Regions of Interest (ROIs). However, I realized this saves the annotations as DICOM GSPS (Grayscale Softcopy Presentation State) objects, i.e. vector graphics rather than pixels, which forces us to build an external ETL script to “rasterize” these vectors into binary masks (bitmaps) for the AI.
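For context on what that ETL step would involve: if we do rasterize GSPS graphics ourselves, the core operation is a polygon fill. Here is a minimal pure-NumPy sketch (my own illustration, not an existing library function), assuming the vertices have already been extracted from a GraphicObject's GraphicData as (x, y) pairs in pixel coordinates (i.e. GraphicAnnotationUnits is PIXEL):

```python
import numpy as np

def rasterize_polygon(points, shape):
    """Fill a closed polygon into a binary mask.

    `points`: list of (x, y) vertices in pixel coordinates, e.g. as
    extracted from a GSPS GraphicObject's GraphicData (only valid when
    GraphicAnnotationUnits is PIXEL).  `shape`: (rows, cols).
    Uses the even-odd rule, evaluated at pixel centers.
    """
    rows, cols = shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    px, py = xs + 0.5, ys + 0.5              # pixel centers
    inside = np.zeros(shape, dtype=bool)
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        crosses = (y1 <= py) != (y2 <= py)   # edge spans this pixel row?
        with np.errstate(divide="ignore", invalid="ignore"):
            # x-coordinate where the edge crosses the pixel-center row
            xint = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            inside ^= crosses & (xint < px)  # toggle parity (even-odd)
    return inside.astype(np.uint8)
```

In practice we would likely lean on highdicom or a geometry library instead of hand-rolling this, but it shows the amount of conversion logic that sits between GSPS and a training-ready mask.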

The Potential Solution (OHIF): I have recently been reading about the OHIF Viewer integration. My understanding is that OHIF’s segmentation tools can save annotations directly as DICOM SEG (Segmentation) objects, which essentially contain the pixel-level mask/bitmap data we need, avoiding the need for complex vector-to-pixel conversion scripts.
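For comparison, decoding a DICOM SEG on the server side is much simpler: binary segmentations use BitsAllocated=1, so PixelData packs 8 pixels per byte, least-significant bit first. A sketch of unpacking one frame with NumPy (assuming the raw frame bytes have already been fetched, e.g. by downloading the instance from Orthanc's REST API and reading it with pydicom):

```python
import numpy as np

def unpack_seg_frame(packed: bytes, rows: int, cols: int) -> np.ndarray:
    """Unpack one bit-packed frame of a binary DICOM SEG.

    Binary segmentations have BitsAllocated=1: PixelData packs 8 pixels
    per byte, least-significant bit first.  Returns a (rows, cols)
    uint8 mask of 0s and 1s, ready to feed a training pipeline.
    """
    bits = np.unpackbits(np.frombuffer(packed, dtype=np.uint8),
                         bitorder="little")
    return bits[: rows * cols].reshape(rows, cols)
```

(pydicom's pixel handlers and highdicom can do this for you, including mapping frames back to source slices; the sketch is only to show that no vector-to-pixel conversion is involved.)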

My Questions:

  1. Validation of Stone/GSPS: Am I correct that if we stick with Stone Web Viewer, there is no server-side way to get a bitmap mask, and we must handle the vector rasterization (GSPS → Bitmap) externally using libraries like highdicom or pydicom?

  2. Validation of OHIF/SEG: Does the standard Orthanc+OHIF Docker container support saving directly as DICOM SEG out-of-the-box? If so, is the resulting DICOM SEG file easily accessible via the Orthanc API as a pixel array?

  3. Recommendation: For an AI training pipeline, is it considered “best practice” to migrate to OHIF to leverage DICOM SEG, rather than trying to parse and rasterize Stone GSPS files?

Thank you for your guidance on this architectural decision.