Our team has been using Orthanc with a PostgreSQL backend (versions 13 and 14). Our servers run scheduled backups via Linux shell scripts using the pg_dumpall utility.
We have noticed that the backup size grows very rapidly once we start using Orthanc, with roughly 100 DICOM images (mainly X-ray and ultrasound) ingested daily, though some sites have double that traffic.
After the server has run for about three months, the backups grow to over 300 GB and keep growing steeply, which makes it very difficult to maintain free disk space; at sites with regular CT traffic they exceed 3 TB. Incidentally, the PostgreSQL database itself occupies only about 30 GB on disk, and less than 100 GB even at busy sites.
Why would this be the case, and is there any way of doing global PostgreSQL backups while ensuring all images are backed up? At the moment we are forced to use the --no-blobs option for the PostgreSQL dump, which means we may not be able to restore the Orthanc DICOM images from the PostgreSQL backup should we need to restore the databases.
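To illustrate the trade-off, here is a minimal sketch of how one might build the per-database dump command, assuming the Orthanc database is named "orthanc" (adjust to your setup). With --no-blobs the dump excludes the large objects that hold the DICOM files, so those would need to be covered by a separate backup strategy.

```python
def pg_dump_command(dbname, dump_file, no_blobs=True):
    """Build a pg_dump invocation as an argument list for subprocess.run()."""
    cmd = ["pg_dump", "--format=custom", f"--file={dump_file}", dbname]
    if no_blobs:
        # --no-blobs skips large-object contents, which is where the
        # DICOM files live when the storage area is inside PostgreSQL.
        cmd.insert(1, "--no-blobs")
    return cmd

print(pg_dump_command("orthanc", "orthanc.dump"))
```

This only builds the argument list; you would pass it to subprocess.run() from your existing backup script.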
Thank you all for the replies; we will try to adjust the current settings, which have EnableStorage=true. The only issue we have is with backups, which generate very large files, whereas the on-disk size of the databases is comparatively much smaller. Maybe it has something to do with the way the PostgreSQL pg_dumpall tool handles backups of databases containing many or very large blobs.
Note that, when you switch a system from EnableStorage=true to EnableStorage=false, you will actually lose access to the DICOM files already stored in PostgreSQL.
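For reference, a sketch of the relevant section of the Orthanc configuration file for the new system, with EnableStorage disabled so that DICOM files go to the filesystem instead of PostgreSQL (host, port, and credentials here are placeholders):

```json
{
  "PostgreSQL": {
    "EnableIndex": true,
    "EnableStorage": false,
    "Host": "localhost",
    "Port": 5432,
    "Database": "orthanc",
    "Username": "orthanc",
    "Password": "change-me"
  }
}
```

With this setting, pg_dumpall only has to back up the index, and the DICOM files can be backed up with ordinary file-level tools.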
Therefore, you actually need to start a new system with the new configuration and then replicate the data from the old setup to the new one.
This migration can be performed while both systems are running.
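The replication step above can be sketched with the Orthanc REST API: GET /instances lists the instance identifiers, GET /instances/{id}/file downloads the raw DICOM file, and POST /instances uploads it. The two base URLs below are assumptions for the old and new servers; the copy loop takes its fetch/push operations as parameters so it can be exercised without live servers.

```python
import json
from urllib.request import Request, urlopen

OLD = "http://localhost:8042"  # assumed URL of the old Orthanc (EnableStorage=true)
NEW = "http://localhost:8043"  # assumed URL of the new Orthanc (EnableStorage=false)

def replicate(fetch, push, instance_ids):
    """Copy each DICOM instance from the old server to the new one.

    `fetch(instance_id)` returns the DICOM file as bytes and `push(dicom)`
    uploads it; both are injected so the loop itself is testable offline.
    """
    copied = 0
    for iid in instance_ids:
        push(fetch(iid))
        copied += 1
    return copied

def fetch_from_old(iid):
    # GET /instances/{id}/file returns the raw DICOM file
    return urlopen(f"{OLD}/instances/{iid}/file").read()

def push_to_new(dicom_bytes):
    # POST /instances accepts a DICOM file in the request body
    req = Request(f"{NEW}/instances", data=dicom_bytes,
                  headers={"Content-Type": "application/dicom"})
    urlopen(req)

def list_old_instances():
    # GET /instances returns a JSON array of instance identifiers
    return json.load(urlopen(f"{OLD}/instances"))
```

To run the migration you would call `replicate(fetch_from_old, push_to_new, list_old_instances())` while both servers are up; authentication and error handling are omitted for brevity.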