We are using the Transfers plugin to send a study to a peer Orthanc server. Below are the details of our setup.
Hardware setup: AWS EC2 instance
AMD EPYC 7571 @ 2.20 GHz, 2-core VM, 16 GB of RAM, Windows Server 2022 Datacenter, 1 GB SSD storage with increased IOPS
Transfer.json settings
{
/**
* The following options control the configuration of the
* transfers accelerator plugin for Orthanc.
**/
"Transfers" : {
"Threads" : 2, // Number of worker threads for one transfer
"BucketSize" : 4096, // Optimal size for a bucket (in KB)
"CacheSize" : 128, // Size of the memory cache to process DICOM files (in MB)
"MaxPushTransactions" : 4, // Maximum number of simultaneous receptions in push mode
"MaxHttpRetries" : 3 // Maximum number of HTTP retries for one bucket
}
}
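As a sanity check, the plugin and the peer can also be verified from outside Orthanc through the standard REST API. This is only a minimal Python sketch; the URL and credentials are placeholders for our environment.
import requests  # assumes the 'requests' package is available

ORTHANC = 'http://localhost:8042'   # placeholder: our source Orthanc
AUTH = ('orthanc', 'orthanc')       # placeholder credentials

# The Transfers plugin should appear in the list of loaded plugins
print(requests.get(f'{ORTHANC}/plugins', auth=AUTH).json())

# The peer used below ('TestSimpleHub') should appear in the list of configured peers
print(requests.get(f'{ORTHANC}/peers', auth=AUTH).json())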
Lua Script - OnStableStudy
function OnStableStudy(studyId, tags, metadata)
  local transfer = {}
  transfer['Resources'] = {}
  transfer['Resources'][1] = {}
  transfer['Resources'][1]['Level'] = 'Study'
  transfer['Resources'][1]['ID'] = studyId
  transfer['Compression'] = 'none'
  transfer['Peer'] = 'TestSimpleHub'
  local job = ParseJson(RestApiPost('/transfers/send', DumpJson(transfer, true)))
end
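To compare the two compression modes outside of the Lua callback, the same job can be triggered directly through the REST API and polled until completion. The following is only a rough Python sketch under a few assumptions: the URL, credentials and study ID are placeholders for our setup, and the job identifier is assumed to be returned under an 'ID' field as with other job-creating endpoints.
import time
import requests  # assumes the 'requests' package is available

ORTHANC = 'http://localhost:8042'    # placeholder: our source Orthanc
AUTH = ('orthanc', 'orthanc')        # placeholder credentials
STUDY_ID = '...'                     # Orthanc identifier of the study to send

def send_study(compression):
    # Same payload as the Lua script above, with the compression mode as a parameter
    body = {
        'Resources': [{'Level': 'Study', 'ID': STUDY_ID}],
        'Compression': compression,   # 'none' or 'gzip'
        'Peer': 'TestSimpleHub',
    }
    job_id = requests.post(f'{ORTHANC}/transfers/send', json=body, auth=AUTH).json()['ID']

    # Poll the job until it reaches a final state, then return its statistics
    while True:
        job = requests.get(f'{ORTHANC}/jobs/{job_id}', auth=AUTH).json()
        if job['State'] in ('Success', 'Failure'):
            return job
        time.sleep(1)

for mode in ('none', 'gzip'):
    job = send_study(mode)
    print(mode, job['EffectiveRuntime'], job['Content'])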
We tried both compression types accepted by /transfers/send, none and gzip; the results are copied below.
Job completion state with Compression=none
EffectiveRuntime: 117.934
ErrorDescription: Success
State: Success
Type: PushTransfer
Detailed information
- CompletedHttpQueries: 118
- Compression: none
- NetworkSpeedKBs: 9386
- Peer: DocPanelTestSimpleHub
- TotalInstances: 22
- TotalSizeMB: 909
- UploadedSizeMB: 909
Job completion state with Compression=gzip
EffectiveRuntime: 165.238
ErrorDescription: Success
State: Success
Type: PushTransfer
Detailed information
- CompletedHttpQueries: 118
- Compression: gzip
- NetworkSpeedKBs: 5649
- Peer: DocPanelTestSimpleHub
- TotalInstances: 22
- TotalSizeMB: 909
- UploadedSizeMB: 796
In both runs of /transfers/send we see that Compression=none performs better, given that we have good bandwidth between the two peers. What is common to both runs is the high CPU utilization.
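To make the comparison concrete, the end-to-end throughput can be computed from the two job reports above (plain arithmetic on the reported numbers):
# Effective throughput derived from the job statistics pasted above
runs = {
    'none': {'runtime_s': 117.934, 'total_mb': 909, 'uploaded_mb': 909},
    'gzip': {'runtime_s': 165.238, 'total_mb': 909, 'uploaded_mb': 796},
}

for mode, r in runs.items():
    study_rate = r['total_mb'] / r['runtime_s']    # MB of DICOM data delivered per second
    wire_rate = r['uploaded_mb'] / r['runtime_s']  # MB actually sent over the network per second
    print(f"{mode}: {study_rate:.1f} MB/s end-to-end, {wire_rate:.1f} MB/s on the wire")

# Roughly 7.7 MB/s end-to-end with 'none' versus 5.5 MB/s with 'gzip':
# gzip saved about 113 MB of traffic but made the whole transfer about 40% slower.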
In the documentation of the Transfers plugin (link) there is a line about this high CPU usage:
- Buckets can be individually compressed using the gzip algorithm, hereby reducing the network usage. On a typical medical image, this can divide the volume of the transmission by a factor 2 to 3, at the price of a larger CPU usage.
With the network speeds we currently have and a medical image data size of approximately 1 GB, applying compression ends up defeating its purpose. Please help us understand what causes the high CPU usage, and whether there is a way to bring it down. We have already tried increasing the CPU capacity, but the CPU utilization is still very high.
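For reference, this is roughly how the CPU load of the Orthanc process could be sampled while a transfer is running; a minimal Python sketch assuming the psutil package is available on the server, and the process name match is an assumption.
import psutil  # assumes the 'psutil' package is available

# Find the Orthanc process; matching on the name prefix is an assumption for our Windows service
orthanc = next(p for p in psutil.process_iter(['name'])
               if p.info['name'] and p.info['name'].lower().startswith('orthanc'))

# Sample the process CPU usage once per second while the transfer runs
for _ in range(120):
    # cpu_percent() is relative to a single core; divide by the core count for a 0-100% scale
    usage = orthanc.cpu_percent(interval=1.0) / psutil.cpu_count()
    print(f"Orthanc CPU: {usage:.0f}%")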
with regards
Rady