When I try to create an archive/zip of a large study using an async job, Orthanc sends a 0-byte file when I retrieve the archive/zip. The log file shows the following:
I1004 10:32:12.101482 JobsRegistry.cpp:504] Job has completed with success: dde9f288-37df-4f4b-817a-eda32ac0ebcf
I1004 10:32:13.087452 HttpServer.cpp:1248] (http) GET /jobs/dde9f288-37df-4f4b-817a-eda32ac0ebcf
I1004 10:32:17.083378 HttpServer.cpp:1248] (http) GET /jobs/dde9f288-37df-4f4b-817a-eda32ac0ebcf/archive
E1004 10:32:21.198303 HttpOutput.cpp:71] This HTTP answer has not sent the proper number of bytes in its body
It works fine for small studies but always fails on large studies.
After further testing, the issue occurs on studies that are around 1.5 GB+ and hence take a while for the archive to generate. Any help would be appreciated.
Do you have a reverse proxy in front of Orthanc? It probably has a timeout or size limit.
No, I don't have a reverse proxy in front of Orthanc.
So please share a full minimal reproducible setup (Orthanc configuration file) + the commands that you use to download the files.
Note that your HTTP client can time out as well, and also check this configuration option:
// Maximum number of ZIP/media archives that are maintained by
// Orthanc, as a response to the asynchronous creation of archives.
// The least recently used archives get deleted as new archives are
// generated. This option was introduced in Orthanc 1.5.0, and has
// no effect on the synchronous generation of archives.
"MediaArchiveSize" : 1,
Regards,
Alain
I have "MediaArchiveSize" set to 5 and have tried setting it as high as 10 without any luck.
I will post the orthanc config file and the commands shortly.
Commands are given below…
curl --request POST \
  --url http://localhost:8042/studies/737c0c8d-ea890b4d-e36a43bb-fb8c8d41-aa0ed0a8/archive \
  --data '{"Asynchronous":true}'        <<<<< to start the async job

curl --request GET \
  --url http://localhost:8042/jobs/17a77a33-8055-4803-8222-ba9d93f999b3        <<<<< to check the status of the job

curl --request GET \
  --url http://localhost:8042/jobs/17a77a33-8055-4803-8222-ba9d93f999b3/archive \
  --header 'Content-Type: application/zip'        <<<<< to get the generated archive
orthanc.json (15.3 KB)
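For completeness, here is the same three-step flow as a single Python sketch. This is only an illustration, assuming the requests library, the same base URL and study ID as in the curl commands above, and that the job's State field reports Success or Failure when it finishes; authentication, if enabled in orthanc.json, would need to be added.

import time
import requests

BASE = "http://localhost:8042"
STUDY = "737c0c8d-ea890b4d-e36a43bb-fb8c8d41-aa0ed0a8"

# 1. Start the asynchronous archive job
job = requests.post(f"{BASE}/studies/{STUDY}/archive",
                    json={"Asynchronous": True}).json()
job_id = job["ID"]

# 2. Poll the job until it has finished
while True:
    status = requests.get(f"{BASE}/jobs/{job_id}").json()
    if status["State"] in ("Success", "Failure"):
        break
    time.sleep(2)

# 3. Download the generated archive, streaming it to disk
with requests.get(f"{BASE}/jobs/{job_id}/archive", stream=True) as r:
    r.raise_for_status()
    with open("archive.zip", "wb") as f:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)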
I had this issue in the past and it was the reason why Sébastien added the HttpRequestTimeout configuration setting (see the reference config at https://hg.orthanc-server.com/orthanc/file/Orthanc-1.11.2/OrthancServer/Resources/Configuration.json).
Basically, if too much time goes by without bytes being transferred, the embedded HTTP server closes the connection (or something along those lines).
Maybe set it to 3600 to rule this out.
HTH
Already tried that and it made no difference.
OK. There is an important distinction to be made between HttpTimeout and HttpRequestTimeout, so I wanted to double-check.
This is weird.
Does the error always occur after a well-defined period, regardless of the size, as long as it's above the threshold that you determined?
The process, as you know, consists of 3 steps:
1.
curl --request POST \
  --url http://localhost:8042/studies/737c0c8d-ea890b4d-e36a43bb-fb8c8d41-aa0ed0a8/archive \
  --data '{"Asynchronous":true}'        <<<<< to start the async job

2.
curl --request GET \
  --url http://localhost:8042/jobs/17a77a33-8055-4803-8222-ba9d93f999b3        <<<<< to check the status of the job

3.
curl --request GET \
  --url http://localhost:8042/jobs/17a77a33-8055-4803-8222-ba9d93f999b3/archive \
  --header 'Content-Type: application/zip'        <<<<< to get the generated archive
Steps 1 and 2 complete just fine for studies of all sizes and don't generate any errors. Step 3, when executed, downloads a 0-byte zip/archive when the study is very large, i.e. 1.5 GB+. So basically I don't see any timeouts occurring, at least based on the info I'm getting from the logs.
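To put a number on what step 3 actually delivers, the download can be compared against the announced Content-Length. A small sketch, assuming the Python requests library and the job ID used above; it only reports how many bytes were received versus how many were announced.

import requests

url = "http://localhost:8042/jobs/17a77a33-8055-4803-8222-ba9d93f999b3/archive"
received = 0
announced = 0
try:
    with requests.get(url, stream=True) as r:
        announced = int(r.headers.get("Content-Length", 0))
        with open("archive.zip", "wb") as f:
            for chunk in r.iter_content(chunk_size=1024 * 1024):
                f.write(chunk)
                received += len(chunk)
except requests.exceptions.RequestException as e:
    print(f"download aborted: {e}")

print(f"announced: {announced} bytes, received: {received} bytes")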
Did you try executing step 3 with the -v/--verbose and/or --trace options? That might give you a little bit more information to work with for debugging purposes.
/sds
Yes, the log file snippet I provided in my opening post is with the --verbose flag enabled.
Hi Rana,
I think Stephen was suggesting to use the "curl -v" verbose option.
Just tested with a 4 GB study here without any problem (but that's a test study with 2 very large black instances that compresses into a 4 MB zip).
Please share a test study if you want us to investigate.
Best regards,
Alain.
Alain, I have sent you a link to one of the studies that is causing the issue.
Just chiming in about this issue. Rana sent me a link to download the study in question. I uploaded it to my dev instance of Orthanc and tried to download it as a job using the sequence of curl commands with the CLI.
I can confirm that for that large study it times out almost immediately.
In the orthanc.log I only see:
"This HTTP answer has not sent the proper number of bytes in its body", which is apparently thrown in OrthancFramework/Sources/HttpServer/HttpOutput.cpp:
HttpOutput::StateMachine::~StateMachine()
{
  if (state_ != State_Done)
  {
    //asm volatile ("int3;");
    //LOG(ERROR) << "This HTTP answer does not contain any body";
  }

  if (hasContentLength_ && contentPosition_ != contentLength_)
  {
    LOG(ERROR) << "This HTTP answer has not sent the proper number of bytes in its body";
  }
}
The curl CLI output has:
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /jobs/52dce9f0-d819-4bf2-914e-b07710ef4ccb/archive HTTP/1.1
> Host: localhost:8042
> User-Agent: curl/7.64.1
> Accept: */*
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0< HTTP/1.1 200 OK
< Connection: keep-alive
< Keep-Alive: timeout=1
< Content-Disposition: filename="archive.zip"
< Content-Type: application/zip
< Content-Length: 3836660466
<
0 3658M 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0* TLSv1.2 (IN), TLS alert, close notify (256):
{ [2 bytes data]
* transfer closed with 3836660466 bytes remaining to read
0 3658M 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
} [2 bytes data]
curl: (18) transfer closed with 3836660466 bytes remaining to read
For smaller studies, not a problem.
/sds
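One detail worth noting in the output above (just an observation about the numbers, not a claim about the root cause): the announced Content-Length is larger than the maximum value of a signed 32-bit integer, which lines up with only very large archives being affected. A trivial Python check of that arithmetic:

# Content-Length reported by curl above vs. the signed 32-bit maximum.
content_length = 3_836_660_466   # bytes announced for this archive
int32_max = 2**31 - 1            # 2147483647, i.e. roughly 2 GB
print(content_length > int32_max)   # True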
Hi Rana,
Thanks for sharing the large study (the zip had to be larger than 2 GB to trigger the issue, which did not happen with my test study).
This is now fixed in this commit: https://hg.orthanc-server.com/orthanc/rev/d842e4446e63
Best regards,
Alain.
Awesome. Thank you. Looking forward to the next release.