So I have a script that goes out and downloads PDFs from a local web server. The script works great: it detects when the cURL call fails and notifies you. But I have some files that consistently fail, ending up at 1 KB on disk, yet cURL doesn't come back with an error.
I find it a little curious that it isn't intermittently failing on different files, but always the same ones. For every file that fails, I can take the same link the script uses and open the PDF in a web browser without any problem. These are not https:// addresses either; I had already addressed an issue with that before…
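One way to narrow this down is to compare what the server advertises against what actually lands on disk, since the browser clearly gets the full file. A sketch, assuming a plain shell script; the URL and filename below are placeholders, not from the original script:

```shell
#!/bin/sh
# Compare an advertised Content-Length against the size of a saved file.
expected_matches() {
    # $1 = bytes the server advertised, $2 = bytes actually on disk
    [ "$1" -eq "$2" ]
}

# Hypothetical usage (URL and filename are placeholders):
# expected=$(curl -sI "http://server.local/doc.pdf" \
#            | tr -d '\r' | awk 'tolower($1)=="content-length:" {print $2}')
# actual=$(wc -c < doc.pdf)
# expected_matches "$expected" "$actual" || echo "size mismatch" >&2
```

If the advertised length for a problem file is itself ~1 KB, the server is sending the script something different from what it sends the browser (different headers, auth, or an error page).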
Any pointers on things to check / ideas about what might be going on?
EDIT: The error checking I have in place grabs the file size before the cURL call and again afterward from the local copy; if the difference != 0, it throws an error. It is catching my problem files just fine. I cannot find a reason, or any notification that these are failing, other than my file-size check.
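One common cause worth checking: by default, curl treats an HTTP 404/500 response as a successful transfer, exits 0, and saves the server's HTML error page (often around 1 KB) in place of the PDF; that would trip a size check while producing no curl error. Adding `--fail` (`-f`), or capturing the status with `-w '%{http_code}'`, surfaces this. A minimal sketch, with the URL and output path as placeholders:

```shell
#!/bin/sh
# Treat any non-200 status as a failed download instead of
# trusting curl's default exit code.
check_code() {
    [ "$1" -eq 200 ]
}

# Hypothetical usage -- url and out are placeholders:
# http_code=$(curl -sS -o "$out" -w '%{http_code}' "$url")
# check_code "$http_code" || echo "HTTP $http_code for $url" >&2
```

Alternatively, `curl --fail` exits with code 22 on HTTP errors, so the script's existing exit-code check would start catching these files without any other changes.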