cURL download constantly fails

So I have a script that goes out and downloads PDFs from a local web server. The script works great: it detects when the cURL call fails and notifies you. But I have some files that consistently fail, ending up at about 1 KB on disk, and yet cURL doesn't come back with an error.
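For reference, the download step boils down to something like the sketch below (bash, with a placeholder URL and filename; the real script has more around it):

```bash
#!/usr/bin/env bash
# Placeholder URL and output name; the real script loops over many files.
url="http://webserver.local/docs/example.pdf"
out="example.pdf"

# Note: without -f / --fail, curl saves whatever the server returns
# (even a 404/500 error page) and still exits 0, so the exit code alone
# can look like success while the file on disk is a tiny error page.
http_code=$(curl -sS -f -o "$out" -w '%{http_code}' "$url")
status=$?

if [ "$status" -ne 0 ] || [ "$http_code" != "200" ]; then
    echo "download failed: exit=$status http=$http_code url=$url" >&2
fi
```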

I find it a little curious that it isn't failing intermittently on different files, but always on the same files. For every file that fails, I can take the same link the script is using and open the PDF in a web browser without any problem. These aren't https:// addresses either; I'd already dealt with an issue around that before…

Any pointers on things to check, or ideas about what might be going on?

EDIT: The error checking I have in place grabs the file size before the cURL call and the local file size afterwards; if the difference != 0, it throws an error. It's catching my problem files just fine. I just can't find any reason for, or notification of, these failures other than my file size check.

Have the file size check report the sizes it's getting in that check; that might give you a hint as to which side is going wrong (local file size = 0, or remote file size = 0?). Bad filenames for local storage? Does the file already exist but isn't writable?
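Something along these lines, assuming a bash script and that the remote size comes from the Content-Length of a HEAD request (adjust to however your script actually gets the sizes; $url and $out stand in for your download URL and local output path):

```bash
# Hypothetical logging version of the size check.
remote_size=$(curl -sSI "$url" | tr -d '\r' | awk 'tolower($1) == "content-length:" {print $2}')

if [ -f "$out" ]; then
    local_size=$(wc -c < "$out" | tr -d '[:space:]')
else
    local_size=0
fi

echo "size check: url=$url remote=${remote_size:-unknown} local=$local_size"

if [ "${remote_size:-0}" -ne "$local_size" ]; then
    echo "size mismatch for $url (remote=${remote_size:-unknown}, local=$local_size)" >&2
fi
```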

It's always the local file that is smaller / corrupt (around 1 KB). Also, each time the script runs it creates a new folder based on a datetime hash, so the file can't already exist.
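Since the bad copies are always around 1 KB, my next step is to look at what's actually inside one of them. A real PDF begins with the bytes %PDF, so if cURL is saving some kind of error page instead, it should be obvious (rough check on a Unix-like box; the filename is a placeholder):

```bash
# Inspect a suspect ~1 KB download; "suspect.pdf" is a placeholder path.
file suspect.pdf          # reports "PDF document" vs. "HTML document", etc.
head -c 200 suspect.pdf   # a genuine PDF starts with "%PDF-"; an error page is usually HTML
```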