I'm using cURL on a project to contact a web service and pull some data into a client's content management system. The data isn't large; I'm just pulling basic page content via cURL/REST calls. There's a single authentication step, after which the session key is sent in a header with each subsequent call for the life of the session.
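For reference, here is the flow I mean, sketched with Python's standard library. The endpoint paths, the JSON field names, and the `X-Session-Key` header name are all placeholders; the real service uses its own:

```python
# Sketch of the auth-then-session-key flow. BASE, the endpoint paths, and
# the "X-Session-Key" header name are assumptions for illustration only.
import json
import urllib.request

BASE = "https://example.com/api"  # hypothetical base URL


def authenticate(user, password):
    """One-time auth call; returns the session key the service hands back."""
    body = json.dumps({"user": user, "password": password}).encode()
    req = urllib.request.Request(
        f"{BASE}/auth", data=body,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["session_key"]


def session_headers(session_key):
    """Headers attached to every subsequent call for the life of the session."""
    return {"X-Session-Key": session_key, "Accept": "application/json"}


def fetch_page(session_key, page_id):
    """One of the per-page data calls that only works some of the time."""
    req = urllib.request.Request(
        f"{BASE}/pages/{page_id}", headers=session_headers(session_key))
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()
```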
Authentication usually works without a hitch, but when I call for the actual data, sometimes I get it and other times I get nothing at all. It works only about a third of the time.
I like to consider myself a fairly competent programmer, but I'm having a mental block trying to work out whether the problem is on my end (client side) or theirs (server side), and from there, whether it's more of a code issue or a communication issue. Where would you start looking for the source of the problem?
Use Firefox and install the LiveHttpHeaders extension. Then you can watch what happens during a normal browser request, which may give you a clue as to what is going wrong. Study the output and have cURL emulate just about all of those requests; figuring out which of them can be dropped, for the sake of brevity and simplicity, is then a matter of trial and error. Sometimes the number of redirects on their server can be an eye-opener.
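The same redirect chain the extension shows in the browser can be made visible in code. This sketch (standard-library Python rather than the cURL binary) logs every redirect hop while sending typical browser-style headers; the header values are illustrative, not ones the service necessarily checks:

```python
# Log every redirect hop while fetching with browser-like headers, to compare
# what the script sees against what LiveHttpHeaders shows in Firefox.
import urllib.request


class LoggingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Records (status_code, target_url) for each redirect before following it."""

    def __init__(self):
        self.hops = []

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        self.hops.append((code, newurl))
        return super().redirect_request(req, fp, code, msg, headers, newurl)


def browser_like_fetch(url):
    """Fetch url with browser-style headers; return (body, redirect hops)."""
    handler = LoggingRedirectHandler()
    opener = urllib.request.build_opener(handler)
    req = urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0",            # mimic the browser request
        "Accept": "text/html,application/json",  # illustrative Accept header
    })
    with opener.open(req, timeout=10) as resp:
        return resp.read(), handler.hops
```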
You don't go into detail, but I would definitely separate fetching the data (and then caching it) from any post-fetch analysis.
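A minimal sketch of that separation: write each successful response to a disk cache keyed by URL, and have the analysis step read only from the cache, so a failed fetch leaves the last good copy in place. The directory name and hashing scheme here are arbitrary choices:

```python
# Separate fetch-and-cache from analysis: the fetch step calls store() after
# each successful request; the analysis step only ever calls load().
import hashlib
import pathlib

CACHE_DIR = pathlib.Path("cache")  # arbitrary cache location for the sketch


def cache_path(url):
    """Stable on-disk filename for a URL (hashed to avoid unsafe characters)."""
    return CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()


def store(url, body):
    """Persist a successful response, overwriting any previous copy."""
    CACHE_DIR.mkdir(exist_ok=True)
    cache_path(url).write_bytes(body)


def load(url):
    """Return the cached body for url, or None if it was never fetched."""
    p = cache_path(url)
    return p.read_bytes() if p.exists() else None
```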
Anything going over the wire is prone to all kinds of latency issues, so it can be a real nightmare to identify where a fault lies.