Here is the philosophical question, a question of process and flow...
I have a web application that monitors hundreds, maybe thousands, of items non-stop using server-side functions. Currently the process runs a somewhat serial, non-stop loop of individual AJAX requests, one per item being monitored, and the loop is now taking too long.
I am reworking it to be multi-threaded, or at least to send multiple requests asynchronously in a better way; but here is the question: how should I handle the main looping function?
I do not want to overload the server by asking more questions while its plate is still full; I can see a situation where the requests compound quickly and become a very, very bad thing. I also want to take into consideration some requests failing (hopefully not, but I don't want them to throw a wrench into the system).
A user may, and in fact will, have their browser open 24/7 displaying a live view of everything being monitored at one time on one page. Another concern is memory leakage and the like, so auto-refreshing the whole page every once in a while is also being implemented and looked into.
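To make the "compounding requests" concern concrete: one common alternative to a serial loop is a concurrency-limited pool, where only a fixed number of requests are in flight at once and a new one starts only when a slot frees up. This is a minimal sketch; the task functions stand in for whatever AJAX call each monitored item needs, and the names here are illustrative, not an existing API.

```javascript
// Sketch: run an array of async tasks with at most `limit` in flight.
// Each task is a zero-argument function returning a Promise, e.g.
// items.map(id => () => $.getJSON('/status/' + id)) in a jQuery setup.
async function runPool(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0; // shared index into the task queue (JS is single-threaded, so no race)

  async function worker() {
    while (next < tasks.length) {
      const i = next++;            // claim the next task
      results[i] = await tasks[i](); // only `limit` of these run concurrently
    }
  }

  // Start at most `limit` workers; they drain the shared queue together.
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

Because the pool never exceeds its cap, a slow server slows the loop down rather than letting requests pile up on top of each other.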
YOUR thoughts are appreciated.
(The technology and experience should not be key to this question, but if you're interested: PHP, jQuery, JSON, Apache, and MySQL.)
You should probably add some additional capability to both the client and server. When your server responds to a client request, you might want it to include some kind of info the client can use to determine how rapidly it may make future requests, so that it can throttle up or down as needed and permitted. A failed request should probably make the client sharply reduce its request rate. The server is in a good position to inform the client of its current load, but not of network load; you can probably ignore network load, though.
This assumes the client will behave and follow your protocol, and that this isn't something that would be abused.
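The throttle-up/throttle-down idea above is essentially additive-increase, multiplicative-decrease pacing. Here is one possible sketch, assuming the server includes a `load` hint (0 to 1) in each JSON response; the field name and the thresholds are assumptions, not an existing protocol.

```javascript
// Sketch: compute the delay before the next poll, given the current delay
// and the outcome of the last request. `ok` is whether the request
// succeeded; `load` is a hypothetical server-reported load factor (0..1).
function nextDelay(current, { ok, load = 0 }, opts = {}) {
  const { min = 1000, max = 60000, step = 500 } = opts; // all in milliseconds

  if (!ok) {
    return Math.min(current * 2, max);       // failure: back off sharply
  }
  if (load > 0.8) {
    return Math.min(current * 1.5, max);     // server busy: slow down
  }
  return Math.max(current - step, min);      // healthy: speed up gently
}
```

In use, the polling loop would call `nextDelay` after every response and schedule the next request with `setTimeout`, so the client converges on a rate the server can sustain instead of hammering it at a fixed interval.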
As far as failed requests, some things to think about:
Was there a problem with the request body, such that retrying it is likely to fail again?
Would the next request in the queue maybe not have this problem?
Is it OK to just put this failed request at the back of the queue and try it again sometime in the future, or do I really need the server's response for it right now?
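Those questions map fairly directly onto a small decision function. This sketch assumes HTTP-style status codes: a 4xx means the request itself was bad (retrying the same thing will fail again), while 5xx or network errors are treated as transient and requeued. The function name and return values are illustrative only.

```javascript
// Sketch: decide what to do with a failed request for `item`, given the
// HTTP status of the failure and a simple array used as a retry queue.
function onFailure(item, status, queue) {
  if (status >= 400 && status < 500) {
    // Client-side problem (bad request body, not found, etc.):
    // retrying the identical request is likely to fail the same way.
    return 'drop';
  }
  // Server error or transient failure: push to the back of the queue
  // and let a later cycle try it again.
  queue.push(item);
  return 'requeued';
}
```

A real version might also cap the number of retries per item, but the 4xx-versus-5xx split is the core of the "will retrying even help?" question.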
I think questions like those should direct your strategy for how to handle it.