Set maximum execution time to unlimited

Is that possible? If so, how?

Just curious — why would you want to do this? A script that runs for a very long time could either crash your server or eat up a huge amount of memory. Most people want to kill the script if, for some reason, it doesn’t complete within a small amount of time.

I’m trying to make a script that checks whether files exist in a directory; if so, it processes them, and if not, it sleeps and comes back to check again.

I don’t want to use a cron job because I don’t want two instances of the script working at the same time. For example, if I set the cron job to run every 3 minutes and the first instance takes 5 minutes to complete, the second instance will kick in at the 3-minute mark (if my understanding of how cron jobs work is correct).

It’s possible to set PHP’s execution time to 0 (limitless), but you still have to get past server limits etc.
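For reference, a minimal sketch of lifting PHP’s own limit (web-server timeouts and other server-level limits still apply on top of this):

```php
<?php
// Lift PHP's own time limit for this script. 0 means no limit.
// (The CLI default is already 0; this mainly matters under a web server.)
set_time_limit(0);

// The equivalent via ini_set(); values are strings, as in php.ini.
ini_set('max_execution_time', '0');
```
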

I have a script with a 3-minute time limit (which doesn’t include the database queries that happen constantly and will increase this time). Then a cron job runs every 10 minutes to run a Haskell program and start them again. That way I get around any limits.

This is for a monster processing application. It takes files with millions of results each, generated on-the-fly by Haskell [a functional programming language], and does mass calculations on them. I’m running out of storage space quite quickly, though… At the current rate I estimate I can store about 2 billion more results, which isn’t enough for the final goal.

Is there a guaranteed way not to create a second instance of a script with a cron job? AFAIK cron jobs will only launch scripts and won’t check for running instances.

Create a file called “is_running.txt” and delete it on job completion. Each time the cron starts, check whether the file already exists, and if it does, stop running.

Or use a database, but checking a file would be easier.

Making sure I got this right:

At the beginning of my script, check if an “is_running.txt” file exists,

if so,
exit;
if not,
create “is_running.txt”
run script

after script is done,
delete “is_running.txt”
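That flow maps almost line-for-line onto PHP. A sketch, wrapped in a hypothetical `run_once()` helper so the lock is released even if the job throws (the file name is just a placeholder):

```php
<?php
// Run $job only if no lock file exists; returns true if the job actually ran.
function run_once(string $lockFile, callable $job): bool {
    if (file_exists($lockFile)) {
        return false;                  // another instance is running: exit
    }
    file_put_contents($lockFile, (string) getmypid());  // create the lock
    try {
        $job();                        // ... process the files here ...
    } finally {
        unlink($lockFile);             // delete the lock when done
    }
    return true;
}
```
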

Wow, that’s genius haha. Thanks.

Regardless, you need a way to start this service (and make sure it keeps running). What if it crashes? You’ve got to restart it, and a cron job is a great way to do that. If you only want a single instance running, have the script talk to a database: have it continually tell the database that it’s still running, every few minutes or whatever. Start another instance every few minutes via cron (or other means), and the first thing it does is check with the database to see whether another instance has been active in the last few minutes. If so, it just die()s before it begins any processing, because it’s not needed. That way, if an instance crashes, another one will be spawned to take its place within minutes.
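A sketch of that heartbeat idea, assuming a made-up single-row table (`heartbeat` with `id` and `last_seen` columns — names are mine, adjust to taste):

```php
<?php
// Assumes: CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, last_seen INTEGER);

// Has another instance checked in within the last $maxAgeSeconds?
function another_instance_alive(PDO $db, int $maxAgeSeconds): bool {
    $last = $db->query('SELECT last_seen FROM heartbeat WHERE id = 1')
               ->fetchColumn();
    return $last !== false && (time() - (int) $last) < $maxAgeSeconds;
}

// Record "I'm still running" — call this every few minutes from the worker.
function beat(PDO $db): void {
    $db->prepare('REPLACE INTO heartbeat (id, last_seen) VALUES (1, ?)')
       ->execute([time()]);
}

// At startup: if (another_instance_alive($db, 300)) die(); otherwise beat().
```

`REPLACE INTO` works on MySQL and SQLite; other databases need an equivalent upsert.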

I’ve never made a forever-running PHP script. People warn that PHP isn’t suited for this, citing memory-leak issues, and I’ve also heard about eventually running out of the file descriptors needed to open file handles. Even if that isn’t PHP’s fault, you would need to be careful about memory leaks in your own code.

Since you’re going to need a way to restart this service if it crashes anyway, personally I would just have the worker do work for a while and then exit on its own after it’s done some work. Let the next instance, which will be spawned shortly, pick up the next job.

If you do the is_running file check, make sure to also check its filemtime(). If the script crashes, is_running.txt will exist forever. You need a way to detect this, and you could assume that a very old file is stale.
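A small sketch of that staleness check (the 10-minute cutoff is just an example threshold):

```php
<?php
// Treat a lock file older than $maxAgeSeconds as left over from a crash.
function lock_is_stale(string $lockFile, int $maxAgeSeconds): bool {
    clearstatcache(true, $lockFile);   // filemtime() results are cached by PHP
    return file_exists($lockFile)
        && (time() - filemtime($lockFile)) > $maxAgeSeconds;
}

// In the cron script:
// if (lock_is_stale($lock, 600)) { unlink($lock); }  // reclaim after 10 min
```
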

Keep in mind these are not atomic operations. It’s possible, although unlikely, that you could still end up with 2 instances running due to the race condition: two instances could both see that the file doesn’t exist before either one creates it. Using a database, and locking the table while performing this logic, would solve the race condition.
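Another way to close that race without a database — a different technique from the table lock described above — is PHP’s flock(), where the OS itself guarantees only one process can hold the exclusive lock at a time. A sketch:

```php
<?php
// 'c' opens the file for writing, creating it if needed, without truncating.
$fp = fopen(sys_get_temp_dir() . '/worker.lock', 'c');

// LOCK_NB makes this non-blocking: fail immediately instead of waiting.
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit;                      // another instance already holds the lock
}

// ... do the work; the OS releases the lock if the script crashes ...

flock($fp, LOCK_UN);           // explicit release when done
fclose($fp);
```

A crashed process drops its flock automatically, so there is no stale-file problem to clean up.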