DangerMouse

Hi all,

What is the best method for firing one PHP script off as another dies?

Basically I'm looking to avoid huge web request scripts that have the potential to exceed the standard 30sec PHP execution time. This may be completely the wrong way to go about solving this problem, but I had hoped to execute one script from another (after the first successfully completes). Is this the best method for stringing PHP scripts together? I'm aware that I could use cron timing and have the second script check if there's any suitable "work" to do before continuing, but this strikes me as an unnecessary execution if the first script fails.

Additionally, what's the most appropriate way to pass data between scripts? Currently I'm attempting to use the session variable rather than having loads of db calls, or having to grapple with file reading / writing.

I've only really started web dev in the last few months, so if this is a really stupid question, my apologies.

Cheers,

DM

mrsdf

The best thing you can do is turn off that 30 sec limit if you can.
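Something like this at the top of the long-running script usually does it (assuming your host allows it - both calls are ignored in safe mode, and some shared hosts lock them down):

<?php
set_time_limit(0); // 0 means no limit, for this script only

// or the ini flavor of the same thing:
ini_set('max_execution_time', 0);
?>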

When the PHP script times out, it usually flushes the output and all data up to that point is sent to the browser. Put a js redirect somewhere inside that data, telling the browser to go to another script.
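One way to read that - at the end of one chunk of work, hand the browser off to the script for the next chunk (next_chunk.php is just a made-up name):

<?php
// ... finish one chunk of work, then tell the browser where to go next;
// if this line made it into the flushed output, the chain continues
echo "<script type='text/javascript'>window.location = 'next_chunk.php';</script>";
flush();
?>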

Also for long running scripts, run PHP from the console, so you don't have to keep the browser open: 'php ./script_name.php'. Also try 'exec' and 'popen' for opening one script from inside another, best used when running on the command line.
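A bare-bones sketch of both, assuming a sibling script called other_script.php:

<?php
// exec() blocks until the child finishes; output lines land in $output
exec('php ./other_script.php', $output, $returnCode);

// popen() hands back a stream you can read while the child runs
$handle = popen('php ./other_script.php', 'r');
while (!feof($handle)) {
    echo fgets($handle);
}
pclose($handle);
?>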

perkiset

DM, as you're describing it, I don't think that chaining will work the way you want it to.

Here's the problem:
Even if you exec('/anotherscript.php > /dev/null &'); the time it takes for <that script> will be attributed to the calling script because it is a child, i.e., there is no for-reals fire and forget like chaining shell scripts.

You could, however, do multiple execs in the background and have them all run concurrently - this takes more processor, but if there are great lags in your script (waiting for a web page for example) then this is an excellent way to maximize your time. I use this method for both my eblaster and my crawler - I'll fire off up to (a db-table-config-record-value)'s number of scripts, and in the primary script I'll busy wait on the DB watching for a free handle - when there is 1..<> free I'll fire off 1..<> more.
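A stripped-down sketch of that pattern (worker.php is invented; in my setup the worker count comes out of a db config record):

<?php
$maxWorkers = 5; // really read from the config table

for ($i = 0; $i < $maxWorkers; $i++) {
    // the output redirect plus trailing & pushes each worker
    // into the background so the loop doesn't block
    exec("php ./worker.php $i > /dev/null 2>&1 &");
}
?>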

Personally I use exactly what you described - the cron job and work to do table idea, although I have a variety of ways that I do it. I like this method because for my larger jobs (deliver 20K emails, spider huge site) I don't want a single script running where it is difficult to get progress out of it - the only way I can do that effectively is to either write a status file out, or talk to a DB which, if I'm there already...
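To make that concrete, a minimal sketch of the cron side (the work_queue table and its columns are invented; the old mysql_* calls are just for illustration):

<?php
$db = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('mydb', $db);

// bail out cheaply if the previous stage hasn't queued anything -
// this is the "is there work to do?" check
$res = mysql_query("SELECT id FROM work_queue WHERE status = 'ready' LIMIT 1");
if (mysql_num_rows($res) == 0) {
    exit;
}

// ...otherwise mark the row as claimed, do the work, flag it done...
?>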

So I agree with MrS - either turn off the time limit and run from a shell, or use your technique.

About talking between scripts:
I use semaphores, db records, command line params and variables stored in the APC cache, depending on my need. I do not use pipes, although I have heard of folks writing daemons and piping data between them.
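Command line params are the simplest of those - a tiny sketch (child.php is a placeholder):

<?php
// caller: hand the value over on the command line, safely quoted
exec('php ./child.php ' . escapeshellarg($someValue));

// child.php: read it back out of $argv
$someValue = $argv[1];
?>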

/p

DangerMouse

Wow - information overload there! Thanks for the responses both, invaluable tips.

From reading the php.net pages on exec() etc. I suspected that it might fall within the standard execution time; I was just about to start testing it.

Although I've changed the execution time locally, I've no idea if I have that level of control over my server - only just started putting a site up there lol; but I shall certainly give it a go.

In the meantime I think I'll go with the db solution, which will probably be more scalable in future anyway, as I suspect I could set flags to lock records etc. and then do some kind of multi-threading (way beyond my skill at this stage though!).

If you don't mind me asking, what is the APC cache?

Cheers,

DM

perkiset

Quote from: DangerMouse

In the meantime I think I'll go with the db solution, which will probably be more scalable in future anyway, as I suspect I could set flags to lock records etc. and then do some kind of multi-threading (way beyond my skill at this stage though!).

Excellent way to go IMO - even though it might be a bit more work on the front end, it will pay off big in the end as you learn and create scalable and replicable solutions. Don't be shy about posting here for help.


Quote from: DangerMouse

If you don't mind me asking, what is the APC cache?

One of my favorite tools... APC is the Alternative PHP Cache - it is essentially a block of memory that is persistent to the PHP instance - meaning that you can store stuff there and it will remain between page calls. Also, it stores compiled PHP code, so that PHP does not have to recompile every time a script is run. I get at least a 10x performance boost over standard PHP, and having persistent variables is just hot. For example, I can simply say:

apc_store('myArrayName', $anArray)

and the array will be stored into the memory cache as is... no need to serialize or anything. Then in another script I can

$anArray = apc_fetch('myArrayName')

and $anArray will instantly have a handle to that array. CAVEAT: This does not work with shell scripts - it is tied to the current instance of PHP - meaning that if you run it from Apache, the instance stays alive and you can access persistent things in it - but if you run PHP as a shell script, then each instance of PHP is unique and the cache will only work for <that> script while it is running. Hot benefit: the cache watches the mtime of a file, so if you modify a script, it realizes that the cache is "dirty" and will recompile it next time around and cache it automatically. It's just a huge win all the way around, unless you're severely limited on memory.
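Put together, the usual fetch-or-build pattern looks like this (buildExpensiveArray() is just a stand-in for whatever produces the data):

<?php
// try the cache first; apc_fetch() returns false on a miss
$anArray = apc_fetch('myArrayName');
if ($anArray === false) {
    $anArray = buildExpensiveArray(); // expensive work happens only once
    apc_store('myArrayName', $anArray);
}
?>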

Confusing a bit, but ping back if you need more assistance.

/p

