A PHP Application server implementation

A few weeks back, @Michael_Morris posted a topic where he suggested that creating/destroying objects on each HTTP request was wasteful. I posted a way of getting around that in the thread, and I’ve since built on it a little and come up with what I think is quite a nice implementation.

The basic thought process behind this is that:

  • When you connect to the webserver, all that does is run a very minimal script; it doesn’t create database connections, construct large object graphs, render HTML or do anything complicated at all
  • That script connects to the running application server, which already has all of that in memory. This forgoes the bootstrap code for each request and allows the request to connect to an existing, running application.

Currently this supports:

  • Multithreading (a server instance is run on each thread)
  • Load balancing (when someone connects they get placed on a specific server instance, the first one available to handle the request)

It’s up on GitHub here:

In its simplest form you can create an application by implementing the \Aphplication\Aphplication interface and then starting the server:

require_once '../Aphplication/Aphplication.php';

class MyApplication implements \Aphplication\Aphplication {
	private $num = 0;

	public function accept($appId, $sessionId, $get, $post, $server, $files, $cookie) {
		// $num persists between requests because the object lives in the server process
		$this->num++;
		return $this->num;
	}
}

$server = new \Aphplication\Server(new MyApplication());
// "stop" shuts down a running server; anything else starts one
if (isset($argv[1]) && $argv[1] == 'stop') $server->shutdown();
else $server->start();

The accept method takes all the superglobals from the current request as arguments as well as a session ID to allow you to identify specific users.

The return value of the accept method is what is returned to the client (in most cases this will be the HTML). In this case, all the application is doing is acting like a hit counter (remember those?) and tracking the number of times the server has been accessed using the $num variable. You’ll notice it doesn’t need sessions, databases, memcached or any other storage mechanism to achieve this; it’s all stored in the running PHP server process.
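Because accept() also receives the session ID and the superglobals, per-user state works the same way. Here’s a quick hypothetical variation (not part of the repository) that keeps a separate count per session:

class MyPerUserApplication implements \Aphplication\Aphplication {
	private $counts = [];

	public function accept($appId, $sessionId, $get, $post, $server, $files, $cookie) {
		// One counter per session ID, all held in the server process's memory
		if (!isset($this->counts[$sessionId])) $this->counts[$sessionId] = 0;
		$this->counts[$sessionId]++;
		return 'You have made ' . $this->counts[$sessionId] . ' requests';
	}
}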

The server can only be run from the command line presently as Apache doesn’t allow forking processes! The client, however, can be run from a normal web server.

> php example1-persistence.php

Once the server is running, you can connect to it using the client script (from the command line use Client-CLI.php rather than Client.php), running it from the same directory the server was started from:

> php ../Aphplication/Client-CLI.php

In this example all the application is doing is incrementing the number, but the output shows the state persisting between requests:

> php ../Aphplication/Client-CLI.php
1
> php ../Aphplication/Client-CLI.php
2
> php ../Aphplication/Client-CLI.php
3
> php ../Aphplication/Client-CLI.php
4

To stop the server, re-run the server script with the stop argument:

> php example1-persistence.php stop

If you make any changes to the server code, you must restart it before the changes take effect.
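In other words, restarting after a code change is just a stop followed by a fresh start (using the same example script):

> php example1-persistence.php stop
> php example1-persistence.php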

Obviously this is a proof-of-concept so it’s not production ready; use at your own risk, yada yada!

This was quite a fun exercise and the result is rather cool. In a non-trivial application with database connections, template systems, lots of includes, etc., I get a 500% performance increase (obviously without any kind of caching layer).


In the month since, I’ve heard of ReactPHP, and although I haven’t delved into it, I wonder how, or if, this compares to it?

Nice stuff Tom.

Just to throw some more technology into the discussion, there is Appserver.io, which is also a pretty good attempt to solve the “damn constant (re)loading of a whole bunch of crap, when I don’t really need it (the reloading), causing my PHP app to be slower than it needs to be” issue that most PHP frameworks and larger applications have. I like what the TechDivision team did with Appserver, but now I have to learn Aspects, Advices, Join Points and all the other AOP concepts. And I was just getting used to OOP. LOL! :smiley: Oh, [edit], they also support Design by Contract, which, tell me if I am wrong, is what Laravel 5 is good at?

Scott

Reading through appserver.io, it looks like this can be accomplished with a servlet. Interesting stuff indeed.

Sounds like fork to me.

Sure, fork will give you a new process, but if you need more than just a few worker processes they might end up eating too much RAM.
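(By fork I mean something along these lines, using the pcntl extension from the CLI; the worker count is just an illustration.)

// Spawn a handful of worker processes; each child is a full copy of the
// parent, which is where the memory cost comes from.
$workers = 4; // illustrative

for ($i = 0; $i < $workers; $i++) {
	$pid = pcntl_fork();
	if ($pid === -1) {
		die("Could not fork\n");
	}
	if ($pid === 0) {
		// Child: loop here handling requests (omitted)
		exit(0);
	}
	// Parent: carry on and fork the next worker
}

// Parent: wait for all children to exit
while (pcntl_waitpid(0, $status) !== -1);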

Another possibility might be to use sockets and “select” and make a process handle concurrent tasks in little chunks in a non-blocking loop. React is more along those lines (but using libevent), though so far I didn’t manage to figure out how to get it to do what I wanted (handle multiple streams in little chunks), so I ended up taking the easier option and just used PHP sockets directly.
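The raw-sockets version of that idea looks roughly like this (untested sketch; the address and port are placeholders):

// Listen on a socket and service many clients from one process,
// handling each in small non-blocking chunks via stream_select.
$server = stream_socket_server('tcp://127.0.0.1:9000', $errno, $errstr);
if (!$server) die("Could not bind: $errstr ($errno)\n");
stream_set_blocking($server, false);

$clients = [];

while (true) {
	// Watch the listening socket plus every connected client for readability
	$read = array_merge([$server], $clients);
	$write = $except = null;

	if (stream_select($read, $write, $except, 1) > 0) {
		foreach ($read as $stream) {
			if ($stream === $server) {
				// New connection: accept it and add it to the watch list
				$clients[] = stream_socket_accept($server);
			} else {
				// Existing client: read a small chunk and respond
				$chunk = fread($stream, 8192);
				if ($chunk === '' || $chunk === false) {
					// Client went away: close and stop watching it
					fclose($stream);
					$key = array_search($stream, $clients, true);
					if ($key !== false) unset($clients[$key]);
				} else {
					fwrite($stream, 'echo: ' . $chunk);
				}
			}
		}
	}
}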

Yeah, I’ve pretty much found that the sweet spot is 2-3 processes per available thread. Since the processes that get run by Apache don’t actually do anything complex, their memory usage is tiny. The application server forces garbage collection each time it’s connected to, which helps keep memory down. In my tests so far on a medium-sized application it uses ~10MB per process, which on my hex-core server comes to 240MB of RAM (12 threads, 2 processes per thread); that isn’t unreasonable for a medium-large website. The reason for more than one per thread is that some things, e.g. database queries, cause a pause where another request could be handled.
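(For reference, forcing collection after each handled request is a one-liner; the snippet below is just illustrative, not lifted from the actual source.)

// After handing a response back to the client, reclaim any cyclic garbage
// before the worker waits for its next request
gc_collect_cycles();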

Interesting. I did try sockets but I was going for maximum performance. Compared with message queues, Unix sockets were noticeably slower and IP-based sockets were 10 times slower.

There’s an overhead caused by starting a new process that is also avoided with this method, as you say, at a cost to memory.

FIFO pipes were also around the same speed at a root level, but having to continually poll them adds an unnecessary overhead.

edit: To throw some numbers out there, using a light application with ~40 includes: without an app server I could handle 1,200 requests per second. Using IP-based sockets it was around 600 requests per second, using Unix sockets it was about 2,000 requests per second, while message queues gave me 6,000 requests per second :slight_smile: This was on my desktop machine, a Xeon X5670 @ 3.8GHz with 12GB RAM.
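For anyone who wants to try the message-queue approach themselves, the round trip with PHP’s System V message queue functions (the sysvmsg extension) looks roughly like this; the key, message types and payload are purely illustrative and not taken from Aphplication:

// Client side: the short-lived script the web server runs
$queue = msg_get_queue(ftok(__FILE__, 'a'));

// Send the request data to the long-running server process...
msg_send($queue, 1, ['get' => $_GET, 'session' => 'abc123']);

// ...then block until a response (message type 2) appears on the queue
msg_receive($queue, 2, $type, 65536, $response);
echo $response;

// Server side, inside the long-running process (sketch):
// msg_receive($queue, 1, $type, 65536, $request);
// msg_send($queue, 2, $app->accept(/* ... */));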