This is a story of how setting up some game servers turned into an exploration of the seemingly-uncharted depths of FastCGI.

If you’re impatient and just want to know how to implement nginx and PHP-FPM on Docker yourself, skip to the end.


I’ve recently been doing some upgrades to my homelab. The main issue with my homelab for a long time was that my only compute server ran Windows Home Server 2011. When I set up the server I was doing mostly .NET development (long before .NET Core), so the decision made some sense at the time, but what I didn’t realise until later is that Windows Home Server artificially limits your system’s memory to 8GiB. Given that Windows bloat meant this server used 3.5GiB at idle, this really didn’t leave much usable memory.

I’ve recently obtained a new compute server with (after a small upgrade) 2x E5-2643 v2 processors and 84GiB of memory. In order to utilise this capacity without paying exorbitant licensing fees, Linux is the only option.

Another issue I had with my old server was organisation. I had lots of small projects (often unfinished) lying around and working on one would often break another. So I figured the new server was the perfect time for a fresh start, and a good opportunity to learn Docker.

The first thing I wanted to get running on my new server was Pterodactyl. Pterodactyl is a modern control panel for game servers, and it boasts lots of trendy technology: PHP 7.2, nginx, MariaDB, Redis, NodeJS, and yes, Docker. But as I started following the setup instructions I was dismayed! They recommended installing almost everything on bare metal, with only the game servers running in Docker. This simply wouldn’t do; I refused to have my dreams of a tidy server crushed so quickly.

I did a bit of searching for other people who’d set up Pterodactyl in Docker and found this project, which seems easy to use. But it has a flaw that the perfectionist in me could not ignore: it runs php-fpm and nginx within the same container. This breaks the golden rule of Docker: “one process per container”.

And so I realised I would have to build my own solution. (Or really, I convinced myself I would have to build my own because I wanted to get my hands dirty.)


First, I’d like to introduce the technologies I’m going to be talking about.

PHP 7.2

PHP is something of an infamous language. It has a reputation for (among other things) being fairly slow. Most of this reputation probably comes from improperly optimised apps, since it’s a language often taught to beginners and so many junior developers use it extensively. Some of it probably also comes from misconfigured infrastructure, since some vital performance features (such as OPcache) used to be disabled by default. And some of it probably came from Zend Engine (the software responsible for running PHP) itself. (Facebook famously found Zend Engine so slow that they built their own software, HHVM, to run PHP instead.)

Many of these issues have been fixed in PHP 7. OPcache is enabled by default, and the new Zend Engine III provides substantial performance improvements. For example, WordPress 5.0 runs slightly faster in PHP 7.2 than it did on HHVM, and on PHP 7.2 it runs more than 2.5 times as fast as it did on PHP 5.6. (Source)


PHP-FPM

The traditional way to run PHP is using the mod_php Apache module, which embeds the PHP interpreter within every Apache worker process. This works fairly well, but it has the significant drawback that the PHP interpreter is loaded for every request, even requests for non-PHP files such as images. Additionally, it’s not compatible with mpm_event, an Apache module that improves performance when serving static files.

PHP-FPM solves these problems by running PHP processes separately from the web server. The web server no longer needs to include a PHP interpreter; instead, when a PHP file is requested the web server talks to PHP-FPM via the FastCGI protocol and asks it to do the necessary processing. Because FastCGI works over TCP/IP, the web server and PHP don’t even need to be on the same physical server.
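As a sketch, this is roughly what the hand-off looks like in nginx configuration (the address is illustrative; PHP-FPM listens on TCP port 9000 by default):

```nginx
# Forward requests for PHP files to a PHP-FPM server over FastCGI.
# Static files never touch the PHP interpreter.
location ~ \.php$ {
    include fastcgi_params;
    # The conventional value for SCRIPT_FILENAME (revisited later in this post).
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 192.0.2.10:9000;  # illustrative address of a PHP-FPM host
}
```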


nginx

Another advantage of using FastCGI instead of mod_php is that you are no longer tied to Apache. nginx (pronounced “engine-x”) has a reputation for impressive performance. These days Apache comes very close, but in many workloads nginx is still just a touch faster. nginx’s configuration system also feels a lot cleaner to me than Apache’s.


Docker

Docker is a platform for creating and running containers. A container is kind of like a VM, except that it shares the host OS’s kernel. This reduces overhead by eliminating the need for a hypervisor to provide emulated hardware for a guest kernel to interact with.

Docker also includes tooling to automatically build images, which are basically snapshots that containers can be started from. An image can contain your entire application, as well as any dependencies it needs, so that when you start a container from that image your application is immediately useable.
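For example, an image is defined by a Dockerfile; this hypothetical two-line example bakes a static site into the official nginx image:

```dockerfile
# Start from the official nginx image and copy the application in,
# so containers started from this image are immediately usable.
FROM nginx:stable-alpine
COPY ./public /usr/share/nginx/html
```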

Docker now also includes Swarm mode, a way to manage Docker containers across clusters of servers. Swarm mode has a number of limitations compared to standalone containers, such as only being able to use prebuilt images. Bind mounts are also difficult to work with in Swarm mode and should be avoided.

Alpine Linux

Alpine Linux is a tiny, efficient Linux distro based on musl and BusyBox. It’s perfect for Docker because its small size allows for producing small images.

With the introductions out of the way, let’s dive in!

File accessibility

Most guides for setting up PHP-FPM and nginx assume that both will be running on the same server or in the same container. This means you don’t really need to understand which process accesses which files; you can just give both processes access to all of them.

Diagram of PHP-FPM and nginx in the same container

When you’re running across multiple containers, it’s helpful to understand what the exact requirements are. I had guessed that nginx would need access to static files, but I wasn’t sure if it would send the PHP files to PHP-FPM via the FastCGI connection, or if PHP-FPM needed access to the files itself. After some research and experiments, here’s a table which shows which files each process needs access to:

                             nginx          PHP-FPM
static files                 required       not required
PHP files                    it depends     required
files accessed by scripts    not required   required

If your nginx config checks for the existence of PHP files before passing the request to FastCGI then nginx needs access to the PHP files.

if (!-f $document_root$fastcgi_script_name) { # nginx requires access to the PHP files to check this
  return 404;
}

Otherwise, nginx does not require access to PHP files.

Of course, the easiest solution is simply to share all the files between both containers. This can easily be accomplished in Docker with a named volume.
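A minimal docker-compose sketch of that arrangement (the service names and mount path are illustrative):

```yaml
version: "3"
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      - app:/var/www/html  # both containers mount the same named volume
  php-fpm:
    image: php:7.2-fpm-alpine
    volumes:
      - app:/var/www/html
volumes:
  app:
```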

Diagram of PHP-FPM and nginx in separate containers, but with the same storage layout


Be wary when sharing volumes between containers that are based on different Linux distributions. On Debian the user www-data has a uid of 33 and the group www-data has a gid of 33. On Alpine these id numbers correspond to the xfs (X Font Server) user and group. Alpine has, as far as I can tell, a www-data group with gid 82 and may have a www-data user with uid 82. (The www-data user is present in the php:7.2-fpm-alpine image but not in the nginx:stable-alpine image.)
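One way to sidestep the mismatch is to create matching ids yourself. A hypothetical sketch for the nginx image, assuming you standardise on 82 to match php:7.2-fpm-alpine:

```dockerfile
FROM nginx:stable-alpine
# Create a www-data user with uid 82 in the existing www-data group (gid 82),
# so file ownership agrees with the PHP-FPM container.
RUN adduser -u 82 -D -S -G www-data www-data
```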

File locations

While working out the specifics of which process needs access to which files, I noticed something in the nginx config that offended me. Nginx recommends setting SCRIPT_FILENAME to be $document_root$fastcgi_script_name. Notice the problem? SCRIPT_FILENAME specifies to PHP-FPM where it should look for the PHP file, but $document_root$fastcgi_script_name is the absolute path to where nginx found the PHP file. If you’re running nginx and PHP-FPM on the same server then this will be the same location (unless you’re doing something funky with your filesystem like using bindfs to put the files in different places for different users) but if nginx and PHP-FPM are on different servers (or containers) then there’s no guarantee that this location would be the same.

Now of course, the easy solution here would’ve been to ensure that the locations are the same. Then everything would’ve worked just fine. But I would’ve known, deep down, that it was wrong.

So I spent the next few hours trying to resolve it. The general idea of what I was trying to do was to have nginx simply pass $fastcgi_script_name and to have PHP-FPM prepend the absolute path to the application.

Initially I saw the tantalising prefix property within php-fpm.conf, with the vague but promising description “Specify prefix for path evaluation”. Unfortunately, after looking through the PHP source code I found this was completely irrelevant for my purposes: instead of being added as a prefix to PHP file paths, it is added as a prefix to other config options, such as listen and slowlog.

Next I took a look at chroot, but this prevents PHP from being able to access any files outside of the directory. With Pterodactyl there is a main application directory, and within it a public directory that nginx should serve. So if I used the chroot property then PHP would be unable to access the application files outside of the public directory.

And so I turned to chdir, which essentially sets the working directory for the PHP-FPM worker processes. But this wasn’t working either.

To help me work out why, I used a tool called strace. strace is a debugging tool which provides the useful ability to view all the files a process is reading. In order to use strace in Docker, the container must have the SYS_PTRACE capability. Installing strace on Alpine is easy: apk add --no-cache strace. To attach strace to a process you first need to know its PID, which can be found using ps. At the time I only had two nginx worker processes, so I picked one at random and attached to it with strace -p <PID>. Once attached, I used a browser to request a PHP page a few times until nginx decided to handle a request using the worker I was attached to.
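For reference, chdir lives in the PHP-FPM pool configuration. A sketch, with an illustrative path (as it turned out, this alone wasn’t enough):

```ini
; Hypothetical PHP-FPM pool config fragment: resolve relative script
; paths against the application's public directory.
[www]
listen = 9000
chdir = /var/www/html/public
```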

With this, I discovered the reason why chdir was ineffective: nginx’s $fastcgi_script_name variable includes the / at the beginning of the request path (e.g. “/index.php”). Paths beginning with a slash are absolute, meaning that the working directory is ignored! I tested this theory by temporarily hardcoding SCRIPT_FILENAME to “index.php” (with no leading slash) and, lo and behold, the script loaded.

Now my goal was to remove the leading slash from $fastcgi_script_name. In principle this seemed easy: according to the documentation, $fastcgi_script_name is equal to the first capture group from fastcgi_split_path_info. So I changed fastcgi_split_path_info to ^/+(.+\.php)(/.+)$ to strip any leading slashes. But PHP-FPM was still receiving a leading slash.


If the regular expression you’ve specified for fastcgi_split_path_info fails to match the path, nginx will silently ignore it and instead use some kind of internal default value. I wasted a lot of time due to this; I thought my changes were for some reason not taking effect, or that the directive was being overridden somewhere. It’d be nice if nginx returned an error in this scenario.

Anyway, what I found is that it doesn’t matter what you do in fastcgi_split_path_info, there will always be a leading slash in $fastcgi_script_name. If one isn’t captured by the RegEx, something within the FastCGI module just prepends one. If you want to remove it, you have to use other means:

set $filename "index.php";
if ( $fastcgi_script_name ~ "^/+(.*)$" ) {
    set $filename $1;
}
fastcgi_param SCRIPT_FILENAME $filename;

Overwriting the $fastcgi_script_name variable directly with set is not allowed; you’ll get an error saying [emerg] the duplicate "fastcgi_script_name" variable.


According to this comment, PHP-FPM’s behaviour around SCRIPT_FILENAME breaks the RFC 3875 standard. This would explain why it’s clunky to get nginx to send the right data, since nginx expects FastCGI responders to conform to the standard.
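Putting it all together, the relevant part of the nginx config ends up looking something like this sketch (the upstream hostname and the chdir-relative default are illustrative):

```nginx
location ~ \.php$ {
    # Strip any leading slashes so PHP-FPM resolves the path relative
    # to its chdir rather than treating it as absolute.
    set $filename "index.php";
    if ( $fastcgi_script_name ~ "^/+(.*)$" ) {
        set $filename $1;
    }

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $filename;
    fastcgi_pass php-fpm:9000;  # illustrative container hostname
}
```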

And now it works! nginx and PHP now only have to care about their own files, not each other’s.

Diagram of PHP-FPM and nginx in separate containers, with the app stored in different locations on each

Was it worth spending so much time on this? Probably not. But at least I learned some things along the way.


It turns out that (once you’ve figured out the above) it’s possible to create a very generalised nginx image. So I’ve done just that! I have published docker-nginx-for-PHP-FPM on Docker Hub. If you’d like to see an example of how it can be used, check out the docker-compose.yml file included in my docker-pterodactyl-panel repository.

This image also works perfectly in Swarm mode; I’m using it to host my own production Pterodactyl setup.