In this blog, I will look into the details of the HTTP status codes: “499 Client Closed Request” client error and “504 Gateway Timeout” server error, how they work and how they can be avoided.
At the end of the article, I'll show an example of how PHP can be processed in the background, giving the user an improved experience without having to implement a complete queue system.
Let’s get started
Before looking into the first HTTP 499 client error status code, I’d like to take a minute to share what kind of setup I’ll be using in my examples. The request chain will look like this:
Web Browser -> Load Balancer -> Reverse Proxy (FastCGI) -> PHP-FPM
This setup should look familiar and it’s probably how most production stacks are set up. I’ll be using Nginx for load balancing, but this could be AWS ALB or HAProxy or whatever really. I’ll also be using Nginx for the reverse proxy, running FastCGI which communicates with PHP-FPM running on another instance. Apache should work as well, as long as it’s running FastCGI and not mod_php.
499 Client Closed Request
Now that we have gotten that out of the way, what is the 499 client error? It is often explained as ”[…] indicates that the client has closed the connection while the server is still processing the request.”
The definition of “client” in this case is a bit vague and confusing. You'd think that the client means the end-user browsing your site in their web browser. However, in many cases, it's not the end-user closing their browser tab.
Rather, it's the load balancer that gave up waiting for the reverse proxy. From the reverse proxy's perspective, the “client” is therefore the load balancer, and since it went away, the reverse proxy logs the request as a 499 client error in its access log.
To find out what happened, we’d have to check the access logs of both the reverse proxy and the load balancer. In the case where the client closes the browser, both the load balancer and the reverse proxy will log the request as 499. But, if the load balancer timed out waiting for the reverse proxy, the load balancer will log 504 but the reverse proxy will log the request as 499. So it will look something like this:
The end-user closes the browser:
Web Browser (closed) -> Load Balancer (499) -> Reverse Proxy (499)
Load balancer times out waiting for the reverse proxy:
Web Browser (504) -> Load Balancer (504) -> Reverse Proxy (499)
How about PHP-FPM, then?
So, what happens to PHP in these scenarios? Well, nothing. PHP-FPM is completely unaware that the end-user has closed their browser. It also doesn't know that the reverse proxy timed out. So the execution of the PHP script will continue even when the load balancer and/or reverse proxy log a 499 or 504.
PHP-FPM first realizes that there's no one present when it tries to send something back to the client. This happens when the max_execution_time limit is hit or when the script completes its execution normally. There's a third case: when you echo something out followed by a flush() call, which flushes the output buffer immediately and sends data back to the client (in this case, the reverse proxy).
This means that as long as the script doesn’t output anything and doesn’t flush its output buffer after a 499 or 504, it will continue executing in any of these scenarios:
1: Web browser closes (Load Balancer: 499, Reverse Proxy: 499)
2: Load balancer times out (Load Balancer: 504, Reverse Proxy: 499)
3: Reverse proxy times out (Load Balancer: 504, Reverse Proxy: 504)
But what if we flush our output buffer, what will happen then? Let's think of an example of some logic that runs for a long time; the end-user gets tired of waiting and closes their browser, or one of the servers in the request chain times out.
// We echo and flush the output buffer. This is the first time PHP-FPM
// realizes that the client (or server) is gone, and it terminates the
// PHP script at this point.
echo 'ping?';
flush();

// If you'd like to continue execution of the script regardless, call
// this native PHP function before flushing, and the script will
// happily continue even though the client (or server) is gone.
ignore_user_abort(true);
As written in the comments, PHP-FPM will stop executing the script at the point where the output buffer is flushed (unless ignore_user_abort(true) is called somewhere before the flush).
How can we avoid this from happening?
So there are two scenarios here: the first one is where the end-user closes their browser, and the other one is where we experience a timeout in one of our servers due to long request processing times.
For the first one, the most obvious choice would be to provide a better user experience, like pushing heavy processes to a separate queue system and processing them in the background instead of doing the work upfront while keeping the end-user waiting. There's also another nifty way of solving this without needing a complete queue system, which I'll demonstrate in the latter part of this article.
For the second one, where we encounter timeouts in the request chain, the fix is to set timeouts that increase from the innermost layer outward. This way you won't experience timeouts in the middle of the request chain and won't see 499 errors unless they're caused by the end-user.
PHP-FPM (set "max_execution_time" to n)
Reverse Proxy (set "fastcgi_read_timeout" to n+1)
Load Balancer (set "proxy_read_timeout" to n+2)
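For illustration, with n = 60 seconds the cascade could look like this. This is a sketch: the socket path and upstream name are placeholders, but max_execution_time, fastcgi_read_timeout and proxy_read_timeout are the real directive names from the list above.

```nginx
; php.ini - innermost layer
max_execution_time = 60

# nginx.conf on the reverse proxy (FastCGI layer)
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php-fpm.sock;  # placeholder socket path
    fastcgi_read_timeout 61s;                 # n + 1
}

# nginx.conf on the load balancer
location / {
    proxy_pass http://reverse_proxy_upstream; # placeholder upstream
    proxy_read_timeout 62s;                   # n + 2
}
```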
504 Gateway Timeout
We touched briefly on this server error in the previous section on 499 errors: it occurred when the load balancer timed out because the reverse proxy server wasn't responding.
But, what does it mean that the server is not responding? Let’s look at the Nginx documentation to find the definition of what “not responding”, or read_timeout, really means.
fastcgi_read_timeout / proxy_read_timeout
-----------------------------------------
Defines a timeout for reading a response from the [FastCGI / proxied] server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the [FastCGI / proxied] server does not transmit anything within this time, the connection is closed.
The key thing to take away here is “two successive read operations, not for the transmission of the whole response”.
This means that if we manage to give the reverse proxy something back, it will be happy and reset its read_timeout clock to zero again. Let’s dig a bit deeper and see how passing data through the request chain works.
Web Browser -> Load Balancer -> Reverse Proxy (FastCGI) -> PHP-FPM
PHP-FPM outputs something. This usually happens when the script finishes its execution, hits its “max_execution_time”, or echoes and flushes its output buffer before the script finishes.
The reverse proxy can see that its FastCGI buffer is filling up with something, so it resets its “fastcgi_read_timeout” and starts to count from 0 again. But, the load balancer is not getting anything in its proxy buffer from the reverse proxy, so it times out.
To pass the data all the way to the load balancer, we'll have to turn off buffering on the reverse proxy by setting fastcgi_buffering off; in the Nginx config file, or, to disable it for a single request only, by sending the header('X-Accel-Buffering: no'); from PHP.
Now, the data will flow from PHP-FPM to the reverse proxy, which has “fastcgi_buffering” disabled, passing the output on to the load balancer, which still has its buffer turned on. Since the load balancer buffers the output, the end-user won't see anything in their browser until the script finishes its execution.
That means that as long as we output and flush something before fastcgi_read_timeout and proxy_read_timeout expire, the PHP script will prevent both the reverse proxy and the load balancer from timing out, until it reaches its end or the “max_execution_time” limit is hit.
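To make this concrete, here is a minimal sketch of that keep-alive pattern, assuming the Nginx setup described above. The step names, the do_heavy_work() helper and the 4 KB whitespace padding are my own assumptions (the padding helps push data through any remaining buffers, and an HTML comment stays invisible to the end-user).

```php
<?php
// Sketch: keep the read timeouts in the chain from expiring by
// flushing a tiny heartbeat between steps of a long-running task.

// Disable FastCGI buffering on the reverse proxy for this response only.
header('X-Accel-Buffering: no');

// Build one heartbeat chunk: invisible padding plus an HTML comment.
function heartbeat(string $label, int $pad = 4096): string
{
    return str_repeat(' ', $pad) . "<!-- still working: {$label} -->\n";
}

foreach (['import', 'resize', 'index'] as $step) {
    do_heavy_work($step); // each step must finish within the shortest read timeout

    echo heartbeat($step);
    flush(); // resets fastcgi_read_timeout / proxy_read_timeout upstream
}

function do_heavy_work(string $step): void
{
    // placeholder for the real long-running logic
}
```

Each flush counts as a "read operation" for the proxies, so their timeout clocks restart while the real work continues between heartbeats.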
Background Processing
Mission accomplished! We’ve managed to let data flow from PHP all the way to the load balancer, preventing any server timeouts.
However, wouldn’t it be better if we didn’t let the end-user stare at a blank screen while processing all of this logic?
There’s the obvious choice of pushing heavy and long processes to a queue system. However, setting up a whole queue system, a publisher, and a subscriber can be a bit cumbersome. It can also create a lot of overhead and a few seconds’ time lag for tasks that aren’t that heavy but still slow enough for the end-user to start rolling their eyes.
I believe there is a middle ground between processing everything up-front and pushing things to the queue.
What if we could give the end-user something, letting them know that we're working on it, while we still execute the script as we normally would?
It turns out that there's a nifty function in PHP-FPM called fastcgi_finish_request(). What it essentially does is close the connection to the web browser, handing the browser whatever has been output to the buffer up until that point, while letting the PHP script continue to execute.
Think about it: that's exactly what the PHP script does anyway, as in our previous examples with the 499 and 504 error codes. The difference is that the end-user now has something on the screen while waiting.
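As a minimal, framework-free sketch of the pattern: the wrapper function and its name are my own for illustration, while fastcgi_finish_request() and ignore_user_abort() are the real PHP functions.

```php
<?php
// Sketch: respond immediately, then keep working after the
// connection to the client has been closed.

function respond_then_continue(string $body, callable $work): void
{
    // Keep running even if the client disconnects mid-response.
    ignore_user_abort(true);

    echo $body;

    if (function_exists('fastcgi_finish_request')) {
        // FPM only: flush the response and close the connection now.
        // The guard keeps the sketch runnable under other SAPIs too.
        fastcgi_finish_request();
    }

    // Under FPM this runs after the user already has their response.
    $work();
}
```

A hypothetical call would be respond_then_continue('Accepted!', fn () => do_heavy_work()); where do_heavy_work() stands in for your real logic.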
Demonstration
I’ve put together a simple Laravel app, which redirects the end-user to a waiting screen while the script is still processing.
It works by putting a unique request ULID into the cache as a key and a BackgroundRequest object as the value. BackgroundRequest is a simple data object that contains three properties: $requestId, $progress and $completed.
class BackgroundRequest extends DataObject
{
public function __construct
(
public string $requestId {
set {
if (strlen($value) !== 26) {
$msg = 'ULID must be 26 characters long';
throw new InvalidArgumentException($msg);
}
$this->requestId = $value;
}
},
public int $progress = 0 {
set {
if ($value < 0 || $value > 100) {
$msg = 'Progress must be between 0 and 100';
throw new InvalidArgumentException($msg);
}
$this->progress = $value;
}
},
public bool $completed = false
) {}
}
The $progress and $completed properties of this object are then incremented and changed in the cache as the script continues its execution.
Once the script finishes or, as a safety measure, the cache expires, the end-user is redirected back to the original page. In this example, that's the index page.
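For context, here is a hypothetical sketch of what the BackgroundRequestHandler could look like. The real implementation lives in the repo's app/ folder, so the cache key format and anything beyond the storeRequest() and buildCacheKey() methods mentioned in the route comments are my assumptions.

```php
<?php
// Hypothetical sketch of the BackgroundRequestHandler used in
// routes/web.php. It namespaces a cache key per request ULID and
// stores the BackgroundRequest as an array with a TTL.

use Illuminate\Support\Facades\Cache;

class BackgroundRequestHandler
{
    public function storeRequest(BackgroundRequest $request): void
    {
        Cache::put(
            $this->buildCacheKey($request->requestId),
            (array) $request,
            now()->addMinutes(10) // safety net: expire stale requests
        );
    }

    public function buildCacheKey(string $requestId): string
    {
        return "background-request:{$requestId}"; // assumed key format
    }
}
```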
Here are the contents of the routes/web.php file.
// routes/web.php
Route::get('/', function ()
{
return view('index');
});
Route::post('some-logic-that-takes-forever-to-process', function ()
{
$requestId = str()->ulid()->toBase32();
$requestHandler = new BackgroundRequestHandler;
$request = new BackgroundRequest(
requestId: $requestId,
progress: 0,
completed: false
);
// the requestHandler stores the request in the cache, like so:
// cache([
// $this->buildCacheKey($request->requestId) => (array) $request
// ], now()->addMinutes(10));
$requestHandler->storeRequest($request);
// redirect the end-user to the waiting page located at
// background-request-progress/{request} while continuing to process
if (function_exists('fastcgi_finish_request')) {
redirect("background-request-progress/{$requestId}")->send();
fastcgi_finish_request();
}
// process logic while the end-user is waiting at the waiting page
foreach ([25, 50, 75, 100] as $progress) {
sleep(2); // increment progress by 25% every 2 seconds
$request->progress = $progress;
$request->completed = $progress === 100;
$requestHandler->storeRequest($request);
}
});
// waiting screen for the end-user while the process is executing
// redirect back to original page once script finishes or cache expires
Route::get('background-request-progress/{request}', function (BackgroundRequest $request)
{
// the BackgroundRequest object is injected and automatically fetched
// for the given requestId, so we can immediately check if
// it's completed without re-fetching it
return $request->completed
? redirect('/')
: view('progress', compact('request'));
});
The logic for storing the request in the cache and the waiting screen logic are universal so they can be used for any request. If you’re not processing something that takes forever and you feel that you don’t need a whole queue system, I believe this is a valid option that is fairly easy to set up.
We’re reaching the end of this blog, but as always, if you’d like to check this demo out in more detail, the whole repo is available [here]. I’d recommend checking the app/ folder for the background request data object and cache logic, the routes/ folder for more info on how the request flow works with the waiting screen, and the docker/ folder for Nginx config files for the load balancer and reverse proxy - the rest is just a standard Laravel 11.x repository.
As a closing side note, I found that PHP-FPM has a convenient function called fpm_get_status(). You can use it to keep track of the status of your running PHP-FPM process. For tuning your configuration settings and keeping your child processes from flying through the roof, I've found it's good to keep an eye on how many times you've maxed out your child processes by checking fpm_get_status()['max-children-reached']. You can combine this with fpm_get_status()['start-time'], which tells you when the master process started, to calculate a daily average of how many times the child processes are maxed out.
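To sketch that calculation: the helper function below is mine, while 'start-time' and 'max-children-reached' are the keys fpm_get_status() actually reports.

```php
<?php
// Sketch: derive a rough "times max_children was reached per day"
// figure from the FPM status. The helper is pure so it can be
// reused (and tested) outside of FPM.

function max_children_per_day(int $startTime, int $maxChildrenReached, ?int $now = null): float
{
    $now ??= time();
    // Uptime in days, clamped to at least one second to avoid division by zero.
    $daysUp = max(($now - $startTime) / 86400, 1 / 86400);

    return $maxChildrenReached / $daysUp;
}

// Under FPM you would feed it the real numbers:
if (function_exists('fpm_get_status')) {
    $status = fpm_get_status();
    printf(
        "max children reached ~%.1f times/day\n",
        max_children_per_day($status['start-time'], $status['max-children-reached'])
    );
}
```

If the daily average creeps up, it's a signal to raise pm.max_children or to look at what's keeping workers busy for so long.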
That’s it for this time, I hope that this will be useful for someone out there looking to get more insight into the HTTP 499 and 504 errors and how you can use PHP-FPM for background processing.
Until next time, have a good one!