Faster MySQL with HTTP/3
In this article, we explore how the latency of our HTTP/3 API compares to that of a traditional MySQL client.
Over here at PlanetScale, we offer you a MySQL database. As part of this offering, it is critical that we provide a MySQL protocol-compatible interface to access it. This enables using `mysql-client` as well as any MySQL-compatible driver for your favorite language.
But what if we weren’t constrained by this? Could we provide an alternative interface and API?
Most of what I will be discussing is not publicly documented and is entirely experimental.
As a part of some of our infrastructure initiatives, we needed new APIs and connectivity features for our database. To support features that weren't available over the MySQL protocol, we decided to start bolting on a publicly accessible HTTP API. This API is not documented for public consumption just yet (it will be, I promise), but it is gRPC compatible.
In serverless compute contexts, your code is fundamentally not able to open up a TCP socket and speak the MySQL binary protocol to us. The platforms require communication through HTTP(S), so this ended up being a nice fit.
Having this API now opens the door to my question:
Can HTTP be faster than the MySQL protocol?
Our new APIs aren't just gRPC. Specifically, on our end, we use `connect-go`, which is gRPC-compatible and gives us a bunch of other features. One of those features is the ability to use HTTP/3 as a transport, and HTTP/3 is where things start to get very interesting. If you're not familiar with HTTP/3, I suggest taking a detour to do a bit of research, then coming back. But the gist is that HTTP/3 is built on top of UDP rather than TCP, using a different transport protocol called QUIC.
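In Go, moving a client to HTTP/3 is largely a matter of swapping out the `http.Client`'s transport, since anything satisfying `http.RoundTripper` can be plugged in (quic-go provides such a round-tripper). Here's a minimal sketch of that pluggability, using a canned stub transport so it runs without a network; the URL is a placeholder:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// stubTransport stands in for a real HTTP/3 round-tripper (e.g. the one
// provided by quic-go); anything satisfying http.RoundTripper can be
// plugged into an http.Client.
type stubTransport struct{}

func (stubTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Return a canned response instead of touching the network.
	return &http.Response{
		StatusCode: http.StatusOK,
		Proto:      "HTTP/3.0",
		Body:       io.NopCloser(strings.NewReader("ok")),
		Request:    req,
	}, nil
}

// doStubRequest issues a request through the swapped-in transport and
// returns the protocol string and body.
func doStubRequest() (string, string) {
	client := &http.Client{Transport: stubTransport{}}
	resp, err := client.Get("https://example.invalid/query") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return resp.Proto, string(body)
}

func main() {
	proto, body := doStubRequest()
	fmt.Println(proto, body)
}
```

The rest of the application code is untouched by the swap, which is what makes experimenting with transports like this cheap.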
My theory was that our new API would yield tangible benefits in most scenarios when compared to a traditional MySQL client. The results surprised even me!
The experiments here are confined to Go. I developed a proof-of-concept `database/sql`-compatible driver that speaks to our HTTP API, using protobuf for the encoding and Snappy for the compression. I then compared this with the standard MySQL driver.
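To give a feel for what a `database/sql`-compatible driver involves, here's a heavily simplified sketch. Everything in it is illustrative (the driver name, the canned rows — none of it is the real psdb driver); a real driver's `Query` would send the statement to the HTTP API, protobuf-encoded and Snappy-compressed, and decode the response into rows:

```go
package main

import (
	"database/sql"
	"database/sql/driver"
	"fmt"
	"io"
	"sync"
)

// httpDriver is a toy stand-in: a real driver would construct an HTTP
// client here rather than a TCP connection. All names are illustrative.
type httpDriver struct{}

func (httpDriver) Open(dsn string) (driver.Conn, error) { return &httpConn{}, nil }

type httpConn struct{}

func (*httpConn) Prepare(q string) (driver.Stmt, error) { return nil, fmt.Errorf("not implemented") }
func (*httpConn) Close() error                          { return nil }
func (*httpConn) Begin() (driver.Tx, error)             { return nil, fmt.Errorf("not implemented") }

// Query is where a real driver would POST the statement to the HTTP API
// and decode the response. Here we just return a canned result for SELECT 1.
func (*httpConn) Query(q string, args []driver.Value) (driver.Rows, error) {
	return &staticRows{cols: []string{"1"}, vals: [][]driver.Value{{int64(1)}}}, nil
}

type staticRows struct {
	cols []string
	vals [][]driver.Value
	i    int
}

func (r *staticRows) Columns() []string { return r.cols }
func (r *staticRows) Close() error      { return nil }
func (r *staticRows) Next(dest []driver.Value) error {
	if r.i >= len(r.vals) {
		return io.EOF
	}
	copy(dest, r.vals[r.i])
	r.i++
	return nil
}

var registerOnce sync.Once

// runSketch registers the driver, opens a DB, and round-trips one query
// through the standard database/sql machinery.
func runSketch() int {
	registerOnce.Do(func() { sql.Register("psdb-sketch", httpDriver{}) })
	db, err := sql.Open("psdb-sketch", "dsn-is-ignored")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	var n int
	if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
		panic(err)
	}
	return n
}

func main() {
	fmt.Println(runSketch())
}
```

The payoff of this shape is that existing code using `database/sql` can switch transports by changing only the driver name and DSN.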
For HTTP/2, we use the stdlib Go HTTP client, and for HTTP/3, we use the experimental `quic-go` implementation.
Our PlanetScale database is provisioned in the `us-west` region on AWS.
I tested in a bunch of scenarios, but the two that we’re going to highlight here are:
- High latency, low bandwidth, geographically far from my database. This is from my personal laptop, which is in Reno, NV.
- Low latency, high bandwidth, geographically close. This is from an EC2 instance in AWS `us-west`.
We chose these environments to determine when, where, and whether using HTTP, and even HTTP/3, becomes beneficial over the MySQL protocol.
Running the tests
From each environment, I run seven different tests with three different clients:
- MySQL binary protocol client
- an HTTP client speaking HTTP/2 (psdb)
- an HTTP client speaking HTTP/3 (psdb + h3)
We chose these because we wanted to see, fundamentally, if HTTP can compare to MySQL and if HTTP/3 yields any tangible benefits on top of HTTP/2. We’re ignoring HTTP/1.1 since it’s going to be objectively worse than both HTTP/2 and HTTP/3.
Each client runs the following tests:
- Connect + `SELECT 1`. This attempts to test a "cold start". It establishes a connection to us and runs a `SELECT 1`, using a new connection for each run, serially.
- `SELECT 1`. This test warms up a connection pool ahead of time, then runs `SELECT 1` in parallel.
- medium `SELECT`. This test reads 250 rows from a table with 2 columns, with a `SELECT * FROM medium`. The total result size is approximately 50 KB.
- medium `INSERT`. This test does the inverse, writing the same dataset in a bulk `INSERT INTO medium (...) VALUES (...)`.
- large `SELECT`. This test reads from a much larger table: 10,000 rows with 11 columns. The total result size is approximately 27.5 MB.
- large `INSERT`. Similarly, this is the inverse of the large select, but inserting 2,000 rows each time.
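As a sketch of how tests like these can be driven from Go, the harness below times a query callback either serially (as in the cold-start test) or in parallel (as against a warmed pool). The callback is a stand-in for the real query, and the whole harness is illustrative rather than the actual benchmark code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// timeRuns executes query n times, either serially (new-connection
// cold-start style) or in parallel (warm pool style), returning each
// run's wall-clock duration.
func timeRuns(n int, parallel bool, query func()) []time.Duration {
	durs := make([]time.Duration, n)
	run := func(i int) {
		start := time.Now()
		query()
		durs[i] = time.Since(start)
	}
	if parallel {
		var wg sync.WaitGroup
		for i := 0; i < n; i++ {
			wg.Add(1)
			go func(i int) {
				defer wg.Done()
				run(i)
			}(i)
		}
		wg.Wait()
	} else {
		for i := 0; i < n; i++ {
			run(i)
		}
	}
	return durs
}

func main() {
	// A sleep stands in for the real SELECT 1 round trip.
	durs := timeRuns(5, true, func() { time.Sleep(time.Millisecond) })
	fmt.Println(len(durs), "runs timed")
}
```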
These tests were chosen to test a decent spread of results without trying to actually benchmark PlanetScale and the underlying mysqld processes we use.
The raw data and all of the graphs are contained in this Google Sheet.
Connect + SELECT 1

This test result genuinely surprised me in both scenarios. Results significantly favor HTTP over MySQL, with HTTP/3 showing marginal further improvements over HTTP/2.
A few highlights:
From my laptop, I expected a major improvement, and got one: the minimum went from 162ms down to roughly 35ms over HTTP, while the max also stays steady for HTTP and jumps up quite a bit for MySQL.
I suspect the biggest win here fundamentally comes down to TLS. Both HTTP/2 and HTTP/3 in this case use TLS 1.3, whose handshake needs one fewer round trip than TLS 1.2 (and can support 0-RTT resumption). While, in theory, MySQL clients could also support TLS 1.3, TLS support in MySQL clients is typically not great, and in this case the connection was negotiated with TLS 1.2. Saving a full round trip when establishing a new connection adds up. HTTP/2 can benefit the same way, since TLS 1.3 support is much more plentiful there, while HTTP/3 outright requires TLS 1.3.
I expected this to only be reflected over higher-latency networks and longer geographic distances, but, surprisingly, it was also better from our EC2 instance in `us-west`: from 11ms down to 3-4ms.
Overall, it's very clear that HTTP, both HTTP/2 and HTTP/3, is substantially better for a cold start.
SELECT 1

While these results all look relatively similar, to me that's a good thing. We can see some improvement on the higher-latency network, but the differences over the low-latency network aren't statistically significant. What this does prove is that we aren't adding any measurable overhead to the connection in the fastest scenarios. Here we'd expect the weight of the protocol itself to overpower the small amount of actual data being transferred.
medium SELECT

At this test size, we can start to see tail-end latency improve while the average stays relatively consistent across the board.
My hypothesis is that the dataset isn't large enough for compression to make an impact, and that the reliability of the transport protocol and network is what shapes the upper percentiles.
On the low latency network, we started to bottleneck on the underlying mysqld, which is also a good result since, again, it indicates that there’s no tangible overhead in using HTTP.
medium INSERT

Unlike the `SELECT` case, this is an opportunity for our HTTP API to use client-side compression, which we cannot do with the MySQL protocol. The effects of this are most drastic in the high-latency network case, since we are uploading a decent amount of data per query.
On the extremely large queries, both the `SELECT` and the `INSERT` have some interesting characteristics. As predicted, these excel over high-latency networks. On top of HTTP/2 bringing some minor improvements, HTTP/3 starts to pull a big lead. I suspect this is because, with payloads of this size, we start to run into packet loss and the other warts of TCP, which QUIC smooths over.
I also suspect these are a bit skewed here with bottlenecking on mysqld and the underlying disks. This might be worth revisiting again with a more capable backend so we’re not as limited.
The result that stands out, oddly, is that on the low-latency network, the HTTP/3 variants are measurably slower than HTTP/2. I haven't dug into this, but my hypothesis is that it's a performance limitation in the underlying `quic-go` implementation, which is a bit more immature and less battle-tested. At these larger payloads, we might also be starting to stress the QUIC implementation as well as the underlying mysqld and hardware. All around, I think this test is pushing limits elsewhere and isn't fully testing our protocol, but I still think the conclusions are valid: they show both what we can work on improving and how the protocol handles stress.
These results are rather interesting and prove a few things:
An HTTP API is actually really good. In most tests, any version of HTTP was superior to the binary MySQL protocol. The higher the latency and the less reliable the network, the more those benefits are amplified. In the best-case scenarios, the new APIs aren't measurably slower, which is about the best we can ask for. With larger payloads, though, HTTP still stands out as the winner thanks to its ability to compress data over the wire.
Cold starting is where the improvements really shine without a doubt, which is super critical for anything that isn’t backed by a long-running process, such as serverless, runtimes like PHP, and periodic jobs. This is a bit amplified since the HTTP API multiplexes many traditional MySQL connections over a single HTTP connection, reducing the need to open many connections and maintain a connection pool.
HTTP/3 is even more interesting. While moving to HTTP at all delivers the biggest tangible improvements, I think there are additional benefits to HTTP/3 that weren't fully tested here. Being built on UDP, HTTP/3 solves several of TCP's issues on unreliable networks, which I didn't explicitly test for. HTTP/3 support is also still rare and immature, so fundamentally there's a lot of performance left to be squeezed out of the libraries used here. Overall, I think HTTP/3 is an objective improvement over HTTP/2 in the worst case, and performs similarly in the best case. Compare both HTTP/2 and HTTP/3 to MySQL, though, and it's pretty clear that both have the potential to be very competitive.
As of right now, we support HTTP/3! If you use the in-product PlanetScale web console and a modern browser, you will connect over HTTP/3 and not even be aware of it. Unfortunately, HTTP/3 lacks a lot of adoption outside of web browsers due to being radically different, but we hope that we may inspire and help drive adoption where possible. We’ll continue to figure out how we may best leverage HTTP/3 transparently wherever we can.
While this is just one experiment focusing on latency and comparing the protocols, an HTTP API comes with other benefits that aren't discussed here, which I'll talk about when we start to publicly document these APIs in the coming months.