How PlanetScale Boost serves your SQL queries instantly

Learn how we use a partial materialization engine for database caching alongside your database.

Today, we’re launching the limited beta of PlanetScale Boost, which improves the performance and throughput of your application’s SQL queries by up to 1,000×. PlanetScale Boost is a state-of-the-art partial materialization engine that runs alongside your PlanetScale database.

If you saw our PlanetScale Boost launch blog post, you may have thought: “Big deal, that’s just a cache” or maybe you thought, “That’s just a materialized view — SQL Server had those back in the 90s.” But as it turns out, PlanetScale Boost is neither of those, yet it is a little bit of both. It is a new design for a caching architecture that is more performant than a cache and more efficient than a materialized view. It is something much more powerful and much more exciting. And I’d like to explain to you how it works.

To make this explanation simpler, let’s ground it with an actual example. We will investigate the internal workings of a PlanetScale Boost instance caching a single, specific query. Say we’re running a version control hosting website (let us call it GitHat) where people can push their code repositories and star them. The tables in our SQL schema could be something straightforward, such as this:

SQL
CREATE TABLE repository (
  id bigint NOT NULL AUTO_INCREMENT,
  owner_id bigint,
  name varchar(255),
  PRIMARY KEY(id)
);

CREATE TABLE star (
  id bigint NOT NULL AUTO_INCREMENT,
  repository_id bigint,
  user_id bigint,
  PRIMARY KEY(id)
);

To get the star count for a specific repository, we’d craft a SQL query that joins the count of stars per repository with the actual information for the repository:

SQL
SELECT repository.id, repository.owner_id, repository.name, starcount.stars
FROM repository
LEFT JOIN (
  SELECT star.repository_id, COUNT(star.user_id) AS stars
  FROM star
  GROUP BY star.repository_id
) AS starcount ON (repository.id = starcount.repository_id)
WHERE repository.id = ?;

And yes, I know what you’re thinking: “This query is bananas, man. I would never run this in production!” I know! I agree. The query is not particularly hard to understand, but if you try running it in a standard relational database, you’ll see that as soon as you have a few thousand star rows, its execution time balloons out of control.

Although this query uses the relational model cleanly to fetch the data you require for your application, this is not a query you can run in a relational database. It’s just too expensive. You’d have to do something else to figure out the star count for your repositories. Maybe de-normalize the data. Migrate your database to keep a star count in the repository table. Or maybe keep the count in a Redis instance and update your application so that starring a repository writes to your database and to Redis.
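
To make that last workaround concrete, here’s roughly what the dual write would look like in application code. This is a minimal Go sketch, not a recommendation; the Redis key format is made up, and a real version would also have to deal with the two writes failing independently:

Go
// A sketch of the dual-write workaround: insert the star row in MySQL and
// bump a counter in Redis. The key format is invented for illustration.
package star

import (
	"context"
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql" // MySQL driver for database/sql
	"github.com/redis/go-redis/v9"
)

// StarRepository writes to both systems. If the second write fails, the star
// count in Redis silently drifts from the truth in MySQL; reconciling the two
// becomes the application's problem.
func StarRepository(ctx context.Context, db *sql.DB, rdb *redis.Client, repoID, userID int64) error {
	if _, err := db.ExecContext(ctx,
		"INSERT INTO star (repository_id, user_id) VALUES (?, ?)", repoID, userID); err != nil {
		return err
	}
	return rdb.Incr(ctx, fmt.Sprintf("stars:%d", repoID)).Err()
}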

Or, maybe, hear me out: Why not press a button and make the query instantaneous? That’s a compelling alternative to re-architecting your data model and/or application. And that’s what we offer with PlanetScale Boost. Let’s see what it takes to make this happen.

A query execution plan

A plan is the most fundamental abstraction behind serving a query in a database — doubly so if it’s a relational database. Vitess, the open-source database that powers PlanetScale, is definitely acquainted with the concept of planning. Every time you send a SQL query to your PlanetScale database, its internal gateway nodes plan it first. We parse the SQL syntax of your query to understand what data is being asked for, which tables that data is stored in, and how the data must be processed or aggregated. This process results in a tree of primitive operations and sub-queries that will be routed to one or many of the shards in the database. This is what we call a plan.

Since each shard in your database is backed by a separate MySQL instance, once the SQL for each sub-query is sent to a shard, it is planned again! MySQL goes through the same process, but this time trying to figure out how to most efficiently fetch the requested data from the tables on-disk and the cheapest way to process or aggregate it. It really is plans all the way down!

PlanetScale Boost’s most fundamental operation also revolves around a plan. When you are using Insights and pick a query to be optimized, the first thing we do is plan it. Since our goal is to be highly compatible with your PlanetScale database, our planning is performed using the same technology that we use to plan normal queries in Vitess, the gen4 planner. It is, however, a slightly modified version of this planner because the resulting plan we’re looking for is quite different than the one Vitess would generate. It is, in fact, an inverted version of the plan.

When Vitess’ gen4 plans a query you’ve just sent to your database, the goal of the plan is to pull data. We figure out in which shards the data lives, what’s the most efficient way to fetch it, and what’s the most efficient way to process it to generate the actual results of your query. But no matter how much we optimize it, this is a very expensive process to repeat on every read.

With PlanetScale Boost, we’re trying to do the opposite. Instead of planning how to fetch the data every time you read from your database, we want to plan how to process the data every time you write to your database. We want to process your data, so that reading from it afterward becomes essentially free. We want a very complex plan to apply to your writes, so there’s no planning to be done for your reads; a plan where we push data on writes instead of pulling data on reads.

Observant readers will notice that our push plan looks suspiciously similar to a dataflow plan, like the one that a materialization engine would use to create a materialized view. It’s true! But there’s much more to it, and it all boils down to when we run our plan.

A materialized view and a cache

Suppose PlanetScale Boost was just a materialized view. In that case, we’d have to run this plan immediately, using all the data in your database as input. That’s how materialization engines work: You define a view using SQL. The engine materializes the view by transforming all the rows in the underlying database to generate a fully materialized table representing your query. Then, the engine keeps it constantly updated as writes happen to the database. The main advantage of this approach is that the table is quite efficient when queried, unlike the virtual views that all relational databases support. But there are also disadvantages: these views are not very ergonomic when developing your application, and most importantly, they are wildly expensive. You need to materialize the results for every possible row in your database, even though most of these rows will never be read. Doing this for many different views that apply to many different queries is not sustainable.

So no, we don’t do that because PlanetScale Boost is not a materialized view. But what is it, then? A cache? Let’s see. If PlanetScale Boost was just a cache, it wouldn’t run any plans. Caches don’t care about planning at all! They sit there idly, waiting for a request to come in. When the request arrives, if they know the answer, they return it immediately. Otherwise, they fall back to a slow backend (in this case, your main database) to fetch the answer and keep a copy before returning it to you.
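
That whole pattern fits in a few lines. Here’s a generic read-through cache sketched in Go, shown purely for contrast with what Boost does:

Go
// A generic read-through cache, shown only to contrast with Boost's design.
package cache

import "sync"

type Cache struct {
	mu      sync.Mutex
	entries map[string]string
	// fallback runs the expensive query against the backing database on a miss.
	fallback func(key string) (string, error)
}

func New(fallback func(string) (string, error)) *Cache {
	return &Cache{entries: map[string]string{}, fallback: fallback}
}

func (c *Cache) Get(key string) (string, error) {
	c.mu.Lock()
	if v, ok := c.entries[key]; ok {
		c.mu.Unlock()
		return v, nil // hit: return the stored answer immediately
	}
	c.mu.Unlock()

	v, err := c.fallback(key) // miss: run the slow query against the database
	if err != nil {
		return "", err
	}

	c.mu.Lock()
	c.entries[key] = v // keep a copy... until it goes stale, and that's the hard part
	c.mu.Unlock()
	return v, nil
}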

Then, the big question is: How long do they keep that copy of the answer? At some point, the data in the underlying database will change, and the cache needs to stop returning the old answer because it is stale. As many of you know, cache invalidation is a hard problem. In fact, some people call it “one of the hardest problems in computer science.” Right. That sounds ominous. So… let’s just not do that. Let’s do something else instead.

Here’s what we’re going to do: we will start listening to changes from our underlying database, much like a materialized view would. But we’re going to start from right now, not since the beginning. We will only replay some of the data in our database.

In a traditional relational database, following all the changes that happen in real-time means subscribing to a replication log (e.g., MySQL’s binlog). But PlanetScale is not a traditional database! Your PlanetScale database is powered by Vitess, and can be composed of many small shards. These shards are individual MySQL instances that keep independent copies of your data and allow you to horizontally scale as much as you need. Because of this design, to follow along with the changes to your whole PlanetScale database, we need to use an abstraction called VStream, which correctly and efficiently aggregates and unifies all the events amongst all the nodes in your database cluster.

Now that we’re correctly following all the change events, we can execute our plan! We’re going to run our plan — which looked a lot like a dataflow plan to create a fully materialized view — but only with the most recent changes to your database. “Running” the plan in our new system implies deploying it to a set of nodes (a Boost cluster). The Boost cluster lives alongside your database’s shards and continuously processes the events relayed by the VStream. The result of processing these events is a partially materialized view that can be accessed by the database’s edge tier. This view contains some, but not all, of the rows that could be queried.

A peek into the flow

Constructing a partially materialized view is similar to constructing a fully materialized one, with some caveats. As new rows are written to your database, they are relayed via the VStream to your Boost cluster and processed sequentially by the different operator nodes we previously planned. However, these “rows” do not flow as simple raw data.

If you think about it, the change events we receive from our database are not only INSERTs. You can also UPDATE and DELETE rows from your database! Hence, rows flow through the system as deltas: an insertion is a positive delta, a removal is a negative delta, and an update is a pair of negative and positive deltas carrying the previous and new versions of the value. This elegant design applies to all levels of the stack. The state changes from the underlying database we receive as events from the VStream are tracked as deltas, and so are the state changes in the internal operators!
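
To make the idea tangible, here’s a rough Go sketch of how change events map onto deltas. The Delta and Row types are invented for this post; they are not Boost’s actual data structures:

Go
// A sketch of how change events map onto deltas. Illustrative types only.
package dataflow

// Row is one row's column values.
type Row []interface{}

// Delta is a signed row: +1 adds the row to downstream state, -1 removes it.
type Delta struct {
	Sign int // +1 or -1
	Row  Row
}

// DeltasFromEvent turns an INSERT, DELETE, or UPDATE change event into deltas.
func DeltasFromEvent(kind string, before, after Row) []Delta {
	switch kind {
	case "INSERT":
		return []Delta{{+1, after}}
	case "DELETE":
		return []Delta{{-1, before}}
	case "UPDATE":
		// An update is a retraction of the old row plus an insertion of the new one.
		return []Delta{{-1, before}, {+1, after}}
	}
	return nil
}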

Let’s break down exactly how these deltas flow. Say a user has starred a repository in our web application. In our relational schema, that implies inserting a new row that represents the star into the star table:

SQL
INSERT INTO star (repository_id, user_id) VALUES (4, 23)

As soon as your PlanetScale database has processed the write and stored it into whichever shard it belongs to, the new row is reflected in the VStream that the Boost cluster is subscribed to.

Because of the shape of our original query, our plan has decided that all new row events for the star table must flow directly to an aggregation operator. That makes sense! Our query was, in essence, a join between an aggregation (the count of stars grouped by the repository) and the full row data for each repository. So when a new star is added to the database, we must first aggregate it with a dedicated operator in our dataflow.

Now, what does an operator actually do? That really depends on the operator, of course. PlanetScale Boost has implementations for more than a dozen different SQL constructs. The only thing they have in common is that they process deltas to update their internal state and emit deltas that reflect the changes in their internal state.
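
Whatever the operator computes, that shared contract can be sketched as a single interface, reusing the Delta type from the previous sketch (again, the names are illustrative):

Go
// The contract shared by all operators in these sketches: deltas in, internal
// state updated, deltas describing that state change out.
package dataflow

type Operator interface {
	Process(input []Delta) (output []Delta)
}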

The behavior of the aggregation operator is quite intuitive. It can perform most of the aggregation functions that you can find in MySQL’s SQL syntax (SUM, COUNT, MIN, MAX, etc.), and it does so by keeping the intermediate state of the aggregation in memory and updating it incrementally.

In our example, since our aggregation query has a GROUP BY on star.repository_id, our intermediate state is keyed by the repository_id of the star. Updating the count requires finding the row of the intermediate state that maps to our star’s repository and increasing its counter. All that’s left is reflecting that our operator’s state has changed, so the downstream operators are aware of this fact. We’ve seen how to do that: PlanetScale Boost relays deltas inside its dataflow precisely because they represent state changes. At the edge of our system, the VStream emitted a positive delta, representing a brand new star being inserted into the database. Much in the same way, our aggregation operator now emits a pair of deltas: A negative one with the old value of the counter and a positive one with the new value. This is all the information that all the operators downstream will require to update their state.
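
Here’s a minimal sketch of that counting operator, assuming the grouping key is a single int64 column and reusing the Delta and Row types from above:

Go
// A sketch of a COUNT(...) GROUP BY operator for the star table. Boost's real
// aggregation operator is far more general; this only shows the shape of it.
package dataflow

type CountOperator struct {
	counts map[int64]int64 // intermediate state, keyed by star.repository_id
}

func NewCountOperator() *CountOperator {
	return &CountOperator{counts: map[int64]int64{}}
}

func (op *CountOperator) Process(input []Delta) []Delta {
	var out []Delta
	for _, d := range input {
		repoID := d.Row[0].(int64) // assume column 0 is star.repository_id
		old := op.counts[repoID]
		op.counts[repoID] = old + int64(d.Sign) // +1 for a new star, -1 for an unstar

		// Emit the state change: retract the old count, assert the new one.
		if old != 0 {
			out = append(out, Delta{-1, Row{repoID, old}})
		}
		out = append(out, Delta{+1, Row{repoID, op.counts[repoID]}})
	}
	return out
}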

Speaking of which… there are indeed more operators downstream! After aggregating the sub-query, our dataflow plan needs to join each aggregated row with its matching equivalent in the repository table. That’s where the JOIN operator comes in.

In this operator, as deltas from the left side flow in, we need to match them with a row from the right side. However, the right side of our JOIN is a remote table in our underlying database. To significantly improve the performance of joins in practice, we keep an intermediate cache with the most recently seen rows for that side of the join. When our target repository row is warm in this cache, we can access it immediately and generate the result by joining all the columns on both sides and emitting a new pair of deltas.
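
Sketched with the same toy types, and with a hypothetical lookupRepository callback standing in for the fallback to the underlying database, the join step looks roughly like this:

Go
// A sketch of the join step: deltas from the aggregation (left side) are
// matched against repository rows (right side), going through a cache of
// recently seen rows before touching the database.
package dataflow

type JoinOperator struct {
	rightCache       map[int64]Row               // recently seen repository rows, keyed by id
	lookupRepository func(id int64) Row          // fallback to the underlying database
}

func (op *JoinOperator) Process(input []Delta) []Delta {
	var out []Delta
	for _, d := range input {
		repoID := d.Row[0].(int64)
		right, ok := op.rightCache[repoID]
		if !ok {
			right = op.lookupRepository(repoID) // fall back to the underlying database
			op.rightCache[repoID] = right
		}
		// Emit the joined row: repository columns followed by the star count.
		joined := append(append(Row{}, right...), d.Row[1])
		out = append(out, Delta{d.Sign, joined})
	}
	return out
}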

These deltas are critically important because they will update the partially materialized state that the users see. They flow directly into the View node, representing our partially materialized view, and will update its state by replacing the old value for the row with the new one. In practice, the View is nothing but a glorified hash map, and yet it has many interesting technicalities that allow us to update it atomically while users are reading from it — but this is a topic for another day. Once the view is updated, the flow for our initial star row is completed, and we continue processing more events from the VStream.
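
Ignoring the atomicity machinery, the View’s job can be sketched as little more than applying deltas to a map, keyed here by repository.id, the column behind the query’s placeholder:

Go
// A sketch of the View node: a map from the query's key column to the
// materialized result row. The real View also handles concurrent readers
// atomically, which this sketch ignores entirely.
package dataflow

type View struct {
	rows map[int64]Row // keyed by repository.id
}

func (v *View) Apply(deltas []Delta) {
	for _, d := range deltas {
		key := d.Row[0].(int64)
		if d.Sign < 0 {
			delete(v.rows, key) // retract the old version of the row
		} else {
			v.rows[key] = d.Row // store the new version
		}
	}
}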

A “cache” hit and a miss

The process of creating and maintaining a partially materialized view is complex, but this complexity has been our goal since the beginning. We want to make writes hard, so reads can be simple, and I think we’ve succeeded.

Once the partially materialized view is in place, what happens when you read from your PlanetScale database is trivial in comparison: Our database’s edge tier parses and normalizes the SQL syntax of every SELECT query you perform. If its shape matches any of the queries you’ve optimized with PlanetScale Boost, the query’s execution is simplified immensely. Instead of the usual approach of using Vitess’ gen4 planner to fetch and compose data from each shard, we extract the parameters based on the placeholder locations you configured and use those values to perform a lookup in the partially materialized view. That’s it. When reading from your database, we transform complex and expensive SQL expressions into simple lookups in a hash map.
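
In sketch form, and glossing over how query shapes are actually normalized and matched, the read path boils down to this (the names are ours, not Vitess APIs):

Go
// A sketch of the read path at the edge tier, reusing the Row type from the
// earlier sketches. The lookup structure and function names are illustrative.
package dataflow

// boostedViews maps a normalized query shape to its partially materialized
// view, which in turn maps the extracted placeholder value to a result row.
var boostedViews = map[string]map[int64]Row{}

// ExecuteBoosted returns the cached row for a boosted query, or ok=false if
// the query is not boosted and must go through the regular gen4 planner.
func ExecuteBoosted(normalizedSQL string, param int64) (row Row, ok bool) {
	view, boosted := boostedViews[normalizedSQL]
	if !boosted {
		return nil, false // not a boosted query: plan and execute it normally
	}
	row, ok = view[param]
	// On ok == false we have a miss in the partial view; see below for how
	// that gets resolved by querying up the dataflow instead of re-running
	// the original query.
	return row, ok
}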

What’s the original SQL query’s asymptotic complexity when running across the multiple shards of your PlanetScale database? I don’t know! I didn’t finish my computer science degree! But I know the asymptotic complexity of a Boosted query: It’s O(1) on average, and that really shows in the benchmarks.

Speaking of averages: We’re missing one last critical detail about this system’s performance. Everything we’ve seen so far has been a best-case scenario. It’s always great when your cache is warm, and you can serve your queries instantaneously. But one of the most critical properties of any caching architecture is how it behaves when the data you’re looking for is not available. Again, let’s try to understand what PlanetScale Boost does in this case by comparing it with our two architectural cousins: Materialized views and caches.

In a materialized view, a miss would never happen because the view is fully materialized! That’s the point of it. It is expensive to maintain and occupies a lot of memory, but it always has all the data available. So if the results you’re looking for don’t exist in the materialized view, they do not exist in the underlying database.

A cache, on the other hand, can definitely miss. Misses in a cache are expensive because you must fall back to the underlying system to compute and fetch the expected result. Sometimes this computation is very expensive — if it weren’t, it would be pointless to cache it in the first place! So, the first time the value is fetched, it’s very likely to cause a massive spike in the 99th percentile of your application’s request times, or maybe even a timeout.

A query in PlanetScale Boost can also miss, but we try to be much smarter about the way we resolve these misses. As we’ve just seen, updating the partially materialized state accessible to the clients implies keeping a bit of intermediate state for most SQL operations. Aggregations must be, well, aggregated. Rows for both sides of a join must be remembered to make their computation more efficient. ORDER BY state must be kept sorted. The list goes on and on…

This intermediate state, which may seem wasteful at first, has one huge advantage. By keeping it, we can do something that a traditional materialization engine cannot do: We can run the dataflow in reverse. The same state that helps us maintain our partial view more efficiently also helps us resolve misses much more efficiently by querying into it.

Let’s look at our example query again: Say we’re looking for the star count of :repository_id = 7.

In this example, the result for repository 7 is missing from the final materialized view, but it is not missing from our aggregation operator because somebody has starred the repository recently. When trying to resolve our miss from the final view, we do not simply fall back to the database to perform the full query. Instead, we ask our view’s parent for the specific data we’re missing.

Here the parent doesn’t have the data (otherwise, it would be available in our view), but it has a more effective way of calculating it: Recursively asking its parent for the data again. Our JOIN operator in the dataflow has two parents (as you’d expect from a join), so we first ask the operator on the left side, and receive an aggregation result that was already warm in memory. All we’re missing is the other side of the join, which we query for in our right parent. Since the right side of the join is the repository table, we have to fall back to our underlying database to fetch its data, namely, the single row we’re missing for our repository. When the row is returned from our database, our JOIN operator can perform the join against the aggregated result from the left side and store it in the final view so the client can access it.
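
Here’s that miss resolution sketched with the toy operators from earlier. The real system resolves misses by recursively asking each parent node for what’s missing; this collapses the walk into a single function for illustration:

Go
// A sketch of miss resolution: the view asks the join, the join asks its
// parents, and only the truly missing piece (one repository row) comes from
// the underlying database. Names and structure are illustrative.
package dataflow

func (v *View) LookupOrFill(join *JoinOperator, agg *CountOperator, repoID int64) (Row, bool) {
	if row, ok := v.rows[repoID]; ok {
		return row, true // hit: the common, instantaneous case
	}

	// Miss: ask upstream instead of re-running the whole original query.
	count := agg.counts[repoID] // already warm in the aggregation operator's state

	repo, ok := join.rightCache[repoID]
	if !ok {
		// Only now do we touch the database, and only for this single row:
		// SELECT ... FROM repository WHERE repository.id = ?
		repo = join.lookupRepository(repoID)
		join.rightCache[repoID] = repo
	}

	row := append(append(Row{}, repo...), count)
	v.rows[repoID] = row // store the joined result so the next read is a plain hit
	return row, true
}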

The performance implications of this are fascinating. Yes, just like a traditional cache, we have to fall back to the underlying system when we miss on a query. But unlike a traditional cache, the data we fetch from the database is not the original query the client asked for. Instead, we can really cheaply and efficiently figure out the minimal amount of information required to compute the final result and ask for only that. In this specific example, instead of a complex JOIN with a nested aggregation, we filled our miss with a point query that runs very quickly and looks like this:

SQL
SELECT repository.id, repository.owner_id, repository.name FROM repository WHERE repository.id = 7

In practice, falling back to the underlying database is not trivial when the database is a large collection of disjoint shards. Which shard contains the data we’re looking for? What if the data we’re looking for is an aggregation instead of one or more rows? These are hard questions to solve. We will talk about this in future posts.

The future of PlanetScale Boost

There are so many more details I’d like to share about PlanetScale Boost and all the technologies that power it, but this post is already long enough as it is. You can look forward to a series of blog posts about the evolution of the system and the many challenges we’ve faced while bringing partially materialized views to our multi-shard database engine.

I’d like to finish by highlighting that this technical miracle would not be possible without the seminal paper on which we based our implementation: Noria: dynamic, partially-stateful data-flow for high-performance web applications (Gjengset et al., OSDI ’18). The paper describes a standalone database engine that can only be queried, with amazing performance, via partially materialized views.

Re-architecting this brilliant academic idea to act as an optimization layer on top of a production-ready distributed database like Vitess has been an incredibly difficult and interesting task. Still, today’s release is just the start of a long journey for PlanetScale Boost.

We have a very ambitious roadmap that includes supporting more query patterns, more SQL constructs, and even better performance (Is that even possible?). All this while continuously focusing on reliability to ensure that the results from your Boosted queries are always consistent with those returned directly from your PlanetScale database.

Starting today, we’re allowing a select group to preview PlanetScale Boost in a limited beta. Sign up for the waitlist today and we will reach out with more information. We hope to learn about the slow query patterns in your applications that we don’t support yet, and to hear any other feedback you have.

Sign up for the PlanetScale Boost limited beta