
PlanetScale system limits

Table limits

Database schemas are limited to a total of 2048 tables, including views.

Individual tables are limited to a maximum of 1017 columns each.

Disk limits

By default, the database disk size for Scaler Pro customers is limited to 4 TB. This includes user data as well as system data required to run database operations.

If your database is near this limit, some deploy request operations may not be available as they require adequate disk space to complete.

When near this limit, we recommend either sharding your data or distributing tables across multiple keyspaces. Please contact us to discuss options.

Connection lifetime limits

Database client connections that are held open longer than 24 hours may be terminated unexpectedly. We recommend closing and reconnecting long-running database connections at least once per day.
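
For example, with Go's database/sql pool you can cap connection lifetime so that pooled connections are recycled well before the 24-hour limit. This is a minimal sketch; the DSN placeholders and the 4-hour value are assumptions to adapt to your own setup:

package main

import (
    "database/sql"
    "time"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // The DSN below is a placeholder; use your own PlanetScale connection string.
    db, err := sql.Open("mysql", "<USER>:<PASSWORD>@tcp(<HOST>)/<DATABASE>?tls=true")
    if err != nil {
        panic(err)
    }
    defer db.Close()

    // Recycle pooled connections every 4 hours so none is held open
    // anywhere near the 24-hour limit.
    db.SetConnMaxLifetime(4 * time.Hour)
}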

Simultaneous transaction limits

Each database has a limit on the number of simultaneous transactions it can process, also known as the transaction pool. If you exceed the transaction pool setting for your database, you may encounter this error:

vttablet: rpc error: code = ResourceExhausted desc = transaction pool connection limit exceeded

This can often be mitigated by one of the following solutions:

  • Reduce the amount of parallelism in the requests being sent to the database (see the sketch below).
  • Shorten lengthy transactions by reducing batch sizes or making some other application-level adjustment.
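
As an illustration of the first option, here is a minimal Go sketch that caps how many transactions are in flight at once using errgroup's concurrency limit. The TxFunc type and the limit value are assumptions for illustration, not part of any PlanetScale API:

package example

import (
    "context"
    "database/sql"

    "golang.org/x/sync/errgroup"
)

// TxFunc is one unit of work that runs in its own transaction.
type TxFunc func(ctx context.Context, tx *sql.Tx) error

// runLimited executes jobs concurrently while capping how many transactions
// are open at once, so the database's transaction pool is not exhausted.
func runLimited(ctx context.Context, db *sql.DB, jobs []TxFunc, limit int) error {
    g, ctx := errgroup.WithContext(ctx)
    g.SetLimit(limit) // at most `limit` goroutines, and thus transactions, at a time
    for _, job := range jobs {
        job := job // capture the loop variable for the goroutine below
        g.Go(func() error {
            tx, err := db.BeginTx(ctx, nil)
            if err != nil {
                return err
            }
            defer tx.Rollback() // a no-op once Commit has succeeded
            if err := job(ctx, tx); err != nil {
                return err
            }
            return tx.Commit()
        })
    }
    return g.Wait()
}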

If you cannot make such changes to your application, consider choosing a larger instance type with a larger transaction pool. The exact limit varies depending on the instance type of your database. For details, see the plans page.

Query limits

PlanetScale enforces some system limits to prevent long-running queries or transactions from:

  • Piling up and consuming all available resources.
  • Blocking other important, short-lived queries and transactions from completing.
  • Overloading the database to the point where it is not recoverable without a restart.
  • Blocking planned failovers and critical upgrades.

The following table details these limits:

Type                                           Limit
Per-query rows returned, updated, or deleted   100,000
Per-query result set total size                64 MiB
Per-query autocommit timeout                   900 seconds
Per-transaction timeout                        20 seconds

Recommendations for handling query limits

These limits are enforced for the safety of your database. However, we do understand you may run into a situation where the limits are a blocker. Here are some best practices for solving common issues presented by the limits:

What should I do if I have a query that returns more than 100,000 records?

We recommend breaking up large queries into smaller batches, for example through pagination.
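
For instance, here is a minimal Go sketch of keyset pagination over a hypothetical orders table, fetching 10,000 rows per query to stay well under the limit; the table and column names are illustrative:

package example

import (
    "context"
    "database/sql"
)

// fetchAllOrders pages through a hypothetical orders table using keyset
// pagination, so no single query approaches the 100,000-row limit.
func fetchAllOrders(ctx context.Context, db *sql.DB) error {
    const pageSize = 10000
    lastID := 0
    for {
        rows, err := db.QueryContext(ctx,
            "SELECT id, total FROM orders WHERE id > ? ORDER BY id LIMIT ?",
            lastID, pageSize)
        if err != nil {
            return err
        }
        count := 0
        for rows.Next() {
            var id, total int
            if err := rows.Scan(&id, &total); err != nil {
                rows.Close()
                return err
            }
            lastID = id // the highest id seen so far becomes the next cursor
            count++
            // ... process the row ...
        }
        rows.Close()
        if err := rows.Err(); err != nil {
            return err
        }
        if count < pageSize {
            return nil // a short page means we have reached the end
        }
    }
}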

What should I do if I have a query that returns more than 64 MiB of data?

If your schema relies on storing large amounts of variable-length data in JSON, BLOB, or TEXT columns, and that data is regularly more than a few MiB in size, strongly consider storing it outside of your database instead, such as in an object storage solution.
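
One common pattern, sketched below in Go under assumed names, is to upload the large value to object storage and keep only a short key in the row; putObject stands in for whatever upload call your storage provider's client offers:

package example

import (
    "context"
    "database/sql"
)

// putObject is a placeholder for your object storage client's upload call,
// e.g. an S3 PutObject; it is not a real PlanetScale or MySQL API.
func putObject(ctx context.Context, key string, data []byte) error {
    // ... upload to your object store here ...
    return nil
}

// saveDocument uploads the large payload to object storage and stores only
// a short key in MySQL, keeping rows and result sets small.
func saveDocument(ctx context.Context, db *sql.DB, id string, payload []byte) error {
    key := "documents/" + id
    if err := putObject(ctx, key, payload); err != nil {
        return err
    }
    // Store the pointer to the data, not the data itself.
    _, err := db.ExecContext(ctx,
        "INSERT INTO documents (id, storage_key) VALUES (?, ?)", id, key)
    return err
}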

Storing large values in variable-length columns can also limit the number of rows you can return in a single query.

For example, the table above lists a 100,000-row limit on result sets, but if a result set exceeds the 64 MiB size limit, you may receive an error like the following while retrieving far fewer than 100,000 rows:

resource_exhausted: grpc: received message larger than max (<RESULT_SET_SIZE_IN_BYTES> vs. 67108864)

Similarly, when generating a large INSERT or UPDATE query against these column types, you may run into the following error, which requires you to reduce the query's size:

grpc: trying to send message larger than max (<QUERY_SIZE_IN_BYTES> vs. 67108864)
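
To stay under the limit when writing, you can split a bulk insert into fixed-size batches. A minimal Go sketch, with a hypothetical items table and illustrative column names:

package example

import (
    "context"
    "database/sql"
    "strings"
)

// insertChunked splits a bulk insert into fixed-size batches so that no
// single INSERT statement approaches the 64 MiB message size limit.
// Each element of rows is a (name, data) pair.
func insertChunked(ctx context.Context, db *sql.DB, rows [][2]any, batchSize int) error {
    for start := 0; start < len(rows); start += batchSize {
        end := start + batchSize
        if end > len(rows) {
            end = len(rows)
        }
        batch := rows[start:end]
        placeholders := make([]string, len(batch))
        args := make([]any, 0, 2*len(batch))
        for i, row := range batch {
            placeholders[i] = "(?, ?)"
            args = append(args, row[0], row[1])
        }
        query := "INSERT INTO items (name, data) VALUES " + strings.Join(placeholders, ", ")
        if _, err := db.ExecContext(ctx, query, args...); err != nil {
            return err
        }
    }
    return nil
}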

What should I do if I have a transaction that runs longer than 20 seconds?

For transactions that run longer than 20 seconds, we recommend breaking them up into multiple shorter transactions. For analytics queries that cannot be broken up, see "How should I handle analytical queries?" below.
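
For example, a bulk delete can be split into many small autocommit statements instead of one long transaction. A minimal Go sketch, with a hypothetical events table:

package example

import (
    "context"
    "database/sql"
    "time"
)

// deleteInBatches removes old rows in many small autocommit statements
// instead of one long transaction, so each statement finishes quickly.
func deleteInBatches(ctx context.Context, db *sql.DB, cutoff time.Time) error {
    for {
        res, err := db.ExecContext(ctx,
            "DELETE FROM events WHERE created_at < ? LIMIT 1000", cutoff)
        if err != nil {
            return err
        }
        n, err := res.RowsAffected()
        if err != nil {
            return err
        }
        if n == 0 {
            return nil // nothing left to delete
        }
    }
}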

Keep your transactions small and short-lived. For transactional workloads that are inherently long-lived, consider replacing or complementing MySQL transactions with other techniques:

  • For simple workloads (e.g. a SELECT, followed by an HTTP request, followed by an UPDATE), consider using optimistic locking instead of transactions (a sketch follows this list).
  • For complex workloads, consider using sagas to coordinate a series of small, short-lived transactions that possibly span multiple microservices.
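
A minimal sketch of the optimistic locking pattern mentioned above, assuming a hypothetical jobs table with a version column; the schema and error handling are illustrative:

package example

import (
    "context"
    "database/sql"
    "errors"
)

// completeJob reads a row, performs slow external work with no transaction
// open, then writes back only if the row's version is unchanged.
func completeJob(ctx context.Context, db *sql.DB, id int64) error {
    var version int
    err := db.QueryRowContext(ctx,
        "SELECT version FROM jobs WHERE id = ?", id).Scan(&version)
    if err != nil {
        return err
    }

    // ... slow work here, e.g. an HTTP request, with no transaction held ...

    res, err := db.ExecContext(ctx,
        "UPDATE jobs SET status = 'done', version = version + 1 WHERE id = ? AND version = ?",
        id, version)
    if err != nil {
        return err
    }
    n, err := res.RowsAffected()
    if err != nil {
        return err
    }
    if n == 0 {
        // Another writer changed the row first: re-read and retry.
        return errors.New("jobs: concurrent modification, retry")
    }
    return nil
}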

What should I do if I have a SELECT query that needs to time out faster than 900 seconds?

For SELECT statements only, you can provide an inline comment directive that kills the query if it has not returned results within the specified time in milliseconds:

select /*vt+ QUERY_TIMEOUT_MS=2000 */ sleep(3);

Note that you can set this value lower than the 900-second default, but not higher.

How should I handle analytical queries?

We recommend using PlanetScale Connect (via Airbyte or Stitch) to extract your data to any compatible destination (e.g. BigQuery, Redshift, etc.). The analytics queries should then be performed at those destinations.

OLAP mode

While it is possible to bypass these safety limits using OLAP mode (SET workload = OLAP), we do not recommend this for the reasons listed at the beginning of this doc. In OLAP mode, you may return more than 100k rows from a single query, but the 100k row limit will still apply to updates and deletes.

When the use of OLAP queries is strictly unavoidable, we recommend:

  • Where possible, sending those queries to a replica. Every PlanetScale database comes with at least two replicas. Learn how to send queries to replicas using global replica credentials.
  • Carefully and regularly reviewing the performance of those queries with PlanetScale Insights.

Note

If you choose to use OLAP mode, be prepared to handle errors if the connection to PlanetScale is interrupted for any reason.
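
A minimal sketch of one defensive pattern, assuming Go's database/sql: pin a single connection so that SET workload = OLAP applies to the same connection that runs the query, and check rows.Err() because the result stream can be interrupted mid-flight:

package example

import (
    "context"
    "database/sql"
)

// runOLAP pins one pooled connection, switches it to OLAP mode, and surfaces
// mid-stream interruptions as errors the caller must handle (e.g. by retrying).
func runOLAP(ctx context.Context, db *sql.DB, query string) error {
    conn, err := db.Conn(ctx) // SET applies per connection, so pin one
    if err != nil {
        return err
    }
    defer conn.Close()

    if _, err := conn.ExecContext(ctx, "SET workload = OLAP"); err != nil {
        return err
    }
    rows, err := conn.QueryContext(ctx, query)
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        // ... scan and process each row ...
    }
    // rows.Err reports errors that occur mid-stream, including interrupted
    // connections; be prepared to handle and possibly retry them.
    return rows.Err()
}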

Reducing table size without OPTIMIZE TABLE

If you delete a large number of rows from a table, you may wish to reclaim that storage space. The MySQL OPTIMIZE TABLE statement allows you to reorganize the physical storage of table data to reduce its storage requirements.

However, OPTIMIZE TABLE is a locking operation, so you cannot use it unless you disable safe migrations in PlanetScale, which is not recommended.

Any time you create and deploy a deploy request with safe migrations on, our schema change process uses online DDL, and in doing so, recreates the table that you are modifying — thus reclaiming the storage space.

However, there are often cases where you do not need to make a schema change for a while but would still like to free up the space. In those cases, we suggest the following:

  1. Create a new branch.
  2. Run the following command:
ALTER TABLE <TABLE_NAME> COMMENT 'Optimize table size via DR - <YYYY-MM-DD>';

Where <TABLE_NAME> is the name of your table and <YYYY-MM-DD> is the current date.

  3. Create a deploy request, and merge it into production.

Need help?

Get help from the PlanetScale Support team, or join our GitHub discussion board to see how others are using PlanetScale.