
PlanetScale vs Neon / Lakebase Benchmarks

This page includes benchmarks that compare the performance of Postgres on PlanetScale with Postgres on Neon / Lakebase, along with all of the resources needed to reproduce these results. We also recommend reading our Benchmarking Postgres blog post, which covers the methodology used in these benchmarks and the steps taken to maintain objectivity. We invite other vendors to provide feedback.

Benchmark configuration

All benchmarks described here were run with the following configuration:

| Provider & Instance          | Region    | vCPUs | RAM   | Storage    | IOPS      |
|------------------------------|-----------|-------|-------|------------|-----------|
| PlanetScale M-320            | us-east-1 | 4     | 32 GB | 929 GB     | unlimited |
| Neon / Lakebase Scale (8 CU) | us-east-1 | 8**   | 32 GB | autoscaled | N/A       |

** Note that Neon / Lakebase gets double the vCPUs so that we could match PlanetScale's RAM. We cover the reasoning in the TPCC results section.

TPCC benchmarks

TPCC is a widely used benchmark for measuring general-purpose OLTP workload performance, including selects, inserts, updates, and deletes.

Benchmark data: A TPCC data set generated with TABLES=20 and SCALE=250 using the Percona sysbench-tpcc scripts. This produces a ~500 GB Postgres database. You can replicate the data by following these instructions.

Benchmark execution: We use the Percona tpcc scripts to run a load with 100 simultaneous connections against each database for 5 minutes (300 seconds).
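
As a rough sketch, the two steps can be scripted as follows. This assumes the Percona sysbench-tpcc scripts are checked out locally and sysbench is installed; the connection parameters are placeholders, and the exact flags are covered in the linked instructions.

```python
import subprocess

# Placeholder connection details -- substitute your own endpoint.
PG_ARGS = [
    "--db-driver=pgsql",
    "--pgsql-host=your-database-host",
    "--pgsql-user=postgres",
    "--pgsql-password=your-password",
    "--pgsql-db=tpcc",
]

# Step 1: generate the ~500 GB TPCC data set (TABLES=20, SCALE=250).
subprocess.run(
    ["./tpcc.lua", *PG_ARGS, "--tables=20", "--scale=250",
     "--threads=20", "prepare"],
    check=True,
)

# Step 2: run the load for 300 seconds with 100 simultaneous connections.
subprocess.run(
    ["./tpcc.lua", *PG_ARGS, "--tables=20", "--scale=250",
     "--threads=100", "--time=300", "run"],
    check=True,
)
```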

Queries per second

Our first benchmark measures queries per second (QPS) at 32 and 64 connections, and shows that PlanetScale performs significantly better:

The PlanetScale database averaged ~18,000 QPS; Neon / Lakebase averaged ~12,500 QPS. Because Neon / Lakebase does not offer a memory-optimized 8:1 RAM:vCPU ratio, we bumped it up to 8 compute units, giving it 8 vCPUs and 32 GB of RAM. This matches PlanetScale's RAM capacity while giving Neon / Lakebase double the vCPUs.

p99 latency

We also measured the p99 latency for the duration of the benchmark run (lower is better):

Despite both databases running in us-east-1, PlanetScale shows much lower latency and less variability, thanks to locally attached NVMe drives with unlimited IOPS, 8th-generation AArch64 CPUs, and high-performance query-path infrastructure.

OLTP benchmarks

In addition to TPCC, we run the OLTP Read-only sysbench benchmark. OLTP workloads tend to be 80%+ reads, and this benchmark allows us to isolate performance for such queries.

Benchmark data: A simple OLTP data set generated with TABLES=10 and SCALE=130000000 using standard sysbench. This produces a ~300 GB Postgres database. You can find instructions for replicating this data here.

Benchmark execution: We use the standard sysbench tool with the oltp_read_only and oltp_point_selects benchmarks. You can find instructions for replicating this benchmark here.
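
As a sketch, the standard sysbench invocation might look like the following. We assume here that the SCALE value above maps to sysbench's --table-size flag; the connection details are placeholders, and the linked instructions have the exact commands.

```python
import subprocess

# Placeholder connection details -- substitute your own endpoint.
PG_ARGS = [
    "--db-driver=pgsql",
    "--pgsql-host=your-database-host",
    "--pgsql-user=postgres",
    "--pgsql-password=your-password",
    "--pgsql-db=sbtest",
]

# Step 1: generate the ~300 GB data set (10 tables of 130M rows each).
subprocess.run(
    ["sysbench", "oltp_read_only", *PG_ARGS,
     "--tables=10", "--table-size=130000000", "prepare"],
    check=True,
)

# Step 2: run the read-only workload; repeat with oltp_point_selects.
# --threads and --time are illustrative; see the linked instructions.
subprocess.run(
    ["sysbench", "oltp_read_only", *PG_ARGS,
     "--tables=10", "--table-size=130000000",
     "--threads=64", "--time=300", "run"],
    check=True,
)
```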

Queries per second

This benchmark contains only SELECT queries, including ones with range scans and aggregations.

The PlanetScale database averaged ~33,000 QPS; Neon / Lakebase averaged a much lower ~27,000 QPS. PlanetScale not only excels in QPS but also delivers much more consistent performance over time, leading to better predictability.

p99 latency

While running this benchmark, we measured the p99 latency of queries (lower is better):

PlanetScale offers significantly lower latency and better consistency, which is desirable for predictable performance.

Query-path latency

We measured pure query-path latency by running SELECT 1; 200 times on a single connection. This isolates the per-query overhead that applies to every database query.
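
A minimal sketch of this measurement, assuming the psycopg2 driver (any Postgres client works) and a placeholder connection string:

```python
import statistics
import time

import psycopg2  # pip install psycopg2-binary

# Placeholder DSN -- substitute your own connection string.
conn = psycopg2.connect("postgresql://user:password@your-database-host:5432/postgres")
cur = conn.cursor()

# Issue SELECT 1; 200 times over the same connection, timing each round trip.
samples_ms = []
for _ in range(200):
    start = time.perf_counter()
    cur.execute("SELECT 1;")
    cur.fetchone()
    samples_ms.append((time.perf_counter() - start) * 1000)

samples_ms.sort()
print(f"median: {statistics.median(samples_ms):.3f} ms")
print(f"p99:    {samples_ms[int(len(samples_ms) * 0.99) - 1]:.3f} ms")

cur.close()
conn.close()
```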

The results compare five connection types: PlanetScale with PSBouncer, a standard PlanetScale connection, a direct-to-Postgres connection on PlanetScale, a pooled Neon / Lakebase connection, and a non-pooled Neon / Lakebase connection. Lower is better.

All connection types on PlanetScale have significantly lower latency than Neon / Lakebase. (Note: the direct connection to PlanetScale is in the same AZ, which may give it an advantage; however, we see consistently better latency than Neon / Lakebase even when this is not the case.)

Cost

A PlanetScale M-320 with 929 GB of storage costs $1,399/mo. This includes three nodes, one primary and two replicas, each with 4 vCPUs and 32 GB RAM. Replicas can be used to handle additional read queries and to provide high availability. The benchmark results shown here utilized only the primary.

This test was run using the Neon / Lakebase Scale plan, which includes 750 CU-hours of compute and 50 GB of storage for $69/mo. Additional compute costs $0.16 per CU-hour, and additional storage costs $1.50 per GB. To sustain 8 CU for a full month would cost $69 + ((8 CU × 730 hours) − 750 CU-hours) × $0.16 = $883.40. To sustain 929 GB of storage for a full month would cost (929 GB − 50 GB) × $1.50 = $1,318.50. This totals $2,201.90/mo, and it does not include any read replicas. Adding even a single 8 CU replica would bring the total to $3,085.30/mo, more than double the cost of the M-320.
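
The arithmetic works out as follows, using the prices quoted above and a 730-hour month:

```python
HOURS_PER_MONTH = 730

# PlanetScale: M-320 with 929 GB of storage (primary plus two replicas).
planetscale_monthly = 1399.00

# Neon / Lakebase Scale plan: $69/mo base with 750 CU-hours and 50 GB included.
base = 69.00
compute = (8 * HOURS_PER_MONTH - 750) * 0.16  # extra CU-hours at $0.16
storage = (929 - 50) * 1.50                   # extra GB at $1.50/GB
neon_monthly = base + compute + storage

print(f"PlanetScale M-320: ${planetscale_monthly:,.2f}/mo")
print(f"Neon / Lakebase:   ${neon_monthly:,.2f}/mo")  # $2,201.90, before replicas
```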

PlanetScale offers better performance at a lower cost for applications that require high availability and resiliency.