PlanetScale vs TigerData Benchmarks
This page includes benchmarks that compare the performance of Postgres on PlanetScale with Postgres on TigerData, along with all of the resources needed to reproduce these results. We also recommend reading our Benchmarking Postgres blog post, which covers the methodology used in these benchmarks and the steps taken to maintain objectivity. We invite other vendors to provide feedback.
TPCC Benchmark configuration
| Provider & Instance | Region | vCPUs | RAM | Storage | IOPS |
| --- | --- | --- | --- | --- | --- |
| PlanetScale M-320 | us-east-1 | 4 | 32GB | 929GB | Unlimited |
| TigerData Scale plan | us-east-1 | 8 | 32GB | Auto-scale EBS | 16k upgrade |
- Configuration: All Postgres configuration options were left at each platform's defaults, except for connection limits and timeouts, which may be modified to facilitate benchmarking.
- Benchmark machine: Benchmarks were executed from a c6a.xlarge in us-east-1, the same region as both databases.
Note that TigerData gets double the vCPUs so that we could match PlanetScale's RAM. We cover the reasoning in the TPCC results section.
TPCC Benchmarks
TPCC is a widely-used benchmark to measure general-purpose OLTP workload performance. This includes selects, inserts, updates, and deletes.
Benchmark data: A TPCC data set generated with TABLES=20 and SCALE=250 using the Percona sysbench-tpcc scripts. This produces a ~500 gigabyte Postgres database. You can replicate the data by following these instructions.
Benchmark execution: Using the Percona tpcc scripts, we run a load with 100 simultaneous connections against each database for 5 minutes (300 seconds).
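As a rough sketch of what the prepare and run phases look like, the script below drives the Percona sysbench-tpcc tpcc.lua entry point from the benchmark machine. The host, credentials, and database name are placeholders, and the flags reflect standard sysbench/sysbench-tpcc options rather than the exact invocation used here; the linked instructions are authoritative.

```python
# Sketch: drive the Percona sysbench-tpcc prepare and run phases.
# Run from inside a sysbench-tpcc checkout; host, credentials, and database
# name are placeholders, not the values used for these benchmarks.
import subprocess

COMMON_ARGS = [
    "--db-driver=pgsql",
    "--pgsql-host=your-database-host",  # placeholder
    "--pgsql-port=5432",
    "--pgsql-user=benchmark",           # placeholder
    "--pgsql-password=secret",          # placeholder
    "--pgsql-db=sbtest",                # placeholder
    "--tables=20",                      # TABLES=20
    "--scale=250",                      # SCALE=250
]

def tpcc(command, extra=()):
    """Invoke ./tpcc.lua with the shared connection and sizing arguments."""
    subprocess.run(["./tpcc.lua", *COMMON_ARGS, *extra, command], check=True)

# Load the ~500 GB data set once.
tpcc("prepare")

# Run the 5-minute (300 second) load with 100 simultaneous connections,
# printing throughput once per second.
tpcc("run", extra=["--threads=100", "--time=300", "--report-interval=1"])
```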
Queries per second
Our first benchmark measures queries per second (QPS) at 32 connections and 64 connections, revealing a significant difference:
Click the graphs in the sidebar to toggle the number of connections. The PlanetScale database averaged ~17,000 QPS; TigerData averaged ~8,000 QPS. TigerData does not offer a memory-optimized 8:1 RAM:vCPU ratio, so we sized its instance up to 8 vCPUs and 32GB of RAM. This gives TigerData the same RAM capacity as PlanetScale while doubling its vCPUs. We also added the 16,000 IOPS upgrade to allow for better I/O performance.
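If you capture the per-second output from a run started with --report-interval=1, the average QPS can be derived by averaging the qps readings sysbench prints for each interval. The sketch below (with a placeholder log path) illustrates that post-processing step; it is not the exact tooling behind the graphs.

```python
# Sketch: average the per-second "qps:" readings from a saved sysbench log.
import re
import statistics

QPS_PATTERN = re.compile(r"qps:\s*([0-9]+(?:\.[0-9]+)?)")

def average_qps(log_path):
    readings = []
    with open(log_path) as log:
        for line in log:
            match = QPS_PATTERN.search(line)
            if match:
                readings.append(float(match.group(1)))
    return statistics.mean(readings) if readings else 0.0

print(f"average QPS: {average_qps('tpcc_run.log'):,.0f}")  # placeholder path
```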
p99 latency
We also measured the p99 latency for the duration of the benchmark run (lower is better):
Despite both being in us-east-1, PlanetScale shows much lower latency with fewer spikes due to locally-attached NVMe drives with unlimited IOPS, 8th-generation AArch64 CPUs, and high-performance query path infrastructure.
Query-path latency
We measured pure query-path latency by running SELECT 1; 200 times on a single connection. This measures the baseline overhead that every database query incurs.
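This measurement is easy to reproduce. The sketch below approximates it with psycopg2 (our choice of client here, not necessarily the one used for the published numbers) and a placeholder connection string: it times 200 consecutive round trips on one connection and reports the median and p99.

```python
# Sketch: time 200 SELECT 1; round trips on a single connection and report
# the median and p99 latency. The connection string is a placeholder.
import statistics
import time

import psycopg2

conn = psycopg2.connect("postgresql://user:password@your-database-host:5432/postgres")
cur = conn.cursor()

latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    cur.execute("SELECT 1;")
    cur.fetchone()
    latencies_ms.append((time.perf_counter() - start) * 1000)

# statistics.quantiles with n=100 returns 99 cut points; index 98 is the p99.
print(f"median: {statistics.median(latencies_ms):.3f} ms")
print(f"p99:    {statistics.quantiles(latencies_ms, n=100)[98]:.3f} ms")

cur.close()
conn.close()
```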
Results compare PlanetScale + PSBouncer, a standard PlanetScale connection, direct-to-Postgres on PlanetScale, and a direct connection to TigerData. Lower is better.
Direct connections to PlanetScale show significantly lower latency than connections to TigerData. Though, to be transparent, the direct connection to PlanetScale is same-AZ, which may provide an advantage depending on the AZ of the TigerData node.
Cost
A PlanetScale M-320 with 929GB of storage costs $1,399/mo. This includes three nodes with 4 vCPUs and 32GB RAM each, one primary and two replicas. Replicas can be used for handling additional read queries and for high-availability. The benchmark results shown here only utilized the primary.
For TigerData, a single 8 vCPU + 32GB RAM instance costs $1.45 per hour, which comes out to $1.45 * 730 hours = $1,058.50/mo. With the IO boost, storage costs $0.41 per hour * 730 hours = $299.30/mo. This totals $1,357.80/mo for a single node. To match the capabilities and availability of the 3-node PlanetScale M-320, we must add two replicas. At minimum, this requires 3x the cost of the compute nodes, bringing the total to $3,474.80/mo.
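For clarity, the monthly totals above work out as follows (a quick check of the arithmetic, using the per-hour prices quoted in this section):

```python
# Reproduce the TigerData cost arithmetic using the prices quoted above.
HOURS_PER_MONTH = 730

compute_per_node = 1.45 * HOURS_PER_MONTH       # $1,058.50/mo per 8 vCPU / 32GB node
storage_with_io_boost = 0.41 * HOURS_PER_MONTH  # $299.30/mo

single_node = compute_per_node + storage_with_io_boost      # $1,357.80/mo
three_nodes = 3 * compute_per_node + storage_with_io_boost  # $3,474.80/mo

print(f"single node:            ${single_node:,.2f}/mo")
print(f"primary + two replicas: ${three_nodes:,.2f}/mo")
```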
PlanetScale offers better performance at a lower cost for applications that require high availability and resiliency.