

Transparency in benchmarking

Ben Dicken [@BenjDicken]

Database benchmarks are imperfect. They are also useful.

No benchmark can tell you exactly how a database will perform for your application. Workload shape, data size, region placement, storage, configuration, and cost all matter. But fair benchmarks help customers understand tradeoffs, compare options, and ask better questions before choosing infrastructure.

The DeWitt clause

Many cloud vendors include language in their terms that restricts comparative benchmarking. These restrictions are called "DeWitt clauses", named after database researcher David DeWitt. That is a strange legacy for someone whose work helped move the database industry forward by measuring real systems and publishing results.

Previously, PlanetScale also included a DeWitt clause in our Acceptable Use Policy (AUP). Today, we are removing it in favor of a more open "Benchmarking" section in our AUP.

The new section reads:

You may perform benchmark tests (“Benchmark”) of the Services, provided that the Benchmark is conducted in good faith and uses a fair and transparent methodology. Please refer to PlanetScale's published benchmarking best practices. Except with respect to Beta Features, you may disclose the results of the Benchmark. If you perform or disclose, or direct or permit any third party to perform or disclose, any Benchmark of the Services, you (i) will include in such disclosure, and will disclose to PlanetScale, all information necessary to replicate such Benchmark, and (ii) agree that PlanetScale may perform and disclose the results of Benchmarks of your products or services, irrespective of any restrictions on Benchmarks in the terms governing your products and services.

Any Benchmark must be conducted in accordance with the Agreement, including this Acceptable Use Policy. The Benchmark must not interfere with the Services or misrepresent the configuration, methodology, results, or cost of the Services or any compared service.

Benchmarks have earned a bad reputation because they are frequently conducted poorly. Sometimes this is deliberate, a practice often called "benchmarketing." Other times it stems from inexperience: many engineers are never trained in fair benchmarking.

A new standard

Anyone benchmarking PlanetScale should follow the best practices outlined in our benchmarking guide. These practices come from our deep experience benchmarking databases in the cloud, where differences in topology, server location, region, workload, and instance type materially affect results.
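To illustrate the kind of transparency the guide asks for, here is a minimal sketch of a query-latency benchmark that warms up first, times individual runs, and reports percentiles alongside the parameters needed to replicate it. The helper names and numbers below are our own illustration, not PlanetScale tooling; the simulated query stands in for a real database call so the sketch runs anywhere.

```python
import random
import statistics
import time

def run_query():
    # Stand-in for a real database call (e.g. executing a prepared
    # statement over a persistent connection). Simulated here so the
    # sketch runs without a live database.
    time.sleep(random.uniform(0.001, 0.005))

def benchmark(iterations=200, warmup=20):
    """Time individual query executions and report latency percentiles.

    Reporting percentiles rather than only a mean, and disclosing the
    iteration count, warmup, and environment, is part of what makes a
    benchmark reproducible by others.
    """
    # Warmup: let caches, connection pools, and query plans settle
    # before any measurement is recorded.
    for _ in range(warmup):
        run_query()

    latencies_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_query()
        latencies_ms.append((time.perf_counter() - start) * 1000)

    latencies_ms.sort()
    return {
        "p50": statistics.median(latencies_ms),
        "p95": latencies_ms[int(0.95 * (len(latencies_ms) - 1))],
        "p99": latencies_ms[int(0.99 * (len(latencies_ms) - 1))],
        "iterations": iterations,
    }

if __name__ == "__main__":
    print(benchmark())
```

A real run would also record the instance type, region, client placement, and dataset size next to these numbers, since any one of them can swing the result more than the database itself.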

We encourage other vendors, analysts, and practitioners to use the same standard. Benchmarks should be deep, thorough, technically sound, and transparent enough for others to understand and reproduce.

Our ask

We invite other vendors to adopt this same language and standard in their own AUPs. Allow public benchmarking, remove DeWitt clauses, and hold benchmarks to clear expectations for fairness and transparency.

Customers should be able to compare the systems they rely on.