The promises and realities of the relational database model
By Shlomi Noach
The relational model, tables with a predefined set of typed columns and with cross-table references, is one of the oldest surviving models in computer science and in real-life deployments. It's rare to see models that survive decades of software evolution, and this survival suggests the model is sensible, practical, and solid. The relational model makes sense because it matches how we organize our data: into elements, their properties, and the associations between different elements.
Along comes SQL, a declarative language expressive enough to answer both the simplest and the most complex questions about our data. It's well understood by DBAs, developers, data scientists, analysts, and more. SQL or its variants can be found in all relational databases, and in popular non-relational databases as well.
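As a quick illustration (the `customers` and `orders` tables here are hypothetical, not from any particular system), the same declarative language covers both ends of that spectrum:

```sql
-- A trivial question: look up a single customer.
SELECT name FROM customers WHERE id = 42;

-- A more complex one: the ten biggest spenders over the past year,
-- computed by joining and aggregating across tables.
SELECT c.name, SUM(o.amount) AS total_spend
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.created_at >= NOW() - INTERVAL 1 YEAR
GROUP BY c.id, c.name
ORDER BY total_spend DESC
LIMIT 10;
```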
But the relational model has drawbacks, and those are mostly on the operational side: while database systems optimize heavily for reads and writes, they do not optimize nearly as much for metadata changes, and specifically for schema changes. There are various problems with schema changes, but the most demanding one is that a schema change is an operational undertaking, and one that is outside the hands of the developer.
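For concreteness, here is what such a change looks like (the table and column are again hypothetical). The statement is trivial to write; the operational weight is in running it against a large, busy table in production:

```sql
-- One line of DDL; on a table with hundreds of millions of rows this can
-- mean a lengthy table rebuild, locking, and replication lag.
ALTER TABLE orders ADD COLUMN shipping_notes TEXT;
```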
I believe the issue of schema management is one of the major forces pushing developers away from the relational model and into NoSQL solutions.
There is historical context here. Back in the day, schema changes were infrequent. There was no internet, and products evolved to their next version over months. DBAs commonly acted as gatekeepers to the database: they ensured the schema was valid and the queries performant, and they carefully evaluated any request for a schema change. Making a change necessarily involved discussions between different owners, and the change itself was not considered part of the data path. You would, for example, take the system down for scheduled maintenance, run a series of schema changes, bring the system back up, and repeat every few months.
Times have changed. Products are offered as a service over the internet, and taking systems down for maintenance is far less tolerated. We run continuous deployments, and development velocity has increased. Today, it's not uncommon for popular services to run multiple schema migrations per day. The roles have changed, too: today's DBAs are more of enablers, still ensuring the database service is reliable and performant, but also clearing a path for developers to do their work and get what they need.
And while relational database systems have evolved to meet today's traffic volumes, they have not made similar advances in meeting today's developers' needs. The problem intensifies as we try to give developers the velocity they need while the RDBMS remains an impediment in their path. Consider what running a schema migration actually demands of a developer.
First, it requires a deeper understanding of database behavior, of metadata locking, and of the operational issues that can arise as a result of a migration: locks, resource exhaustion, replication lag.
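To illustrate the metadata-locking point with MySQL (a hedged sketch; the session layout and names are made up), a single long-running query can turn an otherwise quick DDL statement into a stall for the whole application:

```sql
-- Session 1: a long-running query holds a shared metadata lock on `orders`.
SELECT COUNT(*) FROM orders;  -- still running

-- Session 2: the migration now waits for an exclusive metadata lock.
ALTER TABLE orders ADD COLUMN shipping_notes TEXT;  -- blocked

-- Session 3: even trivial reads queue up behind the waiting ALTER;
-- SHOW PROCESSLIST shows them all "Waiting for table metadata lock".
SELECT * FROM orders WHERE id = 42;  -- blocked as well
```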
Then, it requires access or privileges to run schema change operations. Developers need to identify where in production their table is found, and which specific servers act as primaries. In the MySQL world, people commonly use third-party tools such as gh-ost or pt-online-schema-change, which run an online schema change through emulation and replacement. But these tools require access to your production systems, and the developer needs to understand how to invoke them, how to configure throttling, how to observe and monitor their progress, and how to clean up their artifacts.
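As a sketch of what that looks like with gh-ost (host names, schema, thresholds, and paths below are placeholders, and credentials are omitted; consult the tool's documentation for the authoritative flag list):

```sh
gh-ost \
  --host=replica-1.example.com \
  --database=shop \
  --table=orders \
  --alter="ADD COLUMN shipping_notes TEXT" \
  --max-load=Threads_running=25 \
  --throttle-control-replicas=replica-1.example.com \
  --max-lag-millis=1500 \
  --chunk-size=1000 \
  --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
  --execute
```

Each of those flags encodes an operational decision the developer is suddenly responsible for: where to read from, how hard to push, when to back off, and when to cut over.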
It requires developers to be able to handle errors. These could be anything from an internal database error, to a tooling error, to a mid-migration failover scenario.
It requires coordination and scheduling. You normally don't want to run (deploy) multiple schema changes at once, so developers need to sync with each other and prioritize work. The database system provides no flow for this, nothing resembling the practices developers take for granted for code changes, such as version control and conflict resolution.
The operational expertise illustrated above, along with the need to synchronize changes, reinstates the DBA as the database's gatekeeper, this time as a forced constraint: the single coordinator, the resolver of issues and errors, the scheduler of schema changes. A small, young company can get by, but as the business grows, the need for a person who coordinates and runs schema changes becomes apparent.
As a result, the developer is no longer the owner of their change. They need to open tickets, grab someone's attention, trust someone else to run their schema change, and check back for updates. I've seen developers take different routes to avoid this path:
- Stalling development and aggregating multiple changes into single deployments.
- Overloading schema-less JSON columns with more content (sketched after this list).
- Avoiding schema changes and tweaking the code in suboptimal ways instead.
- Moving away from relational databases and into document stores, trading off the advantages of RDBMS for faster deployments.
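The JSON workaround from the list above deserves a sketch (the column and key names are made up). Rather than requesting a new typed column and waiting on the migration queue, the developer tucks the value into an existing catch-all JSON blob, trading away typing, indexing, and integrity checks:

```sql
-- MySQL: stash a new attribute in an existing JSON column instead of
-- running an ALTER TABLE to add a proper `shipping_notes` column.
UPDATE orders
SET extra = JSON_SET(extra, '$.shipping_notes', 'leave at front desk')
WHERE id = 42;
```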
An RDBMS schema change is an alien operation for many developers. It feels nothing like a code deployment, and it does not enjoy the level of automation developers have come to expect for their code. There is no conflict resolution mechanism to deal with rapid development across large teams. The risk to production is high, and the database offers no mechanism for undeploying a change.
These are some of the things we had in mind while developing PlanetScale. We believe the relational model is solid, and that reducing its operational friction goes a long way. We wanted to create a developer-friendly experience, one that gives developers back ownership of their changes. We believe this experience can bring developers joy.