Your database should work for you — not the other way around. And yet, many companies are (sometimes unknowingly) losing millions of dollars every year on their databases. What gives?
There are three unexpected but significant ways that your database might be burning cash:
- Downtime
- Database management
- Database DevOps
The result? You might be bleeding money due to errors (both technical and human), delays, and downtime. Your bottom line suffers. Your users aren’t happy. And it puts unnecessary stress on your engineers.
Let’s dive into exactly how databases can be outrageously expensive in more detail.
Downtime can happen as a result of human error, system immaturity, and application issues. “So we had an hour of downtime — it can’t be that bad, right?” Wrong. Even just minutes of downtime can cost you in various ways:
- There’s the actual cost of lost revenue during the time that you’re down: For example, if you’re a major online retailer, even one hour of downtime can easily translate to millions of dollars in sales lost.
- There’s also the cost of lost contracts: This is especially pertinent for SaaS service providers — for example, a company that provides authentication. If your database goes down, then all of your customers go down with it. Imagine what’ll happen when it comes time for their renewal: They’ll simply find a competitor who didn’t have those performance issues.
Research tells us that even a single hour of downtime can cost an average of $300,000. That might sound extreme, but we’ve seen it happen before.
You might recall how in 2019, Costco had website issues during the holiday shopping season. The cost? $11 million in sales alone.
In March of 2015, Apple had a 12-hour outage that cost them $25 million.
Facebook experienced a 14-hour outage in March 2019 that ended up costing them $90 million.
Think of the ripple effect, too. If Facebook goes down, sure, its everyday users can’t log on. What about the brands that pay Facebook to run ads? These same brands rely on the social media platform to boost their sales. How much money did they lose during this outage?
It’s difficult to even quantify the full financial impact of these pricey database outages, but one thing is clear: downtime should be avoided at all costs.
In addition to the financial hit, downtime also comes with a severe loss of trust. These days, downtime is completely unacceptable to users. Your site has to be fully functional and accessible 24/7/365. If users open your app and it doesn’t load or you’re constantly scheduling maintenance windows, they’re going to end up frustrated and switch to a more reliable competitor.
Have you ever dropped the wrong table or index? Did your queries dramatically slow or fail as a result? The consequences can include site-wide outages, followed by the scramble to restore the right backup. To prevent this, PlanetScale warns you if the table to be dropped was recently queried, so you can avoid dropping a table that is in use. And if a mistake is made, you can revert schema changes while retaining the data in most cases.
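To make the safeguard concrete, here is a minimal sketch of the idea behind a "recently queried" check, written as an application-side wrapper around SQLite. Everything here is illustrative: PlanetScale's real check runs server-side against query statistics, and the names (`SafeDropGuard`, `window_seconds`) are hypothetical, not part of any real API.

```python
import sqlite3
import time

class SafeDropGuard:
    """Illustrative sketch: record when each table was last queried, and
    refuse a DROP if the table was used within a safety window."""

    def __init__(self, conn, window_seconds=7 * 24 * 3600):
        self.conn = conn
        self.window = window_seconds
        self.last_queried = {}  # table name -> unix timestamp

    def query(self, table, sql, params=()):
        # Record the access time, then run the query as usual.
        self.last_queried[table] = time.time()
        return self.conn.execute(sql, params).fetchall()

    def drop_table(self, table, force=False):
        last = self.last_queried.get(table)
        if not force and last is not None and time.time() - last < self.window:
            raise RuntimeError(f"refusing to drop {table!r}: it was queried recently")
        self.conn.execute(f"DROP TABLE {table}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
guard = SafeDropGuard(conn)
guard.query("users", "SELECT * FROM users")
try:
    guard.drop_table("users")          # blocked: the table is in use
except RuntimeError as e:
    print(e)
guard.drop_table("users", force=True)  # explicit override succeeds
```

The point of the design is that the destructive operation requires an explicit override when recent usage suggests the drop is probably a mistake.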
PlanetScale is built on top of Vitess, an open-source database clustering system that enhances the scalability and manageability of MySQL. Vitess has been pressure-tested at scale. It is widely adopted among the hyperscalers and is the primary datastore at companies like Slack, HubSpot, and Etsy. In the time it takes you to read this blog post, Vitess clusters will have served tens of millions of users and hundreds of millions of queries across hundreds of petabytes of data.
Who’s managing your database? How many hours a week are they spending on it? It’s not uncommon for Engineering teams — especially those belonging to small and medium-sized businesses — to not have dedicated database administrators (DBAs) or anyone else whose primary responsibility is managing the database infrastructure.
While an app or company is small, this may be feasible. But as you grow, your database needs to be able to grow alongside you — and it’s going to require more and more attention. If there’s no one dedicated to managing it, then the burden will fall on your engineers.
The average engineer is typically not a database expert. Learning how to deal with scaling, uptime, backups, monitoring, version updates, security, compliance, and so on is time-consuming, especially because your database always needs to stay up. Managing the database can easily keep your engineers on the clock 24/7.
Most importantly, if your engineers are spending their time managing the database, they have less time to make the application updates that set your business apart, while your competitors speed ahead shipping new features that customers actually care about. At the end of the day, the customer only sees the end product, not all the time you spend on infrastructure.
What’s the next step, then?
If you’re at the point where you’re actively trying to scale, you might think that the next natural step is to hire a DBA or even a team of people, especially if your database has become very large and performance is starting to become an issue. So, what’s that going to cost you?
Hiring a DBA isn’t cheap. In fact, salaries for the role rose 6.9% in 2022. Meanwhile, venture funding dropped 50% year-over-year as of Q3 2022. In other words, DBAs are expecting more while budgets are shrinking.
It’s not just the salaries you’re paying, either. You still have the astronomical fees you pay just to host your database. Why settle for this?
What if your database provider could do more for you? This is where PlanetScale shines. With the PlanetScale platform, you essentially get a built-in DBA:
- Git-like workflows for schema changes
- Automatic no-downtime version updates
- Automated and pre-tested backups
- Replicas included on every production branch
- Built-in query monitoring
- Revert button to easily undo schema changes with no downtime
- Horizontal sharding with minimal application changes
- Easy-to-configure options for read-only regions, additional replicas, backups, and more
- World-class support options
- Built-in caching for some frequently used queries
Barstool’s CTO, Andrew Barba, knew that if he wanted to scale rapidly and increase velocity, he’d have to hire a lot more engineers. He was also mindful of their many outages — one of which cost the company a couple of million dollars in just 45 minutes. They ended up moving over to PlanetScale completely. “In the end, we saved 20-30% by switching to PlanetScale.”
Making database schema changes safely is a multi-step process that can be prone to errors and downtime if there are no safeguards in place.
Manually reviewing and validating every database schema change frequently takes hours, if not days or weeks. It’s not something you can really avoid, either: schema changes are included in roughly 57% of all application changes. This tedious review process often becomes a huge blocker to shipping application changes, and the more time your engineers spend on it, the less time they have to keep innovating on your application.
And, again, not making changes to a database isn’t exactly an option. So, companies need a way to do it safely, quickly, and cost-effectively. The problem is, there isn’t really an easy and universal solution to safeguarding this process.
According to the 2019 State of Database Deployments in Application Delivery survey:

> “The ongoing management and deployment of database changes is by far the slowest and riskiest part of the application release process.”
Why? Most of these changes are performed manually by database administrators (DBAs), who spend countless hours creating, reviewing, reworking, and deploying database changes in support of rapid application delivery. This creates a huge bottleneck in the overall release process, because database changes happen every day.
In software development, processes for continuously deploying application code have evolved and matured to the point that even hobby developers typically have some kind of robust CI/CD pipeline in place. Surprisingly, this level of maturity hasn’t fully reached the database world. The database is often treated as separate from the process of shipping application changes, even though it’s an essential part of that process.
The 2019 State of Database Deployments in Application Delivery survey echoes this:

> “Nearly all (92%) report that they face difficulties. The top challenge participants cited is the lack of tools to automate the database deployment process (50%). This was followed closely by long database change review and approval cycles (49%) and having a very manual deployment process with many steps that can fail (48%).”
PlanetScale offers extra protection with safe migrations: branching gives you zero-downtime schema migrations, the ability to revert a schema, and protection against accidental schema changes. With our non-blocking schema change process, we make a copy of the affected tables and apply changes to that copy, instead of directly modifying tables when you deploy a request. And importantly, our revert feature allows you to go back in time, reverting a schema migration deployed to your database while even retaining lost data.
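The copy-then-swap shape of a non-blocking schema change can be sketched in a few lines. This is a deliberately simplified illustration using SQLite; the real Vitess/PlanetScale implementation also streams ongoing writes into the copy so the original table keeps serving traffic during the migration, which this toy version omits.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada'), ('grace')")

# 1. Create a shadow table with the new schema (here: an added email column).
conn.execute(
    "CREATE TABLE _users_shadow "
    "(id INTEGER PRIMARY KEY, name TEXT, email TEXT DEFAULT NULL)"
)

# 2. Backfill existing rows into the shadow table.
conn.execute("INSERT INTO _users_shadow (id, name) SELECT id, name FROM users")

# 3. Cut over with quick renames, keeping the old table around so the
#    change can be reverted without losing data.
conn.execute("ALTER TABLE users RENAME TO _users_old")
conn.execute("ALTER TABLE _users_shadow RENAME TO users")

print(conn.execute("SELECT id, name, email FROM users").fetchall())
# → [(1, 'ada', None), (2, 'grace', None)]
```

Because the original table survives as `_users_old`, a revert is just another pair of renames, which is what makes undoing a deployed schema change cheap and data-preserving.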
Can databases be exceedingly costly? Yes. Do they have to be? Absolutely not. With the right engine operating behind the scenes — plus the important guardrails you need to keep your site up and running — your database can work swiftly, efficiently, and proactively.