Consensus algorithms at scale: Part 7 - Propagating requests
If you're still catching up, you can find links to each article in the series at the bottom of this article.
We have saved the most difficult part for last. This is where we put it all together. Let us start by restating the requirement for propagating requests during a leadership change:
Propagate previously completed requests to satisfy the new leader’s durability requirements.
Recap of parts 1-6
- We have redefined the problem of consensus with the primary goal of solving durability in a distributed system, and are approaching the problem top-down.
- We have shown a way to make durability an abstract requirement instead of the more rigid approach of using majority quorums.
- We have defined a high level set of rules that can satisfy the properties of a consensus system while honoring arbitrary (but meaningful) durability requirements.
- We have shown that conceptualizing leadership change as revocation and establishment opens up implementation options that existing systems don’t utilize.
- We have also shown that there exist two fundamentally different approaches to handling race conditions, and covered their trade-offs.
- In the previous post, we covered how requests are completed as a precursor to analyzing propagation.
The simple case
For lock-based systems, and for planned changes, we have the opportunity to request the current leader to demote itself. In this situation, the current leader could ensure that its requests have reached all the necessary followers before demoting itself. Once this is done, the elector performs the leadership change and the system can resume.
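A planned demotion under these assumptions might look like the following sketch. Everything here is illustrative, not taken from a real system: `planned_demotion`, `replicate`, and the count-based durability check are assumed names and policies.

```python
# Illustrative sketch of a planned demotion; all names and the durability
# policy are assumptions, not taken from a real system.

def replicate(follower, request):
    follower["log"].append(request)
    return 1                              # one acknowledgment

def planned_demotion(leader, followers, is_durable):
    """Flush the leader's pending requests before handing off leadership."""
    leader["accepting"] = False           # stop taking new requests
    for request in leader["pending"]:
        acks = sum(replicate(f, request) for f in followers)
        if not is_durable(acks):
            return False                  # not safe to demote yet
    leader["pending"] = []
    return True                           # the elector may now change leadership

# Durability requirement (assumed): at least two followers hold each request.
leader = {"accepting": True, "pending": ["req-1", "req-2"]}
followers = [{"log": []} for _ in range(3)]
assert planned_demotion(leader, followers, lambda acks: acks >= 2)
```

Once `planned_demotion` returns true, every outstanding request is durable, so the elector can perform the leadership change without risk of losing completed work.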
We will now look at how propagation should work if there are failures.
Discovering completed requests
If a system has encountered failures, then the elector must indirectly revoke the previous leadership by requesting the followers to stop accepting any more requests from that leader. If enough followers are reached such that the previous leader cannot meet the durability criteria for any more requests, we know that the revocation is successful.
This method, apart from guaranteeing that no further requests will be completed by that leader, also allows us to discover all requests that were previously completed.
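As a sketch, this revocation-and-discovery step could look like the following, assuming a simple count-based durability rule; the fencing mechanism, data structures, and names are all hypothetical:

```python
# Illustrative sketch of indirect revocation; the fencing mechanism and the
# count-based durability rule are assumptions, not from a real system.

def revoke_and_discover(followers, old_leader, can_be_durable):
    """Fence followers until the old leader can no longer complete requests."""
    fenced = []
    for f in followers:
        f["fenced"].add(old_leader)       # refuse further requests from it
        fenced.append(f)
        unfenced = len(followers) - len(fenced)
        if not can_be_durable(unfenced):
            # Revocation succeeded. Because the fenced set overlaps every
            # possible durable set, its logs contain all durable requests.
            discovered = set()
            for g in fenced:
                discovered.update(g["log"])
            return discovered
    return None                           # not enough followers: elector blocked

# Five followers; durability (assumed) needs three copies. "r1" is durable;
# "r2" is incomplete and reached only one follower.
followers = [{"fenced": set(), "log": log} for log in
             (["r1"], ["r1"], ["r1", "r2"], [], [])]
found = revoke_and_discover(followers, "old-leader", lambda n: n >= 3)
assert "r1" in found                      # durable requests are always found
```

Note that the incomplete request `"r2"` happened to be discovered here, but with a different fencing order it might not have been; that ambiguity is exactly what the failure cases below explore.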
All we have to do is propagate those requests to satisfy the new leader’s criteria. But this is not as simple as it sounds.
There are many failure cases that make this problem extremely difficult:
- There may be a request that is incomplete. In this situation, the elector may or may not discover this request.
- An elector that discovers a tentative request may not be able to determine if that request has become durable.
- Propagation of a request can fail before completion.
- An elector that does not discover an incomplete request could elect a new leader that accepts a new request, which may fail before completion.
- A subsequent elector may discover multiple such incomplete requests.
- Another elector may discover only one of the incomplete requests, may propagate it as tentative, and fail before marking it as complete.
- A final elector can discover this durable request, and a newer conflicting incomplete request, and may not have enough information to know which one to honor.
To address the above failure modes, let us first look at what we can and cannot do:
- An elector must be able to reach a sufficient number of followers to revoke the previous leadership. If this is not possible, the elector is blocked.
- An elector need not (and may not be able to) reach all the followers of a leader.
Some more inferences:
- An elector is guaranteed to find all requests that have become durable.
- If a request was incomplete, an elector may not find it. If not found, it is free to move forward without that request. When that request is later discovered, it must be canceled.
- If an elector discovers an incomplete request, it may not have sufficient information to know if that request was actually durable or complete. Therefore, it has to assume that it might have completed, and attempt to propagate it.
- If an elector discovers an incomplete request and can determine with certainty that it was incomplete, it can choose either option: act as if it was discovered, or not discovered.
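The inferences above can be condensed into a small decision sketch; the `known_incomplete` flag is a hypothetical stand-in for whatever evidence the elector was able to gather, and all names are illustrative:

```python
# Decision sketch for an elector, following the inferences above; the
# "known_incomplete" flag is a hypothetical stand-in for gathered evidence.

def classify(discovered_request):
    if discovered_request is None:
        # Not discovered: proceed without it; cancel it if it surfaces later.
        return "proceed-and-cancel-later"
    if discovered_request["known_incomplete"]:
        # Provably incomplete: safe to treat as discovered or not discovered.
        return "propagate-or-ignore"
    # Might have completed: assume it did, and propagate it.
    return "propagate"

assert classify(None) == "proceed-and-cancel-later"
assert classify({"known_incomplete": True}) == "propagate-or-ignore"
assert classify({"known_incomplete": False}) == "propagate"
```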
Let us now discuss some options.
Versioning the decisions
It is safe to propagate the latest discovered decision. A decision to propagate a previous decision is a new decision.
We can use the following approach:
- Every request has a time-based version.
- A leader will create its request using a newer version than any previous request.
- An elector that chooses to propagate an incomplete request will do so under a new version.
- An elector that discovers multiple conflicting requests must choose to propagate the latest version.
Completed requests do not need versioning.
The above approach solves two difficult corner cases:
- If we discover two conflicting requests, it means that the latest request was created because the previous elector did not discover the old one. This essentially means that the old one definitely did not complete. So, it is safe to honor the new elector’s decision.
- If we propagate an existing request, it is also under a new version. It will therefore need to satisfy durability requirements under the new version without conflating itself with the old version.
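Here is a minimal sketch of these versioning rules, with a monotonic counter standing in for time-based versions; all names are hypothetical:

```python
import itertools

_clock = itertools.count(1)            # stands in for a time-based version source

def new_version():
    return next(_clock)

def leader_create(payload):
    # A leader creates its request under a newer version than any before it.
    return {"payload": payload, "version": new_version()}

def elector_propagate(discovered):
    # Among conflicting incomplete requests, pick the latest version,
    # then re-propagate it under a fresh (newer) version.
    latest = max(discovered, key=lambda r: r["version"])
    return {"payload": latest["payload"], "version": new_version()}

a = leader_create("a")                 # older request
b = leader_create("b")                 # conflicting request from a newer leader
winner = elector_propagate([a, b])
assert winner["payload"] == "b" and winner["version"] > b["version"]
```

The re-propagated request carries a fresh version, so it must satisfy durability under that new version rather than being conflated with the old one.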
Paxos uses proposal numbers to version its decisions, and Raft uses leadership term numbers.
But you can use other methods for versioning. For example, one could assign timestamps for the requests instead of using leadership terms or proposal numbers.
Most large-scale systems have anti-flapping rules that prevent another leadership change from being performed too soon after the previous one. This is because rapid successive changes are usually due to a deeper underlying problem, and another leadership change will likely not fix it. In most cases, it would aggravate the underlying problem.
In one system I knew of, the request payload was so big that it was causing the transmission to time out. This resulted in a failure being detected and caused a leadership change. However, the new leader was also incapable of completing the request due to the same underlying problem. The problem was ultimately remedied by increasing the timeout.
Serendipitously, anti-flapping rules also mitigate the failure modes described above. Versioning of in-flight requests is less important for such systems.
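An anti-flapping rule can be as simple as a cooldown check. This sketch is illustrative, and the cooldown value is an assumption, not taken from any particular system:

```python
import time

# Illustrative anti-flapping check; the cooldown value is an assumption.
COOLDOWN_SECONDS = 300

def may_change_leadership(last_change_ts, now=None):
    """Refuse another leadership change too soon after the previous one."""
    now = time.time() if now is None else now
    return (now - last_change_ts) >= COOLDOWN_SECONDS

assert may_change_leadership(last_change_ts=0, now=1000)
assert not may_change_leadership(last_change_ts=900, now=1000)
```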
MySQL and Vitess
The MySQL binlogs contain metadata about all transactions. They carry two pieces of relevant information:
- A Global Transaction ID (GTID), which includes the identity of the leader that created the transaction.
- A timestamp.
This metadata is faithfully propagated to all replicas. This information is sufficient to resolve most ambiguities if conflicting transactions are found due to failures.
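As a heavily simplified illustration of using such metadata, a resolver could prefer the conflicting transaction with the later timestamp, mirroring the propagate-the-latest-version rule from the versioning discussion. The GTID values and the tie-break policy here are made up:

```python
# Heavily simplified illustration; real GTIDs are "server_uuid:txn_id", and
# both the values and the timestamp tie-break policy here are made up.

def pick_winner(txn_a, txn_b):
    # Prefer the later timestamp, mirroring the "propagate the latest
    # version" rule from the versioning discussion.
    return txn_a if txn_a["ts"] >= txn_b["ts"] else txn_b

a = {"gtid": "leader-1:42", "ts": 100}
b = {"gtid": "leader-2:7", "ts": 130}
assert pick_winner(a, b)["gtid"] == "leader-2:7"
```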
However, the faithful propagation of the transaction metadata breaks the versioning rule that the decision of a new elector must be recorded under a new timestamp.
Orchestrator, the most popular leadership management system for MySQL, has built-in anti-flapping rules. These rules mitigate the above failure modes, which is why organizations have been able to avoid split-brain scenarios while running MySQL at massive scale.
In Vitess, we use VTOrc, a customized version of Orchestrator, and we inherit the same safeties. But we also intend to tighten some of these corner cases to minimize the need for human intervention if complex failures ever occur.
Stay tuned for part 8 of the series, where we will pull everything together and conclude the series with some final thoughts.
Read the full Consensus Algorithms series
- Consensus Algorithms at Scale: Part 1 — Introduction
- Consensus Algorithms at Scale: Part 2 — Rules of consensus
- Consensus Algorithms at Scale: Part 3 — Use cases
- Consensus Algorithms at Scale: Part 4 — Establishment and revocation
- Consensus Algorithms at Scale: Part 5 — Handling races
- Consensus Algorithms at Scale: Part 6 — Completing requests
- You just read: Consensus Algorithms at Scale: Part 7 — Propagating requests
- Next up: Consensus Algorithms at Scale: Part 8 — Closing thoughts