The redux of the fallacies of distributed computing
The fallacies of distributed computing are a topic that is very near and dear to my heart. They are a set of assertions describing false assumptions that developers of distributed applications invariably make.
The first two are:
- The network is reliable.
- Latency is zero.
Whenever I talk about distributed computing, the fallacies come up. And they trip people up, over and over and over again. Even people who should know better.
Which is why I read this post with horror, mostly because of the following quote:
As networks become more redundant, partitions become an increasingly rare event. And even if there is a partition, it is still possible for the majority partition to be available. Only the minority partition must become unavailable. Therefore, for the reduction in availability to be perceived, there must be both a network partition, and also clients that are able to communicate with the nodes in the minority partition (and not the majority partition).
Now, to be clear, Daniel literally has a PhD in CS and has published several papers on the topic. It is possible that he is speaking in very precise terms that don't necessarily match the way I read this statement. But even so, I believe that this statement is absolutely and horribly wrong.
A network partition is rare, you say? This 2014 paper from ACM Queue shows that it is anything but. Oh, sure, in the grand scheme of things, a network partition is an extremely rare event in a properly maintained data center; let's say the chance of one happening is 1 / 500,000 (rough numbers from the Google Chubby paper). That still gives you 61 outages(!) in a few weeks.
Go and read the ACM paper; it makes for fascinating reading, in the same way that you can't look away from a horror movie, however much you want to.
And this is talking just about actual network partitions. The problem is that, from the perspective of the individual nodes, the network is not nearly the only reason why you might experience one:
- If you are running a server on a managed platform, you might hit a stop-the-world GC event. In some cases, this can last for minutes.
- In an unmanaged language, your malloc() may be doing maintenance tasks and block unexpectedly at an inopportune point.
- You may be swapping to disk.
- The OS might have decided to randomly kill your process (Linux OOM killer).
- Your workload has hit some critical point (see the Expires section) and caused the server to wait a long time before it can reply.
- Your server is on a VM that was moved between physical machines.
- A certificate expired on one machine, but not on others, meaning that the machine can contact the others but cannot be contacted itself (except that already established connections still work).
All of these come before we even consider the fact that we are dealing with imperfect software that may have bugs, that humans are tinkering with the system (such as deploying a new version) and may mess things up, etc.
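From the client's side, every one of these failures looks exactly the same: no reply within the deadline. Here is a minimal sketch of the usual defensive pattern, a timeout plus failover to the next node. The node addresses, port, and timeout value are illustrative, not taken from any real system:

```python
import socket

# Hypothetical cluster topology; in a real system this comes from configuration.
NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
TIMEOUT_SECONDS = 2.0  # illustrative; tune to your own latency budget

def send_request(payload: bytes) -> bytes:
    """Try each node in turn; treat a timeout the same as a partition.

    From the client's point of view, a GC pause, a swapping server, and a
    genuine network partition are indistinguishable: no reply in time.
    """
    last_error = None
    for node in NODES:
        try:
            with socket.create_connection((node, 8080), timeout=TIMEOUT_SECONDS) as conn:
                conn.sendall(payload)
                return conn.recv(4096)
        except (socket.timeout, OSError) as e:
            last_error = e  # looks like a partition; try the next node
    raise ConnectionError("all nodes unreachable") from last_error
```

The client cannot tell a GC pause from a cut cable; all it can decide is how long it is willing to wait before giving up on a node.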
So no, I utterly reject the idea that partitions are rare events in any meaningful manner. Sure, they are rare, but a million to one event? We can do a million packets per second. That means that something that is incredibly rare can still happen multiple times a day. In practice, you need to be aware that your software will run into a partition, and that you will need a way to handle that.
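To put that in concrete numbers, here is a quick back-of-the-envelope calculation (the rates are illustrative, not measurements):

```python
# How often does a "million to one" event happen at a million packets
# per second? Back-of-the-envelope only.
packets_per_second = 1_000_000
probability_per_packet = 1 / 1_000_000
seconds_per_day = 86_400

events_per_day = packets_per_second * probability_per_packet * seconds_per_day
print(f"{events_per_day:,.0f} rare events per day")  # 86,400 -- about one per second
```

At those rates, "incredibly rare" works out to roughly once a second.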
And go read the fallacies again; maybe print them and stick them on a wall somewhere nearby. If you are working with a distributed system, it is important to remember these fallacies, because they will trip you up.
Comments
You might also want to note that...
https://www.goodreads.com/quotes/229767-million-to-one-chances-crop-up-nine-times-out-of-ten
One question. Since cluster-wide transactions are available, could you say RavenDB can work as a CP system? A RavenDB cluster-wide transaction might fail under a network partition, so it loses the "A" from AP. But does it gain the "C" and become CP? I don't think so, because after a cluster-wide transaction you can still perform a stale read from a minority node. What do you think?
I couldn't agree more. In modern development there is so much more to communication than a simple wire. There are a million things that can go wrong before a client's communication attempt (say, JavaScript issuing a GET request) reaches the network card, and another million things between the communication landing on the machine and actually reaching the piece of code that will process it. And there is also the journey back, right?
And malloc may not allocate memory at all. The allocation happens only when you actually use (write to) the memory pointer you were given.
Jesus, This gets a bit complex, because there are several options you can employ, and they give different guarantees (sometimes using cluster-wide tx and sometimes not, on multiple clients concurrently). However... if for a certain document you only use cluster-wide tx, you are guaranteed to see monotonic progress on that document from the same client. The way it works, we remember the last cluster-wide tx id on that client and require that any node we touch has at least reached that level. In other words, a cluster-wide tx followed by a read from a partitioned node will fail (and fail over to a live node, hopefully). If you are using separate clients, then you'll need to maintain that last cluster-wide tx id across them yourself, though.
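For illustration only, here is a minimal sketch of that mechanism: the client remembers the highest cluster-wide tx index it has seen and refuses to read from any node that hasn't applied it yet. This is not the actual RavenDB client API; the names, the in-memory Node class, and the replication shortcut are all assumptions made up for the example:

```python
class Node:
    """In-memory stand-in for a database node (purely illustrative)."""
    def __init__(self):
        self.applied_index = 0  # last cluster-wide tx this node has applied
        self.data = {}

    def apply(self, index, key, value):
        self.applied_index = index
        self.data[key] = value

class Session:
    """Client session enforcing monotonic reads across failover."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.last_tx_index = 0  # highest cluster-wide tx index this client saw

    def cluster_wide_put(self, key, value):
        # A real implementation goes through consensus; here we just bump
        # the index and replicate to a majority, leaving one node lagging
        # to simulate the minority side of a partition.
        self.last_tx_index += 1
        for node in self.nodes[:-1]:
            node.apply(self.last_tx_index, key, value)

    def get(self, key):
        for node in self.nodes:
            # Reject any node that hasn't caught up to our last tx --
            # e.g. a node stuck in the minority side of a partition.
            if node.applied_index >= self.last_tx_index:
                return node.data.get(key)
        raise RuntimeError("no node has reached our last cluster-wide tx")

nodes = [Node(), Node(), Node()]
session = Session(nodes)
session.cluster_wide_put("users/1", "some value")
print(session.get("users/1"))  # never served by the lagging node
```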
Jorge, There is also: https://blogs.msdn.microsoft.com/larryosterman/2004/03/30/one-in-a-million-is-next-tuesday/
Reading your article made me believe he said that we should not care about network partitions. He didn't say that; he only said that due to their scarcity you should probably choose CP over AP, which is a safer model: consistency can be a 100% guarantee, 100% availability does not exist, and the loss in availability is not high (obviously you do not agree with this statement).
Olivier, Azure had outages on:
- Sep 4, 2018 - pretty big, took a lot of stuff down
For full history, see: https://azure.microsoft.com/en-gb/status/history/
They literally had an outage _today_. For fun, the root cause is:
Engineers determined that a network device experienced a hardware fault,
Actually getting accurate history is hard, but here is an AWS outage from 2018:
https://virtualizationreview.com/articles/2018/03/05/aws-outage.aspx
It took down a lot of people.
I guess that the issue here is what you mean by rare. Yes, errors are rare, but you are pretty much guaranteed to hit them at some point. And at that point, what do you do?
In the same way that a seat belt isn't comfortable and you never want to actually have to use it, availability is all about the cost of the error. If you are riding a go-kart at the kiddie playground, a seat belt isn't required. But for a business that may generate most or all of its revenue from the network? That is quite important.
Case in point, the Azure outage? It killed the MS symbol servers. That left us unable to properly analyze core dumps from customers and delayed bug fixes.