The redux of the fallacies of distributed computing

time to read 4 min | 612 words

The fallacies of distributed computing are a topic that is very near and dear to my heart. They are a set of assertions describing false assumptions that distributed applications invariably make.

The first two are:

  • The network is reliable.
  • Latency is zero.

Whenever I talk about distributed computing, the fallacies come up. And they trip people up, over and over and over again. Even people who should know better.

Which is why I read this post with horror, mostly because of the following quote:

As networks become more redundant, partitions become an increasingly rare event. And even if there is a partition, it is still possible for the majority partition to be available. Only the minority partition must become unavailable. Therefore, for the reduction in availability to be perceived, there must be both a network partition, and also clients that are able to communicate with the nodes in the minority partition (and not the majority partition).

Now, to be clear, Daniel literally has a PhD in CS and has published several papers on the topic. It is possible that he is speaking in very precise terms that don’t necessarily match the way I read this statement. But even so, I believe that this statement is absolutely and horribly wrong.

A network partition is rare, you say? This 2014 paper from ACM Queue shows that it is anything but. Oh, sure, in the grand scheme of things, a network partition is an extremely rare event in a properly maintained data center; let’s say that there is a 1 / 500,000 chance of it happening (rough numbers from the Google Chubby paper). That still gives you 61 outages(!) in a few weeks.

Go and read the ACM paper; it makes for fascinating reading, in the same way that you can’t look away from a horror movie, however much you want to.

And this is talking just about network partitions. The problem is that, from the perspective of an individual node, that is not nearly the only reason why you might find yourself in a partition (see the sketch after this list):

  • If you are running a server on a managed platform, you might hit a stop-the-world GC collection event. In some cases, this can last for minutes.
  • In an unmanaged language, your malloc() may be doing maintenance tasks, causing an unexpected block in a bad location.
  • You may be swapping to disk.
  • The OS might have decided to randomly kill your process (Linux OOM killer).
  • Your workload has hit some critical point (see the Expires section), causing the server to wait a long time before it can reply.
  • Your server is on a VM that was moved between physical machines.
  • A certificate expired on one machine, but not on the others, meaning that it can contact the other machines, but cannot be contacted by them (except that already established connections still work).
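
To make the point concrete, here is a minimal Python sketch of what all of the items above look like from the calling node’s side. The host name, port, and the PING/PONG exchange are made up for illustration; the point is that a timeout carries no information about *why* the other node did not answer.

```python
import socket


def check_node(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the node answered within the deadline, False otherwise."""
    try:
        # Open a TCP connection with a hard deadline.
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.settimeout(timeout)
            conn.sendall(b"PING\n")
            return conn.recv(4) == b"PONG"
    except (socket.timeout, OSError):
        # This branch is reached for a genuine network partition, a GC pause,
        # a node swapping to disk, an OOM-killed process, a VM being migrated,
        # or an expired certificate on the listener. The caller cannot tell
        # which one it was; all it knows is that the node did not answer in time.
        return False


if not check_node("node-2.example.local", 8080):  # hypothetical node address
    # From this node's perspective, node-2 is partitioned away,
    # regardless of the real root cause.
    print("node-2 is unreachable -- treat it as partitioned")
```

From the caller’s side, every one of those failure modes collapses into the same observation: the request did not come back in time.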

All of these are before we consider the fact that we are dealing with imperfect software that may have bugs, that humans are tinkering with the system (such as deploying a new version) and messing things up, etc.

So no, I utterly reject the idea that partitions are rare events in any meaningful sense. Sure, they are rare, but a million to one event? We can do a million packets per second. That means that something that is incredibly rare can still happen many times a day. In practice, you need to accept that your software will, at some point, be running in a partition, and that you will need a way to handle that.
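
Here is the back-of-the-envelope version of that argument, using the rough rates mentioned above (a one-in-a-million chance per packet, a million packets per second; both are illustrative numbers, not measurements):

```python
# Back-of-the-envelope: how often does a one-in-a-million event happen
# when you handle a million packets per second? (Illustrative rates only.)
probability_per_packet = 1 / 1_000_000
packets_per_second = 1_000_000
seconds_per_day = 24 * 60 * 60

expected_events_per_day = probability_per_packet * packets_per_second * seconds_per_day
print(expected_events_per_day)  # 86400.0 -- "incredibly rare" adds up very quickly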

And go read the fallacies again; maybe print them out and stick them on a wall somewhere nearby. If you are working with a distributed system, it is important to remember these fallacies, because they will trip you up.