In a previous post, I asked about designing a document DB, and brought up the issue of replication, along with a set of questions that affect the design of the system:
- How often should we replicate?
- As part of the transaction?
- Backend process?
- Every X amount of time?
I think we can assume that the faster we replicate, the better. However, there are costs associated with this. I think that a good way of doing replication would be to post a message on a queue for the remote replication machine, and have the queuing system handle the actual process. This makes it very simple to scale, and creates a distinction between the “start replication” part and the actual replication process. It also allows us to handle spikes in a very nice manner.
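The queue-based decoupling can be sketched roughly like this. This is a minimal in-process sketch, assuming an in-memory `queue.Queue` stands in for a real durable message queue, and a `send` callable stands in for the actual network transfer; all names here are illustrative, not from any real API:

```python
import queue
import threading

# In a real system this would be a durable, networked queue; an
# in-process queue.Queue is used here only to show the decoupling.
replication_queue = queue.Queue()

def start_replication(doc_id, destination):
    # The "start replication" part: just enqueue the work and return.
    # Spikes in writes grow the queue instead of blocking the writer.
    replication_queue.put((doc_id, destination))

def replication_worker(send):
    # The actual replication process, running in the background:
    # drain the queue and ship each document via `send`.
    while True:
        item = replication_queue.get()
        if item is None:          # sentinel used to stop the worker
            break
        doc_id, destination = item
        send(doc_id, destination)
        replication_queue.task_done()

# Usage: record what was replicated, for demonstration.
replicated = []
worker = threading.Thread(
    target=replication_worker,
    args=(lambda d, m: replicated.append((d, m)),))
worker.start()

start_replication("users/1", "node-b")
start_replication("users/2", "node-b")
replication_queue.join()          # wait until the backlog is drained
replication_queue.put(None)       # shut the worker down
worker.join()
```

Because the writer only ever enqueues, scaling out is just a matter of adding more workers draining the same queue.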
- Should we replicate only the documents?
- What about attachments?
- What about the generated view data?
We don’t replicate attachments, since those are out of scope.
Generated view data is a more complex issue, mostly because we have a trade-off here of network payload vs. CPU time. Since views are by their very nature stateless (they can only use the document data), running the view on the source machine or on the replicated machine would result in exactly the same output. I think that we can safely skip replicating the view data, treating it as something that we can regenerate. CPU time tends to be far less costly than network bandwidth, after all.
Note that this assumes that view generation is the same across all machines. We discuss this topic more extensively in the views part.
- Should we replicate to all machines?
- To specified set of machines for all documents?
- Should we use some sharding algorithm?
I think that a sharding algorithm would be the best option: given a document, it will give us a list of machines to replicate to. We can provide a default implementation that replicates to all machines, or only to secondaries and tertiaries.
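The interface described above might look like the following sketch. The class names (`ReplicateToAll`, `HashSharding`) and the choice of MD5 for placement are illustrative assumptions, not part of any real implementation:

```python
import hashlib

class ReplicateToAll:
    """Default strategy: every document goes to every machine."""
    def __init__(self, machines):
        self.machines = machines

    def destinations(self, doc_id):
        return list(self.machines)

class HashSharding:
    """Hash the document id to pick an owner, then replicate to the
    next `replicas - 1` machines after it (secondary, tertiary, ...)."""
    def __init__(self, machines, replicas=3):
        self.machines = machines
        self.replicas = replicas

    def destinations(self, doc_id):
        h = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
        start = h % len(self.machines)
        count = min(self.replicas, len(self.machines))
        return [self.machines[(start + i) % len(self.machines)]
                for i in range(count)]

machines = ["node-a", "node-b", "node-c", "node-d"]
print(ReplicateToAll(machines).destinations("users/1"))  # all four nodes
print(HashSharding(machines).destinations("users/1"))    # three distinct nodes
```

The nice property of keeping this behind a `destinations(doc_id)` call is that the replication code itself never needs to know which strategy is in play.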