Ayende @ Rahien

Refunds available at head office

My poor little blameless Voron

I’m currently working on a project using Voron (although only incidentally), and I was horrified to get the FatalExecutionEngineException that was discussed previously. Naturally, I assumed that this is something that I did wrong in Voron.

[screenshot: the FatalExecutionEngineException dialog]

After a lot of work, I managed to narrow it down to… not Voron. To be rather more exact, it is Voron that is causing it, but it isn’t Voron’s fault.

The actual issue is the fact that I was accidentally using Json.NET to serialize a Stream that I got from Voron. And Voron is giving us an UnmanagedMemoryStream. I kept thinking that I was doing something wrong with releasing memory, but as it turns out, here is a very small repro:

 using Newtonsoft.Json;

 unsafe class Program
 {
     static void Main(string[] args)
     {
         // Serializing any type with a pointer property is enough to trigger the crash
         JsonConvert.SerializeObject(new Foo { Ptr = (byte*)0 });
     }
 }

 public unsafe class Foo
 {
     public byte* Ptr { get; set; }
 }

And that is enough to get the above mentioned error.

What actually happens is that Json.NET is using dynamic IL generation to optimize accessing properties. And it just doesn’t know how to handle pointer properties. What it ends up doing is generating invalid IL, which results in a crash when we try to actually use it.

Nothing to do with Voron at all, just a class with a pointer property that we attempted to serialize. Nasty bug, and very annoying to figure out.


Today’s work is…

It isn’t enough to have just an ExecutionEngineException, I had to have this:

[screenshot]


Everything falls down in its proper place

As we are getting ready for the RavenDB Conference, our team is actually starting to show the fruits of all of this hard work. For example, take a look at the new studio replication configuration UI. There are a lot of things like that, and I am really pumped to see all the disparate things that we have been doing over the past year come together and join up into a single and very pretty picture.

[screenshot: the new studio replication configuration UI]


Support Triage

We got a call to the office.

We have a huge problem with our system, you need to come and help us right away. This is a critical system and we need immediate response.

That is kind of annoying, of course, but it is all part of the service. So, in order to log the appropriate items into our system, we asked:

What is your order id? And what is your support contract number?

And the answer was:

Oh, that was handled by another department, I’m not sure.

So we asked them to figure that out and send it to us, and waited. The call came at noon; by 7 PM, having heard nothing, I sent them an email.

The reply I got back was:

We’ll try to find the order details tomorrow.

I guess it isn’t such a huge, critical and immediate problem any longer…


More xUnit tweaks, dynamic test skipping

For a long time, xUnit’s dev has resisted adding support for skipping a test dynamically. You could create your own Fact class and handle that yourself, but that was quite a lot of work, for something very specific.

In RavenDB, we had the need to dynamically decide whether the test can run based on the actual test situation, so I decided to add this to our xUnit fork. This turned out to be really simple to do. Just three lines of code :-)

https://github.com/ayende/xunit/commit/82accb4c850a3938187ac334fb73d6e81dc921e3#diff-3c2b9f2cb8392f32456d0bf81151b59fR57

RavenDB Conference Status Update


As you probably know, we have the RavenDB Conference coming up in a few weeks. This is the very first conference that we are doing, and to say that we are excited would be a major understatement. We are going to have speakers talk about RavenDB from all angles. From the development team (including yours truly) to the operation guys behind RavenHQ as well as customers and consultants that can share their real world experience about how to best utilize RavenDB.

The idea is to have an intimate gathering, and for the very first time, to actually have a direct way to talk with all of our users, as well as share a lot of the stuff that happens behind the scenes. Not to mention, we crave feedback, and several key features of RavenDB were actually proposed by the community and then implemented by us.

I was reviewing the registration numbers for the conference and it looks like we have made a… miscalculation. The idea with the conference was to charge just enough to ensure a commitment to come, to avoid the plague of free conferences where many people sign up and most never come. We even offset things so you can pay $89 for the conference and get a $90 coupon for RavenDB. On the side, since we are already there, we also set up an additional three-day course for an in-depth look at RavenDB.

But we had a lot more demand for the conference and the course than we thought we would have. We currently have just 2 places open for the course, and about 15 – 20 places open for the conference.

Since the course is more expensive, that was the kind of surprise that is actually kind of nice to have: “Wait, you mean that they are giving us more money than we thought they would…”

At any rate, I find myself in the strange situation of encouraging people to go and buy the cheaper product, simply because we are really going to have no more room for the course soon.

You can register here, I’m looking forward to seeing you all.


Raft, as the Raven flies

The interesting thing about the etcd / go-raft review wasn’t so much what they did (they did things quite beautifully), but the things that I wanted them to do that they didn’t do. The actual implementation was fascinating, and for the purposes of what is required, more than adequate. I, however, have a very specific goal in mind, and that means that this is really not going to work with the given implementations.

I’d better explain first. I’m not sure if go-raft was written by the same people as etcd, but they do “smell” very much the same. It just occurred to me that yes, they are actually both on GitHub, so I checked, and it appears that I was correct. xiangli-cmu, philips, benbjohnson and bcwaldon (among many others) have worked on both projects. The reason this is important is that it reinforces my feeling that go-raft was written mainly to serve the needs of etcd. Anyway…

The reason that I think that is that the implementation is very much optimized for the level etcd needs. What I mean is that there is a lot of work there to support multiple concurrent operations that are batched to all the nodes in the cluster, then applied to the in-memory model. This works because etcd is a set of key/value pairs that are shared among many nodes, but they are usually very small values, and a very small number of keys. I would be surprised if there were hundreds of thousands of keys in a single etcd installation. That just isn’t the type of thing that it is meant to do. And that reflects throughout.

For example, there is an inherent assumption throughout the codebase that the data set is small, that each value is small, etc. When sending a snapshot over the wire, there is the assumption that it can all fit in memory and that doing so is a good thing. Even more to the point, the system where we only push entries to the nodes on heartbeats is great when you are trying to batch multiple changes together, with each change being independent of all the others (usually). However, that really doesn’t work when what you actually need is something like the sort of things that we usually deal with.

I want to emphasize that I’m really evaluating the way etcd & go-raft are built and finding them wanting for my own purposes; I think that they do quite well for the kind of problem that they are meant to solve.

Let us consider the kind of databases that we have. We usually have to deal with much larger values: typical documents are in the KB range, and are frequently hundreds of KB in size. We also tend to have quite a lot of them. Trying to put them all in memory (the etcd way) would be… suboptimal. Now, one option would be to try and do just that, and have each operation in RavenDB (put/delete documents) as a state machine command in Raft. The problem is that this really gets complicated very quickly, leaving aside the fact that it isn’t a general solution, and we really want to try to get to a general solution.

We don’t want to solve this once for RavenDB, and then for RavenFS, and then for… etc. And the reason that working at that level is tricky is that you need to push everything through the state machine, and everything in this case really does mean everything. That leads to a very different database design and implementation. It is also probably not really necessary. We can drop this a level or two down and instead of dealing at the database level, we can deal with that at the storage layer level. This is basically just an extension of the log shipping idea. Instead of using Raft for high level commands, just use Raft to create a consensus around the sequence of writes to the journal. That means that all the nodes will have the same journal, and thus have the same data in the storage.

That in turn means that an “entry” can actually be pretty large (multi-MB range, sometimes). And because the journal is purely sequential, we need to issue that command to all the nodes as soon as it is submitted to the Raft state machine.
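To make that concrete, here is a rough sketch of what a log entry could look like in such a design. This is purely illustrative, with names of my own invention (JournalWrite, Journal), not actual RavenDB or go-raft code:

// Illustrative sketch only: not RavenDB or go-raft code. The idea is that a
// Raft log entry carries a raw, opaque journal write rather than a high level
// database command, so replaying the same log yields byte-identical storage.
package consensus

// JournalWrite is a single hypothetical Raft entry in this design.
type JournalWrite struct {
	Offset int64  // position in the purely sequential journal
	Data   []byte // the raw journal pages, potentially several MB
}

// Journal stands in for the storage engine's write-ahead journal.
type Journal interface {
	WriteAt(data []byte, off int64) error
	Sync() error
}

// Apply is invoked once the cluster has committed the entry. Every node
// appends exactly the same bytes at exactly the same offset.
func (w *JournalWrite) Apply(j Journal) error {
	if err := j.WriteAt(w.Data, w.Offset); err != nil {
		return err
	}
	return j.Sync()
}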

Now, the reason that the go-raft implementation waits for the heartbeat is that it batches things up nicely, and really speeds up the general case. But we don’t need to do that for what we are going to do. Or, to be rather more exact, we have already done just that. That is basically what Voron’s transaction merging is all about. And since we can only handle writes as the leader, and since we will merge concurrent writes anyway, that is going to end up as a very similar behavior under load, and faster without load. At least that is what the initial thinking is saying.

Snapshots, also something quite important for Raft, can be handled very simply by just doing a backup/restore over the network, by streaming all the data.

And the rest is just implementing Raft :-).


The downsides of going big

One of the things that I found out is that as Hibernating Rhinos grows (and we currently have over a dozen people working full time), I’m seeing two very interesting changes in my own behavior.

The actual velocity is increasing by leaps & bounds. We can do a lot more now, and we can do that faster and with a greater degree of parallelism.

My personal development is growing less, as I am doing a lot more of business type things. One aspect of that is that I do a lot of reading of contracts, legalese, and other stuff that takes time from actual development work.

I try to compensate by running the tests while I’m reading contracts, and then I found the following in a contract I’m reviewing:

[image: the relevant clause from the contract]

Nice to know where things stand. And I emphasize.


RavenDB Success Stories: Codealike

I just ran into the following case study from Microsoft:

Codealike

“We were amazed by how simple it was to embed RavenDB into our Windows Azure application. This gave us a huge advantage in getting our solution to market in just six weeks.”

Federico Andres Lois
Co-Founder, Codealike


Stack overflow… yeah, right!

My app crashed with a stack overflow exception. That would be fine, except:

[screenshot]

 

To be fair, that was under low memory conditions, since I was leaking virtual memory. So that makes sense, still…


Reviewing go-raft, part II

In my previous post, I started to go over the go-raft implementation. After being interrupted by the need to sleep, I decided to go on with this, but I wanted to expand a bit first on the issue we discussed earlier: not checking the number of bytes read in log_entry’s Decode.

[code screenshot: log_entry’s Decode]

Let us assume that we actually hit that issue, what would happen?

[code screenshot]

The process goes like this: we try to read a value, but the Read method only returns some of the data. We explicitly ignore that, and try to use the buffer anyway. Best case scenario, we actually get an error, so we bail early. At that point, we detect the error and truncate the file. Hello data loss, nice to see you. For fun, this is the best case scenario. It is worse if we marshal the partial data without an error. Then we don’t have the case of “oh, we have a node that is somehow way behind”, we have the case of “this node actually applied different commands than anyone else”.

I reported this issue, and I’m interested to know if my review is in any way correct. With that said, let us move on…

getEntriesAfter gives us all the in memory entries. That is quite similar to how RavenDB handled indexing, for that matter, so it is amusing. But this applies only to in memory stuff, and it is quite interesting to see how this will interact with other parts of the codebase.

setCommitIndex is interesting. In my head, committing something means flushing it to disk. But in Raft’s terms, committing something means applying the commands. The reason it is interesting is that it has some interesting comments on edge cases. So far, I haven’t seen anything actually writing to disk, mind.

And this one gives me a headache:

[code screenshot]

Basically, this means that we need to write the commit index to the beginning of the file. It is also an extremely unsafe operation. What happens if you crash immediately after? Did your change go through, or not? For that matter, there is nothing that prevents the OS from first writing the changes you made to the beginning of the file and only then whatever else you wrote at the end. So a crash might actually leave you with the commit pointer pointing at corrupted data. Luckily, I don’t see anything there actually calling this, though.

The truncate method makes my head ache, mostly because it does things like delete data, which makes me itchy. This is called from the server code as part of normal processing of the append entries request. What this does, in effect, is to say something like: I want you to apply this log entry, make sure that your previous log entry is this, and if it isn’t, revert it back to this entry. This is how Raft ensures that all the logs are the same across the cluster.

Then we have this:

   1:  // Appends a series of entries to the log.
   2:  func (l *Log) appendEntries(entries []*protobuf.LogEntry) error {
   3:      l.mutex.Lock()
   4:      defer l.mutex.Unlock()
   5:   
   6:      startPosition, _ := l.file.Seek(0, os.SEEK_CUR)
   7:   
   8:      w := bufio.NewWriter(l.file)
   9:   
  10:      var size int64
  11:      var err error
  12:      // Append each entry but exit if we hit an error.
  13:      for i := range entries {
  14:          logEntry := &LogEntry{
  15:              log:      l,
  16:              Position: startPosition,
  17:              pb:       entries[i],
  18:          }
  19:   
  20:          if size, err = l.writeEntry(logEntry, w); err != nil {
  21:              return err
  22:          }
  23:   
  24:          startPosition += size
  25:      }
  26:      w.Flush()
  27:      err = l.sync()
  28:   
  29:      if err != nil {
  30:          panic(err)
  31:      }
  32:   
  33:      return nil
  34:  }

This seems pretty easy to follow, all told. But note the call to sync() there in line 27. And the fact that this translates down to an fsync, which is horrible for performance.

There is also appendEntry, which appears to be doing the exact same thing as appendEntries and writeEntry. I’m guessing that the difference is that appendEntries is called for a follower, and appendEntry is for a leader.

The last thing to go through in the log.go file is the compact function, which is… interesting:

   1:  // compact the log before index (including index)
   2:  func (l *Log) compact(index uint64, term uint64) error {
   3:      var entries []*LogEntry
   4:   
   5:      l.mutex.Lock()
   6:      defer l.mutex.Unlock()
   7:   
   8:      if index == 0 {
   9:          return nil
  10:      }
  11:      // nothing to compaction
  12:      // the index may be greater than the current index if
  13:      // we just recovery from on snapshot
  14:      if index >= l.internalCurrentIndex() {
  15:          entries = make([]*LogEntry, 0)
  16:      } else {
  17:          // get all log entries after index
  18:          entries = l.entries[index-l.startIndex:]
  19:      }
  20:   
  21:      // create a new log file and add all the entries
  22:      new_file_path := l.path + ".new"
  23:      file, err := os.OpenFile(new_file_path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
  24:      if err != nil {
  25:          return err
  26:      }
  27:      for _, entry := range entries {
  28:          position, _ := l.file.Seek(0, os.SEEK_CUR)
  29:          entry.Position = position
  30:   
  31:          if _, err = entry.Encode(file); err != nil {
  32:              file.Close()
  33:              os.Remove(new_file_path)
  34:              return err
  35:          }
  36:      }
  37:      file.Sync()
  38:   
  39:      old_file := l.file
  40:   
  41:      // rename the new log file
  42:      err = os.Rename(new_file_path, l.path)
  43:      if err != nil {
  44:          file.Close()
  45:          os.Remove(new_file_path)
  46:          return err
  47:      }
  48:      l.file = file
  49:   
  50:      // close the old log file
  51:      old_file.Close()
  52:   
  53:      // compaction the in memory log
  54:      l.entries = entries
  55:      l.startIndex = index
  56:      l.startTerm = term
  57:      return nil
  58:  }

This code can’t actually run on Windows, which is interesting. The issue here is that it is trying to rename a file that is open on top of another file which is also open. Windows does not allow that.
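A Windows-friendly ordering would be to close the old log file before renaming the new one into place, and then reopen the result. This is just a sketch of the idea (my own, not a patch against go-raft); on some Windows / Go combinations you may also need to remove the destination first, which costs you some crash safety:

// Sketch of a compaction finalization order that also works on Windows,
// where you cannot rename on top of a file that is still held open.
package compaction

import "os"

func swapLogFile(oldFile *os.File, path, newPath string) (*os.File, error) {
	// Close the old log file first so the destination path is no longer open.
	if err := oldFile.Close(); err != nil {
		return nil, err
	}
	// Now the rename can replace the old file.
	if err := os.Rename(newPath, path); err != nil {
		return nil, err
	}
	// Reopen the renamed file so we can keep appending entries to it.
	return os.OpenFile(path, os.O_APPEND|os.O_RDWR, 0600)
}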

But the interesting thing here is what this does. We have the log file, which is the persisted state of the in memory entries collection. Every now and then, we compact it by creating a snapshot, and then we create a new file, with only the entries after the newly created snapshot position.

So far, so good, and that gives me a pretty good feeling regarding how the whole thing is structured. Next in line, the peer.go file. This represents a node’s idea about what is going on in another node in the cluster. I find the heartbeat code really interesting:

// Starts the peer heartbeat.
func (p *Peer) startHeartbeat() {
    p.stopChan = make(chan bool)
    c := make(chan bool)
    go p.heartbeat(c)
    <-c
}

// Stops the peer heartbeat.
func (p *Peer) stopHeartbeat(flush bool) {
    p.stopChan <- flush
}

// Listens to the heartbeat timeout and flushes an AppendEntries RPC.
func (p *Peer) heartbeat(c chan bool) {
    stopChan := p.stopChan

    c <- true

    ticker := time.Tick(p.heartbeatInterval)

    debugln("peer.heartbeat: ", p.Name, p.heartbeatInterval)

    for {
        select {
        case flush := <-stopChan:
            if flush {
                // before we can safely remove a node
                // we must flush the remove command to the node first
                p.flush()
                debugln("peer.heartbeat.stop.with.flush: ", p.Name)
                return
            } else {
                debugln("peer.heartbeat.stop: ", p.Name)
                return
            }

        case <-ticker:
            start := time.Now()
            p.flush()
            duration := time.Now().Sub(start)
            p.server.DispatchEvent(newEvent(HeartbeatEventType, duration, nil))
        }
    }
}

startHeartbeat starts a new heartbeat goroutine, and then waits until the heartbeat function notifies it that it has started running.

What is confusing is the reference to the peer’s server. Peer is defined as:

// A peer is a reference to another server involved in the consensus protocol.
type Peer struct {
    server            *server
    Name              string `json:"name"`
    ConnectionString  string `json:"connectionString"`
    prevLogIndex      uint64
    mutex             sync.RWMutex
    stopChan          chan bool
    heartbeatInterval time.Duration
}

And it seems logical to think that this is a remote peer’s server, but this is actually the local server reference, not the remote one. Note that it is actually the flush method that does the remote call.

Flush is defined as:

func (p *Peer) flush() {
    debugln("peer.heartbeat.flush: ", p.Name)
    prevLogIndex := p.getPrevLogIndex()
    term := p.server.currentTerm

    entries, prevLogTerm := p.server.log.getEntriesAfter(prevLogIndex, p.server.maxLogEntriesPerRequest)

    if entries != nil {
        p.sendAppendEntriesRequest(newAppendEntriesRequest(term, prevLogIndex, prevLogTerm, p.server.log.CommitIndex(), p.server.name, entries))
    } else {
        p.sendSnapshotRequest(newSnapshotRequest(p.server.name, p.server.snapshot))
    }
}

The interesting thing here is that the entries collection might be empty (in which case this serves as just a heartbeat). Another thing that pops to mind is that this has the leader explicitly instructing a follower to generate snapshots. The Raft paper suggested that this is something that would happen locally on each server on an independent basis.

There is a lot of interesting behavior in sendAppendEntriesRequest(), not so much in what it does, as in how it handles replies. There is a lot of state going on there. It’s very well commented, so I’ll let you read it, there isn’t anything that is actually going on that is complex.

What is fascinating is that while the transport layer for go-raft is HTTP, which is inherently request/response, it actually handles this in an interesting fashion:

  • Requests are synchronous
  • On reply, the in memory state of the peer is updated immediately
  • The response from the peer is queued to be handled by the server event loop

The end result is that a lot of the handling is centralized into a really pretty state machine. The rest of what is going on there is not very interesting, except for snapshots, but those are covered elsewhere.
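Boiled down, that request/reply flow looks something like this simplified sketch (made-up types of mine, not go-raft’s actual API):

// Simplified illustration of the pattern: peer goroutines perform synchronous
// RPCs, but every reply is pushed onto one channel that a single event loop
// goroutine drains, so all state transitions happen in one place.
package main

import "fmt"

type appendEntriesReply struct {
	peer    string
	term    uint64
	success bool
}

func main() {
	replies := make(chan appendEntriesReply, 16)

	// A peer goroutine: do the RPC (elided here), then queue the reply.
	go func() {
		replies <- appendEntriesReply{peer: "node-2", term: 3, success: true}
	}()

	// The server event loop: the only goroutine that mutates server state.
	reply := <-replies
	fmt.Printf("applying reply from %s (term %d, success=%v)\n",
		reply.peer, reply.term, reply.success)
}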

And now, we are ready to actually go and look at the server code, but… not yet. It is over a thousand lines of code, so I think that I’ll go over other stuff first. In particular, snapshotting looks interesting.

[code screenshot: the Snapshot struct]

This is actually quite depressing. Note the State properties here. There is an implicit assumption that it is possible / advisable to go with the entire in memory state like that. I know that I am sensitive to such things, but that seems like an awful lot of waste when talking about large systems.

Here is one such issue:

[code screenshot]

Let us assume that our state is big, hundreds of MB or maybe a few GB in size.

We currently hold it in memory inside Snapshot.State, then we marshal that to JSON. Now, I actually had to go and check, but Go’s json package actually does the usual thing and encodes a byte array as a base64-formatted string. What that means, in turn, is that you have an overhead of about 33% that you have to deal with, and this is all allocated in main memory. And then you write it to a file.

This is…. quite insane, to be frank.

Assuming that I have a state that is 100 MB in size, I’m going to hold all of that in memory, then allocate roughly another 133 MB just to hold the JSON state, then write it to a file. Why not write it to a file directly in the first place? (You could do CRC along the way).
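Something along these lines is what I have in mind; a sketch of my own (writeSnapshot is a made-up name), not etcd or go-raft code:

// Illustrative sketch of streaming a snapshot to disk while computing a
// CRC32, instead of building a base64-encoded JSON copy in memory first.
package main

import (
	"encoding/binary"
	"hash/crc32"
	"io"
	"os"
	"strings"
)

func writeSnapshot(path string, state io.Reader) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	crc := crc32.NewIEEE()
	// Everything written to the file is also fed to the CRC, with no
	// intermediate in-memory copy of the whole state.
	if _, err := io.Copy(io.MultiWriter(f, crc), state); err != nil {
		return err
	}
	// Append the checksum so recovery can validate the snapshot.
	if err := binary.Write(f, binary.LittleEndian, crc.Sum32()); err != nil {
		return err
	}
	return f.Sync()
}

func main() {
	// Usage example: stream some state (here just a string reader) to disk.
	state := strings.NewReader("pretend this is hundreds of MB of state")
	if err := writeSnapshot("snapshot.tmp", state); err != nil {
		panic(err)
	}
}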

The whole thing appears to be assuming small sizes of data. Throughout the entire codebase, actually.

And now, I have no other way to avoid it, we are going into server.go itself…

There is a lot of boilerplate stuff there, but the first interesting thing happens when we look at how to apply the log:

[code screenshot: applying the log]

This says, when we need to apply it, execute the command method on my state. A lot of the other methods are some variant of:

[code screenshot]

Nothing to see here at all.

And we finally get to the key part, the event loop:

[code screenshot: the server event loop]

Let us look in detail at the followerLoop. Inside that function, we have a loop that waits for:

  1. Stop signal, which would lead to us shutting down…
  2. We got an event on our queue…
  3. The timeout for an event has expired…

There is one part there that puzzles me:

[code screenshot]

promotable will return true if the log has any entries at all. I’m not really sure why that is the case, to be honest. In particular, what about the case where we start with an empty server? I’m going to go on reading the code, and we’ll see where it leads us. And it leads us to:

[code screenshot]

So next we need to figure out what this self join stuff is. I am not sure if that is something that comes from Raft or from an external source. I found this issue that discusses it, but it isn’t very helpful in terms of understanding who issues the self join command. I tried looking at the etcd codebase, but I didn’t find anything so far. I’ll leave it for now.

The rest of the operations are basically just forwarding the calls to the appropriate methods if they are in the allowed state.

The candidateLoop method isn’t anything special, it follows the Raft paper pretty nicely, although I have to admit the “candidate becomes follower upon Append Entries command” is buried deep. The same is true for the other behaviors. The appropriate state based responses sometimes are hard to figure out, because you have the state loop, then you have the same apparent behavior everywhere. For example, we need to become a follower if we get an Append Entries request. That happens in processAppendEntriesRequest(), but it would actually be easier to see this if we had code duplication. This is a case where getting familiar with the codebase would help understanding it, and I don’t think that this would be a change worth doing, anyway.

Probably the most interesting behavior is in the leadershipLoop when we process a command. A command is added to the server queue using a Do(Command) method. It is then processed in processCommand.

The problem here is that commands are actually appended to the log, and then sent to peers using the heartbeat interval. By default, that stands at 50 ms.

This is great and all, but it does mean that the latency for requests is going to suffer. This doesn’t matter that much for something like etcd. The assumption there is that the requests are all going to be on different things, so we can queue a lot of commands and get pretty good speed overall. It is a problem if, in our system, we have to process sequential operations. In that mode, we can’t wait until the heartbeat, and we want to process this right away. I’ll discuss this later, I think. It is a very important property of this implementation (but not of Raft in general).

Looking at the snapshot state, this happens when a follower gets a SnapshotRequest, but I don’t see anywhere that sends it. Maybe it is another caller-originated thing?

I just looked at the etcd source, and I think I confirmed that both behaviors are there in the etcd source. So I think that explains it.

And this is it, basically. I have some thoughts about the implementation of this and of etcd, but I think that this is enough for now… I’ll post them in my next post.

Reviewing go-raft, part I

After going over the etcd codebase, I decided that the raft portion of it deserves a much closer look. The project is here, and I am reviewing commit: 30f261bfe873561c2c75b6206ba1f62a42dbc8d6

Again, I strongly recommend reading the Raft paper. It is quite good. At any rate, assuming that you understand Raft, let us get cracking. This time, I’m reading this in Sublime Text. As usual, I’m reading in lexicographical order, and I’m starting from append_entries.go

AppendEntries is at the very heart of Raft, so I was pleased to see it here:

// The request sent to a server to append entries to the log.
type AppendEntriesRequest struct {
    Term         uint64
    PrevLogIndex uint64
    PrevLogTerm  uint64
    CommitIndex  uint64
    LeaderName   string
    Entries      []*protobuf.LogEntry
}


// The response returned from a server appending entries to the log.
type AppendEntriesResponse struct {
    pb     *protobuf.AppendEntriesResponse
    peer   string
    append bool
}

However, I didn’t really understand this code. It seemed circular, at least until I realized that we also have a whole lot of generated files. See:

[screenshot: the generated protobuf files]

The actual protobuf semantics are (excluding a lot of stuff, of course):

message LogEntry {
    required uint64 Index=1;
    required uint64 Term=2;
    required string CommandName=3;
    optional bytes Command=4; // for nop-command
}

message AppendEntriesRequest {
    required uint64 Term=1;
    required uint64 PrevLogIndex=2;
    required uint64 PrevLogTerm=3;
    required uint64 CommitIndex=4;
    required string LeaderName=5;
    repeated LogEntry Entries=6;
}

message AppendEntriesResponse {
    required uint64 Term=1;
    required uint64 Index=2;
    required uint64 CommitIndex=3;
    required bool   Success=4;
}

So, goraft (which I always read as graft) is using protocol buffers as its wire format. Note in particular that the LogEntry contains the full content of a command, that AppendEntriesRequest has an array of them, and that the AppendEntriesResponse is set up separately. That means that it is very natural to use a one way channel for communication. Even though we do request response, there is a high degree of separation between the request & reply. Indeed, from reading the code in etcd, I thought that was the case.

There is something that really bothers me, though. I noticed that in etcd’s codebase as well. It is things like this:

// Encodes the AppendEntriesRequest to a buffer. Returns the number of bytes
// written and any error that may have occurred.
func (req *AppendEntriesRequest) Encode(w io.Writer) (int, error) {
    pb := &protobuf.AppendEntriesRequest{
        Term:         proto.Uint64(req.Term),
        PrevLogIndex: proto.Uint64(req.PrevLogIndex),
        PrevLogTerm:  proto.Uint64(req.PrevLogTerm),
        CommitIndex:  proto.Uint64(req.CommitIndex),
        LeaderName:   proto.String(req.LeaderName),
        Entries:      req.Entries,
    }

    p, err := proto.Marshal(pb)
    if err != nil {
        return -1, err
    }

    return w.Write(p)
}

I’m not sure about the actual semantics of memory allocations in Go, but let us assume that we had a single log entry with 1 KB of command data. This means that we would have the command data:

  • Once in the LogEntry inside the AppendEntriesRequest
  • Once in the protocol buffers byte array returned from Marshal

There doesn’t appear to be any way to directly stream things. Maybe it is usually dealing with small amounts of data, maybe they didn’t notice, or maybe something in Go makes this very efficient, but I doubt it.

The next interesting part is Command handling. Raft is all about reaching a consensus on the order of executing a set of commands in a cluster. So it is really interesting to see it being handled with Go’s interfaces.

// Command represents an action to be taken on the replicated state machine.
type Command interface {
    CommandName() string
}

// CommandApply represents the interface to apply a command to the server.
type CommandApply interface {
    Apply(Context) (interface{}, error)
}

type CommandEncoder interface {
    Encode(w io.Writer) error
    Decode(r io.Reader) error
}

We have some additional things about serializing commands and reading them back, but nothing beyond this. The Commands.go file, however, is of a little bit more interest. Let us look at the join command:

// Join command interface
type JoinCommand interface {
    Command
    NodeName() string
}

// Join command
type DefaultJoinCommand struct {
    Name             string `json:"name"`
    ConnectionString string `json:"connectionString"`
}


// The name of the Join command in the log
func (c *DefaultJoinCommand) CommandName() string {
    return "raft:join"
}

func (c *DefaultJoinCommand) Apply(server Server) (interface{}, error) {
    err := server.AddPeer(c.Name, c.ConnectionString)

    return []byte("join"), err
}

func (c *DefaultJoinCommand) NodeName() string {
    return c.Name
}

I’m not sure why we have an interface for JoinCommand and then a default implementation like that. I saw that elsewhere in etcd; it might be a Go pattern. Note that the JoinCommand is an interface that embeds another interface (Command, in this case, obviously).

Note that you have the Apply function to actually handle the real work, in this case, adding a peer. There is nothing interesting in config.go, debug.go or context.go, but event.go is puzzling. To be fair, I am really at a loss to explain this style:

// Event represents an action that occurred within the Raft library.
// Listeners can subscribe to event types by using the Server.AddEventListener() function.
type Event interface {
    Type() string
    Source() interface{}
    Value() interface{}
    PrevValue() interface{}
}

// event is the concrete implementation of the Event interface.
type event struct {
    typ       string
    source    interface{}
    value     interface{}
    prevValue interface{}
}

// newEvent creates a new event.
func newEvent(typ string, value interface{}, prevValue interface{}) *event {
    return &event{
        typ:       typ,
        value:     value,
        prevValue: prevValue,
    }
}

// Type returns the type of event that occurred.
func (e *event) Type() string {
    return e.typ
}

// Source returns the object that dispatched the event.
func (e *event) Source() interface{} {
    return e.source
}

// Value returns the current value associated with the event, if applicable.
func (e *event) Value() interface{} {
    return e.value
}

// PrevValue returns the previous value associated with the event, if applicable.
func (e *event) PrevValue() interface{} {
    return e.prevValue
}

Why go to all this trouble to define things this way? It seems like a lot of boilerplate code. It would be easier to just expose a struct directly. I am assuming that this is done so you can send other things than the event struct, with additional information as well. In C# you’d do that by subclassing the event, but you cannot do that in Go. A better alternative might have been to just have a tag / state field in the struct and let it go that way, though.
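The plainer alternative I have in mind might look roughly like this (a sketch of mine, not a proposed patch to go-raft):

// Sketch: expose the event as a plain struct with a type tag and let
// listeners switch on Type, instead of an interface plus getter boilerplate.
package raftevents

type Event struct {
	Type      string
	Source    interface{}
	Value     interface{}
	PrevValue interface{}
}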

event_dispatcher.go is just an implementation of a dictionary of string to events, nothing much beyond that. A lot of boilerplate code, too.

http_transporter.go is next, and it is a blow to my hope that this would use a one-way messaging system. I’m thinking about doing Raft over ZeroMQ or NanoMSG. Here is the actual process of sending data over the wire:

// Sends an AppendEntries RPC to a peer.
func (t *HTTPTransporter) SendAppendEntriesRequest(server Server, peer *Peer, req *AppendEntriesRequest) *AppendEntriesResponse {
    var b bytes.Buffer
    if _, err := req.Encode(&b); err != nil {
        traceln("transporter.ae.encoding.error:", err)
        return nil
    }

    url := joinPath(peer.ConnectionString, t.AppendEntriesPath())
    traceln(server.Name(), "POST", url)

    t.Transport.ResponseHeaderTimeout = server.ElectionTimeout()
    httpResp, err := t.httpClient.Post(url, "application/protobuf", &b)
    if httpResp == nil || err != nil {
        traceln("transporter.ae.response.error:", err)
        return nil
    }
    defer httpResp.Body.Close()

    resp := &AppendEntriesResponse{}
    if _, err = resp.Decode(httpResp.Body); err != nil && err != io.EOF {
        traceln("transporter.ae.decoding.error:", err)
        return nil
    }

    return resp
}

This is very familiar territory for me, I have to say :-). Although, again, there is a lot of wasted memory here by encoding the data multiple times, instead of streaming it directly.

And here is how it receives information:

// Handles incoming AppendEntries requests.
func (t *HTTPTransporter) appendEntriesHandler(server Server) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        traceln(server.Name(), "RECV /appendEntries")

        req := &AppendEntriesRequest{}
        if _, err := req.Decode(r.Body); err != nil {
            http.Error(w, "", http.StatusBadRequest)
            return
        }

        resp := server.AppendEntries(req)
        if _, err := resp.Encode(w); err != nil {
            http.Error(w, "", http.StatusInternalServerError)
            return
        }
    }
}

Really there is nothing much to write home about, to be frank. All of the operations are like that, just encoding/decoding and forwarding the code to the right function. I’m skipping log.go in favor of going to log_entry.go for a moment. The log is really important in Raft, so I want to focus on small chewables first.

If the user doesn’t provide an encoder for a command, it will be converted using JSON, then serialized to a writer using the protocol buffers format.

One thing that I did notice that was interesting was a bug in decoding from a protocol buffer stream:

// Decodes the log entry from a buffer. Returns the number of bytes read and
// any error that occurs.
func (e *LogEntry) Decode(r io.Reader) (int, error) {

    var length int
    _, err := fmt.Fscanf(r, "%8x\n", &length)
    if err != nil {
        return -1, err
    }

    data := make([]byte, length)
    _, err = r.Read(data)

    if err != nil {
        return -1, err
    }

    if err = proto.Unmarshal(data, e.pb); err != nil {
        return -1, err
    }

    return length + 8 + 1, nil
}

Do you see the bug?

It is in the reading of the data from the reader. A reader may decide to read less than the amount of data that was requested. In this case, I’m assuming that the code is always sending fully materialized readers to the Decode method, which is not surprising given how often it creates in-memory buffers for the entire dataset. Still… that isn’t nice to do, and it can create the most subtle and hard to understand bugs.
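For reference, the idiomatic way to avoid this in Go is io.ReadFull, which keeps reading until the buffer is full and returns io.ErrUnexpectedEOF on a short stream. A sketch of what the read in Decode could look like instead (readEntryData is my own helper name):

// Sketch of a safer read than r.Read(data): io.ReadFull never returns a
// partially filled buffer without also returning an error.
package logentry

import "io"

func readEntryData(r io.Reader, length int) ([]byte, error) {
	data := make([]byte, length)
	if _, err := io.ReadFull(r, data); err != nil {
		return nil, err
	}
	return data, nil
}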

And now, into the Log!

// A log is a collection of log entries that are persisted to durable storage.
type Log struct {
    ApplyFunc   func(*LogEntry, Command) (interface{}, error)
    file        *os.File
    path        string
    entries     []*LogEntry
    commitIndex uint64
    mutex       sync.RWMutex
    startIndex  uint64 // the index before the first entry in the Log entries
    startTerm   uint64
}

// The results of the applying a log entry.
type logResult struct {
    returnValue interface{}
    err         error
}

There are a few things that we can notice right now. First, ApplyFunc is how we control the application of stuff to the in memory state, I am assuming. Given that applying the log can only happen after we have a consensus and probably fsynced to disk, it makes sense to invoke it from here.

Then, we also have a file, so that is where we are actually doing a lot of the interesting stuff, like actual storage IO and things like that. The in-memory entries array is also interesting, mostly because I wonder just how big it is, and when it is getting truncated. I think that the way it works, we have the log properties, which likely represent the flushed-to-disk state, and the entries represent the yet-to-be-flushed state.

Things get interesting in the open method, which is called to create a new log or recover an existing one. The interesting part (recovery) is here:

// Read the file and decode entries.
for {
    // Instantiate log entry and decode into it.
    entry, _ := newLogEntry(l, nil, 0, 0, nil)
    entry.Position, _ = l.file.Seek(0, os.SEEK_CUR)

    n, err := entry.Decode(l.file)
    if err != nil {
        if err == io.EOF {
            debugln("open.log.append: finish ")
        } else {
            if err = os.Truncate(path, readBytes); err != nil {
                return fmt.Errorf("raft.Log: Unable to recover: %v", err)
            }
        }
        break
    }
    if entry.Index() > l.startIndex {
        // Append entry.
        l.entries = append(l.entries, entry)
        if entry.Index() <= l.commitIndex {
            command, err := newCommand(entry.CommandName(), entry.Command())
            if err != nil {
                continue
            }
            l.ApplyFunc(entry, command)
        }
        debugln("open.log.append log index ", entry.Index())
    }

    readBytes += int64(n)
}

This is really interesting, because it is actually sending the raw file to the Decode function, unlike what I expected. The reason this is surprising is that there is a strong likelihood that the OS will actually return less data than requested. As it turns out, on Windows, this will never be the case, but it does appear (at least from the contract of the API it ends up calling) that at least on Linux, that is possible. Now, I ended up going all the way to the syscall interface in Linux, so I’m pretty sure that this can’t happen there either, but still…

At any rate, the code appear to be pretty clear. We read the log entry, decode it, (truncating the file if there are any issues) and if need be, we apply it.

And I think that this is enough for now… it is close to 9 PM, and I need to do other things as well. I’ll get back to this in my next post.

Reviewing etcd

The etcd project is a project that I stumbled upon that looks interesting. It is a highly-available key value store for shared configuration and service discovery. It is written in Go and is implemented using Raft. I’m reviewing commit 46d817f91b2edf4141081abff7d92a4f71d39248.

I don’t know Go, and I think that this would be a great way to learn both about Raft (which I am very interested in) and about Go (which I peeked at occasionally, but never really studied). Like some of my other posts, this is likely to be a very long and rambling one. For reading the code, I am using LiteIDE, at least for now.

This is what this looks like.

[screenshot: LiteIDE]

I usually like to do a lexicographical read through the codebase, at least at first. That means that in this case, I have to go through the docs first. Probably not a totally bad idea, but a divergence from my usual approach.

Discovery – it looks like etcd handled the problem of initial peer selection by… going to another etcd cluster that handles membership information. There is a SaaS offering, it appears (discovery.etcd.io). I like the recursive nature of that, and obviously you can set it up with a static list of peers to start with.

The first real code file I saw was this one, bench.go:

   1: package main
   2:  
   3: import (
   4:     "flag"
   5:     "log"
   6:     "strconv"
   7:  
   8:     "github.com/coreos/etcd/third_party/github.com/coreos/go-etcd/etcd"
   9: )
  10:  
  11: func write(endpoint string, requests int, end chan int) {
  12:     client := etcd.NewClient([]string{endpoint})
  13:  
  14:     for i := 0; i < requests; i++ {
  15:         key := strconv.Itoa(i)
  16:         _, err := client.Set(key, key, 0)
  17:         if err != nil {
  18:             println(err.Error())
  19:         }
  20:     }
  21:     end <- 1
  22: }
  23:  
  24: func watch(endpoint string, key string) {
  25:     client := etcd.NewClient([]string{endpoint})
  26:  
  27:     receiver := make(chan *etcd.Response)
  28:     go client.Watch(key, 0, true, receiver, nil)
  29:  
  30:     log.Printf("watching: %s", key)
  31:  
  32:     received := 0
  33:     for {
  34:         <-receiver
  35:         received++
  36:     }
  37: }
  38:  
  39: func main() {
  40:     endpoint := flag.String("endpoint", "http://127.0.0.1:4001", "etcd HTTP endpoint")
  41:  
  42:     rWrites := flag.Int("write-requests", 50000, "number of writes")
  43:     cWrites := flag.Int("concurrent-writes", 500, "number of concurrent writes")
  44:  
  45:     watches := flag.Int("watches", 500, "number of writes")
  46:  
  47:     flag.Parse()
  48:  
  49:     for i := 0; i < *watches; i++ {
  50:         key := strconv.Itoa(i)
  51:         go watch(*endpoint, key)
  52:     }
  53:  
  54:     wChan := make(chan int, *cWrites)
  55:     for i := 0; i < *cWrites; i++ {
  56:         go write(*endpoint, (*rWrites / *cWrites), wChan)
  57:     }
  58:  
  59:     for i := 0; i < *cWrites; i++ {
  60:         <-wChan
  61:         log.Printf("Completed %d writes", (*rWrites / *cWrites))
  62:     }
  63: }

I include the entire file here because it is short, and really quite interesting. This is my first time really reading Go code, so I had to go and read some docs. The first interesting thing is in line 11, where we have “end chan int”, which defines a channel of integers. This allows cross-goroutine (for .NET guys, think tasks / TPL; that isn’t actually accurate, but it is close enough) communication, including “waiting” for results.

The write func will write the specified number of requests, then push a number to the channel, signifying that it completed its work. That is a really nice pattern for doing work, considering that there isn’t a way to await a goroutine.

The watch func is a little hard for me to get. Mostly because, as far as I understand, it is setting up a watch for a particular key, then just accumulating the number of changes to it in a local variable, and not doing anything else with it.

The main function is really funny to read. The flag package is a fascinating way to handle parameter parsing, and it shows how much Go was meant to be a server-side language, where command line parsing is really common. The flag package is both powerful and really simple. I think that I’ll probably make use of this approach for configuration in RavenDB.

I love that you define the flag, and then you get a pointer to where that value will be, once you call flag.Parse().

Note that calls to go [expr] are roughly equivalent to Task.Factory.StartNew([expr]) in .NET (not really, but close enough). The syntax <-wChan, for example, means “wait until there is a new value in the channel”. Presumably it also translates to something like “await channel.DequeueAsync()” in C#.

Next was the config package, where etcd is initializing itself. It was interesting to learn that you can have non global flags using the flag package, so you can use the same code for parsing arguments to a method. But that was about it. Nothing exciting there.

I’m skipping the contrib directory because there is no Go code there, and it doesn’t seem relevant for now. I’ll note that there is a mix of shell scripts, Python and json code in there, so I’m feeling good about ignoring that for now.

One thing that I really like in Go is that it is very easy to define “extension methods”. In fact, you usually define structs (data holders), and then you just define a function that takes the struct as its receiver, and you can call it using method syntax. That makes some things very natural and nice. And it also gives you really nice separation between data & behavior.
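A toy example of what that looks like (mine, not etcd code):

// Toy example: a plain struct (data) plus a function with a receiver
// (behavior) gives you method call syntax without classes.
package main

import "fmt"

type Endpoint struct {
	Name string
	URL  string
}

// String is just a function with an Endpoint receiver, callable as e.String().
func (e Endpoint) String() string {
	return fmt.Sprintf("%s (%s)", e.Name, e.URL)
}

func main() {
	e := Endpoint{Name: "node-1", URL: "http://127.0.0.1:4001"}
	fmt.Println(e.String())
}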

I also like the multiple return values option, which gives us a good pattern for reporting errors without getting either crazy syntax or throwing. It also makes it clear when we want to ignore errors. Look at this:

   1: func (d *Discoverer) findPeers() (peers []string, err error) {
   2:     resp, err := d.client.Get(path.Join(d.prefix), false, true)
   3:     if err != nil {
   4:         return nil, err
   5:     }
   6:  
   7:     node := resp.Node
   8:  
   9:     if node == nil {
  10:         return nil, fmt.Errorf("%s key doesn't exist.", d.prefix)
  11:     }

Trying to do this in C, for example, would lead to an explosion of arrowhead return value checks, or the complexities of “return zero for error, then get the actual issue from GetLastError()”. Multiple return values result in much nicer code than that.

The discovery protocol itself is defined in the docs, but the code implementing it is really nice. Being able to piggyback on etcd itself to implement discovery is really nice.

The fixtures directory seems to be filled with cert files, I am not sure what for, but I’ll go directly to the http directory and see what is going on there. And there doesn’t appear to be anything much, so I’m moving on to the next thing that is actually meaningful.

The metrics section is interesting, mostly because we have been doing the same thing in RavenDB currently. But it all depends on an external package, so I’ll skip this for now. The next interesting thing is in the mod folder, where we have the dashboard (html5 app, not interested for me) module, and the leader & lock modules.

I’ll start with the leader module, where we actually have interesting things. The leader module is actually very literally just proxying stuff to the lock module. It is getting a request, translating that request to a lock module HTTP request, and executing that. Personally, I wouldn’t bother with doing this server side; I would handle this entirely client side, or by calling the lock module methods directly, instead of proxying the request to the lock module. I am not sure why this approach was chosen:

   1: // getHandler retrieves the current leader.
   2: func (h *handler) getHandler(w http.ResponseWriter, req *http.Request) error {
   3:     vars := mux.Vars(req)
   4:  
   5:     // Proxy the request to the lock service.
   6:     url := fmt.Sprintf("%s/mod/v2/lock/%s?field=value", h.addr, vars["key"])
   7:     resp, err := h.client.Get(url)
   8:     if err != nil {
   9:         return err
  10:     }
  11:     defer resp.Body.Close()
  12:  
  13:     w.WriteHeader(resp.StatusCode)
  14:     io.Copy(w, resp.Body)
  15:     return nil
  16: }

The lock module is where real stuff is happening. And at the same time, I am not sure at what level exactly this is happening. What appears to be happening is that the lock module, too, is built directly on top of the etcd client, rather than using it directly. This is strange to me, because that isn’t the way I would architect it, but I am guessing that this makes it easier to work with things, having only a single real external API. On the other hand, having a server make HTTP requests to itself seems very strange to me.

One thing that really confused me was a lot of references to things that are actually defined in another repository, the client side of etcd in Go. Another interesting thing is the way Go implements interfaces. Instead of using explicit interface declarations, if a type has all the methods of an interface, it implements that interface.
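A quick toy illustration of that (again, mine, not the etcd code):

// Toy example of implicit interfaces: MemoryStore never says it implements
// Store, but since it has all of Store's methods, it satisfies the interface.
package main

import "fmt"

type Store interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

type MemoryStore struct {
	data map[string]string
}

func (m *MemoryStore) Get(key string) (string, bool) { v, ok := m.data[key]; return v, ok }
func (m *MemoryStore) Set(key, value string)         { m.data[key] = value }

func main() {
	var s Store = &MemoryStore{data: map[string]string{}} // no "implements" declaration needed
	s.Set("name", "etcd")
	fmt.Println(s.Get("name"))
}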

At this point I decided that I wanted a better IDE and spent some time getting IntelliJ to work with Go. It supports this, and you even get some reference tracking. I couldn’t get all of it to work, in particular, external references weren’t tracked, and I didn’t really care to see why, so I just left it:

[screenshot: IntelliJ with the Go plugin]

At any rate, I was reading the lock module code. In particular, I am now looking at the acquire_handler.go file. It has a major function (acquireHandler)* that actually handles the process of acquiring the lock.

* Sidenote, I like the structure of the code so far, in most files, we have one function, and some supporting functions to help it do some work. It is nice, simple and quite easy to follow.

The first thing that is done is syncing the cluster information. This is done by going to any of the machines that we already know about and asking them about the current state of the cluster. We take the first response, and presumably, since we are running server side, the first response would always be from us (assuming that the requests end up going to the leader). So there isn’t another machine-boundary request, but it is still very strange to read it going through so many client operations.

This code is really interesting:

   1:  
   2:     // Setup connection watcher.
   3:     closeNotifier, _ := w.(http.CloseNotifier)
   4:     closeChan := closeNotifier.CloseNotify()
   5:     stopChan := make(chan bool)

In C#, that would be setting up a cancellation token for the request being abandoned by the client or for us completing some work.

   1: // If node exists then just watch it. Otherwise create the node and watch it.           
   2: node, index, pos := h.findExistingNode(keypath, value)                                  
   3: if index > 0 {                                                                          
   4:     if pos == 0 {                                                                          
   5:         // If lock is already acquired then update the TTL.                                   
   6:         h.client.Update(node.Key, node.Value, uint64(ttl))                                    
   7:     } else {                                                                               
   8:         // Otherwise watch until it becomes acquired (or errors).                             
   9:         err = h.watch(keypath, index, nil)                                                    
  10:     }                                                                                      
  11: } else {                                                                                
  12:     index, err = h.createNode(keypath, value, ttl, closeChan, stopChan)                    
  13: }                                                                                       

This is interesting. I am not really able to follow what is going on in the (index > 0) case yet. Let us look at what happens with createNode…

   1: // createNode creates a new lock node and watches it until it is acquired or acquisition fails.
   2: func (h *handler) createNode(keypath string, value string, ttl int, closeChan <-chan bool, stopChan chan bool) (int, error) {
   3:     // Default the value to "-" if it is blank.
   4:     if len(value) == 0 {
   5:         value = "-"
   6:     }
   7:  
   8:     // Create an incrementing id for the lock.
   9:     resp, err := h.client.AddChild(keypath, value, uint64(ttl))
  10:     if err != nil {
  11:         return 0, err
  12:     }
  13:     indexpath := resp.Node.Key
  14:     index, _ := strconv.Atoi(path.Base(indexpath))
  15:  
  16:     // Keep updating TTL to make sure lock request is not expired before acquisition.
  17:     go h.ttlKeepAlive(indexpath, value, ttl, stopChan)
  18:  
  19:     // Watch until we acquire or fail.
  20:     err = h.watch(keypath, index, closeChan)
  21:  
  22:     // Check for connection disconnect before we write the lock index.
  23:     if err != nil {
  24:         select {
  25:         case <-closeChan:
  26:             err = errors.New("user interrupted")
  27:         default:
  28:         }
  29:     }
  30:  
  31:     // Update TTL one last time if acquired. Otherwise delete.
  32:     if err == nil {
  33:         h.client.Update(indexpath, value, uint64(ttl))
  34:     } else {
  35:         h.client.Delete(indexpath, false)
  36:     }
  37:  
  38:     return index, err
  39: }

In line 9, we create a child of the key path. Assuming that the key path is foo, this will create an item with foo/1, foo/2, etc. Effectively an auto incrementing number (with no guarantees on the size of the step, mind).

In line 17 we make sure that we keep this alive, the ttlKeepAlive function is really fun to read:

   1: // ttlKeepAlive continues to update a key's TTL until the stop channel is closed.
   2: func (h *handler) ttlKeepAlive(k string, value string, ttl int, stopChan chan bool) {
   3:     for {
   4:         select {
   5:         case <-time.After(time.Duration(ttl/2) * time.Second):
   6:             h.client.Update(k, value, uint64(ttl))
   7:         case <-stopChan:
   8:             return
   9:         }
  10:     }
  11: }

The C# translation for this would be:

   1: public async Task TtlKeepAlive(string k, string value, int ttl, CancellationToken t)
   2: {
   3:     while(true)
   4:     {
   5:         await Task.Delay(TimeSpan.FromSeconds(ttl / 2.0), t);
   6:         if(t.IsCancellationRequested)
   7:             return;
   8:         client.Update(k, value, ttl);
   9:     }
  10: }

But I really like this Go select syntax. It is very much like Erlang’s pattern matching on receive. At any rate, it seems that the magic happens in the watch method.

   1: // watch continuously waits for a given lock index to be acquired or until lock fails.
   2: // Returns a boolean indicating success.
   3: func (h *handler) watch(keypath string, index int, closeChan <-chan bool) error {
   4:     // Wrap close chan so we can pass it to Client.Watch().
   5:     stopWatchChan := make(chan bool)
   6:     stopWrapChan := make(chan bool)
   7:     go func() {
   8:         select {
   9:         case <-closeChan:
  10:             stopWatchChan <- true
  11:         case <- stopWrapChan:
  12:             stopWatchChan <- true
  13:         case <- stopWatchChan:
  14:         }
  15:     }()
  16:     defer close(stopWrapChan)
  17:  
  18:     for {
  19:         // Read all nodes for the lock.
  20:         resp, err := h.client.Get(keypath, true, true)
  21:         if err != nil {
  22:             return fmt.Errorf("lock watch lookup error: %s", err.Error())
  23:         }
  24:         nodes := lockNodes{resp.Node.Nodes}
  25:         prevIndex := nodes.PrevIndex(index)
  26:  
  27:         // If there is no previous index then we have the lock.
  28:         if prevIndex == 0 {
  29:             return nil
  30:         }
  31:  
  32:         // Watch previous index until it's gone.
  33:         waitIndex := resp.Node.ModifiedIndex
  34:  
  35:         // Since event store has only 1000 histories we should use first node's CreatedIndex if available
  36:         if firstNode := nodes.First(); firstNode != nil {
  37:             waitIndex = firstNode.CreatedIndex
  38:         }
  39:  
  40:         _, err = h.client.Watch(path.Join(keypath, strconv.Itoa(prevIndex)), waitIndex, false, nil, stopWatchChan)
  41:         if err == etcd.ErrWatchStoppedByUser {
  42:             return fmt.Errorf("lock watch closed")
  43:         } else if err != nil {
  44:             return fmt.Errorf("lock watch error: %s", err.Error())
  45:         }
  46:     }
  47: }

I have to admit, this makes my head hurt, just a little bit.

But first, the defer syntax Go has is really nice. It is very similar to C#’s using statements, but it isn’t limited to just a specific interface, and it doesn’t introduce a nesting block.

The goroutine in line 7 is interesting. It will wait for a notification from the close channel, the stop wrap channel or the stop watch channel, and it will forward the close and stop wrap notifications to the stop watch channel. I’m really not sure why this is the case, but let’s go with this for now.

The really interesting work happens in the for loop. We get all the keys in the specified key path. Note that we assume that we only have numeric keys in that “directory”. And we basically try to find if there is any value that is before our value.

The easiest way to think about it is in the same way you do when you wait in line in any government queue. You take a number, and you’re first if there is no one with an earlier number than you.

The interesting bit is how Watch is handled. It is basically going to do a long poll request from the server, and the stopWatchChan is used to notify the Watch method when the user cancelled the request, so we don’t need this any longer. I’m really not sure why there is a need for stopWrapChan, but… at least now I understand what is going on here. We use the numbering system to effectively join a queue. Then we wait until we are at the head of the queue.
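
To make the “take a number and wait” flow concrete, here is a minimal C# sketch of the same idea. The two delegates are hypothetical stand-ins for the key space operations; this is not the etcd client API, just the shape of the algorithm:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    // A sketch of the "take a number, wait for whoever is ahead of you" idea.
    public class LockQueue
    {
        public Func<string, Task<List<int>>> GetQueueNumbers; // read all waiting numbers under the key path
        public Func<string, Task> WaitForDelete;              // long poll until the given key goes away

        public async Task AcquireAsync(string keypath, int myIndex)
        {
            while (true)
            {
                var numbers = await GetQueueNumbers(keypath);

                // The largest number that is still smaller than ours is the one we wait on.
                int prevIndex = numbers.Where(n => n < myIndex).DefaultIfEmpty(0).Max();

                if (prevIndex == 0)
                    return; // no one is ahead of us, the lock is ours

                await WaitForDelete(keypath + "/" + prevIndex);
            }
        }
    }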

Let us go back to the actual acquireHandler routine, more specifically to the findExistingNode() behavior. If we specified a value, we try to find an existing entry in the path that already has this value. If there isn’t a value, we go back to the “take a number, wait” approach. If there is a value, however, I don’t follow the logic. The findExistingNode() function has three return values: the relevant node with the value, the index (the queue #, effectively), and the position of the specified node in the queue.

My problem is that I don’t understand the logic here. We find a node with the same value as we want, then we check that it is the first in the queue? What happens when we have two clients issuing the same request at the same time? I understand what happens, I don’t understand what the intention is.

As an aside, I think that I understand why a lot of the internal work in the lock module is done over the HTTP layer. The idea here is to only handle the distribution once. And if you route everything through the http interface, that would be it. This way, you don’t have to worry about how to handle consistency, or stuff like that. And the idea is that you have a simple HTTP interface for a complex system like locking. My own preference would be to do this entirely client side, with no server side behavior, but that puts a lot of the onus on the clients, and it is easier to implement server side if you have a lot of clients for many environments.

Anyway, I think that I found the two pieces that really interests me:

imageimage

 

Store is probably the on disk storage, something that is very near & dear to my heart. While server is the pieces that I’ll probably learn the most from…

I’ll start with going over the storage stuff, since that is probably the most familiar to me. Here is the interface for the store:

   1: type Store interface {
   2:     Version() int
   3:     CommandFactory() CommandFactory
   4:     Index() uint64
   5:  
   6:     Get(nodePath string, recursive, sorted bool) (*Event, error)
   7:     Set(nodePath string, dir bool, value string, expireTime time.Time) (*Event, error)
   8:     Update(nodePath string, newValue string, expireTime time.Time) (*Event, error)
   9:     Create(nodePath string, dir bool, value string, unique bool,
  10:         expireTime time.Time) (*Event, error)
  11:     CompareAndSwap(nodePath string, prevValue string, prevIndex uint64,
  12:         value string, expireTime time.Time) (*Event, error)
  13:     Delete(nodePath string, recursive, dir bool) (*Event, error)
  14:     CompareAndDelete(nodePath string, prevValue string, prevIndex uint64) (*Event, error)
  15:  
  16:     Watch(prefix string, recursive, stream bool, sinceIndex uint64) (*Watcher, error)
  17:  
  18:     Save() ([]byte, error)
  19:     Recovery(state []byte) error
  20:  
  21:     TotalTransactions() uint64
  22:     JsonStats() []byte
  23:     DeleteExpiredKeys(cutoff time.Time)
  24: }

This is really interesting, because from the interface alone you can see some really interesting things. For example, the Save() method. That doesn’t match any transactional store interface that I can think of. To be fair, it looks like it is used for snapshots more than anything else, but still…

Those appear to be the core elements of the store:

   1: type store struct {                                                     
   2:     Root           *node                                                   
   3:     WatcherHub     *watcherHub                                             
   4:     CurrentIndex   uint64                                                  
   5:     Stats          *Stats                                                  
   6:     CurrentVersion int                                                     
   7:     ttlKeyHeap     *ttlKeyHeap  // need to recovery manually               
   8:     worldLock      sync.RWMutex // stop the world lock                     
   9: }                                                                       
  10:  
  11:  
  12: // node is the basic element in the store system.
  13: // A key-value pair will have a string value
  14: // A directory will have a children map
  15: type node struct {
  16:     Path string
  17:  
  18:     CreatedIndex  uint64
  19:     ModifiedIndex uint64
  20:  
  21:     Parent *node `json:"-"` // should not encode this field! avoid circular dependency.
  22:  
  23:     ExpireTime time.Time
  24:     ACL        string
  25:     Value      string           // for key-value pair
  26:     Children   map[string]*node // for directory
  27:  
  28:     // A reference to the store this node is attached to.
  29:     store *store
  30: }

Again, the ability to return multiple values is really nice, see methods such as:

   1: // Read function gets the value of the node.
   2: // If the receiver node is not a key-value pair, a "Not A File" error will be returned.
   3: func (n *node) Read() (string, *etcdErr.Error) {
   4:     if n.IsDir() {
   5:         return "", etcdErr.NewError(etcdErr.EcodeNotFile, "", n.store.Index())
   6:     }
   7:  
   8:     return n.Value, nil
   9: }

This is quite a lovely way to handle things.

However, looking at the Store directory, I am seeing a lot of stuff about modifying the in memory state, but nothing about persistence. I think that this is all handled via Raft. So I’ll just move on to reading the server side code now.

This is the Server interface:

   1: type Server interface {
   2:     State() string
   3:     Leader() string
   4:     CommitIndex() uint64
   5:     Term() uint64
   6:     PeerURL(string) (string, bool)
   7:     ClientURL(string) (string, bool)
   8:     Store() store.Store
   9:     Dispatch(raft.Command, http.ResponseWriter, *http.Request) error
  10: }

And then we have the actual handling of requests. I chose to look at the simplest thing, just how we process a read request:

   1: func GetHandler(w http.ResponseWriter, req *http.Request, s Server) error {
   2:     vars := mux.Vars(req)
   3:     key := "/" + vars["key"]
   4:  
   5:     // Help client to redirect the request to the current leader
   6:     if req.FormValue("consistent") == "true" && s.State() != raft.Leader {
   7:         leader := s.Leader()
   8:         hostname, _ := s.ClientURL(leader)
   9:  
  10:         url, err := url.Parse(hostname)
  11:         if err != nil {
  12:             log.Warn("Redirect cannot parse hostName ", hostname)
  13:             return err
  14:         }
  15:         url.RawQuery = req.URL.RawQuery
  16:         url.Path = req.URL.Path
  17:  
  18:         log.Debugf("Redirect consistent get to %s", url.String())
  19:         http.Redirect(w, req, url.String(), http.StatusTemporaryRedirect)
  20:         return nil
  21:     }
  22:  
  23:     recursive := (req.FormValue("recursive") == "true")
  24:     sort := (req.FormValue("sorted") == "true")
  25:     waitIndex := req.FormValue("waitIndex")
  26:     stream := (req.FormValue("stream") == "true")
  27:  
  28:     if req.FormValue("wait") == "true" {
  29:         return handleWatch(key, recursive, stream, waitIndex, w, s)
  30:     }
  31:  
  32:     return handleGet(key, recursive, sort, w, s)
  33: }

If we are requiring consistency, and we aren’t the leader, we’ll forward to the leader. Otherwise, we’ll process the request. Let us assume for now that we are doing a simple get, not a watch, this gives us:

   1: func handleGet(key string, recursive, sort bool, w http.ResponseWriter, s Server) error {
   2:     event, err := s.Store().Get(key, recursive, sort)
   3:     if err != nil {
   4:         return err
   5:     }
   6:  
   7:     writeHeaders(w, s)
   8:     b, _ := json.Marshal(event)
   9:     w.Write(b)
  10:     return nil
  11: }
  12:  
  13: func writeHeaders(w http.ResponseWriter, s Server) {
  14:     w.Header().Set("Content-Type", "application/json")
  15:     w.Header().Add("X-Etcd-Index", fmt.Sprint(s.Store().Index()))
  16:     w.Header().Add("X-Raft-Index", fmt.Sprint(s.CommitIndex()))
  17:     w.Header().Add("X-Raft-Term", fmt.Sprint(s.Term()))
  18:     w.WriteHeader(http.StatusOK)
  19: }

As you can see, we are basically just getting the current state from the in memory store, and handle that. It would be more interesting to look at how we handle waiting for a value to change, however:

   1: func handleWatch(key string, recursive, stream bool, waitIndex string, w http.ResponseWriter, s Server) error {
   2:     // Create a command to watch from a given index (default 0).
   3:     var sinceIndex uint64 = 0
   4:     var err error
   5:  
   6:     if waitIndex != "" {
   7:         sinceIndex, err = strconv.ParseUint(waitIndex, 10, 64)
   8:         if err != nil {
   9:             return etcdErr.NewError(etcdErr.EcodeIndexNaN, "Watch From Index", s.Store().Index())
  10:         }
  11:     }
  12:  
  13:     watcher, err := s.Store().Watch(key, recursive, stream, sinceIndex)
  14:     if err != nil {
  15:         return err
  16:     }
  17:  
  18:     cn, _ := w.(http.CloseNotifier)
  19:     closeChan := cn.CloseNotify()
  20:  
  21:     writeHeaders(w, s)
  22:  
  23:     if stream {
  24:         // watcher hub will not help to remove stream watcher
  25:         // so we need to remove here
  26:         defer watcher.Remove()
  27:         for {
  28:             select {
  29:             case <-closeChan:
  30:                 return nil
  31:             case event, ok := <-watcher.EventChan:
  32:                 if !ok {
  33:                     // If the channel is closed this may be an indication of
  34:                     // that notifications are much more than we are able to
  35:                     // send to the client in time. Then we simply end streaming.
  36:                     return nil
  37:                 }
  38:  
  39:                 b, _ := json.Marshal(event)
  40:                 _, err := w.Write(b)
  41:                 if err != nil {
  42:                     return nil
  43:                 }
  44:                 w.(http.Flusher).Flush()
  45:             }
  46:         }
  47:     }
  48:  
  49:     select {
  50:     case <-closeChan:
  51:         watcher.Remove()
  52:     case event := <-watcher.EventChan:
  53:         b, _ := json.Marshal(event)
  54:         w.Write(b)
  55:     }
  56:     return nil
  57: }

The store’s Watch() method is actually interesting, because it exposes some interesting Go concepts (handling full channels, channels for communication, etc). But the important thing is that this will simply wait for a change to happen in the in memory state, and when such a thing happens, it will put a value in the watcher.EventChan channel. So the logic here goes like this:

  • Setup a watch on the in memory state.
  • Wait for a:
    • Change in the items we watch
    • Or user abandoning the request

There is some interesting stuff here regarding one time watches versus streaming watches, but that appears to be quite easy to figure out and follow.

One thing that I can tell you, from my own experience, is that I would actually expect this to have serious issues in production. In particular, web servers can decide that a request takes too long and just abort it (IIS, for example, behaves in this manner), or decide that it timed out. That obviously depends on the server side implementation, and I’m willing to assume that this isn’t the case for whatever HTTP stack etcd uses. Clients, however, are a different story. Most clients would give you the benefit of the doubt, but they would abort the request after a while, usually 15 seconds. That might be okay for some purposes, but especially if you want to handle streaming, that isn’t really going to cut it.

More to the point, for long requests, this can cause issues for proxies, firewalls, etc. They’ll decide that the request is closed, and shut it down even if you handled it on both ends properly. With RavenDB, we have a remarkably similar system, but our streaming notifications also incorporate the idea of heartbeat messages. Those are sent every now and then strictly in order to make sure that you’ll get something client side, and that will make all the infrastructure, client side code, etc much much happier.
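
To give a rough idea of what I mean, here is a minimal sketch of a server side loop that interleaves heartbeats with real events. The tryGetNextEvent delegate is a made up stand-in; this is not the actual RavenDB code, just the shape of the idea:

    using System;
    using System.IO;
    using System.Threading.Tasks;

    // Sketch: keep writing something to a long lived response, either a real
    // notification or an empty heartbeat line, so that proxies, firewalls and
    // client timeouts all see a live connection.
    public static class NotificationStreamer
    {
        public static async Task StreamWithHeartbeats(
            StreamWriter output,
            Func<TimeSpan, Task<string>> tryGetNextEvent, // hypothetical: returns null if nothing showed up in time
            TimeSpan heartbeatInterval)
        {
            while (true)
            {
                var evt = await tryGetNextEvent(heartbeatInterval);
                await output.WriteLineAsync(evt ?? string.Empty); // empty line == heartbeat
                await output.FlushAsync();
            }
        }
    }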

Enough with the small stuff, let us look at how we handle more complex things. I now intend to take a look at the POST handler. POST operations in etcd have the following format:

curl http://127.0.0.1:4001/v2/keys/queue -XPOST -d value=Job1

The idea is that this will create an automatically named key such as queue/15, queue/853, etc. The POST handler is interesting, so here it is in its entirety:

   1:  
   2: func PostHandler(w http.ResponseWriter, req *http.Request, s Server) error {
   3:     vars := mux.Vars(req)
   4:     key := "/" + vars["key"]
   5:  
   6:     value := req.FormValue("value")
   7:     dir := (req.FormValue("dir") == "true")
   8:     expireTime, err := store.TTL(req.FormValue("ttl"))
   9:     if err != nil {
  10:         return etcdErr.NewError(etcdErr.EcodeTTLNaN, "Create", s.Store().Index())
  11:     }
  12:  
  13:     c := s.Store().CommandFactory().CreateCreateCommand(key, dir, value, expireTime, true)
  14:     return s.Dispatch(c, w, req)
  15: }

The CreateCreateCommand just creates a data object that holds the parameters, and then we dispatch it. I’m thinking that we can learn quite a lot from how that works.

Dispatch merely sends the command to the leader. This is the relevant code if we are not the leader:

   1: leader := ps.raftServer.Leader()                                         
   2:                                                                          
   3: // No leader available.                                                  
   4: if leader == "" {                                                        
   5:     return etcdErr.NewError(300, "", s.Store().Index())                     
   6: }                                                                        
   7:                                                                          
   8: var url string                                                           
   9: switch c.(type) {                                                        
  10: case *JoinCommand, *RemoveCommand:                                       
  11:     url, _ = ps.registry.PeerURL(leader)                                    
  12: default:                                                                 
  13:     url, _ = ps.registry.ClientURL(leader)                                  
  14: }                                                                        
  15: uhttp.Redirect(url, w, req)                                              
  16:                                                                          
  17: return nil                                                               

So basically, if there isn’t a leader, we error. That can happen if we have a network split and we are in the minority portion, for example. But usually we’ll just redirect you to the right server to use. But here is the interesting part, where we are the leader, and get to do stuff:

   1: result, err := ps.raftServer.Do(c)
   2: if err != nil {
   3:     return err
   4: }
   5:  
   6: if result == nil {
   7:     return etcdErr.NewError(300, "Empty result from raft", s.Store().Index())
   8: }
   9:  
  10: // response for raft related commands[join/remove]
  11: if b, ok := result.([]byte); ok {
  12:     w.WriteHeader(http.StatusOK)
  13:     w.Write(b)
  14:     return nil
  15: }
  16:  
  17: var b []byte
  18: if strings.HasPrefix(req.URL.Path, "/v1") {
  19:     b, _ = json.Marshal(result.(*store.Event).Response(0))
  20:     w.WriteHeader(http.StatusOK)
  21: } else {
  22:     e, _ := result.(*store.Event)
  23:     b, _ = json.Marshal(e)
  24:  
  25:     w.Header().Set("Content-Type", "application/json")
  26:     // etcd index should be the same as the event index
  27:     // which is also the last modified index of the node
  28:     w.Header().Add("X-Etcd-Index", fmt.Sprint(e.Index()))
  29:     w.Header().Add("X-Raft-Index", fmt.Sprint(s.CommitIndex()))
  30:     w.Header().Add("X-Raft-Term", fmt.Sprint(s.Term()))
  31:  
  32:     if e.IsCreated() {
  33:         w.WriteHeader(http.StatusCreated)
  34:     } else {
  35:         w.WriteHeader(http.StatusOK)
  36:     }
  37: }
  38:  
  39: w.Write(b)
  40:  
  41: return nil

The ps variable is something called a PeerServer, and I haven’t checked it yet. But all this code is basically doing is: “send this to raft to Do something about it, then reply to the caller”. So let us look at what we are actually doing there. The Do method merely calls the send() method, which looks like this:

   1: // Sends an event to the event loop to be processed. The function will wait
   2: // until the event is actually processed before returning.
   3: func (s *server) send(value interface{}) (interface{}, error) {
   4:     event := &ev{target: value, c: make(chan error, 1)}
   5:     s.c <- event
   6:     err := <-event.c
   7:     return event.returnValue, err
   8: }

Personally, I think that this is very interesting, and again, very much like the way you would structure an Erlang system. Of particular interest is the idea of an event loop. That would be the s.c channel, and I assume that it is read by a separate goroutine that is processing work on top of it. We ended up with transaction merging in Voron using pretty much the same system.

The s.c channel is a channel of ev pointers. And ev is defined as:

   1:  
   2: // An internal event to be processed by the server's event loop.
   3: type ev struct {
   4:     target      interface{}
   5:     returnValue interface{}
   6:     c           chan error
   7: }

The interface{} definition is Go’s System.Object, basically. And the error channel is there to mark when we are done processing the event, I assume. I would structure it so we can send a null error as completion, and I bet that this is how it is done.
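
The rough C# equivalent of this event plus error channel pair would be a work item that carries a TaskCompletionSource, with a single consumer draining a queue. This is a sketch of the pattern, not the actual go-raft structure:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Sketch: a single threaded event loop, in the spirit of the s.c channel.
    public class EventLoop
    {
        private class Ev
        {
            public object Target;
            public object ReturnValue;
            public TaskCompletionSource<object> Completion = new TaskCompletionSource<object>();
        }

        private readonly BlockingCollection<Ev> _queue = new BlockingCollection<Ev>();

        // Equivalent of send(): enqueue and wait until the loop has processed the event.
        public Task<object> Send(object value)
        {
            var ev = new Ev { Target = value };
            _queue.Add(ev);
            return ev.Completion.Task;
        }

        // Equivalent of loop(): runs on its own thread, one event at a time.
        public void Run()
        {
            foreach (var ev in _queue.GetConsumingEnumerable())
            {
                ev.ReturnValue = Process(ev.Target);     // apply the command
                ev.Completion.SetResult(ev.ReturnValue); // "null error" == success
            }
        }

        private object Process(object target)
        {
            // Command handling would go here.
            return target;
        }
    }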

I’m currently assuming that this is being read from the loop() method. And while I usually don’t just comment on comments, this one is really nice:

   1:  
   2: //--------------------------------------
   3: // Event Loop
   4: //--------------------------------------
   5:  
   6: //               ________
   7: //            --|Snapshot|                 timeout
   8: //            |  --------                  ______
   9: // recover    |       ^                   |      |
  10: // snapshot / |       |snapshot           |      |
  11: // higher     |       |                   v      |     recv majority votes
  12: // term       |    --------    timeout    -----------                        -----------
  13: //            |-> |Follower| ----------> | Candidate |--------------------> |  Leader   |
  14: //                 --------               -----------                        -----------
  15: //                    ^          higher term/ |                         higher term |
  16: //                    |            new leader |                                     |
  17: //                    |_______________________|____________________________________ |

This comment promises an interesting function to read…

   1:  
   2: func (s *server) loop() {
   3:     defer s.debugln("server.loop.end")
   4:  
   5:     for {
   6:         state := s.State()
   7:  
   8:         s.debugln("server.loop.run ", state)
   9:         switch state {
  10:         case Follower:
  11:             s.followerLoop()
  12:  
  13:         case Candidate:
  14:             s.candidateLoop()
  15:  
  16:         case Leader:
  17:             s.leaderLoop()
  18:  
  19:         case Snapshotting:
  20:             s.snapshotLoop()
  21:  
  22:         case Stopped:
  23:             s.stopped <- true
  24:             return
  25:         }
  26:     }
  27: }

Let us start by looking at the leaderLoop behavior:

   1: func (s *server) leaderLoop() {
   2:     s.setState(Leader)
   3:     logIndex, _ := s.log.lastInfo()
   4:  
   5:     // Update the peers prevLogIndex to leader's lastLogIndex and start heartbeat.
   6:     s.debugln("leaderLoop.set.PrevIndex to ", logIndex)
   7:     for _, peer := range s.peers {
   8:         peer.setPrevLogIndex(logIndex)
   9:         peer.startHeartbeat()
  10:     }
  11:  
  12:     // Commit a NOP after the server becomes leader. From the Raft paper:
  13:     // "Upon election: send initial empty AppendEntries RPCs (heartbeat) to
  14:     // each server; repeat during idle periods to prevent election timeouts
  15:     // (§5.2)". The heartbeats started above do the "idle" period work.
  16:     go s.Do(NOPCommand{})
  17:  
  18:     // Begin to collect response from followers
  19:     for s.State() == Leader {
  20:         var err error
  21:         select {
  22:         case e := <-s.c:
  23:             if e.target == &stopValue {
  24:                 // Stop all peers before stop
  25:                 for _, peer := range s.peers {
  26:                     peer.stopHeartbeat(false)
  27:                 }
  28:                 s.setState(Stopped)
  29:             } else {
  30:                 switch req := e.target.(type) {
  31:                 case Command:
  32:                     s.processCommand(req, e)
  33:                     continue
  34:                 case *AppendEntriesRequest:
  35:                     e.returnValue, _ = s.processAppendEntriesRequest(req)
  36:                 case *AppendEntriesResponse:
  37:                     s.processAppendEntriesResponse(req)
  38:                 case *RequestVoteRequest:
  39:                     e.returnValue, _ = s.processRequestVoteRequest(req)
  40:                 }
  41:             }
  42:  
  43:             // Callback to event.
  44:             e.c <- err
  45:         }
  46:     }
  47:  
  48:     s.syncedPeer = nil
  49: }

To be perfectly frank, this is really good code. I am loving the structure here. It is really fun to go through and figure out. And the code follows the Raft paper really closely. And… it appears that at some point I actually moved off the etcd code and into the go-raft codebase. I think that I’ll skip the Raft stuff for this blog post, it is long enough already, and just focus on the etcd stuff for now.

The part that we really care for this blog post about is the processCommand call:

   1: // Processes a command.
   2: func (s *server) processCommand(command Command, e *ev) {
   3:     s.debugln("server.command.process")
   4:  
   5:     // Create an entry for the command in the log.
   6:     entry, err := s.log.createEntry(s.currentTerm, command, e)
   7:  
   8:     if err != nil {
   9:         s.debugln("server.command.log.entry.error:", err)
  10:         e.c <- err
  11:         return
  12:     }
  13:  
  14:     if err := s.log.appendEntry(entry); err != nil {
  15:         s.debugln("server.command.log.error:", err)
  16:         e.c <- err
  17:         return
  18:     }
  19:  
  20:     s.syncedPeer[s.Name()] = true
  21:     if len(s.peers) == 0 {
  22:         commitIndex := s.log.currentIndex()
  23:         s.log.setCommitIndex(commitIndex)
  24:         s.debugln("commit index ", commitIndex)
  25:     }
  26: }

In createEntry, we effectively serialize the command into JSON, and appendEntry writes it to a file. (I finally found the serialization format, it is JSON for the commands, wrapped in a protobuf envelope.) As an aside, if this was C# code, I would be very worried about the cost of all those allocations. The data is first moved to a JSON buffer, then into a protocol buffer entry, where it is marshaled into another buffer, and only then is it written to the actual file. That is pretty prevalent in the codebase, to be honest. But again, this is Raft stuff that is going on, not etcd stuff. So we’ll ignore this for now and try to see where we actually get to apply the command against our own internal state.

I had to go through some hoops to figure it out. In particular, commands are applied during recovery, or when we are actually committing the state following a quorum, and this is happening in the Log.ApplyFunc, which is set up externally, and… Anyway, what we actually do is this:

   1: // Apply command to the state machine.                                 
   2: switch c := c.(type) {                                                 
   3: case CommandApply:                                                     
   4:     return c.Apply(&context{                                              
   5:         server:       s,                                                     
   6:         currentTerm:  s.currentTerm,                                         
   7:         currentIndex: s.log.internalCurrentIndex(),                          
   8:         commitIndex:  s.log.commitIndex,                                     
   9:     })                                                                    
  10: case deprecatedCommandApply:                                           
  11:     return c.Apply(s)                                                     
  12: default:                                                               
  13:     return nil, fmt.Errorf("Command does not implement Apply()")          
  14: }                                                                      

And that goes all the way back to the CreateCommand’s Apply function, which does:

   1: // Create node                                                                                      
   2: func (c *CreateCommand) Apply(context raft.Context) (interface{}, error) {                          
   3:     s, _ := context.Server().StateMachine().(store.Store)                                              
   4:                                                                                                     
   5:     e, err := s.Create(c.Key, c.Dir, c.Value, c.Unique, c.ExpireTime)                                  
   6:                                                                                                     
   7:     if err != nil {                                                                                    
   8:         log.Debug(err)                                                                                    
   9:         return nil, err                                                                                   
  10:     }                                                                                                  
  11:                                                                                                     
  12:     return e, nil                                                                                      
  13: }                                                                                                   
  14:                                                                                                     

So, basically, we have Raft that does the hard work of getting a Quorum, persistence, etc.  The etcd server is responsible for the in memory state, defining commands, etc.

The really interesting part from my perspective is that we need to process erroneous entries as well, in the same manner. For example, let us say that I want to create a new entry, but only if it isn’t already there. The way it works, even when the request is going to end in an error, I have to run it through Raft, get a consensus that we can apply this command, and then we apply the command, see that it is wrong, and return an error. That error leaves no state changes, but it still had to go through the Raft process, it is going to be in the log forever, etc. I’m guessing that the percentage of erroneous commands is low, to be able to tolerate that.

And, at any rate, that pretty much concludes my review of etcd. It comes to about 20 pages or so, by my math, and that is quite enough. On the other hand, it might have been 7 posts instead. I would really like to get some feedback on which option you like more, dear reader.

Next, I’m going to go over go-raft, I have some thoughts about this, but I’ll keep them for my next post.

As a side note, I am not, by any means, an experienced Go developer. I hadn’t even read Go code beyond Hello World before starting to read the etcd codebase. But I can tell you that this is a very nice codebase to look at. It is clear, nicely laid out, and it is possible to go through everything and understand what is going on easily.

Usage conventions for using Voron

As we are gearing up to do more & more stuff in Voron, it occurred to me that while we have settled on a good technological system for it, we haven’t settled on a real set of conventions for real use. We’re probably going to see a lot of use in Voron, and we want to see some consistency and best practices there.

Voron is a key/value store that expose a sorted tree abstraction. You can have as many trees as you would like. And the keys & values are both arbitrary byte strings. Given that, let us try to bring some order to the mix.

Don’t use the root tree

The root tree should be reserved for handling other trees, and not for data of any kind. The only case where using the root tree is fine is if you don’t have any other trees. That tends to be a rare occasion, though. See the next topic.

Have a $metadata tree

Always have a $metadata tree that gives you information about the actual database you are using. For example, you’ll want things like the storage version, the database id (always a guid, to be able to tell dbs from their backups), etc.
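
For example, the entries might look something like the following. The Add(key, value) shape shown here is only schematic, not the exact Voron tree API:

    // Schematic only; the real calls take slices / streams rather than raw strings.
    metadataTree.Add("storage-version", "1");
    metadataTree.Add("db-id", Guid.NewGuid().ToString()); // set once, when the database is created
    metadataTree.Add("created-at", DateTime.UtcNow.ToString("o"));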

Alpha numeric values only for tree names, please

You can use any value as a tree name, but you really want to limit yourself to the printable ASCII set. This is recommended because dumping the data to any other format (think debugging) would be greatly enhanced if you can actually read the tree names.

Use @tree for super trees

It is very common to have trees where all the data is handled via MultiAdd, MultiRead, etc. We call such trees super trees (they are trees that contain trees, also see big table). While they are usually used for indexing, there are many cases where you want to do that for things like queues, general run of the mill data (this is great for holding edges in a graph, for example), etc.

Prefix ix_ for all indexing trees

I’m not so sure about this, but it is worth mentioning. The purpose here is to distinguish between standard trees and trees that can be rebuilt from scratch if needed. That can be of help in diagnostics mode, for example.

Dynamic trees should be explained

It is very easy to create trees in Voron. But like files, you don’t just create them for no purpose. A tree costs 4KB (minimum), and more importantly, if you are looking at your storage, you want to be able to make sense of things. If you are using a tree as a queue for a particular destination, make sure that you name it appropriately.

Alternatively, use the $metadata tree to keep track of what goes where.

Avoid non printable key names

Just like with tree names, you can use any byte string in a key name, but if you want to be able to read debug data or run diagnostics, it would really help if you could actually look at the data.

Remember, sequential writes are best

Voron will deal with random writes just fine, but writes will be far faster if you arrange things so they are sequential. It is fine to have a random write once in a while, but try to keep things sequential if at all possible. On that note, sequential for us means increasing, all of our optimizations assume that. Decreasing sequential data is currently not as optimized.

Write at end, delete at start

This is a common operation you need for queues. It is generally better to do that by writing the data at the end and removing from the start. If you can, avoid deleting stuff all over the place. Again, that works just fine, but we’ve optimized Voron to handle this scenario very well.
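
A schematic sketch of the pattern (the SortedTree type and its Add, FirstKey, Read and Delete methods are made up for the example, this is not the actual Voron API):

    // Schematic queue on top of a sorted key/value tree.
    public class TreeQueue
    {
        public void Enqueue(SortedTree queueTree, long lastId, byte[] payload)
        {
            // Write at the end: keys only ever increase, zero padded so that
            // lexical order matches numeric order.
            long nextId = lastId + 1;
            queueTree.Add(nextId.ToString("D19"), payload);
        }

        public byte[] TryDequeue(SortedTree queueTree)
        {
            // Delete at the start: consume the smallest key first.
            string firstKey = queueTree.FirstKey();
            if (firstKey == null)
                return null;

            byte[] work = queueTree.Read(firstKey);
            queueTree.Delete(firstKey);
            return work;
        }
    }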

Keep the data simple

Voron does a lot with memory mapping. If the data you use can be read directly, you can literally just access it off our own buffer, with no copy required at all.

And that is all I have for now…

Tags:

Published at

Originally posted at

Distributed counters feature design

This is another experiment with longer posts.

Previously, I used the time series example as the bed on which to test some ideas regarding feature design, to explain how we work and in general work out the rough patches along the way. I should probably note that these posts are purely fiction at this point. We have no plans to include a time series feature in RavenDB at this time. I am trying to work out some thoughts in the open and get your feedback.

At any rate, yesterday we had a request for Cassandra style counters on the mailing list. And as long as I am doing a feature design series, I thought that I could talk about how I would go about implementing this. Again, consider this fiction, I have no plans of implementing this at this time.

The essence of what we want is to be able to… count stuff. Efficiently, in a distributed manner, with optional support for cross data center replication.

Very roughly, the idea is to have “sub counters”, unique for every node in the system. Whenever you increment the value, we log this to our own sub counter, and then replicate it out. Whenever you read it, we just sum all the data we have from all the sub counters.
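
Conceptually, that boils down to something like the following in-memory model (a sketch of the idea, not the proposed storage):

    using System.Collections.Generic;
    using System.Linq;

    // Sketch: every node keeps its own sub counter; a read just sums whatever we have.
    public class DistributedCounter
    {
        // source (node id) -> that node's portion of the counter
        private readonly Dictionary<string, long> _subCounters = new Dictionary<string, long>();

        public void LocalIncrement(string self, long change)
        {
            long current;
            _subCounters.TryGetValue(self, out current);
            _subCounters[self] = current + change;
        }

        public void ReplicatedUpdate(string source, long value)
        {
            // Replication carries the full value of the source's sub counter,
            // so overwriting is safe.
            _subCounters[source] = value;
        }

        public long Read()
        {
            return _subCounters.Values.Sum();
        }
    }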

Let us outline the various parts of the solution in the same order as the one I used for time series.

Storage

A counter is just a named 64 bit signed integer. A counter name can be any string of up to 128 printable characters. The external interface of the storage would look like this:

   1: public struct CounterIncrement
   2: {
   3:     public string Name;
   4:     public long Change;
   5: }
   6:  
   7: public struct Counter
   8: {
   9:     public string Name;
  10:     public string Source;
  11:     public long Value;
  12: }
  13:  
  14: public interface ICounterStorage
  15: {
  16:     void LocalIncrementBatch(CounterIncrement[] batch);
  17:  
  18:     Counter[] Read(string name);
  19:  
  20:     void ReplicatedUpdates(Counter[] updates);
  21: }

As you can see, this gives us very simple interface for the storage. We can either change the data locally (which modify our own storage) or we can get an update from a replica about its changes.

There really isn’t much more to it, to be fair. LocalIncrementBatch() increments a local value, and Read() will return all the values for a counter. There is a little bit of trickery involved in how exactly one would store the counter values.

For now, I think we’ll store each counter as two step values. We’ll have a tree of multi tree values that will carry each value from each source. That means that a counter will take roughly 4KB or so. This is easy to work with and nicely fits the model Voron uses internally.

Note that we’ll outline additional requirements for storage (searching for a counter by prefix, iterating over counters, addresses of other servers, stats, etc.) below. I’m not showing them here because they aren’t the major issue yet.

Over the wire

Skipping out on any optimizations that might be required, we will expose the following endpoints:

  • GET /counters/read?id=users/1/visits&users/1/posts <—will return json response with all the relevant values (already summed up).
    { “users/1/visits”: 43, “users/1/posts”: 3 }
  • GET /counters/read?id=users/1/visits&users/1/posts&raw=true <—will return json response with all the relevant values, per source.
    { “users/1/visits”: {“rvn1”: 21, “rvn2”: 22 } , “users/1/posts”:  { “rvn1”: 2, “rvn3”: 1 } }
  • POST /counters/increment <– allows incrementing counters. The request is a json array of counter name and change pairs (see the example below).
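
For example, an increment request could look something like this (the host and port are made up for the illustration):

curl http://localhost:8080/counters/increment -XPOST -d '[{ "Name": "users/1/visits", "Change": 4 }, { "Name": "users/1/posts", "Change": 1 }]'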

For a real system, you’ll probably need a lot more stuff, metrics, stats, etc. But this is the high level design, so this would be enough.

Note that we are skipping the high performance stream based writes we outlined for time series. We probably won’t need them, so that doesn’t matter, but they are an option if we do.

System behavior

This is where it is really not interesting, there is very little behavior here, actually. We only have to read the data from the storage, sum it up, and send it to the user. Hardly what I’ll call business logic.

Client API

The client API will probably look something like this:

   1: counters.Increment("users/1/posts");
   2: counters.Increment("users/1/visits", 4);
   3:  
   4: using(var batch = counters.Batch())
   5: {
   6:     batch.Increment("users/1/posts");
   7:     batch.Increment("users/1/visits",5);
   8:     batch.Submit();
   9: }

Note that we’re offering both batch and single APIs. We’ll likely also want to offer a fire & forget style, which will be able to offer even better performance (because it could do batching across more than a single thread), but that is out of scope for now.
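
To illustrate why the fire & forget style can batch better across threads, here is a sketch of a client side accumulator. All the names are made up for the example, and the actual wire call is left out:

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Sketch: increments from any thread go into a queue; a single background
    // task drains the queue and ships one merged batch per round trip.
    public class FireAndForgetCounters
    {
        private readonly ConcurrentQueue<KeyValuePair<string, long>> _pending =
            new ConcurrentQueue<KeyValuePair<string, long>>();

        public void Increment(string name, long change = 1)
        {
            _pending.Enqueue(new KeyValuePair<string, long>(name, change));
        }

        // Called periodically by a background task.
        public async Task FlushAsync()
        {
            var batch = new Dictionary<string, long>();
            KeyValuePair<string, long> item;
            while (_pending.TryDequeue(out item))
            {
                long current;
                batch.TryGetValue(item.Key, out current);
                batch[item.Key] = current + item.Value; // merge increments to the same counter
            }
            if (batch.Count > 0)
                await SendIncrementBatch(batch); // hypothetical POST to /counters/increment
        }

        private Task SendIncrementBatch(Dictionary<string, long> batch)
        {
            // The actual wire call is out of scope for the sketch.
            return Task.FromResult(0);
        }
    }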

For simplicity’s sake, we are going to have the client be just a container for all the endpoints that it knows about. The container would be responsible for… updating the client visible topology, selecting the best server to use at any given point, etc.

User interface

There isn’t much to it. Just show the counter values in a list. Allow searching by prefix, allow diving into a particular counter to read its raw values, but that is about it. Oh, and allow deleting a counter.

Deleting data

Honestly, I really hate deletes. They are very expensive to handle properly the moment you have more than a single node. In this case, there is an inherent race condition between a delete going out and another node getting an increment. And then there is the issue of what happens if you had a node down when you did the delete, etc.

This just sucks. Deletions are handled normally (with the race condition caveat, obviously), and I’ll discuss how we replicate them in a bit.

High availability / scale out

By definition, we actually don’t want to have storage replication here. Either log shipping or consensus based. We actually do want to have different values, because we are going to be modifying things independently on many servers.

That means that we need to do replication at the database level. And that leads to some interesting questions. Again, the hard part here is the deletes. Actually, the really hard part is what we are going to do with the New Server Problem.

The New Server Problem dictates how we are going to bring a new server into the cluster. If we could fix the size of the cluster, that would make things a lot easier. However, we are actually interested in being able to dynamically grow the cluster size.

Therefore, there are only two real ways to do it:

  • Add a new empty node to the cluster, and have it be filled from all the other servers.
  • Add a new node by backing up an existing node, and restoring as a new node.

RavenDB, for example, follows the first option. But it means that it needs to track a lot more information. The second option is actually a lot simpler, because we don’t need to care about keeping old data around.

However, this means that the process of bringing up a new server would now be:

  1. Update all nodes in the cluster with the new node address (node isn’t up yet, replication to it will fail and be queued).
  2. Backup an existing node and restore at the new node.
  3. Start the new node.

The order of steps is quite important. And it would be easy to get it wrong. Also, on large systems, backup & restore can take a long time. Operationally speaking, I would much rather just be able to do something like, bring a new node into the cluster in “silent” mode. That is, it would get information from all the other nodes, and I can “flip the switch” and make it visible to clients at any point in time.  That is how you do it with RavenDB, and it is an incredibly powerful system, when used properly.

That means that for all intents and purposes, we don’t do real deletes. What we’ll actually do is replace the counter value with a delete marker. This turns deletes into a much simpler “just another write”. It has the sad implication of not freeing disk space on deletes, but deletes tend to be rare, and it is usually fine to add a “purge” admin option that can be run on an as needed basis.

But that brings us to an interesting issue, how do we actually handle replication.

The topology map

To simplify things, we are going to go with one way replication from one node to another. That allows complex topologies like master-master, cluster-cluster, replication chain, etc. But in the end, this is all about a single node replicating to another.

The first question to ask is, are we going to replicate just our local changes, or are we going to have to replicate external changes as well? The problem with replicating external changes is that you may have the following topology:

image

Now, Server A got a value and sent it to Server B. Server B then forwarded it to Server C. However, at that point, we also have the value from Server A replicated directly to Server C. Which value is it supposed to pick? And what about a scenario where you have a more complex topology?

In general, because in this type of system we can have any node accept writes, and we actually desire this to be the case, we don’t want this behavior. We want to replicate only local data, not all the data.

Of course, that leads to an annoying question, what happens if we have a 3 node cluster, and one node fails catastrophically. We can bring a new node in, and the other two nodes will be able to fill in their values via replication, but what about the node that is down? The data isn’t gone, it is still right there in the other two nodes, but we need a way to pull it out.

Therefore, I think that the best option would be to say that nodes only replicate their local state, except in the case of a new node. A new node will be told the address of an existing node in the cluster, at which point it will:

  • Register itself in all the nodes in the cluster (discoverable from the existing node). This assumes a standard two way replication link between all servers; if this isn’t the case, the operators would have the responsibility to set up the actual replication semantics on their own.
  • The new node now starts getting updates from all the nodes in the cluster. It keeps them in a log for now, not doing anything yet.
  • Ask that node for a complete update of all of its current state.
  • When it has all the complete state of the existing node, it replays all of the remembered logs that it didn’t have a chance to apply yet.
  • Then it announces that it is in a valid state to start accepting client connections.

Note that this process is likely to be very sensitive to high data volumes. That is why you’ll usually want to select a backup node to read from, and that decision is an ops decision.

You’ll also want to be able to report extensively on the current status of the node, since this can take a while, and ops will be watching this very closely.

Server Name

A node requires a unique name. We can use guids, but those aren’t readable; we can use machine name + port, but those can change. Ideally, we can require the user to set us up with a unique name. That is important for readability and for being able to later see all the values we have in all the nodes. It is important that names are never repeated, so we’ll probably have a guid there anyway, just to be on the safe side.

Actual Replication Semantics

Since we have the New Server Problem down to an automated process, we can choose the drastically simpler model of just having an internal queue per replication destination. Whenever we make a change, we also make a note of it in the queue for each destination, then we start an async replication process to that server, sending all of our updates there.

It is always safe to overwrite data using replication, because we are overwriting our own data, never anyone else’s.
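
A sketch of that per destination queue, reusing the Counter struct from the storage interface above; the actual wire transfer is abstracted away:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Sketch: every local change is noted once per replication destination,
    // and an async loop per destination pushes the changes out.
    public class ReplicationDestination
    {
        private readonly ConcurrentQueue<Counter> _queue = new ConcurrentQueue<Counter>();

        public void NoteChange(Counter updated)
        {
            _queue.Enqueue(updated);
        }

        public async Task ReplicateAsync()
        {
            Counter counter;
            while (_queue.TryDequeue(out counter))
            {
                // Safe to overwrite on the other side: we only ever send values
                // from our own sub counter, never someone else's.
                await SendUpdate(counter); // hypothetical HTTP call to the destination
            }
        }

        private Task SendUpdate(Counter counter)
        {
            return Task.FromResult(0);
        }
    }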

And… that is about it, actually. There are probably a lot of details that I am missing / would discover if we were to actually implement this. But I think that this gives a pretty good idea of what this feature is about.

Published at

Originally posted at

Time series feature design: Storage replication & the bee’s knees

Being able to handle replication at the storage level is a really nice feature to have. More than that, it is a feature that can be broadly applied. But… a database is a lot more than just storage. And being able to just move the data around between machines is nice, but there are other things we have to take into account.

In particular, when we replicate via storage changes, we don’t have a good way to take actions on changes. Most of the time, that means that we can’t actually rely on internal caches, and would probably have to deal with that somehow in another fashion. But there is usually secondary processing that is done on a node that would have to be accounted for.

For example, let us assume that we had the ability to replicate RavenDB (docs) changes between machines using storage replication. The problem here is that we would be replicating the documents, but not the indexes, and when we do that, we would need to index the changed documents on the destination node. However, that would actually require two data stores, one for the actual documents data, and one for all of the non replicated data (indexing, stats, etc).

In other words, I think that such a database would have to be designed specifically for that scenario.

In addition to that, it would probably be best for the storage replication to also be annotated with information for higher level code. So if you modify this range in the file, you’ll also know that you need to drop the following documents from the cache.

Published at

Originally posted at

Tweaking xUnit

One of the interesting challenges that we have with RavenDB is the number and duration of our tests.

In particular, we currently have over three thousand tests, and they take hours to run. We are doing a lot of stuff there: “let us insert a million docs, write a map/reduce index, query on that, then do a mass update, see what happens”, etc. We are also doing a lot of stuff that really can’t be emulated easily. If I’m testing replication to a non existent target, I need to check the actual behavior, etc. Oh, and we’re probably doing silly stuff in there, too.

In order to try to shorten our feedback cycle, I made some modifications to xUnit. It is now going to record the duration of the tests; the results look like this:

image

You can see that Troy is taking too long. In fact, there is a bug that those tests currently expose that results in a timeout exception; that is why they take so long.

But this is just to demonstrate the issue. The real power here is that we also use this when deciding how to run the tests. We simply sort them by how long they took to run. If we don’t have a record for that, we’ll give them a run time of –1.
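
The ordering itself is about as simple as it sounds; this is a sketch of the idea, not the actual change in our xUnit fork:

    using System.Collections.Generic;
    using System.Linq;

    // Sketch: order tests so that unknown (-1) and fast tests run first.
    public static class TestOrdering
    {
        public static IEnumerable<string> OrderTests(
            IEnumerable<string> testNames,
            IDictionary<string, double> recordedDurations) // only passing tests get a recorded duration
        {
            return testNames.OrderBy(name =>
            {
                double duration;
                return recordedDurations.TryGetValue(name, out duration) ? duration : -1;
            });
        }
    }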

This has a bunch of interesting implications:

  • The faster tests are going to run first. That means that we’ll have earlier feedback if we broke something.
  • The new tests (which never had a chance to run) will run first; those are where the problems are more likely to be anyway.
  • We only record this for passing tests, which means that we are going to run failed tests first as well.

In addition to that, this will also give us better feedback on what our slow tests are, so we can actually give them some attention and see if they are really required to be slow or whether they can be fixed.

Hopefully, we can find a lot of the tests that are long, and just split them off into a separate test project, to be run at a later time.

The important thing is, now we have the information to handle this.

What does the test say?

Because RavenDB is a database, a lot of the tests we have to run are pretty long. For example, we need to touch the disk a lot, and we have a lot of networked tests.

That means that running this test suite can take a while. But the default information we get is pretty lousy, just the test count and that is it. And when a test hangs, and they do when we have bugs, it makes it very hard to figure out where the culprit is.

So we forked xunit and added a tiny feature to the console runner:

image

Time series feature design: The Consensus has dRafted a decision

So, after reaching the conclusion that replication is going to be hard, I went back to the office and discussed those challenges and was in general pretty annoyed by it. Then Michael made a really interesting suggestion. Why not put it on RAFT?

And once he explained what he meant, I really couldn’t hold my excitement. We now have a major feature for 4.0. But before I get excited about that (we’ll only be able to actually start working on that in a few months, anyway), let us talk about what the actual suggestion was.

Raft is a consensus algorithm. It allows a distributed set of computers to arrive at a mutually agreed upon set of sequential log records. Hm… I wonder where else we can find sequential log records, and yes, I am looking at you, Voron.Journal.

The basic idea is that we can take the concept of log shipping, but instead of having a single master/slave relationship, we change things so we can put Raft in the middle. When committing a transaction, we’ll hold off on the commit until we have Raft consensus that it should be committed. The advantage here is that we won’t be constrained any longer by the master/slave issue. If a server is down, we can still process requests (we may need to elect a new cluster leader, but that is about it).
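
The commit flow would then look roughly like this. This is a sketch under the assumption of some Raft library exposing a propose-and-wait-for-quorum call; the interface and method names are made up:

    using System;
    using System.Threading.Tasks;

    // Sketch only: IRaftProposer and ApplyToLocalJournal are stand-ins, not a real API.
    public interface IRaftProposer
    {
        Task<bool> ProposeAsync(byte[] entry); // resolves once a quorum (N/2+1) accepted the entry
    }

    public class ConsensusCommit
    {
        private readonly IRaftProposer _raft;
        public ConsensusCommit(IRaftProposer raft) { _raft = raft; }

        public async Task CommitAsync(byte[] journalEntry)
        {
            // Hold the commit until the cluster has agreed on the entry.
            if (!await _raft.ProposeAsync(journalEntry))
                throw new InvalidOperationException("No quorum, transaction aborted");

            ApplyToLocalJournal(journalEntry); // only now does the change become durable locally
        }

        private void ApplyToLocalJournal(byte[] entry)
        {
            // Writing to the Voron journal is out of scope for this sketch.
        }
    }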

That means that from an architectural standpoint, we’ll have the ability to process write requests as long as we have a quorum (N/2+1). That is a pretty standard requirement for distributed databases, so that is perfectly fine.

That is a pretty awesome thing to have, to be honest, and more importantly, this is happening at the low level storage layer. That means that we can apply this behavior not just to a single database solution, but to many database solutions.

I’m pretty excited about this.

Shiny features in the depth: New index optimization

One of the nice features in RavenDB 3.0 is optimizing the process of creating a new index. In particular, we want to optimize it when you create a new index on a small collection in a large database.

If you have a small database, you don’t care, it is going to complete quickly anyway. If you are creating an index on a collection that composes a significant portion of the documents in the database, you don’t care, you are going to have to do a lot of work anyway. But the common case for a big database is that you usually have one very big collection, and much smaller collections for everything else.

In RavenDB 2.x, you still have to pay the full price for indexing everything, but that isn’t the case in RavenDB 3.0. What we have done is to effectively optimize the process so that in this case, we will preload all of the documents taking part in the relevant collection, and send them directly to be indexed.

We do this by utilizing the Raven/DocumentsByEntityName index, which has already indexed everything in the database anyway. This is a nice little feature, because it allows us to really take advantage of work we already did long ago. Using one index to pre-populate another is a neat trick, and one that I am very happy about.

Because this is a new code path, it also means that it is actually executed outside of standard indexing. And that in turn means that adding a new index will not impact other indexes at all.

This is a small feature, but it does address a common pain point our users have with working in RavenDB in production.

Reminder, we have our upcoming RavenDB Conference in April, where we’ll discuss other stuff in the 3.0 release.

Tags:

Published at

Originally posted at

Comments (8)

RavenDB 3.0 Status Update

We are gearing up for the RavenDB Conference in April, and we just released a private alpha preview sneak peek to a few external people. We have been working on RavenDB 3.0 for the past 18 months or so; some bits in it actually date from 2011(!) and are only now making it into production. That is a lot of work, and it is easy to get lost in what is going on there.

So, without further ado, here are the major highlights of RavenDB 3.0…

Indexing

Yes, we did more work on improving indexing performance. But that is actually secondary. What we really focused on in this release are operational indexing concerns.

What does this mean? Well, to start with, internally we don’t use the index name any longer. That means that we can do silly things like make index deletes async. Users that have large indexes, especially map/reduce indexes or indexes with LoadDocument calls, found that deleting an index can take a very long while. Now this is no longer the case; we are able to immediately delete the index and do the actual cleanup in the background.

For that matter, another operational concern people have is the introduction of new indexes. Especially if the index in question covers just a small part of a very large database, that used to take a very long time. The index would have to go through all the documents in the database to complete indexing. Now, we are able to make several optimizations that mean we can take just the relevant documents, and complete indexing much more quickly in this scenario.

Introducing a new index is now also split entirely from the other indexes while it is running, so you won’t have to deal with a new index slowing down the other indexes. Nor will big deletes impact indexing performance in this manner any longer; we have much better interleaving of that now.

Still on an operational bent, we now have much better reporting on what is actually being indexed. You can see what the indexes are doing, and act accordingly.

From a development point of view, we have added some nice things. The ability to index an attachment, so you can index big text without it residing in documents. We have also added better situational awareness to the indexing code. We have had some people doing… funny things there. Indexes that were producing hundreds of index entries for every document they indexed, and then we had to deal with the associated performance problems. We can now detect and warn about that, and we let the user specify the valid limits on a per index basis.

There is other stuff in indexing, but I want to go over the rest of what we did for 3.0…

RavenFS

RavenFS was created because RavenDB’s attachments are nice, but they aren’t nearly good enough. We have a lot of users who want to use attachments, then find that they can’t see them, search on them in meaningful ways, etc. More importantly, attachments are limited because we intended them to be of relatively small size. Users really asked us for something better, and RavenFS is the answer.

We are talking about a replicated file system, which supports very large files as well as all the facilities to work with and manage them. It was explicitly designed to be geo-distributed, and it can drastically decrease the network load on systems that need to share very large files that change frequently.

I’m going to talk about RavenFS quite a bit in my keynote at the RavenDB Conference, but I think that it is really cool and there are some very nice use cases for it. Just for fun, it has been used in production by several customers for the past 2 years, so we already have some great experience with it.

JVM Client API

A fully functional JVM client opens up more fronts with regard to who can make use of RavenDB. We already have people building applications using it, and we intend to have more clients for additional platforms after 3.0 ships.

Internal changes

There has been a lot of that, actually. But one of the most important changes is that we are now hosting RavenDB on top of OWIN and Web API. The change, from our own HTTP server, was done to give our users a better foundation for understanding how RavenDB works internally, and to encourage contributions. It also allows us to do some nice things, like have an endpoint that documents all the endpoints in a RavenDB server.

Another important change is that we are moving away from the Silverlight studio in favor of a brand new HTML5 studio. That is quite exciting, especially because the performance and responsiveness of the system as a whole become so much better. And it doesn’t hurt that we don’t have to deal with the complexities of Silverlight.

Voron

This one probably got the most attention, but hopefully it will be the least noticed once you actually get the bits. Voron is our new storage engine. In fact, it is more than that, it is a new way for us to store data, which we use in RavenDB to store our transactional data. The reason it is more than that is that it isn’t limited to what RavenDB currently needs to do. It can do much more, and we are already in the process of doing quite a lot more with it than one might suspect from the outside. I’ll leave that for later, and just talk about what we have already done.

RavenDB now has the option of running on Voron. In fact, we have tested it with the entire RavenDB test suite (over 3,000 tests) and it passes with flying colors.

Voron will allow us to run on Linux at some point, but more importantly it allows us to very carefully tune our storage usage and get a much better understanding of how we are actually doing things. We expect to be able to do some really nice things with it, and it has already shown itself to be competitive with Esent in terms of performance.

Operations

There is always that, isn’t there? And there is a reason why it is last, but never least, in this list. (Try to say that repeatedly, fast Smile).

We kicked out performance counters, which caused no end of operational headaches (corrupted counters, permission issues, hanging, etc.), in favor of an internal metrics library. Because it is internal, we are able to add a lot more metrics, and a lot more meaningful metrics, to the system.

We have new endpoints that expose even more internal state. We improved periodic backup support so it is much nicer to work with (we now allow you to define, for example, a full export every week and a periodic export every day). There are quite a few goodies there for the ops people to get insight into what is going on.

And… those are the highlights, and the code aspect of things.

We have a team of about a dozen people working on RavenDB at this time, and we keep growing. This is quite exciting, and I’m really looking forward to meeting our users at the conference…

Here is a hint, there are going to be surprises…

Tags:

Published at

Originally posted at

Comments (9)

Time series feature design: Replication

Armed with the knowledge about replication strategies from my previous post, we can now consider this in the context of the time series database.

We actually have two distinct pieces of data that we track. We have the actual time data, the timestamp and value that we keep track of, and we have the series information (tags, mostly). We can consider using log shipping here, and that would give us a great way to get a read replica. But the question is why we would want to do that. It is nice to get a replica, but that replica would have to be read only. Is that useful? It could take over if the master is down, but that would mean that the master would have to stay down (or be converted to a slave). And divergent writes are a problem.

While this is attractive as a cheap way to handle replication, I don’t like it very much.

So that leaves us with a multi write partners setup. In this case, we can allow the two servers to operate in tandem. We need to have some way to resolve conflicts, and this is where things get a bit messy.

For series data, it is trivial to just use some form of last write wins. This assumes a synchronized clock between the servers, but we’ll leave that requirement for now.

The problem is with the actual time data. Conceptually, we intend to store the information like this:

image

The problem is how you detect conflicts, and whether they are really even possible. Let us assume that we want to update a particular value at time T on both servers. Server A replicates to server B, and now we need to decide how to deal with it. Ignore the value? Overwrite the value?

The important thing is that we need some predictable way to handle this that will end up with all the nodes in the cluster having the same agreed upon value. The simplest option, assuming a clock sync, is to use the write timestamp. But that would require us to actually keep the write timestamp. Currently we can use just 16 bytes for each time record, but recording the write timestamp would increase our usage to 24 bytes. That is a 50% increase just to handle conflicts. I don’t want to pay that.

The good thing about time series data is that a single value isn’t that important, and the likelihood that it will be changed is relatively small. We can just decide to say: we’ll choose a value deterministically, for example, we will choose the maximum value for that time, and be done with it. That has its own set of problems, but we’ll deal with those in a bit. We need to discuss how we deal with replication in general first.
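
To make the space math and the “deterministic winner” idea concrete, here is a small sketch. The 8 byte timestamp plus 8 byte double layout is my assumption of what the 16 bytes are; none of this is the actual implementation.

   using System;
   using System.Runtime.InteropServices;

   // My assumed reading of the 16 bytes per record: a 64-bit timestamp and a double value.
   [StructLayout(LayoutKind.Sequential, Pack = 1)]
   struct TimeRecord
   {
       public long Timestamp;   // 8 bytes
       public double Value;     // 8 bytes
   }                            // 16 bytes total

   // Adding a write timestamp purely for conflict resolution pushes it to 24 bytes (+50%).
   [StructLayout(LayoutKind.Sequential, Pack = 1)]
   struct TimeRecordWithWriteTime
   {
       public long Timestamp;       // 8 bytes
       public double Value;         // 8 bytes
       public long WriteTimestamp;  // 8 bytes, only ever used to pick a conflict winner
   }

   class ConflictSketch
   {
       // The cheap alternative: a deterministic rule (here, max value) that every node
       // applies independently, so all nodes converge without storing extra metadata.
       static double Resolve(double local, double remote) => Math.Max(local, remote);

       static void Main()
       {
           Console.WriteLine(Marshal.SizeOf<TimeRecord>());              // 16
           Console.WriteLine(Marshal.SizeOf<TimeRecordWithWriteTime>()); // 24
           Console.WriteLine(Resolve(3.1, 2.7));                         // 3.1 on every node
       }
   }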

Let us imagine that we have 3 servers:

  • Server A replicates to B
  • Server B replicates to C
  • Server C replicates to A

We have concurrent writes to the same time value on both server A and B. For the purpose of the discussion, let us assume that we have a way to resolve the conflict.

Server A notifies Server B about the change, but server B already has a different value for that. Conflict resolution is run, and we have a new value. That value needs to be replicated downstream. It goes to Server C, which then replicates it to Server A, which then replicates it to Server B? Ad infinitum?

I intentionally chose this example, but the same thing can happen with just two servers replicating to one another (master/master). And the problem here is that in order to be able to actually track this properly, we are going to need to keep a lot of metadata around, per value. While I can sort of accept the need to keep the write time (thus requiring 50% more space per value), the idea of holding many times more metadata for replication purposes than the actual data we want to replicate seems… silly at best.

Log shipping replication it is, at least until someone can come up with a better way to resolve the issues above.

On replication strategies, or the return of the long article

Meta note: I’ve been doing short series of blog posts for a while. I thought that this would be a good time to change. I am not sure how big this blog post is going to be, but it is going to be big. Please let me know about which approach you find better, and your reasoning.

I have been thinking about this quite a lot in the past few days. I am trying to see if there is a common solution to replication in general that we can utilize across a number of solutions. If we can do that, we can provide much better feature set for a wide variety of scenarios.

But before we can talk about how to actually implement replication, we need to talk about what type of replication we are talking about. We are assuming a single database (non sharded, running on multiple nodes). In general, there appear to be the following options:

  • Master / slaves
  • Primary / secondaries
  • Multi write partners
  • Multi master

Those are just the designations I’ll use for this series of blog posts. For the purpose of those posts, they are very different beasts indeed.

The master/slaves approach refers specifically to a scenario where you have a single write master and one or more slaves. A key aspect of this strategy is that you can never (at least under normal operations) make any change whatsoever to the slaves. They are pure read replicas, and they cannot be made writable without severing their ties to the master or risking data corruption.

A common example of such an approach is log shipping. I’ll discuss it in detail later on, but you can look at the docs for any such system; changing a slave to be writable is a decidedly non trivial process. And for a good reason.

The primary / secondaries mode is very similar to the master / slaves approach, however, here we have an explicit option for a secondary to become the primary. There can be only one primary, but the idea is that we allow a much easier way to switch the primary node. MongoDB uses such a system.

Multi write partners systems allow any node to accept a write, and it will take care of distributing the change to the other nodes. Unlike the other options so far, they also need to deal with conflicts: the ability of two users to write to the same value on multiple nodes at the same time. However, multi write partners usually make assumptions about their partners. For example, that they are relatively in sync, and that there is a separate protocol for bringing a new node online into the partnership, outside the usual replication mechanism.

Multi master systems allow, accept, and encourage nodes to come and go as they please. They assume that writes can and will conflict, and that they need to resolve that on an ongoing basis. There are no expectations that the other nodes are relatively in sync, and it is common to “re-seed” a new node by just starting replication to it, which means that you need to replicate all the data from the beginning of time. It is also common to have a node pop up once in a blue moon, expect to get all the changes that happened while it was gone, and then drop off again.

Let us look at the actual implementation details of each, including some examples, and hopefully it’ll be clearer what I am talking about.

 

Log Shipping

Master / slaves is usually implemented via log shipping. The easiest way to think about log shipping is that the master database will send the slaves (magically, we don’t really care much how at this point) instructions on how to directly modify the database files. In other words, conceptually, it is sending them something like the following:

   writep(fd1, 1024, new[]{ 17,85,124,13,86}, 5);
   writep(fd1, 18432, new[]{ 12,95,34,83,76,32,59}, 7);

Those are very low level modifications, as you can imagine. The advantage here is that it is very easy to capture and replay those changes. The disadvantage is that you cannot really do anything else. Because the changes are happening at the very bottom of the stack, there is no chance to run any sort of logic. We are just writing to the file, same as the master server did.

This is the key reason why it is so hard for a slave to allow writes. The moment it makes any independent write, it opens itself up to the risk that the master would also do a write, which would generate data corruption. That is why you have to do the major song & dance if you want to switch the master and the slave. You have to go through all of this trouble to ensure that you don’t ever have a scenario where a write happens on both ends.

Once that happens, you can never ever get those two in sync again. It is just happening at too low a level.

Generating a new node, however, is very easy. Make sure to keep the journal around, do a full backup of the database and move it to another node. Then start shipping the logs over. Because they started at the same point, they can be safely applied.

Note that this version is very sensitive to versioning issues. You cannot have even a tiny change in the version of the low level storage code, because then all hell might break loose. This method is very good for generating read replicas, and indeed, that is what it is used for most of the time.

In theory, you can even get it to do failover, because while the master is down, the slave can write. The problem is how to handle the case where the slave thinks that the master is down while the master thinks that everything is fine. At that point, you might have both of them accepting writes, resulting in an unmergeable situation.

In theory, since they share a common root, you can decide that one of them is the victor and go with that, but that would result in losing data from the losing server, and probably data that you have no actual way of getting back. The changes we keep track of here are very small, and likely too granular to allow you to do anything meaningful to extract the changed information.

Oplog

This is actually quite similar to the log shipping method, but instead of sending the very low level file I/O operations, we’re actually sending higher level commands. This leads to quite a few benefits as far as we are concerned. The primary server can send its log as:

   1: set("users/1", {"name": "oren" });
   2: set("users/2", {"name": "ayende" });
   3: del("users/1");

Executing this set of instructions on the secondary will result in an identical state on the secondary. Unlike the log shipping option, this actually requires the secondary server to perform work, so it is more expensive than just applying the already computed file updates.
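
Here is a minimal, self-contained sketch of what “the secondary has to perform work” means: the secondary replays each logical command against its own store. The command format here is hypothetical, not MongoDB’s actual oplog format.

   using System;
   using System.Collections.Generic;

   // Minimal sketch of a secondary replaying a higher-level oplog.
   class OplogReplaySketch
   {
       static readonly Dictionary<string, string> Store = new Dictionary<string, string>();

       static void Apply(string op, string key, string value = null)
       {
           switch (op)
           {
               case "set": Store[key] = value; break;   // a real write, not a raw page copy
               case "del": Store.Remove(key); break;
               default: throw new InvalidOperationException("unknown op: " + op);
           }
       }

       static void Main()
       {
           // Replaying the example log from above leaves only users/2 in the store.
           Apply("set", "users/1", "{\"name\": \"oren\" }");
           Apply("set", "users/2", "{\"name\": \"ayende\" }");
           Apply("del", "users/1");

           foreach (var kvp in Store)
               Console.WriteLine(kvp.Key + " = " + kvp.Value);
       }
   }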

However, the upside of this is that you get a far more readable log. It is also much easier to turn a secondary into a primary. Mostly, this is trivial: the actual operation is the exact same thing. But because you are working at the protocol level, rather than the file level, you can get some interesting benefits.

Let us assume that you have the same split brain issue, where both the primary and the secondary think that they are the primary. In the log shipping case, we had no way to reconcile the differences. In the case of the oplog, we can actually do this. The key here is that we can:

  • Dump one of the servers’ rejected operations into a recoverable state.
  • Attempt to apply both servers’ logs, hoping that they didn’t both work on the same document.

This is the replication mode used by MongoDB. And it has chosen the first approach for handling such conflicts. Indeed, that is pretty much the only choice it can safely make. Two servers making modifications to the same object is always going to require manual resolution, of course. And it is usually better to have to do this explicitly and in advance, rather than “sometimes it works”.

You can see some discussion on how merging back divergent writes works in MongoDB here. In fact, continuing to use the same source, you can see the internal oplog in MongoDB here:

   1: // Operations
   2:  
   3: > use test
   4: switched to db test
   5: > db.foo.insert({x:1})
   6: > db.foo.update({x:1}, {$set : {y:1}})
   7: > db.foo.update({x:2}, {$set : {y:1}}, true)
   8: > db.foo.remove({x:1})
   9:  
  10: // Op log view
  11:  
  12: > use local
  13: switched to db local
  14: > db.oplog.rs.find()
  15: { "ts" : { "t" : 1286821527000, "i" : 1 }, "h" : NumberLong(0), "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
  16: { "ts" : { "t" : 1286821977000, "i" : 1 }, "h" : NumberLong("1722870850266333201"), "op" : "i", "ns" : "test.foo", "o" : { "_id" : ObjectId("4cb35859007cc1f4f9f7f85d"), "x" : 1 } }
  17: { "ts" : { "t" : 1286821984000, "i" : 1 }, "h" : NumberLong("1633487572904743924"), "op" : "u", "ns" : "test.foo", "o2" : { "_id" : ObjectId("4cb35859007cc1f4f9f7f85d") }, "o" : { "$set" : { "y" : 1 } } }
  18: { "ts" : { "t" : 1286821993000, "i" : 1 }, "h" : NumberLong("5491114356580488109"), "op" : "i", "ns" : "test.foo", "o" : { "_id" : ObjectId("4cb3586928ce78a2245fbd57"), "x" : 2, "y" : 1 } }
  19: { "ts" : { "t" : 1286821996000, "i" : 1 }, "h" : NumberLong("243223472855067144"), "op" : "d", "ns" : "test.foo", "b" : true, "o" : { "_id" : ObjectId("4cb35859007cc1f4f9f7f85d") } }

You can actually see the chain from command to oplog entry. The upsert command in line 7 was turned into an insert in line 18, for example. There also appears to be a lot of work done to avoid any sort of computation on the secondaries, in favor of resolving things to simple idempotent operations.

For example, if you have a doc that looks like {counter:1} and you do an update like {$inc:{counter:1}} on the primary, you’ll end up with {counter:2} and the oplog will store {$set:{counter:2}}. The secondaries will replicate that instead of the $inc.
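
The difference matters because a $set can be applied any number of times and still give the same result, while a $inc cannot. A tiny sketch of that property (not MongoDB code, just the arithmetic):

   using System;

   // Why the oplog stores the resolved $set instead of the original $inc:
   // $set is idempotent, $inc is not, so a replayed or duplicated entry stays harmless.
   class IdempotencySketch
   {
       static int ApplySet(int current, int newValue) => newValue;        // {$set:{counter:2}}
       static int ApplyInc(int current, int delta) => current + delta;    // {$inc:{counter:1}}

       static void Main()
       {
           var counter = 1;

           // Applying the $set entry twice still yields 2.
           counter = ApplySet(ApplySet(counter, 2), 2);
           Console.WriteLine(counter); // 2

           // Applying the $inc twice would have yielded 3, a divergence.
           Console.WriteLine(ApplyInc(ApplyInc(1, 1), 1)); // 3
       }
   }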

That is a pretty nice feature, since it means that you can apply changes multiple times and end up with the same result. But it all leads to the same end result: you can’t merge divergent writes.

You do get a much better way to actually go over the data and do the fixup yourself, but still… I don’t really like it.

Multi write partners

In this mode, we have a set of servers, each of which is familiar with its partners. All incoming writes are accepted and logged. Replication happens by the source server contacting each of its destination servers and asking: what is the last thing you heard from me? Here are all of my changes since then. Critically, it is at this point that we can trim the log of all the actions that have already been replicated to all of the servers.
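
A sketch of that handshake, with everything hypothetical (real systems differ in the details): each destination remembers the last etag it received from every source, the source sends only what comes after that, and trims its log once everyone has confirmed.

   using System;
   using System.Collections.Generic;
   using System.Linq;

   class ReplicationLogEntry
   {
       public long Etag;      // monotonically increasing per source server
       public string Key;
       public string Value;   // full document, so conflicts can later be shown to the user
   }

   class SourceServer
   {
       readonly List<ReplicationLogEntry> log = new List<ReplicationLogEntry>();
       long nextEtag = 1;

       public void Write(string key, string value) =>
           log.Add(new ReplicationLogEntry { Etag = nextEtag++, Key = key, Value = value });

       public void ReplicateTo(string sourceName, DestinationServer destination)
       {
           // "What is the last you heard from me?"
           var lastSeen = destination.GetLastEtagFrom(sourceName);
           destination.Accept(sourceName, log.Where(e => e.Etag > lastSeen).ToList());
       }

       // Once every destination has confirmed up to a given etag, the log can be trimmed.
       public void TrimUpTo(long confirmedByAllDestinations) =>
           log.RemoveAll(e => e.Etag <= confirmedByAllDestinations);
   }

   class DestinationServer
   {
       readonly Dictionary<string, long> lastEtagPerSource = new Dictionary<string, long>();
       public readonly Dictionary<string, string> Documents = new Dictionary<string, string>();

       public long GetLastEtagFrom(string source) =>
           lastEtagPerSource.TryGetValue(source, out var etag) ? etag : 0;

       public void Accept(string source, List<ReplicationLogEntry> entries)
       {
           foreach (var entry in entries)
           {
               Documents[entry.Key] = entry.Value;      // a real system would check for conflicts here
               lastEtagPerSource[source] = entry.Etag;
           }
       }
   }

   class Program
   {
       static void Main()
       {
           var a = new SourceServer();
           var b = new DestinationServer();

           a.Write("users/1", "{\"name\": \"oren\"}");
           a.ReplicateTo("A", b);       // sends etag 1
           a.Write("users/2", "{\"name\": \"ayende\"}");
           a.ReplicateTo("A", b);       // only sends etag 2, B already has etag 1
           a.TrimUpTo(2);               // everyone confirmed, drop the log entries

           Console.WriteLine(string.Join(", ", b.Documents.Keys)); // users/1, users/2
       }
   }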

A server being down means that the log of changes to go there is going to increase in size until the partner is up again, or we remove the entry for that server from our replication destination.

So far, this is very similar to how you would structure an oplog. The major difference is how you structure the actual data you log. In the oplog scenario, you’re going to write the changes that happen to the system, and the only way to act on them is to apply the oplog in the same sequence it was generated in. This leads to a system where you can only ever have a single primary node, and that leads to situations where split brains result in data loss or manual merge steps.

In the MWP case, we are going to keep enough context (usually full objects) so that we can give the user a better option to resolve the conflict. This also gives us the option of replaying the log in a non sequential manner.

Note, however, that you cannot just bring a new server online and expect it to start playing nicely. You have to start from a known state, usually a db backup of an existing node. Like the log shipping scenario, the process is essentially: start replicating to the (currently non existent) server, which ensures that the log will be there when we actually have the new server; back up the database and restore it on the secondary server; configure it to accept replication from the source server.

The complexity here is that you need to deal with operations that you might already have. That is why this is usually paired with vector clocks, so you can automatically resolve such conflicts. When you cannot resolve such conflicts, this falls back to manual user intervention.
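
A sketch of the vector clock comparison that makes that automatic detection possible. The representation is an assumption, just a map from server id to a counter:

   using System;
   using System.Collections.Generic;
   using System.Linq;

   // If neither clock "contains" the other, the two writes happened concurrently
   // and we have a conflict to resolve.
   class VectorClockSketch
   {
       static bool Descends(Dictionary<string, long> a, Dictionary<string, long> b)
       {
           // a descends from b if a has seen at least everything b has seen.
           return b.All(kvp => a.TryGetValue(kvp.Key, out var counter) && counter >= kvp.Value);
       }

       static string Compare(Dictionary<string, long> local, Dictionary<string, long> remote)
       {
           if (Descends(remote, local)) return "remote is newer, just overwrite";
           if (Descends(local, remote)) return "local is newer, ignore the remote value";
           return "conflict, resolve automatically or defer to the user";
       }

       static void Main()
       {
           var local  = new Dictionary<string, long> { ["A"] = 3, ["B"] = 1 };
           var remote = new Dictionary<string, long> { ["A"] = 2, ["B"] = 2 };
           Console.WriteLine(Compare(local, remote)); // conflict, neither descends from the other
       }
   }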

Multi Master

Multi master systems are quite similar to multi write partners, but they are designed to operate independently. It is common for servers to be able to communicate with one another only rarely. For example, a mobile system that only gets connected for a few hours a week. As such, we cannot just cap the size of the operations to replicate. In fact, the common way to bring a new server up to speed is just to replicate to it. That means that we need to be able to replicate, essentially from any point in the server’s history, to a new server.

That works great, as long as you don’t have deletes. Those do tend to make things harder, because you need to keep track of them and replicate them. RavenDB and CouchDB are both multi master systems, for example. Conflicts work the same way, pretty much, and we use a vector clock to determine whether a value is in conflict or not.

 

Divergent writes

I mentioned this a few times, but I didn’t fully explain. For my purposes, we assume that we are using 2 servers (and yes, I know all about quorums, etc. Not relevant for this discussion) running in master/slave mode.

At some point, the slave thinks that the master is down and takes over, while the master doesn’t notice this and still thinks it is the master. During this time, both servers accept the following writes:

Server A                 | Server B
write users/1            | write users/2
write users/3            | write users/3
delete users/4           | delete users/5
delete users/6           | write users/6
write users/7            | delete all users
set all users to active  | write users/8

After those operations happen, we restore communication between the two servers and they need to decide how to resolve those changes.

Getting down to business

Okay, that is enough talking about what those terms mean. Let us consider the implications of using them. Log shipping is by far the simplest method to use, assuming that you actually have a log mechanism (most dbs do). It is a strictly one writer model, and there is absolutely no way to either resolve divergent writes or even to find out what they were. The good thing about log shipping is that it is quite easy to get it working without needing to care at all about the actual data involved. We work directly at the file level; we don’t care at all what the data is. The problem is that we can’t solve even simple conflicts, like writes to different objects. Because we are actually working at the file level, and all the changes are there, attempting to merge changes from multiple logs would likely result in file corruption. The upside is that it is probably the most efficient way to go about doing this.

Oplog is a step above log shipping, but not a big one. It doesn’t resolve the divergent writes issue. This is now an application level protocol: the log needs to contain information specific to the actual type of data that we store, and you need to write explicit code to handle it. That is nice, but it also requires a strict sequence of all operations. Now, you can try to merge things between different logs, but you need to worry about conflicts, and more to the point, there is usually nothing in the data itself that will help you even detect conflicts.

Multi write partners are meant to take this up a notch. They do keep track of the version history (usually via vector clocks). Here the situation is more complex, because we need to explicitly decide how to deal with conflicts (either resolve automatically or defer to a user decision), but also how to handle the distribution of updates. Usually they are paired with some form of logic that tells you how to direct your writes, so that all writes for a particular piece of data go to a preferred node, to avoid generating multiple versions. The data needs to contain some information about that, so we keep the vector clock information around. Once we have sent the changes to all our partners, we can drop them, saving space.

Multi master is meant to ensure that you can have partners that might only see one another occasionally, and it makes no assumptions about the topology. It can handle a node that comes online, gets some data replicated, and drops off for a while (or forever). Each node is fully independent, and while they will collaborate with others, they don’t need them. The downside here is that we need to keep track of some things forever. In particular, we need to keep track of deletes, to ensure that we can get them to the remote machines.

What about set operations?

Interestingly enough, that is probably the hardest issue to resolve. Consider the case where the following operations happen:

Server A                 | Server B
write users/7            | delete all users
set all users to active  | write users/8 (active: false)

What should be the result of this? There isn’t a really good answer. Should users/8 be set to active: true? What about users/7, should it be deleted or kept?

It gets hard because you don’t have good choices. The hard part here is actually figuring out that you have a conflict, and there isn’t a really good way to handle set operations nicely with conflicts. The common solution is to translate this to the actual operations made (delete users/1, users/2, users/3 – write users/8, users/5) and leave it at that. The set based operation is translated to the individual operations that actually happened, and on those we can detect conflicts much more easily.
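
A sketch of that translation: the set based command is expanded, at the time it executes, into the concrete per-document operations that actually get logged and replicated (again, just an illustration, not any real product’s log format).

   using System;
   using System.Collections.Generic;
   using System.Linq;

   // A set based operation ("delete all users") is logged as the individual operations
   // it actually performed, so the replication log can detect conflicts per document
   // instead of per set operation.
   class SetOperationExpansionSketch
   {
       static readonly Dictionary<string, string> Users = new Dictionary<string, string>
       {
           ["users/1"] = "{...}", ["users/2"] = "{...}", ["users/3"] = "{...}",
       };

       static readonly List<string> ReplicationLog = new List<string>();

       static void DeleteAllUsers()
       {
           // Expand the set operation into the concrete deletes that happened.
           foreach (var key in Users.Keys.ToList())
           {
               Users.Remove(key);
               ReplicationLog.Add("delete " + key);
           }
       }

       static void Main()
       {
           DeleteAllUsers();
           ReplicationLog.ForEach(Console.WriteLine);
           // delete users/1
           // delete users/2
           // delete users/3
       }
   }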

Log shipping is the easiest to work with, operationally speaking. You know what you get, and you deal with that. Oplog is also simple: you have a single master, and that works. Multi master and multi write partners require you to take explicit notice of conflicts, selection of the appropriate node to reduce conflicts, etc.

In practice, at least in the scenarios relating to RavenDB, the ability to take a server offline for weeks or months doesn’t seem to be used that often. The common deployment model is of servers running as steady partners. There are some optimizations that you can do for multi write partners that are hard/impossible to do with multi master.

My current personal preference at this point would be to go with either log shipping or multi write partners. I think that either one of them would be fairly simple to implement and support operationally. I’ll discuss the actual design for the time series topic using either option in my next posts.

Logging & Production systems

I got a question from Dominic about logging:

Jeff Atwood wrote a great blog post about over-using logging, where stack traces should be all a developer needs to find the root cause of a problem. Therefore ...

When building an enterprise level system, what rules do you have to deem a log message 'useful' to a developer or support staff?

This is the relevant part in Jeff’s post:

So is logging a giant waste of time? I'm sure some people will read about this far and draw that conclusion, no matter what else I write. I am not anti-logging. I am anti-abusive-logging. Like any other tool in your toolkit, when used properly and appropriately, it can help you create better programs. The problem with logging isn't the logging, per se -- it's the seductive OCD "just one more bit of data in the log" trap that programmers fall into when implementing logging. Logging gets a bad name because it's so often abused. It's a shame to end up with all this extra code generating volumes and volumes of logs that aren't helping anyone.

We've since removed all logging from Stack Overflow, relying exclusively on exception logging. Honestly, I don't miss it at all. I can't even think of a single time since then that I'd wished I'd had a giant verbose logfile to help me diagnose a problem.

I don’t really think that I can disagree with this more vehemently. This might be a valid approach if/when you are writing what is essentially single threaded, single use code. It just so happens that most web applications are actually composed of exactly that. The request code very rarely does threading, and it is usually just dealing with its own stuff. For systems where most of the code is doing ongoing work, there really isn’t any alternative to logging.

You cannot debug multi threaded code efficiently. The only way to really handle that is printf debugging: you write out what happens, and then reconstruct the actual execution from the traces. And that is leaving aside one very important issue. It isn’t the exceptions that will get you, it is when your system is subtly wrong. Maybe it missed an update, or skipped a validation, or something just doesn’t look right. And you need to figure out what is going on.

And then you have distributed systems, where you might have things happening concurrently on multiple machines, and good luck trying to get a good grip on how to resolve problems without using logging.

Speaking of which, here is a reply to a customer from one of our developers:

image

There is absolutely no way this would have been found without logging. The actual client visible issue happened quite a bit later than the actual bug, and no exception was thrown.

Of course, this is all just solving problems on the developer machine. When you go to production, the rules are different: usually the only thing you have are the logs, and you need to be able to figure out what went wrong and how to fix it while the system is running.

I find that I don’t really sweat the Debug vs. Info and Warn vs. Error debates. The developers will write whatever they consider to be relevant in each case. And you might get Errors in the logs that are errors for that feature, but are merely warnings for the entire product. I don’t care; all of that can be massaged later, using log configuration & filtering. But the very first thing that has to happen is to have the logs. Most of the time you don’t care, but it is pretty much the same as saying: “We removed all the defibrillators from the building, because they were expensive and took up space. We rely exclusively on CPR in the event of a heart failure. Honestly, I don’t miss it at all. I can’t think of a single time since then that I’d wished I had a machine to send electricity into someone’s heart to solve a problem.”

By the time you realize that you need it, it is likely going to be far too late.