Raven Streams is a working title.
So far, I have mostly been working on the infrastructure: I want to be able to successfully write to and read from the streams, and I have a general architecture for doing so.
Basically, the architecture looks like this: each stream (we expect to have more than one, but not a very high number) makes all writes to a memtable, backed by a log. When the memtable is full, we switch to a new memtable and write all the old one's data to a sorted string table (SST). This is very much like the design of LevelDB.
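That write path can be sketched in a few lines of Python (the project itself is not written in Python; the class name, method names, and the in-memory stand-ins for the on-disk log and SST files are all illustrative assumptions, not the real implementation):

```python
import json

class Stream:
    """Sketch of the write path: every write goes to an in-memory
    table and an append-only log; when the memtable is full, it is
    flushed to a sorted string table (SST) and a fresh memtable
    takes over."""

    def __init__(self, memtable_limit=4):
        self.memtable = []            # in-memory buffer of events
        self.log = []                 # stands in for the on-disk log
        self.ssts = []                # flushed, sorted SST "files"
        self.memtable_limit = memtable_limit

    def write(self, event):
        self.log.append(json.dumps(event))  # durability first
        self.memtable.append(event)
        if len(self.memtable) >= self.memtable_limit:
            self.flush()

    def flush(self):
        # Events arrive in write order, so the memtable is already sorted.
        self.ssts.append(list(self.memtable))
        self.memtable = []
        self.log = []   # a flushed memtable no longer needs its log
```

The key property is that writes only ever touch the memtable and the log; SST files are written once, on flush, and never modified afterwards.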
Internally, however, we address all data by etags. That is, you don't supply a key of your own; the stream generates an etag for each event. The SSTs are sorted in etag order, so the oldest events are in the oldest file, etc.
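A minimal sketch of what etag-based addressing buys us (again in Python, with hypothetical names; the real etag format is almost certainly richer than a plain counter):

```python
from itertools import count

class EtagAllocator:
    """The stream, not the caller, assigns a monotonically
    increasing etag to every event, so the files naturally cover
    contiguous, ever-growing ranges of etags."""

    def __init__(self):
        self._next = count(1)
        self.events = {}   # etag -> event (stands in for memtable + SSTs)

    def append(self, event):
        etag = next(self._next)   # the caller never supplies a key
        self.events[etag] = event
        return etag               # the generated etag is handed back

# Because etags only grow, "everything after etag N" is a simple
# range scan across the files in age order.
def read_after(store, etag):
    return [e for t, e in sorted(store.events.items()) if t > etag]
```

This is what makes the "oldest events in the oldest file" property fall out for free: etags are assigned in write order, and files are flushed in write order.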
On startup, if we haven't flushed properly, we translate the log file into a new SST and start with an empty memtable. I'm not sure yet whether we want or need to implement compaction; maybe we'll do that for very small files only.
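The recovery step is simple enough to sketch directly (hypothetical function, matching the in-memory stand-ins above; the real code obviously works against actual files):

```python
import json

def recover(log_lines):
    """Startup recovery sketch: if the previous run did not flush
    cleanly, the surviving log is replayed straight into a new SST
    and the stream starts with an empty memtable."""
    # Each log line is one durably written event; replaying the log
    # in order reconstructs exactly what the lost memtable held.
    events = [json.loads(line) for line in log_lines]
    new_sst = list(events)   # logged in etag order, so already sorted
    return new_sst, []       # (flushed SST, fresh empty memtable)
```

Nothing needs to be merged or re-sorted, because the log was written in the same etag order the SST requires.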
Right now there are two methods that you actually care about: one for writing events to the stream and one for reading them back.
This is all working pretty nicely right now. It isn't nearly ready for production, or even for test usage, but it is encouraging. Next, I'll discuss the intended implementation of map/reduce on top of this.