Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 3 min | 439 words

I care about the performance of RavenDB. Enough that I’ll go to epic lengths to fix performance problems. Here I use “epic” both in the Agile sense of a multi-month journey and in terms of the actual amount of work required. See my recent posts about the RavenDB 7.1 I/O work.

There hasn’t been a single release in the past 15 years that didn’t improve the performance of RavenDB in some way. We have an entire team whose sole task is to find bottlenecks and fix them, to the point where assembly language sometimes feels like a high-level concept (yes, we design some pieces of RavenDB with the CPU’s micro-architecture in mind).

When we ran into this issue, I was… quite surprised, to say the least. The problem was that whenever we serialized a document in RavenDB, we would compile some LINQ expressions.

That is expensive, and utterly wasteful. That is the sort of thing that we should never do, especially since there was no actual need for it.

The essence of the fix is to compile the expression once and cache the resulting delegate, instead of recompiling it for every new instance.
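As a rough illustration of the idea (a minimal sketch with made-up types, not the actual RavenDB code), the difference is between compiling in the constructor and compiling once into a shared static field:

using System;
using System.Linq.Expressions;

public class Document
{
    public string Id { get; set; }
}

public class DocumentSerializer
{
    // Bad: compiling in the constructor means every new instance
    // (and we create one per thread) pays the compilation cost again.
    //   _getId = BuildGetId().Compile();

    // Good: compile once, cache the delegate, reuse it from every instance.
    private static readonly Func<Document, string> GetId = BuildGetId().Compile();

    private static Expression<Func<Document, string>> BuildGetId()
    {
        var doc = Expression.Parameter(typeof(Document), "doc");
        return Expression.Lambda<Func<Document, string>>(
            Expression.Property(doc, nameof(Document.Id)), doc);
    }
}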

We ran a performance test on the before & after versions, just to know what kind of performance we left on the table.

Before (ms)    After (ms)
33,782         20

The fixed version is 1,689 times faster, if you can believe that.

So here is a fix that is both great to have and quite annoying. We focused so much effort on optimizing the server, and yet we missed something that obvious? How can that be?

Well, the answer is that this isn’t something an actual benchmark would show. The problem is that this code runs once per instance rather than once globally, and a new instance is created for each thread. In any situation where the number of threads is more or less fixed (most production scenarios, where you’ll be using a thread pool, as well as most benchmarks), you will never see this problem.

It is when you have threads dying and being created (such as when you handle spikes) that you’ll run into this issue. Make no mistake, it is an actual issue. When your load spikes, the thread pool will issue new threads, and they will consume a lot of CPU initially for absolutely no reason.

In short, we managed to miss this entirely for a long time (the code dates back to 2017!). It never showed up in any benchmark. The fix itself is trivial, of course, and we are unlikely to be able to show any real benefit from it in a benchmark, but it is yet another step in making RavenDB better.

time to read 26 min | 5029 words

One of the more interesting developments in terms of kernel API surface is the IO Ring. On Linux it is called IO Uring, and Windows copied it shortly afterward. The idea started as a way to batch multiple I/O operations at once, but it has evolved into a generic mechanism for making system calls more cheaply. On Linux, a large portion of the kernel’s features is exposed as part of the IO Uring API, while Windows exposes a far less rich API (basically, just reading and writing).

The reason this matters is that you can use IO Ring to reduce the cost of making system calls, using both batching and asynchronous programming. As such, most new database engines have jumped on that sweet nectar of better performance results.

As part of the overall re-architecture of how Voron manages writes, we have done the same. I/O for Voron is typically composed of writes to the journals and to the data file, so that makes it a really good fit, sort of.

An ironic aspect of IO Uring is that despite it being an asynchronous mechanism, it is inherently single-threaded. There are good reasons for that, of course, but that means that if you want to use the IO Ring API in a multi-threaded environment, you need to take that into account.

A common way to handle that is to use an event-driven system, where all the actual calls are generated from a single “event loop” thread or similar. This is how the Node.js API works, and how .NET itself manages IO for sockets (there is a single thread that listens to socket events by default).
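As a generic illustration of that pattern (a minimal sketch, not RavenDB’s actual code), a single dedicated thread owns the I/O while everyone else just posts work to it:

using System;
using System.Collections.Concurrent;
using System.Threading;

var queue = new BlockingCollection<Action>();

// The single "event loop" thread: the only one that ever touches the I/O resource.
var ioThread = new Thread(() =>
{
    foreach (var work in queue.GetConsumingEnumerable())
        work();
})
{ IsBackground = true, Name = "io-loop" };
ioThread.Start();

// Any other thread submits a request instead of performing the I/O itself.
queue.Add(() => Console.WriteLine("write request handled on the I/O thread"));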

The whole point of IO Ring is that you can submit multiple operations for the kernel to run in as optimal a manner as possible. Here is one such case to consider, this is the part of the code where we write the modified pages to the data file:


using (fileHandle)
{
    for (int i = 0; i < pages.Length; i++)
    {
        int numberOfPages = pages[i].GetNumberOfPages();

        // compute where this (possibly multi-page) write lands in the data file
        var size = numberOfPages * Constants.Storage.PageSize;
        var offset = pages[i].PageNumber * Constants.Storage.PageSize;
        var span = new Span<byte>(pages[i].Pointer, size);

        // one system call per contiguous run of pages
        RandomAccess.Write(fileHandle, span, offset);

        written += size;
    }
}


Looking at the threads of the process while this runs, you’ll see something like this:

PID     LWP TTY          TIME CMD
  22334   22345 pts/0    00:00:00 iou-wrk-22343
  22334   22346 pts/0    00:00:00 iou-wrk-22343
  22334   22347 pts/0    00:00:00 iou-wrk-22334
  22334   22348 pts/0    00:00:00 iou-wrk-22334
  22334   22349 pts/0    00:00:00 iou-wrk-22334
  22334   22350 pts/0    00:00:00 iou-wrk-22334
  22334   22351 pts/0    00:00:00 iou-wrk-22334
  22334   22352 pts/0    00:00:00 iou-wrk-22334
  22334   22353 pts/0    00:00:00 iou-wrk-22334
  22334   22354 pts/0    00:00:00 iou-wrk-22334
  22334   22355 pts/0    00:00:00 iou-wrk-22334
  22334   22356 pts/0    00:00:00 iou-wrk-22334
  22334   22357 pts/0    00:00:00 iou-wrk-22334
  22334   22358 pts/0    00:00:00 iou-wrk-22334

Actually, those aren’t threads in the normal sense. Those are kernel tasks, generated by the IO Ring at the kernel level directly. It turns out that internally, IO Ring may spawn worker threads to do the async work at the kernel level. When we had a separate IO Ring per file, each one of them had its own pool of threads to do the work.

The way it usually works is really interesting. The IO Ring will attempt to complete the operation in a synchronous manner. For example, if you are doing buffered writes to a file, the kernel can just copy the buffer to the page cache and move on; no actual I/O takes place. So the IO Ring will run through that directly, synchronously.

However, if your operation requires actual blocking, it will be sent to a worker queue to actually execute it in the background. This is one way that the IO Ring is able to complete many operations so much more efficiently than the alternatives.

In our scenario, we have a pretty simple setup, we want to write to the file, making fully buffered writes. At the very least, being able to push all the writes to the OS in one shot (versus many separate system calls) is going to reduce our overhead. More interesting, however, is that eventually, the OS will want to start writing to the disk, so if we write a lot of data, some of the requests will be blocked. At that point, the IO Ring will switch them to a worker thread and continue executing.

The problem we had was that when we had a separate IO Ring per data file and put a lot of load on the system, we started seeing contention between the worker threads across all the files. Basically, each ring had its own separate pool, so there was a lot of work for each pool but no sharing.

If the IO Ring is single-threaded, but many separate threads lead to wasted resources, what can we do? The answer is simple: we’ll use a single global IO Ring and manage the threading concerns directly.

Here is the main loop of the worker thread that owns the ring (I removed all error handling to make it clearer):


void *do_ring_work(void *arg)
{
  int rc;
  if (g_cfg.low_priority_io)
  {
    syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, 
        IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7));
  }
  pthread_setname_np(pthread_self(), "Rvn.Ring.Wrkr");
  struct io_uring *ring = &g_worker.ring;
  struct workitem *work = NULL;
  while (true)
  {
    do
    {
      // wait for any writes on the eventfd 
      // completion on the ring (associated with the eventfd)
      eventfd_t v;
      rc = read(g_worker.eventfd, &v, sizeof(eventfd_t));
    } while (rc < 0 && errno == EINTR);
    
    bool has_work = true;
    while (has_work)
    {
      int must_wait = 0;
      has_work = false;
      if (!work) 
      {
        // we may have _previous_ work to run through
        work = atomic_exchange(&g_worker.head, 0);
      }
      while (work)
      {
        has_work = true;


        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        if (sqe == NULL)
        {
          must_wait = 1;
          goto submit_and_wait; // will retry
        }
        io_uring_sqe_set_data(sqe, work);
        switch (work->type)
        {
        case workitem_fsync:
          io_uring_prep_fsync(sqe, work->filefd, IORING_FSYNC_DATASYNC);
          break;
        case workitem_write:
          io_uring_prep_writev(sqe, work->filefd, work->op.write.iovecs,
                               work->op.write.iovecs_count, work->offset);
          break;
        default:
          break;
        }
        work = work->next;
      }
    submit_and_wait:
      rc = must_wait ? 
        io_uring_submit_and_wait(ring, must_wait) : 
        io_uring_submit(ring);
      struct io_uring_cqe *cqe;
      uint32_t head = 0;
      uint32_t i = 0;


      io_uring_for_each_cqe(ring, head, cqe)
      {
        i++;
        // force another run of the inner loop, 
        // to ensure that we call io_uring_submit again
        has_work = true; 
        struct workitem *cur = io_uring_cqe_get_data(cqe);
        if (!cur)
        {
          // can be null if it is:
          // *  a notification about eventfd write
          continue;
        }
        switch (cur->type)
        {
        case workitem_fsync:
          notify_work_completed(ring, cur);
          break;
        case workitem_write:
          if (/* partial write */)
          {
            // queue again
            continue;
          }
          notify_work_completed(ring, cur);
          break;
        }
      }
      io_uring_cq_advance(ring, i);
    }
  }
  return 0;
}

What does this code do?

We start by checking whether we want to use lower-priority I/O. We do that because we don’t actually care how long those operations take; the purpose of writing the data to the disk is that it will reach it eventually. RavenDB has two types of writes:

  • Journal writes (durable update to the write-ahead log, required to complete a transaction).
  • Data flush / Data sync (background updates to the data file, currently buffered in memory, no user is waiting for that)

As such, we are fine with explicitly prioritizing the journal writes (which users are waiting for) over all other operations.

What is this C code? I thought RavenDB was written in C#

RavenDB is written in C#, but for very low-level system details, we found that it is far easier to write a Platform Abstraction Layer to hide system-specific concerns from the rest of the code. That way, we can simply submit the data to write and have the abstraction layer take care of all of that for us. This also ensures that we amortize the cost of PInvoke calls across many operations by submitting a big batch to the C code at once.
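For illustration only, here is a rough sketch of what such a P/Invoke boundary can look like. The entry point name matches the C function shown later in this post, but the library name and struct layout here are assumptions, not RavenDB’s actual declarations:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
internal struct PageToWrite   // assumed layout, mirroring struct page_to_write
{
    public long PageNum;
    public int CountOfPages;
    public IntPtr Ptr;
}

internal static class Pal
{
    // A single P/Invoke call submits a whole batch of pages,
    // amortizing the managed/native transition cost.
    [DllImport("librvnpal", EntryPoint = "rvn_write_io_ring")]
    public static extern int rvn_write_io_ring(
        IntPtr handle,
        [In] PageToWrite[] buffers,
        int count,
        out int detailedErrorCode);
}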

After setting the IO priority, we start reading from what is effectively a thread-safe queue. We wait for eventfd() to signal that there is work to do, and then we grab the head of the queue and start running.

The idea is that we fetch items from the queue, then we write those operations to the IO Ring as fast as we can manage. The IO Ring size is limited, however. So we need to handle the case where we have more work for the IO Ring than it can accept. When that happens, we will go to the submit_and_wait label and wait for something to complete.

Note that there is some logic there to handle what is going on when the IO Ring is full. We submit all the work in the ring, wait for an operation to complete, and in the next run, we’ll continue processing from where we left off.

The rest of the code is processing the completed operations and reporting the result back to their origin. This is done using the following function, which I find absolutely hilarious:


int32_t rvn_write_io_ring(
    void *handle,
    struct page_to_write *buffers,
    int32_t count,
    int32_t *detailed_error_code)
{
    int32_t rc = SUCCESS;
    struct handle *handle_ptr = handle;
    if (count == 0)
        return SUCCESS;


    if (pthread_mutex_lock(&handle_ptr->global_state->writes_arena.lock))
    {
        *detailed_error_code = errno;
        return FAIL_MUTEX_LOCK;
    }
    size_t max_req_size = (size_t)count * 
                      (sizeof(struct iovec) + sizeof(struct workitem));
    if (handle_ptr->global_state->writes_arena.arena_size < max_req_size)
    {
        // allocate arena space
    }
    void *buf = handle_ptr->global_state->writes_arena.arena;
    struct workitem *prev = NULL;
    int eventfd = handle_ptr->global_state->writes_arena.eventfd;
    for (int32_t curIdx = 0; curIdx < count; curIdx++)
    {
        int64_t offset = buffers[curIdx].page_num * VORON_PAGE_SIZE;
        int64_t size = (int64_t)buffers[curIdx].count_of_pages *
                       VORON_PAGE_SIZE;
        int64_t after = offset + size;


        struct workitem *work = buf;
        *work = (struct workitem){
            .op.write.iovecs_count = 1,
            .op.write.iovecs = buf + sizeof(struct workitem),
            .completed = 0,
            .type = workitem_write,
            .filefd = handle_ptr->file_fd,
            .offset = offset,
            .errored = false,
            .result = 0,
            .prev = prev,
            .notifyfd = eventfd,
        };
        prev = work;
        work->op.write.iovecs[0] = (struct iovec){
            .iov_len = size, 
            .iov_base = buffers[curIdx].ptr
        };
        buf += sizeof(struct workitem) + sizeof(struct iovec);


        for (size_t nextIndex = curIdx + 1; 
            nextIndex < count && work->op.write.iovecs_count < IOV_MAX; 
            nextIndex++)
        {
            int64_t dest = buffers[nextIndex].page_num * VORON_PAGE_SIZE;
            if (after != dest)
                break;


            size = (int64_t)buffers[nextIndex].count_of_pages *
                              VORON_PAGE_SIZE;
            after = dest + size;
            work->op.write.iovecs[work->op.write.iovecs_count++] = 
                (struct iovec){
                .iov_base = buffers[nextIndex].ptr,
                .iov_len = size,
            };
            curIdx++;
            buf += sizeof(struct iovec);
        }
        queue_work(work);
    }
    rc = wait_for_work_completion(handle_ptr, prev, eventfd, 
detailed_error_code);
    pthread_mutex_unlock(&handle_ptr->global_state->writes_arena.lock);
    return rc;
}

Remember that when we submit writes to the data file, we must wait until they are all done. The async nature of the IO Ring is meant to help us push the writes to the OS as soon as possible, as well as push writes to multiple separate files at once. For that reason, we use another eventfd() to wait (as the submitter) for the IO Ring to complete the operation. I love the code above because it is actually using the IO Ring itself to do the work we need to do here, saving us an actual system call in most cases.

Here is how we submit the work to the worker thread:


void queue_work(struct workitem *work)
{
    struct workitem *head = atomic_load(&g_worker.head);
    do
    {
        work->next = head;
    } while (!atomic_compare_exchange_weak(&g_worker.head, &head, work));
}

Going back to rvn_write_io_ring(): that function handles the submission of a set of pages to write to a file. Note that we protect against concurrent work on the same file. That isn’t actually needed since the caller code already handles that, but an uncontended lock is cheap, and it means that I don’t need to think about concurrency or worry about changes in the caller code in the future.

We ensure that we have sufficient buffer space, and then we create a work item. A work item is a single write to the file at a given location. However, we are using vectored writes, so we’ll merge writes to the consecutive pages into a single write operation. That is the purpose of the huge for loop in the code. The pages arrive already sorted, so we just need to do a single scan & merge for this.

Pay attention to the fact that the struct workitem actually belongs to two different linked lists. We have the next pointer, which is used to send work to the worker thread, and the prev pointer, which is used to iterate over the entire set of operations we submitted on completion (we’ll cover this in a bit).

Waiting for the submitted work to complete is done using the following method:


int32_t
wait_for_work_completion(struct handle *handle_ptr, 
    struct workitem *prev, 
    int eventfd, 
    int32_t *detailed_error_code)
{
    // wake worker thread
    eventfd_write(g_worker.eventfd, 1);
    
    bool all_done = false;
    while (!all_done)
    {
        all_done = true;
        *detailed_error_code = 0;


        eventfd_t v;
        int rc = read(eventfd, &v, sizeof(eventfd_t));
        struct workitem *work = prev;
        while (work)
        {
            all_done &= atomic_load(&work->completed);
            work = work->prev;
        }
    }
    return SUCCESS;
}

The idea is pretty simple. We first wake the worker thread by writing to its eventfd(), and then we wait on our own eventfd() for the worker to signal us that (at least some) of the work is done.

Note that we handle the submission of multiple work items by iterating over them in reverse order, using the prev pointer. Only when all the work is done can we return to our caller.

The end result of all this behavior is that we have a completely new way to deal with background I/O operations (remember, journal writes are handled differently). We can control both the volume of load we put on the system by adjusting the size of the IO Ring as well as changing its priority.

The fact that we have a single global IO Ring means that we can get much better usage out of the worker thread pool that IO Ring utilizes. We also give the OS a lot more opportunities to optimize RavenDB’s I/O.

The code in this post shows the Linux implementation, but RavenDB also supports IO Ring on Windows if you are running a recent enough version of Windows.

We aren’t done yet, mind, I still have more exciting things to tell you about how RavenDB 7.1 is optimizing writes and overall performance. In the next post, we’ll discuss what I call the High Occupancy Lane vs. Critical Lane for I/O and its impact on our performance.

time to read 4 min | 796 words

A good lesson I learned about being a manager is that the bigger the organization, the more important it is for me to be silent. If we are discussing a set of options, I have to talk last, and usually, I have to make myself wait until the end of a discussion before I can weigh in on any issues I have with the proposed solutions.

Speaking last isn’t something I do to have the final word or as a power play, mind you. I do it so my input won’t “taint” the discussion. The bigger the organization, the more pressure there is to align with management. If I want to get unbiased opinions and proper input, I have to wait for it. That took a while to learn because the gradual growth of the company meant that the tipping point basically snuck up on me.

One day, I was working closely with a small team. They would argue freely and push back if they thought I was wrong without hesitation. The next day, the company grew to the point where I would only rarely talk to some people, and when I did, it was the CEO talking, not me.

It’s a subtle shift, but once you see it, you can’t unsee it. I keep wondering whether I need to literally get a couple of hats and walk around the office wearing different ones at different times.

To deal with this issue, I went out of my way to get a few “no-men” (the opposite of yes-men), who can reliably tell me when what I’m proposing is… let’s call it an idealistic view of reality. These are the folks who’ll look at my grand plan to, say, overhaul our entire CRM in a week and say, “Hey, love the enthusiasm, but have you considered the part where we all spontaneously combust from stress?” There may have been some pointing at grey hair and receding hairlines as well.

The key here is that I got these people specifically because I value their opinions, even when I disagree with them. It’s like having a built-in reality check—annoying in the moment, but worth its weight in gold when it keeps you from driving the whole team off a cliff.

This ties into one of the trickier parts of managerial duties: knowing when to steer and when to step back. Early on, I thought being a manager was about having all the answers and making sure everyone knew it. But the reality? It’s more like being a gardener—you plant the seeds (the vision), water them (with resources and support), and then let the team grow into it.

My job isn’t to micromanage every leaf; it’s to make sure the conditions are right for the whole thing to thrive. That means trusting people to do their jobs, even if they don’t do it exactly how I would.

Of course, there’s another side to this gig: the ability to move the goalposts that measure what’s required. Changing the scope of a problem is a really good way to make something that used to be impossible a reality. I’m reminded of that XKCD comic—you know the one, where changing the problem just enough turns a “no way” into a “huh, that could work”. That’s a manager’s superpower.

You’re not just solving problems; you’re redefining them so the team can win. Maybe the deadline’s brutal, but if you shift the focus from “everything” to “we don’t need this feature for launch,” suddenly everyone’s breathing again.

It is a very strange feeling because you move from doing things yourself, to working with a team, to working at a distance of once or twice removed. On the one hand, you can get a lot more done; on the other hand, it can be really frustrating when things aren’t done the way (or at the speed) I would do them.

This isn’t a motivational post; this is not a fun aspect of my work. I only have so many hours in the day, and being careful about where I put my time is important. At the same time, it means that I have to take into account that what I say matters, and if I say something first, it puts a pretty big hurdle in front of other people if they disagree with me.

In other words, I know it can come off as annoying, but not giving my opinion on something is actually a well-thought-out strategy to get the raw information without influencing the output. When I have all the data, I can give my own two cents on the matter safely.

time to read 8 min | 1476 words

When we build a new feature in RavenDB, we either have at least some idea about what we want to build or we are doing something that is pure speculation. In either case, we will usually spend only a short amount of time trying to plan ahead.

A good example of that can be found in my RavenDB 7.1 I/O posts, which cover about 6+ months of work for a major overhaul of the system. That was done mostly as a series of discussions between team members, guidance from the profiler, and our experience, seeing where the path would lead us. In that case, it led us to a five-fold performance improvement (and we’ll do better still by the time we are done there).

That particular set of changes is one of the more complex and hard-to-execute changes we have made in RavenDB over the past 5 years or so. It touched a lot of code, it changed a lot of stuff, and it was done without any real upfront design. There wasn’t much point in designing, we knew what we wanted to do (get things faster), and the way forward was to remove obstacles until we were fast enough or ran out of time.

I re-read the last couple of paragraphs, and it may look like cowboy coding, but that is very much not the case. There is a process there, it is just not something we would find valuable to put down as a formal design document. The key here is that we have both a good understanding of what we are doing and what needs to be done.

RavenDB 4.0 design document

The design document we created for RavenDB 4.0 is probably the most important one in the project’s history. I just went through it again, it is over 20 pages of notes and details that discuss the current state of RavenDB at the time (written in 2015) and ideas about how to move forward.

It is interesting because I remember writing this document. And then we set out to actually make it happen; that wasn’t a minor update. It took close to three years to complete the process, to give you some context about the complexity and scale of the task.

To give some further context, here is an image from that document:

And here is the sharding feature in RavenDB right now:

This feature is called prefixed sharding in our documentation. It is the direct descendant of the image from the original 4.0 design document. We shipped that feature sometime last year. So we are talking about 10 years from “design” to implementation.

I’m using “design” in quotes here because when I go through this v4.0 design document, I can tell you that pretty much nothing that ended up in that document was implemented as envisioned. In fact, most of the things there were abandoned because we found much better ways to do the same thing, or we narrowed the scope so we could actually ship on time.

Comparing the design document to what RavenDB 4.0 ended up being is really interesting, but it is very notable that there isn’t much similarity between the two. And yet that design document was a fundamental part of the process of moving to v4.0.

What Are Design Documents?

A classic design document details the architecture, workflows, and technical approach for a software project before any code is written. It is the roadmap that guides the development process.

For RavenDB, we use them as both a sounding board and a way to lay the foundation for our understanding of the actual task we are trying to accomplish. The idea is not so much to build the design for a particular feature, but to have a good understanding of the problem space and map out various things that could work.

Recent design documents in RavenDB

I’m writing this post because I found myself writing multiple design documents in the past 6 months. More than I have written in years. Now that RavenDB 7.0 is out, most of those are already implemented and available to you. That gives me the chance to compare the design process and the implementation with recent work.

Vector Search & AI Integration for RavenDB

This was written in November 2024. It outlines what we want to achieve at a very high level. Most importantly, it starts by discussing what we won’t be trying to do, rather than what we will. Limiting the scope of the problem can be a huge force multiplier in such cases, especially when dealing with new concepts.

Reading through that document, you can see that it lays out the external-facing aspects of vector search in RavenDB. You have the vector.search() method in RQL, a discussion on how it works in other systems, and some ideas about vector generation and usage.

It doesn’t cover implementation details or how it will look from the perspective of RavenDB. This is at the level of the API consumer, what we want to achieve, not how we’ll achieve it.

AI Integration with RavenDB

Given that we have vector search, the next step is how to actually get and use it. This design document was a collaborative process, mostly written during and shortly after a big design discussion we had (which lasted for hours).

The idea there was to iron out everyone’s overall understanding of what we want to achieve. We considered things like caching and how it plays into the overall system; there are notes there down to the level of what the field names should be.

That work has already been implemented. You can access it through the new AI button in the Studio. Check out this icon on the sidebar:

That was a much smaller task in scope, but you can see how even something that seemed pretty clear changed as we sat down and actually built it. Concepts we didn’t even think to consider were raised, handled, and implemented (without needing another design).

Voron HNSW Design Notes

This design document details our initial approach to building the HNSW implementation inside Voron, the basis for RavenDB’s new vector search capabilities.

That one is really interesting because it is a pure algorithmic implementation, completely internal to our usage (so no external API is needed), and I wrote it after extensive research.

The end result is similar to what I planned, but there are still significant changes. In fact, pretty much all the actual implementation details are different from the design document. That is both expected and a good thing, because it means that once we dove in, we were able to do things in a better way.

Interestingly, this is often the result of other constraints forcing you to do things differently. And then everything rolls down from there.

“If you have a problem, you have a problem. If you have two problems, you have a path for a solution.”

In the case of HNSW, a really complex part of the algorithm is handling deletions. In our implementation, there is a vector, and it has an associated posting list attached to it with all the index entries. That means we can implement deletion simply by emptying the associated posting list. An entire section in the design document (and hours spent pondering) is gone, just like that.

If design documents don’t reflect the end result of the system, are they useful?

I would unequivocally state that they are tremendously useful. In fact, they are crucial for us to be able to tackle complex problems. The most important aspect of design documents is that they capture our view of what the problem space is.

Beyond their role in planning, design documents serve another critical purpose: they act as a historical record. They capture the team’s thought process, documenting why certain decisions were made and how challenges were addressed. This is especially valuable for a long-lived project like RavenDB, where future developers may need context to understand the system’s evolution.

Imagine a design document that explores a feature in detail—outlining options, discussing trade-offs, and addressing edge cases like caching or system integrations. The end result may be different, but the design document, the feature documentation (both public and internal), and the issue & commit logs serve to capture the entire process very well.

Sometimes, looking at the road not taken can give you a lot more information than looking at what you did.

I consider design documents to be a very important part of the way we design our software. At the same time, I don’t find them binding; we’ll write the software and see where it leads us in the end.

What are your expectations and experience with writing design documents? I would love to hear additional feedback.

time to read 1 min | 195 words

One of the “minor” changes in RavenDB 7.0 is that we moved from our own in-house logging system to using NLog. Looking back through my previous posts, I found the 2016 post outlining the rationale for the decision to use our own logging infrastructure.

At the time, no other logging framework could sustain the kind of performance we required. The .NET community has come a long way since then, and it has become clear that we needed to revisit this decision. Modern logging frameworks now give performance a much higher priority, and their APIs support that at every level (spans, avoiding allocations, etc.).

The move to NLog gives users a much simpler way to integrate RavenDB logs into their monitoring & observability pipeline.

You can read about the new NLog feature in our blog.

We also spent time making the logs view inside the RavenDB Studio nicer, taking advantage of the new capabilities we now expose:

Hopefully, you won’t need to dig too deeply into the logs, but it is now easier than ever to use them.

time to read 8 min | 1445 words

RavenDB 7.0 adds Snowflake integration to the set of ETL targets it supports.

Snowflake is a data warehouse solution, designed for analytics and data at scale. RavenDB is aimed at transactional scenarios and has a really good story around data distribution and wide geographical deployments.

You can check out the documentation to read the details about how you can use this integration to push data from RavenDB to your Snowflake database. In this post, I want to introduce one usage scenario for such integration.

RavenDB is commonly deployed on the edge, running on site in grocery stores, restaurants’ self-serve kiosks, supermarket checkout counters, etc. Such environments have to be tough and resilient to errors, network problems, mishandling, and much more.

We had to field support calls in the style of “there is ketchup all over the database”, for example.

In such environments, you must operate completely independently of the cloud. Both because of latency and performance issues and because you must keep yourself up & running even if the Internet is down. RavenDB does very well in such a scenario because of its internal architecture, the ability to run in a multi-master configuration, and its replication capabilities.

From a business perspective, that is a critical capability, to push data to the edge and be able to operate independently of any other resource. At the same time, this represents a significant challenge since you lose the ability to have an overall view of what is going on.

RavenDB’s Snowflake integration is there to bridge this gap. The idea is that you can define Snowflake ETL processes that would push the data from all the branches you have to a single shared Snowflake account. Your headquarters can then run queries, analyse the data, and in general have near real-time analytics without hobbling the branches with having to manage the data remotely.

The Grocery Store Scenario

In our grocery store, we manage the store using RavenDB as the backend. With documents such as this to record sales:


{
  "Items": [
    {
      "ProductId": "P123",
      "ProductName": "Milk",
      "QuantitySold": 5
    },
    {
      "ProductId": "P456",
      "ProductName": "Bread",
      "QuantitySold": 2
    },
    {
      "ProductId": "P789",
      "ProductName": "Eggs",
      "QuantitySold": 10
    }
  ],
  "Timestamp": "2025-02-28T12:00:00Z",
  "@metadata": {
    "@collection": "Orders"
  }
}

And this document to record other inventory updates in the store:


{
  "ProductId": "P123",
  "ProductName": "Milk",
  "QuantityDiscarded": 3,
  "Reason": "Spoilage",
  "Timestamp": "2025-02-28T14:00:00Z",
  "@metadata": {
    "@collection": "DiscardedInventory"
  }
}

These documents are repeated many times for each store, recording the movement of inventory, tracking sales, etc. Now we want to share those details with headquarters.

There are two ways to do that. One is to use the Snowflake ETL to push the data itself to the HQ’s Snowflake account. You can see an example of that when we push the raw data to Snowflake in this article.

The other way is to make use of RavenDB’s map/reduce capabilities to do some of the work in each store and only push the summary data to Snowflake. This can be done in order to reduce the load on Snowflake if you don’t need a granular level of data for analytics.

Here is an example of such a map/reduce index:


// Index Name: ProductConsumptionSummary
// Output Collection: ProductsConsumption


// Fan out: one entry per item sold in the order
map('Orders', function(order) {
    return order.Items.map(function(item) {
        return {
            ProductId: item.ProductId,
            TotalSold: item.QuantitySold,
            TotalDiscarded: 0,
            Date: dateOnly(order.Timestamp)
        };
    });
});


map('DiscardedInventory', function(discard) {
    return {
        ProductId: discard.ProductId,
        TotalSold: 0,
        TotalDiscarded: discard.QuantityDiscarded,
        Date: dateOnly(discard.Timestamp)
    };
});


groupBy(x => ({ ProductId: x.ProductId, Date: x.Date }))
  .aggregate(g => ({
        ProductId: g.key.ProductId,
        Date: g.key.Date,
        TotalSold: g.values.reduce((c, val) => val.TotalSold + c, 0),
        TotalDiscarded: g.values.reduce((c, val) => val.TotalDiscarded + c, 0)
   }));

This index will output its documents to the artificial collection: ProductsConsumption.

We can then define a Snowflake ETL task that would push that to Snowflake, like so:


loadToProductsConsumption({
        PRODUCT_ID: doc.ProductId,
        STORE_ID: load('config/store').StoreId,
        TOTAL_SOLD: doc.TotalSold,
        TOTAL_DISCARDED: doc.TotalDiscarded,
        DATE: doc.Date
});

With that in place, each branch would push details about its sales, inventory discarded, etc., to the Snowflake account. And headquarters can run their queries and get a real-time view and understanding about what is going on globally.

You can read more about Snowflake ETL here. The full docs, including all the details on how to set up properly, are here.

time to read 4 min | 728 words

The big-ticket item for RavenDB 7.0 may be the new vector search and AI integration, but those aren’t the only new features this new release brings to the table.

AWS SQS ETL allows you to push data directly from RavenDB into an SQS queue. You can read the full details in our documentation, but the basic idea is that you can supply your own logic that would push data from RavenDB documents to SQS for additional processing.

For example, suppose you need to send an email to the customer when an order is marked as shipped. You write the code to actually send the email as an AWS Lambda function, like so:


import json

import boto3

# The SQS client, queue URL, and send_email() helper are assumed to be
# defined elsewhere in the module; they are not shown here.
sqs_client = boto3.client('sqs')
shippedOrdersQueue = '<ShippedOrders queue URL>'


def lambda_handler(event, context):
    for record in event['Records']:
        message_body = json.loads(record['body'])
        
        email_body = """
Subject: Your Order Has Shipped!


Dear {customer_name},
Great news! Your order #{order_id} has shipped. Here are the details:
Track your package here: {tracking_url}
""".format(
            customer_name=message_body.get('customer_name'),
            order_id=message_body.get('order_id'),
            tracking_url=message_body.get('tracking_url')
        )
        
        send_email(
            message_body.get('customer_email'),
            email_body
        )
        
        sqs_client.delete_message(
            QueueUrl=shippedOrdersQueue,
            ReceiptHandle=record['receiptHandle']
        )

You wire that to the right SQS queue using:


aws lambda create-event-source-mapping \
--function-name ProcessShippedOrders \
--event-source-arn arn:aws:sqs:$reg:$acc:ShippedOrders \
--batch-size 10

The next step is to get the data into the queue. This is where the new AWS SQS ETL inside of RavenDB comes into play.

You can specify a script that reacts to changes inside your database and sends a message to SQS as a result. Look at the following, on the Orders collection:


if(this.Status !== 'Completed') return;


const customer = load(this.Customer);
loadToShippedOrders({
  'customer_email': customer.Email,
  'customer_name': customer.Name,
  'order_id':        id(this),
  'tracking_url': this.Shipping.TrackingUrl
});

And you are… done! We have a full blown article with all the steps walking you through configuring both RavenDB and AWS that I encourage you to read.

Like our previous ETL processes for queues, you can also use RavenDB in the Outbox pattern to gain transactional capabilities on top of the SQS queue. You write the messages you want to reach SQS as part of your normal RavenDB transaction, and the RavenDB SQS ETL will ensure that they reach the queue if the transaction was successfully committed.
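Here is a rough sketch of what that can look like from the client side (the document classes and field names are hypothetical; only the session API is the standard RavenDB client). The outgoing message is written in the same session as the business change, so both become durable in a single transaction, and the SQS ETL forwards the message document after the commit:

using var session = store.OpenSession();

var order = session.Load<Order>(orderId);
order.Status = "Completed";

// The outbox message is just another document, stored in the same transaction.
// An SQS ETL script watching this collection pushes it to the queue after commit.
session.Store(new ShippedOrderMessage
{
    CustomerEmail = order.CustomerEmail,
    OrderId = orderId,
    TrackingUrl = order.TrackingUrl
});

session.SaveChanges(); // one transaction: the order update and the outbox message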

time to read 3 min | 500 words

RavenDB 7.0 is out, and the big news is vector search and AI integration in the box. You can download the new bits, run it in the cloud, or just look at the public instance to test it out.

Before discussing the actual feature, I want to show you what we have done:


$query = 'I feel like italian today'
from Products
where vector.search(embedding.text(Name), $query)

Try that in the public instance using the sample database, here is what you get back!

I wrote parts of the vector search for RavenDB, and even so, I was pretty amazed when I realized that the query above just works. Note that there is no actual setup to be done here. You just issue a query and ask it to use vector.search() during execution. RavenDB handles everything else.
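From the C# client, the same query can be issued as raw RQL (a minimal sketch; the Product class is a stand-in for your own model):

using var session = store.OpenSession();

// RavenDB creates the auto-index and generates the embeddings as needed.
var products = session.Advanced
    .RawQuery<Product>(@"from Products
                         where vector.search(embedding.text(Name), $query)")
    .AddParameter("query", "I feel like italian today")
    .ToList();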

You can read more about the vector search feature in our documentation, but the gist of it is that you can run a piece of data (text, an image, or even a video) through an embedding model and get a vector back. That vector allows you to query using the model’s understanding of the data.

The idea is that this moves you beyond querying with keywords or even full-text search. You are now able to search for meaning and intent. You can leverage models such as OpenAI, Ollama, Grok, Mistral, or DeepSeek to give your users deep insight into their data inside RavenDB.

RavenDB embeds a small model (bge-micro-v2) and can apply it during auto-indexes, both for generating embeddings for your data and for queries. As you can see, even with a tiny model, the results are spectacular.

Naturally, you can also use larger models, including OpenAI, Ollama, Grok, and more. Using a large model means it has a better contextual understanding of the relationships between the data and the query and can provide more accurate results.

RavenDB’s support for vector search includes:

  • Approximate neighbor search using the HNSW algorithm.
  • Exact neighbor search.
  • Support for vectors using float arrays, base64 encoded, and binary attachments.
  • RavenVector type for optimizing disk space and improving the read speed of vectors.
  • Using full vectors or providing quantized ones to reduce disk space.
  • Support for auto-quantization of vectors during indexing & queries.

Our aim with the new RavenDB 7.0 release is to make it possible for you to find meaning - to be able to leverage vectors, embeddings, large language models, and AI without fuss or trouble. I’m really proud to say that we have exceeded all my expectations.

There are a lot more exciting announcements about RavenDB & AI integration in the pipeline, and I’m really excited to share them with you in the near future.

There are actually other features that are part of RavenDB 7.0, but AI is eating the world, so we’ll cover them in a separate blog post instead of giving them a tertiary spot in this one.

time to read 7 min | 1335 words

I have been delaying the discussion about the performance numbers for a reason. Once we did all the work that I described in the previous posts, we put it to an actual test and ran it against the benchmark suite. In particular, we were interested in the following scenario:

High insert rate, with about 100 indexes active at the same time. Target: Higher requests / second, lowered latency

Previously, after hitting a tipping point, we would settle at under 90 requests/second and latency spikes that hit over 700(!) ms. That was the trigger for much of this work, after all. The initial results were quite promising, we were showing massive improvement across the board, with over 300 requests/second and latency peaks of 350 ms.

On the one hand, that is a really amazing boost, over 300% improvement. On the other hand, just 300 requests/second - I hoped for much higher numbers. When we started looking into exactly what was going on, it became clear that I seriously messed up.

Under load, RavenDB would issue fsync() at a rate of over 200/sec. That is… a lot, and it means that we are seriously limited in the amount of work we can do. That is weird since we worked so hard on reducing the amount of work the disk has to do. Looking deeper, it turned out to be an interesting combination of issues.

Whenever Voron changes the active journal file, we’ll register the new journal number in the header file, which requires an fsync() call. Because we are using shared journals, the writes from both the database and all the indexes go to the same file, filling it up very quickly. That meant we were creating new journal files at a rate of more than one per second. That is quite a rate for creating new journal files, mostly resulting from the sheer number of writes that we funnel into a single file.

The catch here is that on each journal file creation, we need to register each one of the environments that share the journal. In this case, we have over a hundred environments participating, and we need to update the header file for each environment. With the rate of churn that we have with the new shared journal mode, that alone increases the number of fsync() generated.

It gets more annoying when you realize that in order to actually share the journal, we need to create a hard link between the environments. On Linux, for example, we need to write the following code:


bool create_hard_link_durably(const char *src, const char *dest) {
    if (link(src, dest) == -1) 
        return false;


    int dirfd = open(dest, O_DIRECTORY);
    if (dirfd == -1) 
        return false;


    int rc = fsync(dirfd);
    close(dirfd);
    return rc != -1;
}

You need to make 4 system calls to do this properly, and most crucially, one of them is fsync() on the destination directory. This is required because the man page states:

Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk.  For that an explicit fsync() on a file descriptor for the directory is also needed.

Shared journals mode requires that we link the journal file when we record a transaction from that environment to the shared journal. In our benchmark scenario, that means that each second, we’ll write from each environment to each journal. We need an fsync() for the directory of each environment per journal, and in total, we get to over 200 fsync() per journal file, which we replace at a rate of more than one per second.

Doing nothing as fast as possible…

Even with this cost, we are still 3 times faster than before, which is great, but I think we can do better. In order to be able to do that, we need to figure out a way to reduce the number of fsync() calls being generated. The first task to handle is updating the file header whenever we create a new journal.

We are already calling fsync() on the directory when we create the journal file, so we ensure that the file is properly persisted in the directory. There is no need to also record it in the header file. Instead, we can just use the directory listing to handle this scenario. That change alone saved us about 100 fsync() / second.

The second problem is with the hard links, we need to make sure that these are persisted. But calling fsync() for each one is cost-prohibitive. Luckily, we already have a transactional journal, and we can re-use that. As part of committing a set of transactions to the shared journal, we’ll also record an entry in the journal with the associated linked paths.

That means we can skip calling fsync() after creating the hard link, since if we run into a hard crash, we can recover the linked journals during the journal recovery. That allows us to skip the other 100 fsync() / second.

Another action we can take to reduce costs is to increase the size of the journal files. Since we are writing entries from both the database and indexes to the same file, we are going through them a lot faster now, so increasing the default maximum size will allow us to amortize the new file costs across more transactions.

The devil is in the details

The idea of delaying the fsync() of the parent directory of the linked journals until recovery is a really great one because we usually don’t recover. That means we delay the cost indefinitely, after all. Recovery is rare, and adding a few milliseconds to the recovery time is usually not meaningful.

However… There is a problem when you look at this closely. The whole idea behind shared journals is that transactions from multiple storage environments are written to a single file, which is hard-linked to multiple directories. Once written, each storage environment deals with the journal files independently.

That means that if the root environment is done with a journal file and deletes it before the journal file hard link was properly persisted to disk and there was a hard crash… on recovery, we won’t have a journal to re-create the hard links, and the branch environments will be in an invalid state.

That is a set of circumstances that is going to be unlikely, but that is something that we have to prevent, nevertheless.

The solution for that is to keep track of all the journal directories, and whenever we are about to delete a journal file, we’ll sync all the associated journal directories. The key here is that when we do that, we keep track of the current journal written to that directory. Instead of having to run fsync() for each directory per journal file, we can amortize this cost. Because we delayed the actual syncing, we have time to create more journal files, so calling fsync() on the directory ensures that multiple files are properly persisted.

We still have to sync the directories, but at least it is not to the tune of hundreds of times per second.

Performance results

After making all of those changes, we ran the benchmark again. We are looking at over 500 req/sec and latency peaks that hover around 100 ms under load. As a reminder, that is almost twice the throughput of the improved version (and a huge improvement in latency).

If we compare this to the initial result, we increased the number of requests per second by over 500%.

But I have some more ideas about that, I’ll discuss them in the next post…

time to read 17 min | 3214 words

I wrote before about a surprising benchmark that we ran to discover the limitations of modern I/O systems. Modern disks such as NVMe have impressive capacity and amazing performance for everyday usage. When it comes to the sort of activities that a database engine is interested in, the situation is quite different.

At the end of the day, a transactional database cares a lot about actually persisting the data to disk safely. The usual metrics we have for disk benchmarks are all about buffered writes; that is why we ran our own benchmark. The results were really interesting (see the post): basically, it feels like there is a bottleneck when writing to the disk, and the bottleneck is the number of writes, not how big they are.

If you are issuing a lot of small writes, your writes will contend on that bottleneck and you’ll see throughput that is slow. The easiest way to think about it is to consider a bus carrying 50 people at once versus 50 cars with one person each. The same road would be able to transport a lot more people with the bus rather than with individual cars, even though the bus is heavier and (theoretically, at least) slower.

Databases & Storages

In this post, I’m using the term Storage to refer to a particular folder on disk, which is its own self-contained storage with its own ACID transactions. A RavenDB database is composed of many such Storages that are cooperating together behind the scenes.

The I/O behavior we observed is very interesting for RavenDB. The way RavenDB is built is that a single database is actually made up of a central Storage for the data, and separate Storages for each of the indexes you have. That allows us to do split work between different cores, parallelize work, and most importantly, benefit from batching changes to the indexes.

The downside of that is that a single transaction doesn’t cause a single write to the disk but multiple writes. Our test case is the absolute worst-case scenario for the disk, we are making a lot of individual writes to a lot of documents, and there are about 100 indexes active on the database in question.

In other words, at any given point in time, there are many (concurrent) outstanding writes to the disk. We don’t actually care about most of those writers, mind you. The only writer that matters (and is on the critical path) is the database one. All the others are meant to complete in an asynchronous manner and, under load, will actually perform better if they stall (since they can benefit from batching).

The problem is that we are suffering from exactly this situation: the writes that the user is actually waiting for are basically stuck in traffic behind all the lower-priority writes. That is quite annoying, I have to say.

The role of the Journal in Voron

The Write Ahead Journal in Voron is responsible for ensuring that your transactions are durable.  I wrote about it extensively in the past (in fact, I would recommend the whole series detailing the internals of Voron). In short, whenever a transaction is committed, Voron writes that to the journal file using unbuffered I/O.

Remember that the database and each index are running their own separate storages, each of which can commit completely independently of the others. Under load, all of them may issue unbuffered writes at the same time, leading to congestion and our current problem.

During normal operations, Voron writes to the journal, waits to flush the data to disk, and then deletes the journal. They are never actually read except during startup. So all the I/O here is just to verify that, on recovery, we can trust that we won’t lose any data.

The fact that we have many independent writers that aren’t coordinating with one another is an absolute killer for our performance in this scenario. We need to find a way to solve this, but how?

One option is to have both indexes and the actual document in the same storage. That would mean that we have a single journal and a single writer, which is great. However, Voron has a single writer model, and for very good reasons. We want to be able to process indexes in parallel and in batches, so that was a non-starter.

The second option was to not write to the journal in a durable manner for indexes. That sounds… insane for a transactional database, right? But there is logic to this madness. RavenDB doesn’t actually need its indexes to be transactional, as long as they are consistent, we are fine with “losing” transactions (for indexes only, mind!). The reasoning behind that is that we can re-index from the documents themselves (who would be writing in a durable manner).

We actively considered that option for a while, but it turned out that if we don’t have a durable journal, that makes it a lot more difficult to recover. We can’t rely on the data on disk to be consistent, and we don’t have a known good source to recover from. Re-indexing a lot of data can also be pretty expensive. In short, that was an attractive option from a distance, but the more we looked into it, the more complex it turned out to be.

The final option was to merge the journals. Instead of each index writing to its own journal, we could write to a single shared journal at the database level. The problem was that if we did individual writes, we would be right back in the same spot, now on a single file rather than many. But our tests show that this doesn’t actually matter.

Luckily, we are well-practiced in the notion of transaction merging, so this is exactly what we did. Each storage inside a database is completely independent and can carry on without needing to synchronize with any other. We defined the following model for the database:

  • Root Storage: Documents
  • Branch: Index - Users/Search
  • Branch: Index - Users/LastSuccessfulLogin
  • Branch: Index - Users/Activity

This root & branch model is a simple hierarchy, with the documents storage serving as the root and the indexes as branches. Whenever an index completes a transaction, it will prepare the journal entry to be written, but instead of writing the entry to its own journal, it will pass the entry to the root.

The root (the actual database, I remind you) will be able to aggregate the journal entries from its own transaction as well as all the indexes and write them to the disk in a single system call. Going back to the bus analogy, instead of each index going to the disk using its own car, they all go on the bus together.
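Conceptually, the flow looks something like the following sketch (an illustration of the idea, not Voron’s actual code): each branch hands its prepared journal entry to the root, and the root writes the accumulated entries out together.

using System.Collections.Concurrent;
using System.IO;

public class SharedJournal
{
    private readonly ConcurrentQueue<byte[]> _pending = new();

    // Called by each index (branch) when it commits a transaction:
    // the entry is prepared locally but not written to disk here.
    public void Enqueue(byte[] journalEntry) => _pending.Enqueue(journalEntry);

    // Called by the database (root): drain everything that accumulated and
    // write it to the shared journal together, instead of one journal per storage.
    public void Flush(Stream journalFile)
    {
        while (_pending.TryDequeue(out var entry))
            journalFile.Write(entry, 0, entry.Length);
        journalFile.Flush();
    }
}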

We now write all the entries from multiple storages into the same journal, which means that we have to distinguish between the different entries.  I wrote a bit about the challenges involved there, but we got it done.

The end result is that we now have journal writes merging for all the indexes of a particular database, which for large databases can reduce the total number of disk writes significantly. Remember our findings from earlier, bigger writes are just fine, and the disk can process them at GB/s rate. It is the number of individual writes that matters most here.

Writing is not even half the job: recovery (reads) in a shared world

The really tough challenge here wasn’t the write side of this feature. Journals are never read during normal operations; we only ever write to them, and they keep a persistent record of transactions until we flush all changes to the data file, at which point we can delete them.

It is only when the Storage starts that we need to read from the journals, to recover all transactions that were committed to them. As you can imagine, even though this is a rare occurrence, it is one of critical importance for a database.

This change means that we direct all the transactions from both the indexes and the database into a single journal file. Usually, each Storage environment has its own Journals/ directory that stores its journal files. On startup, it will read through all those files and recover the data file.

How does it work in the shared journal model? For a root storage (the database), nothing much changes. We need to take into account that the journal files may contain transactions from a different storage, such as an index, but that is easy enough to filter.

What about branch storage (an index) recovery? Well, it can probably just read the Journals/ directory of the root (the database), no?

Well, maybe. There are some problems with this approach, though. How do we encode the relationship between root & branch? Do we store a relative path, or an absolute path? We could, of course, always use the root’s Journals/ directory, but that is problematic: it means we could only open a branch storage if we already had the root storage open. Accepting that limitation adds a new wrinkle to the system that doesn’t exist today.

It is highly desirable (for many reasons) to be able to work with just a single environment. For example, to troubleshoot a particular index, we may want to load it in isolation from its database. Losing that ability by tying a branch storage to its root is not something we want.

The current state, by the way, in which each storage is a self-contained folder, is pretty good for us, because it lets us play certain tricks. For example, we can stop a database, rename an index folder, and start it up again; the index would effectively be re-created. Then we can stop the database and rename the folders again, going back to the previous state. That is not possible if we tie all their lifetimes together with the same journal file.

Additional complexity is not welcome in this project

Building a database is complicated enough; adding additional levels of complexity is a Bad Idea. Adding complexity to the recovery process (which by its very nature is both critical and rarely executed) is a Really Bad Idea.

I started laying out the details about what this feature entails:

  • A database cannot delete its journal files until all the indexes have synced their state.
  • What happens if an index is disabled by the user?
  • What happens if an index is in an error state?
  • How do you manage the state across snapshot & restore?
  • There is a well-known optimization for databases in which we split the data file and the journal files into separate volumes. How is that going to work in this model?
  • Putting the database and indexes on separate volumes altogether is also a well-known optimization technique. Is that still possible?
  • How do we migrate from legacy databases to the new shared journal model?

I started to provide answers for all of these questions… I’ll spare you the flow chart that was involved; it looked like something between abstract art and the remains of a high school party.

The problem is that, at a very deep level, a Voron Storage is meant to be its own independent entity, and we should be able to deal with it as such. For example, RavenDB has a feature called Side-by-Side indexes, which allows us to have two versions of an index at the same time. When both the old and new versions are fully caught up, we can shut down both indexes, delete the old one, and rename the new index to the old one’s path.

A single shared journal would have to handle this scenario explicitly, along with many others that make similar assumptions about how the system works.

Not my monkeys, not my circus, not my problem

I got a great idea about how to dramatically simplify the task when I realized that a core tenet of Voron, and of RavenDB in general, is that we should not go against the grain and add complexity to our lives, in the same way that Voron uses memory-mapped files and carefully designed data access patterns to take advantage of the kernel’s heuristics.

The idea is simple: instead of a single shared journal that is managed by the database (the root storage) and that the indexes (the branch storages) have to reach into, we’ll have a single shared journal with many references to it.

To do that, we’ll take advantage of an interesting file system feature: hard links. A hard link is just a way to associate the same file data with multiple file names, which can reside in separate directories. Hard links are limited to files on the same volume, and the easiest way to think about them is as pointers to the file data.

Usually, we make no distinction between the file name and the file itself, but at the file system level, we can attach a single file to multiple names. The file system will manage the reference counts for the file, and when the last reference to the file is removed, the file system will delete the file.
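
.NET has no built-in API for creating hard links, so a sketch of the mechanism has to go through the platform APIs directly (the link() call on Linux, CreateHardLinkW on Windows). This is only an illustration of how hard links are created, under those assumptions; it is not necessarily how Voron does it internally.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class HardLink
{
    [DllImport("libc", SetLastError = true)]
    private static extern int link(string oldpath, string newpath);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool CreateHardLinkW(string lpFileName, string lpExistingFileName, IntPtr lpSecurityAttributes);

    // Create an additional name (newLink) for the data already reachable via existingFile.
    public static void Create(string existingFile, string newLink)
    {
        bool ok = OperatingSystem.IsWindows()
            ? CreateHardLinkW(newLink, existingFile, IntPtr.Zero)
            : link(existingFile, newLink) == 0;

        if (!ok)
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}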

The idea is that we’ll keep the same Journals/ directory structure as before, where each Storage has its own directory. But instead of separate journal files for each index and the database, we’ll have hard links between them. You can see how it looks here; the numbers next to the file names are the inode numbers, and you can see several files sharing the same inode number (indicating multiple links to the same underlying file).


└── [  40956] my.shop.db
    ├── [  40655] Indexes
    │   ├── [  40968] Users_ByName
    │   │   └── [  40970] Journals
    │   │       ├── [  80120] 0002.journal
    │   │       └── [  82222] 0003.journal
    │   └── [  40921] Users_Search
    │       └── [  40612] Journals
    │           ├── [  81111] 0001.journal
    │           └── [  82222] 0002.journal
    └── [  40812] Journals
        ├── [  81111] 0014.journal
        └── [  82222] 0015.journal

With this idea, here is how a RavenDB database manages writing to the journal. When the database needs to commit a transaction, it will write to its journal, located in the Journals/ directory. If an index (a branch storage) needs to commit a transaction, it does not write to its own journal but passes the transaction to the database (the root storage), where it will be merged with any other writes (from the database or other indexes), reducing the number of write operations.

The key difference here is that when a branch writes to the journal, it checks whether the current journal file is already associated with its own storage environment. Take a look at the Journals/0015.journal file: if the Users_ByName index needs to write, it checks whether that journal file is already linked into its own directory. In this case, you can see that Journals/0015.journal points to the same file (inode) as Indexes/Users_ByName/Journals/0003.journal.
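
Here is a hedged sketch of that check, reusing the hypothetical HardLink.Create helper from above. The names and the exact bookkeeping are illustrative; the real logic (and the mapping between the root’s journal numbers and the branch’s) is more involved.

using System.IO;

void EnsureJournalLinked(string branchJournalsDir, string sharedJournalPath, long branchJournalNumber)
{
    // The branch keeps its own numbering (0003.journal here may be 0015.journal for the root).
    string linkPath = Path.Combine(branchJournalsDir, $"{branchJournalNumber:0000}.journal");

    if (File.Exists(linkPath))
        return; // already linked, nothing to do

    // Add another name for the same file data, so the branch can recover from it
    // later without needing the root storage to be open.
    HardLink.Create(sharedJournalPath, linkPath);
}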

What this means is that the shared journals mode only affects how we commit transactions; no changes were required on the read/recovery side. That is a major plus for such a critical feature, since it means we can rely on code that we have proven to work over the past 15 years.

The single writer mode makes it work

A key fact to remember is that there is always only a single writer to the journal file, so there is no need to worry about contention or multiple concurrent writes competing for access. There is one writer and many readers (during recovery), and each reader can treat the file as effectively immutable.

The idea behind relying on hard links is that we let the operating system manage the file references. If an index has flushed its data and is done with a particular journal file, it can delete its link without requiring any coordination with the rest of the indexes or the database. That lack of coordination is a huge simplification in the process.
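
For example (using the paths from the listing above), once the Users_Search index no longer needs its journal, it can simply delete its own name for the file:

File.Delete("my.shop.db/Indexes/Users_Search/Journals/0002.journal");
// The data itself is untouched: my.shop.db/Journals/0015.journal (same inode)
// still exists, so the database and the other indexes are unaffected. The file
// system frees the data only when the last remaining link is deleted.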

In the same manner, features such as copying & moving folders around still work. Moving a folder will not break the hard links, but copying the folder will. In that case, we don’t actually care: we’ll still read from the journal files as normal, and when we need to commit a new transaction after a copy, we’ll create a new linked file.

That means that features such as snapshots just work (although restoring from a snapshot will create multiple copies of the “same” journal file). We don’t really care about that, since in short order, the journals will move beyond that and share the same set of files once again.

Migration from the old system to the new one works the same way: it is all just a set of files on disk, and we can create new hard links as needed.

Advanced scenarios behavior

I mentioned earlier that a well-known technique for database optimizations is to split the database file and the journals into separate volumes (which provides higher overall I/O throughput). If the database and the indexes reside on different volumes, we cannot use hard links, and the entire premise of this feature fails.

In practice, at least in our users’ scenarios, that tends to be the exception rather than the rule, and shared journals remain a relevant optimization for the most common deployment model.

Additional optimizations - vectored I/O

The idea behind shared journals is that we can funnel the writes from multiple environments through a single pipe, presenting the disk with fewer (and larger) writes. The fact that we need to write multiple buffers at the same time also means that we can take advantage of even more optimizations.

On Windows, we can use WriteFileGather to submit multiple buffers in a single system call, merging writes from many indexes and the database. On Linux, we use pwritev for the same purpose. The end result is additional optimization beyond just the merged writes.
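
On the .NET side, gather writes can be expressed with the RandomAccess API, which accepts a list of buffers in one call. Whether the runtime turns that into a single vectored system call (pwritev / WriteFileGather style) depends on the platform and how the file was opened, and Voron may call the native APIs directly, so treat this as a sketch:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Win32.SafeHandles;

static void WriteBatch(SafeFileHandle sharedJournal, long fileOffset, IReadOnlyList<ReadOnlyMemory<byte>> entries)
{
    // Writes all buffers back-to-back starting at fileOffset, in a single call.
    RandomAccess.Write(sharedJournal, entries, fileOffset);
}

// Usage sketch:
// using SafeFileHandle handle = File.OpenHandle("Journals/0015.journal",
//     FileMode.OpenOrCreate, FileAccess.Write, FileShare.Read, FileOptions.WriteThrough);
// WriteBatch(handle, currentEndOfJournal, pendingEntries);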

I have been working on this set of features for a very long time, and all of them are designed to be completely invisible to the user. They either give us a lot more flexibility internally, or they are meant to provide better performance without requiring any action from the user.

I’m really looking forward to showing the performance results. We’ll get to that in the next post…
