Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 2 min | 218 words

RavenDB is a pretty big system, with well over 1 million lines of code. Recently, I had to deal with an interesting problem. I had a CancellationToken at hand, which I expected to remain valid for the duration of the full operation.

However, something sneaky was going on there. Something was cancelling my CancellationToken, and not in an expected manner. At last count, I had roughly 2 bazillion CancellationTokens in the RavenDB codebase. Per request, per database, global to the server process, time-based, operation-based, etc., etc.

Figuring out why the CancellationToken was canceled turned out to be a chore. Instead of reading through the code, I cheated.


token.Register(() =>
{
    Console.WriteLine("Cancelled!" + Environment.StackTrace);
});

I ran the code, tracked back exactly who was calling cancel, and realized that I had mixed the request-based token with the database-level token. In the end it was a single-line fix, but until I knew where it was, it was very challenging to figure out.
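
When there are several plausible suspects, the same trick scales: register a labeled callback on each token, and the output tells you both which token was canceled and who canceled it. Here is a throwaway sketch (the tokens and labels are hypothetical, use whatever you have at hand):


using System;
using System.Threading;

// Hypothetical suspects - in the real code these came from very different places.
var requestCts = new CancellationTokenSource();
var databaseCts = new CancellationTokenSource();

RegisterSuspect(requestCts.Token, "request");
RegisterSuspect(databaseCts.Token, "database");

databaseCts.Cancel(); // whoever calls Cancel() shows up in the printed stack trace

static void RegisterSuspect(CancellationToken token, string label)
{
    token.Register(() =>
        Console.WriteLine($"Cancelled: {label}{Environment.NewLine}{Environment.StackTrace}"));
}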

This approach, making the code tell you what is wrong, is an awesome way to cut down debugging time by a lot.

time to read 8 min | 1522 words

There are at least 3 puns in the title of this blog post. I’m sorry, but I’m writing this after several days of tracking an impossible bug. I’m actually writing a set of posts to wind down from this hunt, so you’ll have to suffer through my more prosaic prose.

This bug is the kind that leaves you questioning your sanity after days of pursuit, the kind that I’m sure I’ll look back on and blame for any future grey hair I have. I’m going to have another post talking about the bug since it is such a doozy. In this post, I want to talk about the general approach I take when dealing with something like this.

Beware, this process involves a lot of hair-pulling. I’m saving that for when the real nasties show up.

The bug in question was a race condition that defied easy reproduction. It didn’t show up consistently—sometimes it surfaced, sometimes it didn’t. The only “reliable” way to catch it was by running a full test suite, which took anywhere from 8 to 12 minutes per run. If the suite hung, we knew we had a hit. But that left us with a narrow window to investigate before the test timed out or crashed entirely. To make matters worse, the bug was in new C code called from a .NET application.

New C code is a scary concept. New C code that does multithreading is an even scarier concept. Race conditions there are almost expected, right?

That means that the feedback cycle is long. Any attempt we make to fix it is unreliable ("Did I fix it, or did it just not happen?"), and there isn't much information to go on. The first challenge was figuring out how to detect the bug reliably.

Using Visual Studio as the debugger was useless here—it only reproduced in release mode, and even with native debugging enabled, Visual Studio wouldn’t show the unmanaged code properly. That left us blind to the C library where the bug resided. I’m fairly certain that there are ways around that, but I was more interested in actually getting things done than fighting the debugger.

We got a lot of experience with WinDbg, a low-level debugger and a real powerhouse. It is also about as friendly as a monkey with a sore tooth and an alcohol addiction. The initial process was all about trying to reproduce the hang and then attach WinDbg to it.

Turns out that we never actually generated PDBs for the C library. So we had to figure out how to generate them, then how to carry them all the way from the build to the NuGet package to the deployment for testing - to maybe reproduce the bug again. Then we could see what area of the code we were even in.

Getting WinDbg attached is just the start; we need to sift through the hundreds of threads running in the system. That is where we actually started applying the proper process for this.

This piece of code is stupidly simple, but it is sufficient to reduce “what thread should I be looking at” from 1 - 2 minutes to 5 seconds.


SetThreadDescription(GetCurrentThread(), L"Rvn.Ring.Wrkr");
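
The same idea helps on the managed side: recent .NET versions propagate a thread's Name to the native thread description, so named managed threads show up in WinDbg the same way (a trivial sketch, the name here is made up):


using System.Threading;

Thread.CurrentThread.Name = "Rvn.Indexing.Worker";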

I had the thread that was hanging, and I could start inspecting its state. This was a complex piece of code, so I had no idea what was going on or what the cause was. This is when we pulled the next tool from our toolbox.


void alert() {
    while (1) {
        Beep(800, 200);
        Sleep(200);
    }
}

This isn’t a joke, it is a super important aspect. In WinDbg, we noticed some signs in the data that the code was working on, indicating that something wasn’t right. It didn’t make any sort of sense, but it was there. Here is an example:


enum state
{
  red,
  yellow,
  green
};


enum state _currentState;

And when we look at it in the debugger, we get:


0:000> dt _currentState
Local var @ 0x50b431f614 Type state
17 ( banana_split )

That is beyond a bug, that is some truly invalid scenario. But that also meant that I could act on it. I started adding things like this:


if(_currentState != red && 
   _currentState != yellow && 
   _currentState != green) {
   alert();
}

The end result of this is that instead of having to wait & guess, I would now:

  • Be immediately notified when the issue happened.
  • Inspect the problematic state earlier.
  • Hopefully glean some additional insight so I can add more of those things.

With this in place, we iterated. Each time we spotted a new behavior hinting at the bug’s imminent trigger, we put another call to the alert function to catch it earlier. It was crude but effective—progressively tightening the noose around the problem.

Race conditions are annoyingly sensitive; any change to the system—like adding debug code—alters its behavior. We hit this hard. For example, we’d set a breakpoint in WinDbg, and the alert function would trigger as expected. The system would beep, we’d break in, and start inspecting the state. But because this was an optimized release build, the debugging experience was a mess. Variables were often optimized away into registers or were outright missing, leaving us to guess what was happening.

I resorted to outright hacks like this function:


__declspec(noinline) void spill(void* ptr) {
    volatile void* dummy = ptr;
    dummy; // Ensure dummy isn't flagged as unused
}

The purpose of this function is to force the compiler to assign an address to a value. Consider the following code:


if (work->completed != 0) {
    printf("old_global_state : %p, current state: %p\n",
         old_global_state, handle_ptr->global_state);
    alert();
    spill(&work);
}

Because we are sending a pointer to the work value to the spill function, the compiler cannot just put that in a register and must place it on the stack. That means that it is much easier to inspect it, of course.

Unfortunately, adding those spill calls led to the problem being “fixed”: we could no longer reproduce it. Far more annoyingly, any time that we added any sort of additional code to try to narrow down where this was happening, we had a good chance of either moving the behavior somewhere completely different or masking it completely.

Here are some of our efforts to narrow it down, if you want to see what the gory details look like.

At this stage, the process became a grind. We’d hypothesize about the bug’s root cause, tweak the code, and test again. Each change risked shifting the race condition’s timing, so we’d often see the bug vanish, only to reappear later in a slightly different form. The code quality suffered—spaghetti logic crept in as we layered hacks on top of hacks. But when you’re chasing a bug like this, clean code takes a back seat to results. The goal is to understand the failure, not to win a style award.

Bug hunting at this level is less about elegance and more about pragmatism. As the elusiveness of the bug increases, code quality and any other structured approach to your project fall by the wayside. The only thing on your mind is: how do I narrow it down? How do I get this chase to end?

Next time, I’ll dig into the specifics of this particular bug. For now, this is the high-level process: detect, iterate, hack, and repeat. No fluff—just the reality of the chase. The key in any of those bugs that we looked at is to keep narrowing the reproduction to something that you can get in a reasonable amount of time.

Once that happens, when you can hit F5 and get results, this is when you can start actually figuring out what is going on.

time to read 18 min | 3547 words

This post isn’t actually about a production issue—thankfully, we caught this one during testing. It’s part of a series of blog posts that are probably some of my favorite posts to write. Why? Because when I’m writing one, it means I’ve managed to pin down and solve a nasty problem.

 This time, it’s a race condition in RavenDB that took mountains of effort, multiple engineers, and a lot of frustration to resolve.

For the last year or so, I’ve been focused on speeding up RavenDB’s core performance, particularly its IO handling. You might have seen my earlier posts about this effort. One key change we made was switching RavenDB’s IO operations to use IO Ring, a new API designed for high-performance, asynchronous IO, and other goodies. If you’re in the database world and care about squeezing every ounce of performance out of your system, this is the kind of thing that you want to use.

This wasn’t a small tweak. The pull request for this work exceeded 12,000 lines of code—over a hundred commits—and likely a lot more code when you count all the churn. Sadly, this is one of those changes where we can’t just split the work into digestible pieces. Even now, we still have some significant additional work remaining to do.

We had two or three of our best engineers dedicated to it, running benchmarks, tweaking, and testing over the past few months. The goal is simple: make RavenDB faster by any means necessary.

And we succeeded, by a lot (and yes, more on that in a separate post). But speed isn’t enough; it has to be correct too. That’s where things got messy.

Tests That Hang, Sometimes

We noticed that our test suite would occasionally hang with the new code. Big changes like this—ones that touch core system components and take months to implement—often break things. That’s expected, and it’s why we have tests. But these weren’t just failures; sometimes the tests would hang, crash, or exhibit other bizarre behavior. Intermittent issues are the worst. They scream “race condition,” and race conditions are notoriously hard to track down.

Here’s the setup. IO Ring isn’t available in managed code, so we had to write native C code to integrate it. RavenDB already has a Platform Abstraction Layer (PAL) to handle differences between Windows, Linux, and macOS, so we had a natural place to slot this in.

The IO Ring code had to be multithreaded and thread-safe. I’ve been writing system-level code for over 20 years, and I still get uneasy about writing new multithreaded C code. It’s a minefield. But the performance we could get… so we pushed forward… and now we had to see where that led us.

Of course, there was a race condition. The actual implementation was under 400 lines of C code—deliberately simple, stupidly obvious, and easy to review. The goal was to minimize complexity: handle queuing, dispatch data, and get out. I wanted something I could look at and say, “Yes, this is correct.” I absolutely thought that I had it covered.

We ran the test suite repeatedly. Sometimes it passed; sometimes it hung; rarely, it would crash.

When we looked into it, we were usually stuck on submitting work to the IO Ring. Somehow, we ended up in a state where we pushed data in and never got called back. Here is what this looked like.


0:019> k
 #   Call Site
00   ntdll!ZwSubmitIoRing
01   KERNELBASE!ioring_impl::um_io_ring::Submit+0x73
02   KERNELBASE!SubmitIoRing+0x3b
03   librvnpal!do_ring_work+0x16c 
04   KERNEL32!BaseThreadInitThunk+0x17
05   ntdll!RtlUserThreadStart+0x2c

In the previous code sample, we just get the work and mark it as done. Now, here is the other side, where we submit the work to the worker thread.


int32_t rvn_write_io_ring(void* handle, int32_t count, 
        int32_t* detailed_error_code)
{
        int32_t rc = 0;
        struct handle* handle_ptr = handle;
        EnterCriticalSection(&handle_ptr->global_state->lock);
        ResetEvent(handle_ptr->global_state->notify);
        char* buf = handle_ptr->global_state->arena;
        struct workitem* prev = NULL;
        for (int32_t curIdx = 0; curIdx < count; curIdx++)
        {
                struct workitem* work = (struct workitem*)buf;
                buf += sizeof(struct workitem);
                *work = (struct workitem){
                        .prev = prev,
                        .notify = handle_ptr->global_state->notify,
                };
                prev = work;
                queue_work(work);
        }
        SetEvent(IoRing.event);


        bool all_done = false;
        while (!all_done)
        {
                all_done = true;
                WaitForSingleObject(handle_ptr->global_state->notify, INFINITE);
                ResetEvent(handle_ptr->global_state->notify);
                struct workitem* work = prev;
                while (work)
                {
                        all_done &= InterlockedCompareExchange(
                                &work->completed, 0, 0);
                        work = work->prev;
                }
        }


        LeaveCriticalSection(&handle_ptr->global_state->lock);
        return rc;
}

We basically take each page we were asked to write and send it to the worker thread for processing, then we wait for the worker to mark all the requests as completed. Note that we play a nice game with the prev and next pointers. The next pointer is used by the worker thread while the prev pointer is used by the submitter thread.

You can also see that this is being protected by a critical section (a lock) and that there are clear hand-off segments. Either I own the memory, or I explicitly give it to the background thread and wait until the background thread tells me it is done. There is no place for memory corruption. And yet, we could clearly get it to fail.

Being able to have a small reproduction meant that we could start making changes and see whether it affected the outcome. With nothing else to look at, we checked this function:


void queue_work_origin(struct workitem* work)
{
    work->next = IoRing.head;
    while (true)
    {
        struct workitem* cur_head = InterlockedCompareExchangePointer(
                        &IoRing.head, work, work->next);
        if (cur_head == work->next)
            break;
        work->next = cur_head;
    }
}

I have written similar code dozens of times, and I very intentionally made the code simple so it would be obviously correct. But when I even slightly tweaked the queue_work function, the issue vanished. That wasn't good enough; I needed to know what was going on.

Here is the “fixed” version of the queue_work function:


void queue_work_fixed(struct workitem* work)
{
        while (1)
        {
                struct workitem* cur_head = IoRing.head;
                work->next = cur_head;
                if (InterlockedCompareExchangePointer(
&IoRing.head, work, cur_head) == cur_head)
                        break;
        }
}

This is functionally the same thing. Look at those two functions! There shouldn’t be a difference between them. I pulled up the assembly output for those functions and stared at it for a long while.


1 work$ = 8
 2 queue_work_fixed PROC                             ; COMDAT
 3        npad    2
 4 $LL2@queue_work:
 5        mov     rax, QWORD PTR IoRing+32
 6        mov     QWORD PTR [rcx+8], rax
 7        lock cmpxchg QWORD PTR IoRing+32, rcx
 8        jne     SHORT $LL2@queue_work
 9        ret     0
10 queue_work_fixed ENDP

A total of ten lines of assembly. Here is what is going on:

  • Line 5 - we read the IoRing.head into register rax (representing cur_head).
  • Line 6 - we write the rax register (representing cur_head) to work->next.
  • Line 7 - we compare-exchange the value of IoRing.head with the value in rcx (work) using rax (cur_head) as the comparand.
  • Line 8 - if we fail to update, we jump to line 5 again and re-try.

That is about as simple as code can get, and it exactly expresses the intent in the C code. However, if I’m looking at the original version, we have:


1 work$ = 8
 2 queue_work_origin PROC                               ; COMDAT
 3         npad    2
 4 $LL2@queue_work_origin:
 5         mov     rax, QWORD PTR IoRing+32
 6         mov     QWORD PTR [rcx+8], rax
;                        ↓↓↓↓↓↓↓↓↓↓↓↓↓ 
 7         mov     rax, QWORD PTR IoRing+32
;                        ↑↑↑↑↑↑↑↑↑↑↑↑↑
 8         lock cmpxchg QWORD PTR IoRing+32, rcx
 9         cmp     rax, QWORD PTR [rcx+8]
10         jne     SHORT $LL2@queue_work_origin
11         ret     0
12 queue_work_origin ENDP

This looks mostly the same, right? But notice that we have just a few more lines. In particular, lines 7, 9, and 10 are new. Because we are using a field, we cannot compare to cur_head directly like we previously did but need to read work->next again on lines 9 & 10. That is fine.

What is not fine is line 7. Here we are reading IoRing.head again, and work->next may point to another value. In other words, if I were to decompile this function, I would have:


void queue_work_origin_decompiled(struct workitem* work)
{
    while (true)
    {
        work->next = IoRing.head;
//                        ↓↓↓↓↓↓↓↓↓↓↓↓↓ 
        struct workitem* tmp = IoRing.head;
//                        ↑↑↑↑↑↑↑↑↑↑↑↑↑
        struct workitem* cur_head = InterlockedCompareExchangePointer(
                        &IoRing.head, work, tmp);
        if (cur_head == work->next)
            break;
    }
}

Note the new tmp variable? Why is it reading this twice? It changes the entire meaning of what we are trying to do here.

You can look at the output directly in the Compiler Explorer.

This smells like a compiler bug. I also checked the assembly output of clang, and it doesn’t have this behavior.

I opened a feedback item with MSVC to confirm, but the evidence is compelling. Take a look at this slightly different version of the original. Instead of using a global variable in this function, I’m passing the pointer to it.


void queue_work_origin_pointer(
struct IoRingSetup* ring, struct workitem* work)
{
        while (1)
        {
                struct workitem* cur_head = ring->head;
                work->next = cur_head;
                if (InterlockedCompareExchangePointer(
&ring->head, work, work->next) ==  work->next)
                        break;
        }
}

And here is the assembly output, without the additional load.


ring$ = 8
work$ = 16
queue_work_origin PROC                              ; COMDAT
        prefetchw BYTE PTR [rcx+32]
        npad    12
$LL2@queue_work:
        mov     rax, QWORD PTR [rcx+32]
        mov     QWORD PTR [rdx+8], rax
        lock cmpxchg QWORD PTR [rcx+32], rdx
        cmp     rax, QWORD PTR [rdx+8]
        jne     SHORT $LL2@queue_work
        ret     0
queue_work_origin ENDP

That unexpected load was breaking our thread-safety assumptions, and that led to a whole mess of trouble. Violated invariants are no joke.

The actual fix was pretty simple, as you can see. Finding it was a huge hurdle. The good news is that I got really familiar with this code, to the point that I got some good ideas on how to improve it further 🙂.

time to read 2 min | 373 words

.NET Aspire is a framework for building cloud-ready distributed systems in .NET. It allows you to orchestrate your application along with all its dependencies, such as databases, observability tools, messaging, and more.

RavenDB now has full support for .NET Aspire. You can read the full details in this article, but here is a sneak peek.

Defining RavenDB deployment as part of your host definition:


using Projects;


var builder = DistributedApplication.CreateBuilder(args);


var serverResource = builder.AddRavenDB(name: "ravenServerResource");
var databaseResource = serverResource.AddDatabase(
    name: "ravenDatabaseResource", 
    databaseName: "myDatabase");


builder.AddProject<RavenDBAspireExample_ApiService>("RavenApiService")
    .WithReference(databaseResource)
    .WaitFor(databaseResource);


builder.Build().Run();

And then making use of that in the API projects:


var builder = WebApplication.CreateBuilder(args);


builder.AddServiceDefaults();
builder.AddRavenDBClient(connectionName: "ravenDatabaseResource", configureSettings: settings =>
{
    settings.CreateDatabase = true;
    settings.DatabaseName = "myDatabase";
});
var app = builder.Build();


// here we’ll add some API endpoints shortly…


app.Run();

You can read all the details here. The idea is to make it easier & simpler for you to deploy RavenDB-based systems.

time to read 8 min | 1552 words

In version 7.0, RavenDB introduced vector search, enabling semantic search on text and image embeddings. For example, searching for "Italian food" could return results like Mozzarella & Pasta. We are now focusing our efforts on enhancing the usability and capability of this feature.

Vector search uses embeddings (AI models' representations of data) to search for meaning. Embeddings and vectors are powerful but complex. The Embeddings Generation feature simplifies their use.

RavenDB makes it trivial to add semantic search and AI capabilities to your system by natively integrating with AI models to generate embeddings from your data.  RavenDB Studio's AI Hub allows you to connect to various models by simply specifying the model and the API key.

You can read more about this feature in this article or in the RavenDB docs. This post is about the story & reasoning behind this feature.

Cloudflare has a really good post explaining how embeddings work. TLDR, it is a way for you to search for meaning. That is why Ravioli shows up for Italian food, because the model understands their association and places them near each other in vector space. I’m assuming that you have at least some understanding of vectors in this post.

The Embeddings Generation feature in RavenDB goes beyond simply generating embeddings for your data. It addresses the complexities of updating embeddings when documents change, managing communication with external models, and handling rate limits.

The elevator pitch for this feature is:

RavenDB natively integrates with AI models to generate embeddings from your data, simplifying the integration of semantic search and AI capabilities into your system. The goal is to make using the AI model transparent for the application, allowing you to easily and quickly build advanced AI-integrated features without any hassle.

While this may sound like marketing jargon, the value of this feature becomes apparent when you experience the challenges of working without it.

To illustrate this, RavenDB Studio now includes an AI Hub.

You can create a connection to any of the following models:

Basically, the only thing you need to tell RavenDB is what model you want and the API key to use. Then, it is able to connect to the model.

The initial release of RavenDB 7.0 included bge-micro-v2 as an embedded model. After using that and trying to work with external models, it became clear that the difference in ease of use meant that we had to provide a good story around using embeddings.

There are some things I’m not willing to tolerate, and the current state of working with embeddings in most other databases is a travesty of complexity.

Next, we need to define an Embeddings Generation task, which looks like this:

Note that I’m not doing a walkthrough of how this works (see this article or the RavenDB docs for more details about that); I want to explain what we are doing here.

The screenshot shows how to create a task that generates embeddings from the Title field in the Articles collection. For a large text field, chunking options (including HTML stripping and markdown) allow splitting the text according to your configuration and generating multiple embeddings. RavenDB supports plain text, HTML, and markdown, covering the vast majority of text formats. You can simply point RavenDB at a field, and it will generate embeddings, or you can use a script to specify the data for embeddings generation.

Quantization

Embeddings, which are multi-dimensional vectors, can have varying numbers of dimensions depending on the model. For example, RavenDB's embedded model (bge-micro-v2) has 384 dimensions, while OpenAI's text-embedding-3-large has 3,072 dimensions. Other common values for dimensions are 768 and 1,536.

Each dimension in the vector is represented by a 32-bit float, which indicates the position in that dimension. Consequently, a vector with 1,536 dimensions occupies 6KB of memory. Storing 10 million such vectors would require over 57GB of memory.

Although storing raw embeddings can be beneficial, quantization can significantly reduce memory usage at the cost of some accuracy. RavenDB supports both binary quantization (reducing a 6KB embedding to 192 bytes) and int8 quantization (reducing 6KB to 1.5KB). By using quantization, 57GB of data can be reduced to 1.7GB, with a generally acceptable loss of accuracy. Different quantization methods can be used to balance space savings and accuracy.
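
To make the arithmetic concrete, here is a quick back-of-the-envelope sketch (just the math from the paragraphs above, not RavenDB's storage code):


using System;

const int dimensions = 1536;
const long vectors = 10_000_000;

long rawBytes    = dimensions * sizeof(float); // 6,144 bytes (~6KB) per embedding as 32-bit floats
long int8Bytes   = dimensions * sizeof(sbyte); // 1,536 bytes (~1.5KB) with int8 quantization
long binaryBytes = dimensions / 8;             // 192 bytes with binary quantization (1 bit per dimension)

Console.WriteLine($"raw:    {rawBytes * vectors / 1024.0 / 1024 / 1024:F1} GB");    // ~57 GB
Console.WriteLine($"int8:   {int8Bytes * vectors / 1024.0 / 1024 / 1024:F1} GB");   // ~14 GB
Console.WriteLine($"binary: {binaryBytes * vectors / 1024.0 / 1024 / 1024:F1} GB"); // under 2 GB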

Caching

Generating embeddings is expensive. For example, using text-embedding-3-small from OpenAI costs $0.02 per 1 million tokens. While that sounds inexpensive, this blog post has over a thousand tokens so far and will likely reach 2,000 by the end. One of my recent blog posts had about 4,000 tokens. This means it costs roughly 2 cents per 500 blog posts, which can get expensive quickly with a significant amount of data.

Another factor to consider is handling updates. If I update a blog post's text, a new embedding needs to be generated. However, if I only add a tag, a new embedding isn't needed. We need to be able to handle both scenarios easily and transparently.

Additionally, we need to consider how to handle user queries. As shown in the first image, sending direct user input for embedding in the model can create an excellent search experience. However, running embeddings for user queries incurs additional costs.

RavenDB's Embedding Generation feature addresses all these issues. When a document is updated, we intelligently cache the text and its associated embedding instead of blindly sending the text to the model to generate a new embedding each time. This means embeddings are readily available without worrying about updates, costs, or the complexity of interacting with the model.

Queries are also cached, so repeated queries never have to hit the model. This saves costs and allows RavenDB to answer queries faster.

Single vector store

The number of repeated values in a dataset also affects caching. Most datasets contain many repeated values. For example, a help desk system with canned responses doesn't need a separate embedding for each response. Even with caching, storing duplicate information wastes time and space. RavenDB addresses this by storing the embedding only once, no matter how many documents reference it, which saves significant space in most datasets.

What does this mean?

I mentioned earlier that this is a feature that you can only appreciate when you contrast the way you work with other solutions, so let’s talk about a concrete example. We have a product catalog, and we want to use semantic search on that.

We define the following AI task:

It uses the open-ai connection string to generate embeddings from the Name field of the Products documents.

Here are some of the documents in my catalog:

In the screenshots, there are all sorts of phones, and the question is how do we allow ourselves to search through that in interesting ways using vector search.

For example, I want to search for Android phones. Note that there is no mention of Android in the catalog; we are going just by the names. Here is what I do:


$query = 'android'


from "Products" 
where vector.search(
      embedding.text(Name, ai.task('products-on-openai')), 
      $query
)

I’m asking RavenDB to use the existing products-on-openai task on the Name field and the provided user input. And the results are:

I can also invoke this from code, searching for a “mac”:


var products = session.Query<Products>()
    .VectorSearch(
        x => x.WithText("Name").UsingTask("products-on-openai"),
        factory => factory.ByText("Mac"))
    .ToList();

This query will result in the following output:

That matched my expectations, and it is easy, and it totally and utterly blows my mind. We aren’t searching for values or tags or even doing full-text search. We are searching for the semantic meaning of the data.

You can even search across languages. For example, take a look at this query:

This just works!

Here is a list of the things that I didn’t have to do:

  • Generate the embeddings for the catalog
  • And ensure that they are up to date as I add, remove & update products
  • Handle long texts and appropriate chunking
  • Perform quantization to reduce storage costs
  • Handle issues such as rate limits, model downtime (The GPUs at OpenAI are melting as I write this), and other “fun” states
  • Create a vector search index
  • Generate an embedding vector from the user’s input
  • See above for all the details we skip here
  • Query the vector search index using the generated embedding

This allows you to focus directly on delivering solutions to your customers instead of dealing with the intricacies of AI models, embeddings, and vector search.

I asked Grok to show me what it would take to do the same thing in Python. Here is what it gave me. Compared to this script, the RavenDB solution provides:

  • Efficiently managing data updates, including skipping model calls for unchanged data and regenerating embeddings when necessary.
  • Implementing batching requests to boost throughput.
  • Enabling concurrent embedding generation to minimize latency.
  • Caching results to prevent redundant model calls.
  • Using a single store for embeddings to eliminate duplication.
  • Managing caching and batching for queries.

In short, Embeddings Generation is the sort of feature that lets you integrate AI models into your application with ease.

Use it to spark joy in your users easily, quickly, and without any hassle.

time to read 3 min | 562 words

I recently reviewed a function that looked something like this:


public class WorkQueue<T>
{
    private readonly ConcurrentQueue<T> _embeddingsQueue = new();
    private long _approximateCount = 0;


    public long ApproximateCount => Interlocked.Read(ref _approximateCount);


    public void Register(IEnumerable<T> items)
    {
        foreach (var item in items)
        {
            _embeddingsQueue.Enqueue(item);


            Interlocked.Increment(ref _approximateCount);
        }
    }
}

I commented that we should move the Increment() operation outside of the loop because if two threads are calling Register() at the same time, we’ll have a lot of contention here.

The reply was that this was intentional since calling Interlocked.CompareExchange() to do the update in a batch manner is more complex. The issue was a lack of familiarity with the Interlocked.Add() function, which allows us to write the function as:


public void Register(IEnumerable<T> items)
{
    int count = 0;
    foreach (var item in items)
    {
        _embeddingsQueue.Enqueue(item);
        count++;
    }
    Interlocked.Add(ref _approximateCount, count);
}

This allows us to perform just one atomic operation on the count. In terms of assembly, we are going to have these two options:


lock inc qword ptr [rcx] ; Interlocked.Increment()
lock add [rbx], rcx      ; Interlocked.Add()

Both options have essentially the same performance characteristics, but if we need to register a large batch of items, the second option drastically reduces the contention.

In this case, we don’t actually care about having an accurate count as items are added, so there is no reason to avoid the optimization.

time to read 7 min | 1214 words

After describing in detail the major refactoring that RavenDB (via Voron, its storage engine) has gone through, there is one question remaining. What’s the point? The code is a lot simpler, of course, but the whole point of this much effort is to allow us to do interesting things.

There is performance, of course, but we haven’t gotten around to testing that yet because something that is a lot more interesting came up: Disk space management.

Voron allocates disk space from the operating system in batches of up to 1GB at a time. This is done to reduce file fragmentation and allow the file system to optimize the disk layout. It used to be something critical, but SSDs and NVMe made that a lot less important (but still a factor).

What happens if we have a very large database, but we delete a big collection of documents? This is a case where the user’s expectations and Voron’s behavior diverge. A user who just deleted a few million documents would expect to see a reduction in the size of the database. But Voron will mark the area on the disk as “to-be-used-later” and won’t free the disk space back to the operating system.

There were two reasons for this behavior:

  • We designed Voron in an era where it was far more common for systems to have hard disks, where fragmentation was a very serious problem.
  • It is really complicated to actually release disk space back to the operating system.

The first reason is no longer that relevant, since most database servers can reasonably expect to run on SSD or NVMe these days, significantly reducing the cost of fragmentation. The second reason deserves a more in-depth answer.

In order to release disk space back to the operating system, you have to do one of three things:

  • Store the data across multiple files and delete a file where it is no longer in use.
  • Run compaction, basically re-build the database from scratch in a compact form.
  • Use advanced features such as sparse files (hole punching) to return space to the file system without changing the file size.

The first option, using multiple files, is possible but pretty complex. Mostly because of the details of how you split across multiple files, where a single entry in an otherwise empty file will prevent its deletion, etc. There are also practical issues, such as the number of open file handles that are allowed, internal costs at the operating system level, etc.

Compaction, on the other hand, requires that you have enough space available during the compaction to run. In other words, if your disk is 85% full, and you delete 30% of the data, you don’t have free space to run a compaction.

Another consideration for compaction is that it can be really expensive. Running compaction on a 100GB database, for example, can easily take hours and in the cloud will very quickly exhaust your I/O credits.

RavenDB & Voron have supported compaction for over a decade, but it was always something that you did on very rare occasions. A user had to manually trigger it, and the downsides are pretty heavy, as you can see.

In most cases, I have to say, returning disk space back to the operating system is not something that is that interesting. That free space is handled by RavenDB and will be reused before we allocate any additional new space from the OS. However, this is one of those features that keep coming up, because we go against users’ expectations.

The final option I discussed is using hole punching or sparse files (the two are pretty much equivalent - different terms between operating systems). The idea is that we can go to the operating system and tell it that a certain range in the file is not used, and that it can make use of that disk space again. Any future read from that range will return zeroes. If you write to this region, the file system will allocate additional space for those writes.
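
For illustration, this is roughly what hole punching looks like on Linux, as a hedged sketch using a direct fallocate P/Invoke (on Windows the equivalent is marking the file as sparse and issuing FSCTL_SET_ZERO_DATA). RavenDB's real implementation lives in its native PAL and differs in the details:


using System.IO;
using System.Runtime.InteropServices;

static class HolePunch
{
    private const int FALLOC_FL_KEEP_SIZE  = 0x01;
    private const int FALLOC_FL_PUNCH_HOLE = 0x02;

    [DllImport("libc", SetLastError = true)]
    private static extern int fallocate(int fd, int mode, long offset, long length);

    // Tell the file system that the given range is unused: the file size stays the same,
    // reads from the range return zeroes, and the underlying blocks go back to the OS.
    public static void PunchHole(FileStream file, long offset, long length)
    {
        int fd = (int)file.SafeFileHandle.DangerousGetHandle();
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, length) != 0)
            throw new IOException("fallocate failed, errno: " + Marshal.GetLastWin32Error());
    }
}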

That behavior is problematic for RavenDB, because we used to use memory-mapped I/O to write to the disk. If there isn’t sufficient space to write, memory-mapped I/O is going to generate a segmentation fault / access violation. In general, getting an access violation because of a disk full is not okay by us, so we couldn’t use sparse files. The only option we were able to offer to reduce disk space was full compaction.

You might have noticed that I used past tense in the last paragraph. That is because I am now no longer limited to using just memory-mapped I/O. Using normal I/O for this purpose works even if we run out of disk space; we will get the usual disk-full error (which we are already handling anyway).

Yes, that means that starting with RavenDB 7.1, we’ll automatically release free disk space directly back to the operating system, matching your likely expectations about the behavior. This is done in increments of 1MB, since we still want to reduce fragmentation and the amount of file metadata that the file system needs to manage.

The one MB trigger

RavenDB will punch a hole in the file whenever there is a consecutive 1MB of free space. This is important to understand because of fragmentation. If you wrote 100 million documents, each 2 KB in size, and then deleted every second document, what do you think will happen? There won’t be any consecutive 1MB range for us to free.

Luckily, that sort of scenario tends to be pretty rare, and it is far more common to have clustering of writes and deletes, which allow us to take advantage of locality and free the disk space back to the OS automatically.

RavenDB will first use all the free space inside the file, reclaiming sparse regions as needed, before it will request additional disk space from the OS. When we do request additional space, we’ll still get it in large chunks (and without using sparse files). That is because it is far more likely to be immediately used, and we want to avoid giving the file system too much work.

Note that the overall file size is going to stay the same, but the actually used disk space is going to be reduced. We updated the RavenDB Studio to report both numbers, but when browsing the files manually, you need to keep that in mind.

I expect that this will be most noticeable for users who are running on cloud instances, where it is common to size the disks to be just sufficiently big enough for actual usage.

It Just Works

There is no action that you need to take to enable this behavior, and on first start of RavenDB 7.1, it will immediately release any free space already in the data files.

The work was actually completed and merged in August 2024, but it is going to be released sometime in Q2/Q3 of 2025. You might have noticed that there have been a lot of low-level changes targeted at RavenDB 7.1. We need to run them through the wringer to make sure that everything works as it should.

I’m looking forward to seeing this in action, there are some really nice indications about the sort of results we can expect. I’ll talk about that in more detail in another post, this one is getting long enough.

time to read 8 min | 1581 words

In the previous post, I talked about a massive amount of effort (2+ months of work) and about 25,000 lines of code changes. The only purpose of this task was to remove two locks from the system. During high load, we spent huge amounts of time contending for these locks, so removing them was well worth the effort.

During this work, I essentially found myself in the guts of Voron (RavenDB’s storage engine) and mostly dealing with old code. I’m talking about code that was written between 10 and 15 years ago. I wrote a blog post about it at the time. Working with old code is an interesting experience, especially since most of this code was written by me. I can remember some of my thoughts from the time I wrote it.

Old code is working code, and old code is also something that was built upon. Other parts of the codebase are making assumptions about the way the system behaves. And the more time a piece of code doesn't change, the more likely its behavior is going to ossify. Changing old code is hard because of the many ways that such dependencies can express themselves.

I dug through all of this decade-plus old code and I realized something pretty horrible.

It turns out that I made a mistake in understanding how Windows implements buffering for memory-mapped files. I realized my mistake around mid-2024; see the related post for the actual details.

The TLDR summary, however, is that when using unbuffered file I/O with memory-mapped files on Windows, you cannot expect the mapped memory to reflect the data written using the file I/O API. Windows calls it coherence, and it was quite confusing when I first realized what the problem was. It turns out that this applies only to unbuffered I/O and there is no such problem with buffered I/O.

The scenario I needed to work with can use buffered I/O, however, which came as a profound shock to me. Large portions of the architecture of Voron are actually shaped by this limitation.

Because I thought that you couldn’t use both file I/O and memory-mapped files at the same time in Windows and get a consistent view of the data (the documentation literally says that, I have to add), RavenDB used memory-mapped I/O to write to the data file. That is a choice, certainly, but not one that I particularly liked. It was just that I was able to make things work and move on to other things.

This is another tale of accidental complexity, actually. I had a problem and found a solution to it, which at the time I considered quite clever. Because I had a solution, I never tried to dig any deeper into it and figure out whether this is the only solution.

This choice of using only memory-mapped I/O to write to the data file had consequences. In particular, it meant that:

  • We had to map the data using read-write mode.
  • There was no simple way to get an error if a write failed - since we just copied the data to memory, there was no actual write to fail. An error writing to disk would show up as a memory access violation (segmentation fault!) or just not show up at all.
  • Writing to a page that isn’t in memory may require us to read it first (even if we are overwriting all of it).

I accepted those limitations because I thought that this was the only way to do things. When I realized that I was wrong, that opened up so many possibilities. As far as the refactoring work, the way Voron did things changed significantly. We are now mapping the data file as read-only and writing to it using file I/O.

That means we have a known point of failure if we fail to write. That probably deserves some explanation. Failure to write to the disk can come in a bunch of ways. In particular, successfully writing to a file is not enough to safely store data, you also need to sync the file before you can be assured that the data is safe. The key here is that write + sync ensures that you’ll know that this either succeeded or failed.

Here is the old way we were writing to the data file. Conceptually, this looks like this:


auto mem = EnsureFileSize(pagesToWrite[pagesToWriteLength - 1].EndPosition);
for(auto i = 0; i < pagesToWriteLength; i++)
{
    auto page = pagesToWrite[i];
    memcpy(mem + page.Number * 8192, page.Buffer, page.Length);    
}


// some later time
if(FAILED(msync(mem)))
   return SYNC_FAILURE;

And here is the first iteration of using the file I/O API for writes.


fallocate_if_needed(pagesToWrite[pagesToWriteLength - 1].EndPosition);
for(auto i = 0; i < pagesToWriteLength; i++)
{
    auto page = pagesToWrite[i];
    if(FAILED(pwrite(page.Number * 8192, page.Buffer, page.Length)))
         return WRITE_FAILURE;
}


// some time later
if (FAILED(fdatasync(file)))
   return SYNC_FAILURE;

Conceptually, this is just the same, but notice that we respond immediately to write failures here.

When we started testing this feature, we realized something really interesting. The new version was much slower than the previous one, and it also generated a lot more disk writes.

I moved mountains for this?

Sometimes you get a deep sense of frustration when you look at benchmark results. The amount of work invested in this change is… pretty high. And from an architectural point of view, I’m loving it. The code is simpler, more robust, and allows us to cleanly do a lot more than we used to be able to.

The code also should be much faster, but it wasn’t. And given that performance is a critical aspect of RavenDB, that may cause us to scrap the whole thing.

Looking more deeply into the issue, it turned out that my statement about old code and the state of the system was spot on. Take a look at the two code snippets above and consider how they look from the point of view of the operating system. In the case of the memcpy() version, there is a strong likelihood that the kernel isn’t even involved (the pages are already paged in), and the only work done here is marking them as dirty (done by the CPU directly).

That means that the OS will figure out that it has stuff to write to the disk either when we call msync() or when its own timer is called. On the other hand, when we call pwrite(), we involve the OS at every stage of the process, making it far more likely that it will start the actual write to the disk earlier. That means that we are wasting batching opportunities.

In other words, because we used memory-mapped writes, we (accidentally, I might add) created a situation where we tried very hard to batch those writes in memory as much as possible. Another aspect here is that we are issuing a separate system call for each page. That means we are paying another high price.

The good thing about this is that we now have a good way to deal with those issues. The pwrite() code above was simply the first version used to test things out. Since we now have the freedom to run, we can use whatever file I/O we want.

In particular, RavenDB 7.1 now supports the notion of write modes, with the following possible options:

  • mmap - exactly like previous versions, uses a writable memory map and memcpy() to write the values to the data file.
  • file_io - uses pwrite() to write the data, one page at a time, as shown above.
  • vectored_file_io - uses pwritev() to write the data, merging adjacent writes to reduce the number of system calls we use (Posix only, since Windows has strict limits on this capability); see the sketch after this list.
  • io_ring - uses HIORING (Windows) / IO_Uring (Linux) to submit the whole set of work to the kernel as a single batch of operations.
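
To give a feel for what the vectored mode buys us, here is a small managed-side sketch using .NET's RandomAccess API, which gathers several buffers into a single write (an illustration only; Voron's actual writer does this in the native PAL):


using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Win32.SafeHandles;

static class PageWriter
{
    // Write a run of adjacent pages with one system call instead of one call per page.
    public static void WriteAdjacentPages(SafeFileHandle file, long fileOffset,
                                          IReadOnlyList<ReadOnlyMemory<byte>> pages)
    {
        // A single gather-write; on Linux this is intended to map to a pwritev-style call.
        RandomAccess.Write(file, pages, fileOffset);
    }
}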

RavenDB will select the appropriate mode for the system on its own, usually selecting io_ring for modern Linux and Windows machines, and vectored_file_io for Mac. You can control that using the RAVEN_VORON_WRITER_MODE environment variable, but that is provided only because we want to have an escape hatch, not something that you are meant to configure.

With those changes, we are on a much better footing for overall performance, but we aren’t done yet! I would love to give you the performance numbers, but we didn’t actually run the full test suite with just these changes. And that is because we aren’t done yet, I’ll cover that in the next post.

time to read 10 min | 1875 words

Even though RavenDB 7.0 isn’t quite out of the gate yet (expect the release very soon), I want to start talking about the RavenDB 7.1 release. This release is going to represent the biggest change in RavenDB in about 7 years, and at the same time, it is expected to be a complete non-event for most people.

We care enough about performance that we have both a dedicated team to verify it as well as really talented people whose sole job is to run benchmarks and optimize RavenDB. This year, the goal was to test RavenDB’s performance when running on large machines (many hundreds of GBs of RAM, dozens of cores, etc). The idea was to find bottlenecks and remove them, and we quickly found a few.

Performance optimization is often a very repetitive process. Find a problem, figure out the root cause, and move on. It is a process full of small changes and 0.1% improvements. The key is that if you practice this routinely, you end up with real measurable results over time, since your changes compound.

Most of the time, these are the changes you celebrate:

Here you can see a 6.6% increase in the number of operations per millisecond. That is a pretty big change, especially since it affects reading documents from storage.

We were roughly two months into this process when we ran into the end of the line. The scenario in question was 95% reads / 5% writes, with the idea of seeing how quickly the system responds under load, but without stressing it to the limits. Take a look at our findings:

What you can see here is that creating a transaction is costly, and over 50% of that cost is due to contention over a lock. We didn’t notice that until we had a lot of cores to go around, since there wasn’t a sufficient number of threads actively contending for this lock. Once we started looking for it, the problem was very apparent and visible on smaller machines as well.

RavenDB creates a single transaction per request, so that means we are spending a lot of time doing essentially nothing useful. What is worse, this is actively spinning. So the more cores we have, the more CPU we’ll use and the less useful work we’ll do. Yuck!

When we looked at the root cause, the problem became pretty apparent. Take a look at the following snippet:


_txCreation.EnterReadLock(); // creating read transaction
try
{
    _cancellationTokenSource.Token.ThrowIfCancellationRequested();


    tx = new LowLevelTransaction(previous.LowLevelTransaction, transactionPersistentContext, context);


    ActiveTransactions.Add(tx);
}
finally
{
    _txCreation.ExitReadLock();
}






using (PreventNewTransactions()) // during commit of write transaction
{
    if (tx.Committed && tx.FlushedToJournal)
        Interlocked.Exchange(ref _transactionsCounter, tx.Id);


    State = tx.State;


    Journal.Applicator.OnTransactionCommitted(tx);
    ScratchBufferPool.UpdateCacheForPagerStatesOfAllScratches();
    Journal.UpdateCacheForJournalSnapshots();


    tx.OnAfterCommitWhenNewTransactionsPrevented();
}

What happens is that during the commit of a write transaction we need to make updates to a bunch of places. We also need all new read transactions to see all our changes at once. So during commit, we take a short lock to update a couple of locations. And when we create a write transaction, we hold a read lock to be able to properly read all those values safely.

This particular code (and the approach in general) dates back to the very early days of Voron:

On one hand, I look at this code and cringe internally. On the other hand, it has served us really well for a long time. Of course, on the gripping hand, that code has been around for long enough to have grown some barnacles.

Notice the OnAfterCommitWhenNewTransactionsPrevented() call we have there? It is meant to allow other operations that need to run while no read transactions are allowed, in order to maintain in-memory consistency.

Fixing this issue was both straightforward and quite complex at the same time. Instead of storing all the data in various places, I introduced the following record:


record EnvironmentStateRecord(
    Pager.State DataPagerState, 
    long TransactionId,
    ImmutableDictionary<long, PageFromScratchBuffer> ScratchPagesTable,
    TreeRootHeader Root,
    long NextPageNumber,
    (long Number, long Last4KWritePosition) Journal,
    object ClientState);

The actual details of what it does aren’t really that interesting. What is important is that we have an immutable record that is being generated by the write transaction. That record contains all the database state that we carry over between transactions. When we want to publish this state, we can just make a single atomic write to a pointer, no need for a lock.
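
Conceptually, publishing the new state boils down to a single pointer store. Here is a minimal sketch of that idea (hypothetical member names, not the actual Voron code):


using System.Threading;

public class EnvironmentStateHolder
{
    private EnvironmentStateRecord _current; // the latest committed, immutable state

    // Called at the end of a write transaction commit: one atomic pointer write
    // makes the whole record visible to new read transactions at once.
    public void Publish(EnvironmentStateRecord next) => Volatile.Write(ref _current, next);

    // Called when a read transaction starts - no lock, just read the current pointer.
    public EnvironmentStateRecord Current => Volatile.Read(ref _current);
}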

Of course, it can’t be that simple. We needed to essentially convert large portions of the code to take their state from the environment record and provide a way for higher-level code to add their own state to the record on write.

That took about two months of hard work, I think. Not complicated work, just grinding through all of those places, figuring out the best way to represent that, and making all the necessary changes.

Unfortunately, that wasn’t the only lock convoy we found. Files inside RavenDB are held by a class called Pager (since it serves pages of data). A file (Pager) may hold multiple States, which is a fancy way of referring to a memory-mapped range we use. The Pager may hold multiple such mappings concurrently. For example, if we increase the file size, we need to map the memory again - so for a certain period of time, we have two such mappings.

A transaction may create a reference to such a state (and thus may access the mapping) for its lifetime. At the end of the transaction, we can release the state, and if it is the last reference, we’ll close the mapping.

This is implemented using the following code:

This runs for multiple Pagers for each transaction, mind you. The actual AddRef() call is very cheap, but the lock…


public void AddRef() => Interlocked.Increment(ref _refs);

As you can see from the image, this is also pretty old code. The problem was how to deal with the issue. I have to be able to keep track of the mapped memory if a transaction is using it, but at the same time, I want to dispose of it quickly if it isn’t being used.

The answer to this question was to lean on the GC. Instead of implementing AddRef() / Release(), I hold the Pager.State object and handle disposal in the finalizer. If there is a reference to it, the finalizer isn’t called, so this gives me the exact behavior I want, for free.
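
Here is a rough sketch of that shape (hypothetical and heavily simplified, the real Pager.State tracks much more): the mapping is released either explicitly or by the finalizer, whichever comes first:


using System;
using System.Threading;

public sealed class State : IDisposable
{
    private IntPtr _baseAddress; // start of the memory-mapped range

    public State(IntPtr baseAddress) => _baseAddress = baseAddress;

    // Explicit path, used when the Pager itself is being closed.
    public void Dispose()
    {
        GC.SuppressFinalize(this);
        Release();
    }

    // Implicit path: once no transaction references this State, the GC finalizes it.
    ~State() => Release();

    private void Release()
    {
        var addr = Interlocked.Exchange(ref _baseAddress, IntPtr.Zero);
        if (addr != IntPtr.Zero)
        {
            // UnmapViewOfFile / munmap would go here.
        }
    }
}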

However… I also need to be able to dispose of this explicitly when closing the pager. A common scenario is that I’m closing the Pager and deleting the directory. If I have an outstanding reference to the file, that will fail.

Here is how I’m dealing with the issue. The Pager holds a Weak-Reference to the State and may explicitly dispose of it:


public void Dispose()
{
    foreach (WeakReference<State> state in _states)
    {
        if (state.TryGetTarget(out var v))
        {
            v.Dispose();
        }
    }
}

That also allowed me to refactor the way we deal with files in general inside Voron. The first version of Voron did everything in managed code (including significant P/Invoke and cross-platform work). At some point, we introduced a native DLL (PAL - Platform Abstraction Layer - written in C) that allowed us to have a more consistent API.

The issue was that there was no clean division of labor between the native C code and the managed code, and some responsibilities were split between them in a way that wasn’t very clean.

RavenDB is using Zig

To be rather more exact, I’m using Zig as a build system to make cross-compilation a lot easier. I initially moved everything to use Zig, but it turns out that Zig isn’t able to produce DLLs that would run properly on older versions of Windows.

At least, nothing that I did convinced Zig to give me something that wouldn’t fail to load. So we are still using MSVC to compile for Windows and then Zig to both Linux & Mac (x64 & ARM). If you have good ideas on how to solve that, I would really appreciate it.

As a final thought, this work took several months, and when I got to the end, we got a pull request with some interesting numbers (+11.3K LOC, -11.2K LOC).

I have to admit that I feel a bit bad about that for the reviewers 🙂. I was really hoping that I would get a net negative result in the number of lines changed, but that couldn’t be helped. The end result is that all of our state is now far simpler, but we didn’t actually change anything in the way we behave. We removed two frequently hit lock convoys and allowed RavenDB to be far more efficient.

How efficient? Well… that is a bit complex to answer, because we didn’t stop here. There are more things that happened in RavenDB 7.1 that I need to talk to you about. We’ll cover them in my next post.

time to read 4 min | 759 words

I’m trying to reason about the behavior of this code, and I can’t decide if this is a stroke of genius or if I’m suffering from a stroke. Take a look at the code, and then I’ll discuss what I’m trying to do below:


#define MB (1024 * 1024)

HANDLE hFile = CreateFileA("R:/original_file.bin", 
GENERIC_READ | GENERIC_WRITE, 
FILE_SHARE_READ | FILE_SHARE_WRITE, 
NULL, 
OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, 
NULL);
if (hFile == INVALID_HANDLE_VALUE) {
    printf("Error creating file: %d\n", GetLastError());
    exit(__LINE__);
}




HANDLE hMapFile = CreateFileMapping(hFile, NULL, 
PAGE_READWRITE, 0, 0, NULL);
if (hMapFile == NULL) {
    fprintf(stderr, "Could not create file mapping object: %x\n", GetLastError());
    exit(__LINE__);
}


char* lpMapAddress = MapViewOfFile(hMapFile, FILE_MAP_WRITE, 0, 0, 0);
if (lpMapAddress == NULL) {
    fprintf(stderr, "Could not map view of file: %x\n", GetLastError());
    exit(__LINE__);
}


for (size_t i = 2 * MB; i < 4 * MB; i++)
{
        lpMapAddress[i]++;
}


HANDLE hDirect = CreateFileA("R:/original_file.bin", 
GENERIC_READ | GENERIC_WRITE, 
FILE_SHARE_READ | FILE_SHARE_WRITE, 
NULL, 
OPEN_ALWAYS, 
FILE_ATTRIBUTE_NORMAL, 
NULL);


LARGE_INTEGER fileSize;
DWORD bytesWritten;
SetFilePointerEx(hDirect, (LARGE_INTEGER) { 6 * MB }, &fileSize, FILE_BEGIN);
for (int i = 6; i < 10; i++) {
    if (!WriteFile(hDirect, lpMapAddress + i * MB, MB, &bytesWritten, NULL)) {
        fprintf(stderr, "WriteFile direct failed on iteration %d: %x\n", i, GetLastError());
        exit(__LINE__);
    }
}

The idea is pretty simple: I’m opening the same file twice. Once in buffered mode, mapping that memory for both reads & writes. The problem is that to flush the data to disk, I have to either wait for the OS, or call FlushViewOfFile() and FlushFileBuffers() to actually flush it to disk explicitly.

The problem with this approach is that FlushFileBuffers() has undesirable side effects. So I’m opening the file again, this time for unbuffered I/O. I’m writing to the memory map and then using the same mapping to write to the file itself. On Windows, that goes through a separate path (and may lose coherence with the memory map).

The idea here is that since I’m writing from the same location, I can’t lose coherence. I either get the value from the file or from the memory map, and they are both the same. At least, that is what I hope will happen.

For the purpose of discussion, I can ensure that there is no one else writing to this file while I’m abusing the system in this manner. What do you think Windows will do in this case?

I believe that when I’m writing using unbuffered I/O in this manner, I’m forcing the OS to drop the mapping and refresh from the disk. That is likely the reason why it may lose coherence, because there may be already reads that aren’t served from main memory, or something like that.

This isn’t an approach that I would actually take for production usage, but it is a damn interesting thing to speculate on. If you have any idea what will actually happen, I would love to have your input.
