Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 22 min | 4361 words

RavenDB 7.1 introduces Gen AI Integration, enabling seamless integration of various AI models directly within your database. No, you aren’t going to re-provision all your database servers to run on GPU instances; we empower you to leverage any model—be it OpenAI, Mistral, Grok, or any open-source solution on your own hardware.

Our goal is to replicate the intuitive experience of copying data into tools like ChatGPT to ask a question. The idea is to give developers the same kind of experience with their RavenDB documents, and with the same level of complexity and hassle (i.e., none).

The key problem we want to solve is that while copy-pasting to ChatGPT is trivial, actually making use of an AI model in production presents significant logistical challenges. The new GenAI integration feature addresses these complexities. You can use AI models inside your database with the same ease and consistency you expect from a direct query.

The core tenet of RavenDB is that we take the complexity upon ourselves, leaving you with just the juicy bits to deal with. We bring the same type of mindset to Gen AI Integration.

Let’s explore how you use this feature. Then I’ll dive into how it works behind the scenes, and just how much of the load we carry for you.

Example: Automatic Product Translations

I’m using the sample database for RavenDB, which is a simple online shop (based on the venerable Northwind database). That database contains products such as these:

  • Scottish Longbreads
  • Longlife Tofu
  • Flotemysost
  • Gudbrandsdalsost
  • Rhönbräu Klosterbier
  • Mozzarella di Giovanni
  • Outback Lager
  • Lakkalikööri
  • Röd Kaviar

I don’t even know what “Rhönbräu Klosterbier” is, for example. I can throw that to an AI model and get a reply back: "Rhön Brewery Monastery Beer." Now at least I know what that is. I want to do the same for all the products in the database, but how can I do that?

We broke the process itself into several steps, which allow RavenDB to do some really nice things (see the technical deep dive later). But here is the overall concept in a single image. See the details afterward:

Here are the key concepts for the process:

  • A context extraction script that applies to documents and extracts the relevant details to send to the model.
  • The prompt that the model is working on (what it is tasked with).
  • The JSON output schema, which allows us to work with the output in a programmatic fashion.
  • And finally, the update script that applies the output of the model back to the document.

In the image above, I also included the extracted context and the model output, so you’ll have better insight into what is actually going on.

With all the prep work done, let’s dive directly into the details of making it work.

I’m using OpenAI here, but that is just an example, you can use any model you like (including those that run on your own hardware, of course).

We’ll start the process by defining which model to use. Go to AI Hub > AI Connection Strings and define a new connection string. You need to name the connection string, select OpenAI as the connector, and provide your API key. The next stage is to select the endpoint and the model. I’m using gpt-4o-mini here because it is fast, cheap, and provides pretty good results.

With the model selected, let’s get started. We need to go to AI Hub > AI Tasks > Add AI Task > Gen AI. This starts a wizard to guide you through the process of defining the task. The first thing to do is to name the task and select which connection string it will use. The real fun starts when you click Next.

Defining the context

We need to select which collection we’ll operate on (Products) and define something called the Context generation script. What is that about? The idea here is that we don’t need to send the full document to the model for processing - we just need to push the relevant information we want it to operate on. In the next stage, we’ll define the actual operation, but for now, let’s see how this works.

The context generation script lets you select exactly what will be sent to the model. The method ai.genContext generates a context object from the source document. This object will be passed as input to the model, along with a Prompt and a JSON schema defined later. In our case, it is really simple:


ai.genContext({
    Name: this.Name
});

Here is the context object that will be generated from a sample document:
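
For the Queso Cabrales product (the sample document used later in this post), the extracted context object would be roughly:


{
    "Name": "Queso Cabrales"
}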

Click Next and let’s move to the Model Input stage, where things really start to get interesting. Here we are telling the model what we want to do (using the Prompt), as well as telling it how it should reply to us (by defining the JSON Schema).

For our scenario, the prompt is pretty simple:


You are a professional translator for a product catalog. 
Translate the provided fields accurately into the specified languages, ensuring clarity and cultural appropriateness.

Note that in the prompt, we are not explicitly specifying which languages to translate to or which fields to process. We don’t need to - the fields the model will translate are provided in the context objects created by the "context generation script."

As for what languages to translate, we can specify that by telling the model what the shape of the output should be. We can do that using a JSON Schema or by providing a sample response object. I find it easier to use sample objects instead of writing JSON schemas, but both are supported. You’ll usually start with sample objects for rough direction (RavenDB will automatically generate a matching JSON schema from your sample object) and may want to shift to a JSON schema later if you want more control over the structure.

Here is one such sample response object:


{
    "Name": {
        "Simple-English": "Simplified English, avoid complex / rare words",
        "Spanish": "Spanish translation",
        "Japanese": "Japanese translation",
        "Hebrew": "Hebrew translation"
    }
}

I find that it is more hygienic to separate the responsibilities of all the different pieces in this manner. This way, I can add a new language to be translated by updating the output schema without touching the prompt, for example.

The text content within the JSON object provides guidance to the model, specifying the intended data for each field. This functions similarly to the description field found in JSON Schema.
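
To give a rough idea of the shape, the schema generated from the sample object above would look something along these lines (a sketch only, not the exact schema RavenDB emits):


{
    "type": "object",
    "properties": {
        "Name": {
            "type": "object",
            "properties": {
                "Simple-English": {
                    "type": "string",
                    "description": "Simplified English, avoid complex / rare words"
                },
                "Spanish": { "type": "string", "description": "Spanish translation" },
                "Japanese": { "type": "string", "description": "Japanese translation" },
                "Hebrew": { "type": "string", "description": "Hebrew translation" }
            }
        }
    }
}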

We have the prompt and the sample object, which together instruct the model on what to do. At the bottom, you can see the context object that was extracted from the document using the script. Putting it all together, we can send that to the model and get the following output:


{
    "Name": {
        "Simple-English": "Cabrales cheese",
        "Spanish": "Queso Cabrales",
        "Japanese": "カブラレスチーズ",
        "Hebrew": "גבינת קברלס"
    }
}

The final step is to decide what we’ll do with the model output. This is where the Update Script comes into play.


this.i18n = $output;

This completes the setup, and now RavenDB will start processing your documents based on this configuration. The end result is that your documents will look something like this:


{
    "Name": "Queso Cabrales",
    "i18n": {
        "Name": {
            "Simple-English": "Cabrales cheese",
            "Spanish": "Queso Cabrales",
            "Japanese": "カブラレスチーズ",
            "Hebrew": "גבינת קברלס"
        }
    },
    "PricePerUnit": 21,
    "ReorderLevel": 30,
    // rest of document redacted
}

I find it hard to clearly explain what is going on here in text. This is the sort of thing that works much better in a video. Having said that, the basic idea is that we define a Gen AI task for RavenDB to execute. The task definition includes the following discrete steps: defining the connection string; defining the context generation script, which creates context objects; defining the prompt and schema; and finally, defining the document update script. And then we’re done.

The context objects, prompt, and schema serve as input to the model. The update script is executed for each output object received from the model, per context object.

From this point onward, it is RavenDB’s responsibility to communicate with the model and handle all the associated logistics. That means, of course, that if you want to go ahead and update the name of a product, RavenDB will automatically run the translation job in the background to get the updated value.

When you see this at play, it feels like absolute magic. I haven’t been this excited about a feature in a while.

Diving deep into how this works

A large language model is pretty amazing, but getting consistent and reliable results from it can be a chore. The idea behind Gen AI Integration in RavenDB is that we are going to take care of all of that for you.

Your role, when creating such Gen AI Tasks, is to provide us with the prompt, and we’ll do the rest. Well… almost. We need a bit of additional information here to do the task properly.

The prompt defines what you want the model to do. Because we aren’t showing the output to a human, but actually want to operate on it programmatically, we don’t want to get just raw text back. We use the Structured Output feature to define a JSON Schema that forces the model to give us the data in the format we want.

It turns out that you can pack a lot of information for the model about what you want to do using just those two aspects. The prompt and the output schema work together to tell the model what it should do for each document.

Controlling what we send from each document is the context generation script. We want to ensure that we aren’t sending irrelevant or sensitive data. Model costs are per token, and sending it data that it doesn’t need is costly and may affect the result in undesirable ways.

Finally, there is the update script, which takes the output from the model and updates the document. It is important to note that the update script shown above (which just stores the output of the model in a property on the document) is about the simplest one that you can have.

Update scripts are free to run any logic, such as marking a line item as not appropriate for sale because the customer is under 21. That means you don’t need to do everything through the model, you can ask the model to apply its logic, then process the output using a simple script (and in a predictable manner).
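
For example, a slightly more involved update script might look like this (a sketch only - the Lines, CustomerAge, restrictedLines, and NotForSale names are hypothetical, not part of the sample database or the model output above):


// Sketch: assumes the model's output schema includes a "restrictedLines"
// array of line-item indexes, and that the document tracks CustomerAge.
for (var i = 0; i < this.Lines.length; i++) {
    var restricted = $output.restrictedLines.indexOf(i) !== -1;
    this.Lines[i].NotForSale = restricted && this.CustomerAge < 21;
}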

What happens inside?

Now that you have a firm grasp of how all the pieces fit together, let’s talk about what we do for you behind the scenes. You don’t need to know any of that, by the way. Those are all things that should be completely opaque to you, but it is useful to understand that you don’t have to worry about them.

Let’s talk about the issue of product translation - the example we have worked with so far. We define the Gen AI Task, and let it run. It processes all the products in the database, generating the right translations for them. And then what?

The key aspect of this feature is that this isn’t a one-time operation. This is an ongoing process. If you update the product’s name again, the Gen AI Task will re-translate it for you. It is actually quite fun to see this in action. I have spent an <undisclosed> amount of time just playing around with it, modifying the data, and watching the updates streaming in.

That leads to an interesting observation: what happens if I update the product’s document, but not the name? Let’s say I changed the price, for example. RavenDB is smart about it, we only need to go to the model if the data in the extracted context was modified. In our current example, this means that only when the name of the product changes will we need to go back to the model.

How does RavenDB know when to go back to the model?

When you run the Gen AI Task, RavenDB stores a hash representing the work done by the task in the document’s metadata. If the document is modified, we can run the context generation script to determine whether we need to go to the model again or if nothing has changed from the previous time.

RavenDB takes into account the Prompt, JSON Schema, Update Script, and the generated context object when comparing to the previous version. A change to any of them indicates that we should go ask the model again. If there is no change, we simply skip all the work.
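
Conceptually (and this is only a sketch, not RavenDB's actual implementation or metadata format), the check looks something like this:


// Conceptual sketch - hash() and runContextScript() are hypothetical helpers.
function needsModelCall(doc, task, previousHash) {
    var context = runContextScript(task.contextScript, doc);
    var currentHash = hash({
        prompt: task.prompt,
        schema: task.jsonSchema,
        updateScript: task.updateScript,
        context: context
    });
    // previousHash is the value stored in the document's metadata last time
    return currentHash !== previousHash;
}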

In this way, RavenDB takes care of detecting when you need to go to the model and when there is no need to do so. The key aspect is that you don’t need to do anything for this to work. It is just the way RavenDB works for you.

That may sound like a small thing, but it is actually quite profound. Here is why it matters:

  • Going to the model is slow - it can take multiple seconds (and sometimes significantly longer) to actually get a reply from the model. By only asking the model when we know the data has changed, we are significantly improving overall performance.
  • Going to the model is expensive - you’ll usually pay for the model by the number of tokens you consume. If you go to the model with an answer you already got, that’s simply burning money, there’s no point in doing that.
  • As a user, that is something you don’t need to concern yourself with. You tell RavenDB what you want the model to do, what information from the document is relevant, and you are done.

You can see the entire flow of this process in the following chart:

Let’s consider another aspect. You have a large product catalog and want to run this Gen AI Task. Unfortunately, AI models are slow (you may sense a theme here), and running each operation sequentially is going to take a long time. You can tell RavenDB to run this concurrently, and it will push as much as the AI model (and your account’s rate limits) allow.

Speaking of rate limits, that is sadly something that is quite easy to hit when working with realistic datasets (a few thousand requests per minute at the paid tier). If you need to process a lot of data, it is easy to hit those limits and fail. Dealing with them is also something that RavenDB takes care of for you. RavenDB will know how to properly wait, scale back, and ensure that you are using the full capacity at your disposal without any action on your part.

The key here is that we enable your data to think, and doing that directly in the database means you don’t need to reach for complex orchestrations or multi-month integration projects. You can do that in a day and reap the benefits immediately.

Applicable scenarios for Gen AI Integration in RavenDB

By now, I hope that you get the gist of what this feature is about. Now I want to try to blow your mind and explore what you can do with it…

Automatic translation is just the tip of the iceberg. I'm going to explore a few such scenarios, focusing primarily on what you’ll need to write to make it happen (prompt, etc.) and what this means for your applications.

Unstructured to structured data (Tagging & Classification)

Let’s say you are building a job board where companies and applicants can register positions and resumes. One of the key problems is that much of your input looks like this:


Date: May 28, 2025 
Company: Example's Financial
Title: Senior Accountant 
Location: Chicago
Join us as a Senior Accountant, where you will prepare financial statements, manage the general ledger, ensure compliance with tax regulations, conduct audits, and analyze budgets. We seek candidates with a Bachelor’s in Accounting, CPA preferred, 5+ years of experience, and proficiency in QuickBooks and Excel. Enjoy benefits including health, dental, and vision insurance, 401(k) match, and paid time off. The salary range is $80,000 - $100,000 annually. This is a hybrid role with 3 days on-site and 2 days remote.

A simple prompt such as:


You are tasked with reading job applications and transforming them into structured data, following the provided output schema. Fill in additional details where relevant (state from city name, for example) but avoid making stuff up.


For requirements, responsibilities, and benefits - use a tag-like format: min-5-years, office, board-certified, etc.

Giving the model the user-generated text, we’ll get something similar to this:


{
    "location": {
        "city": "Chicago",
        "state": "Illinois",
        "country": "USA",
        "zipCode": ""
    },
    "requirements": [
        "bachelors-accounting",
        "cpa-preferred",
        "min-5-years-experience",
        "quickbooks-proficiency",
        "excel-proficiency"
    ],
    "responsibilities": [
        "prepare-financial-statements",
        "manage-general-ledger",
        "ensure-tax-compliance",
        "conduct-audits",
        "analyze-budgets"
    ],
    "salaryYearlyRange": {
        "min": 80000,
        "max": 100000,
        "currency": "USD"
    },
    "benefits": [
        "health-insurance",
        "dental-insurance",
        "vision-insurance",
        "401k-match",
        "paid-time-off",
        "hybrid-work"
    ]
}

You can then plug that into your system and have a much easier time making sense of what is going on.
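
For reference, the sample response object driving this output could look something like the following (the guidance text in each field is mine - the original prompt only references "the provided output schema"):


{
    "location": {
        "city": "city name",
        "state": "derive from the city if not stated",
        "country": "country name",
        "zipCode": "leave empty if unknown"
    },
    "requirements": ["tag-like entries, e.g. min-5-years-experience"],
    "responsibilities": ["tag-like entries"],
    "salaryYearlyRange": {
        "min": 0,
        "max": 0,
        "currency": "USD"
    },
    "benefits": ["tag-like entries"]
}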

In the same vein, but closer to what technical people are used to: imagine being able to read a support email from a customer and extract what version they are talking about, the likely area of effect, and who we should forward it to.

This is the sort of project you would have spent multiple months on previously. Gen AI Integration in RavenDB means that you can do that in an afternoon.

Using a large language model to make decisions in your system

For this scenario, we are building a help desk system and want to add some AI smarts to it. For example, we want to provide automatic escalation for support tickets that are high value, critical for the user, or show a high degree of customer frustration.

Here is an example of a JSON document showing what the overall structure of a support ticket might look like. We can provide this to the model along with the following prompt:


You are an AI tasked with evaluating a customer support ticket thread to determine if it requires escalation to an account executive. 


Your goal is to analyze the thread, assess specific escalation triggers, and determine if an escalation is required.


Reasons to escalate:
* High value customer
* Critical issue, stopping the business
* User is showing agitation / frustration / likely to leave us

We also ask the model to respond using the following structure:


{
   "escalationRequired": false,
   "escalationReason": "TechnicalComplexity | UrgentCustomerImpact | RecurringIssue | PolicyException",
   "reason": "Details on why escalation was recommended"
}

If you run this through the model, you’ll get a result like this:


{
    "escalationRequired": true,
    "escalationReason": "UrgentCustomerImpact",
    "reason": "Customer reports critical CRM dashboard failure, impacting business operations, and expresses frustration with threat to switch providers."
}

The idea here is that if the model says we should escalate, we can react to that. In this case, we create another document to represent this escalation. Other features can then use that to trigger a Kafka message to wake the on-call engineer, for example.
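
For example, the update script for such a task might look roughly like the following (a sketch that assumes the put() and id() helpers from RavenDB's JavaScript patch scripts are available here):


// Sketch only - assumes put() and id() behave as they do in RavenDB patch scripts.
if ($output.escalationRequired) {
    put("escalations/", {
        Ticket: id(this),
        Reason: $output.escalationReason,
        Details: $output.reason
    });
}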

Note that now we have graduated from “simple” tasks such as translating text or extracting structured information to full-blown decisions, letting the model decide for us what we should do. You can extend that aspect by quite a bit in all sorts of interesting ways.

Security & Safety

A big part of utilizing AI today is understanding that you cannot fully rely on the model to be trustworthy. There are whole classes of attacks that can trick the model into doing a bunch of nasty things.

Any AI solution needs to be able to provide a clear story around the safety and security of your data and operations. For Gen AI Integration in RavenDB, we have taken the following steps to ensure your safety.

You control which model to use. You aren’t going to use a model that we run or control. You choose whether to use OpenAI, DeepSeek, or another provider. You can run on a local Ollama instance that is completely under your control, or talk to an industry-specific model that is under the supervision of your organization.

RavenDB works with all modern models, so you get to choose the best of the bunch for your needs.

You control which data goes out. When building Gen AI tasks, you select what data to send to the model using the context generation script. You can filter sensitive data or mask it. Preferably, you’ll send just the minimum amount of information that the model needs to complete its task.
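
For instance, a context generation script for the help desk scenario might send only the thread text and mask identifying details (the field names here are hypothetical):


// Sketch: send just what the model needs, masking the customer's email.
ai.genContext({
    Subject: this.Subject,
    Messages: this.Messages.map(function (m) { return m.Text; }),
    CustomerEmail: "<redacted>"
});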

You control what to do with the model’s output. RavenDB doesn’t do anything with the reply from the model. It hands it over to your code (the update script), which can make decisions and determine what should be done.

Summary

To conclude, this new feature makes it trivial to apply AI models in your systems, directly from the database. You don’t need to orchestrate complex processes and workflows - just let RavenDB do the hard work for you.

There are a number of scenarios where this can be extremely useful. From deciding whether a comment is spam or not, to translating data on the fly, to extracting structured data from free-form text, to… well, you tell me. My hope is that you have some ideas about ways that you can use these new options in your system.

I’m really excited that this is now available, and I can’t wait to see what people will do with the new capabilities.

time to read 2 min | 237 words

We have just released the RavenDB 7.1 Release Candidate, and you can download it from the website:

The big news in this release is the new Gen AI integration. You are now able to define Generative AI tasks on your own data.

I have a Deep Dive post that talks about it in detail, but the gist of it is that you can write prompts like:

  • Translate the Product titles in our catalog from English to Spanish, Greek, and Japanese.
  • Read the Job Postings’ descriptions and extract the years of experience, required skills, salary range, and associated benefits into this well-defined structure.
  • Analyze Helpdesk Tickets and decide whether we need to escalate to a higher tier.

RavenDB will take the prompt and your LLM of choice and apply it transparently to your data. This allows your data to think.

There is no need for complex integrations or months-long overtime work, just tell RavenDB what you want, and the changes will show up in your database.

I’m really excited about this feature. You can start by downloading the new bits and taking it for a spin, check it out under the AI Hub > AI Tasks menu item in the Studio.

The deep dive post is also something that you probably want to go through 🙂.

As usual, I would dearly love your feedback on the feature and what you can do with it.

time to read 4 min | 796 words

When we built RavenDB 7.0, a major feature was Vector Search and AI integration. We weren't the first database to make Vector Search a core feature, and that was pretty much by design.

Not being the first out of the gate meant that we had time to observe the industry, study new research, and consider how we could best enable Vector Search for our users. This isn’t just about the algorithm or the implementation, but about the entire mindset of how you provide the feature to your users. The logistics of a feature dictate how effectively you can use it, after all.

This post is prompted by the recent release of SQL Server 2025 Preview, which includes Vector Search indexing. Looking at what others in the same space are doing is fascinating. The SQL Server team is using the DiskANN algorithm for their Vector Search indexes, and that is pretty exciting to see.

The DiskANN algorithm was one of the algorithms we considered when implementing Vector Search for RavenDB. We ended up choosing the HNSW algorithm as the basis for our vector indexing. This is a common choice - most databases that offer vector indexing use HNSW, including PostgreSQL, MongoDB, Redis, and Elasticsearch.

Microsoft’s choice to use DiskANN isn’t surprising (DiskANN was conceived at Microsoft, after all). I also assume that Microsoft has sufficient resources and time to do a good job actually implementing it. So I was really excited to see what kind of behavior the new SQL Server has here.

RavenDB's choice of HNSW for vector search ended up being pretty simple. Of all the algorithms considered, it was the only one that met our requirements. These requirements are straightforward: Vector Search should function like any other index in the system. You define it, it runs, your queries are fast. You modify the data, the index is updated, your queries are still fast.

I don’t think this is too much to ask :-), but it turned out to be pretty complicated when we look at the Vector Search indexes. Most vector indexing solutions have limitations, such as requiring all data upfront (ANNOY, SCANN) or degrading over time (IVF Flat, LSH) with modifications.

HNSW, on the other hand, builds incrementally and operates efficiently on inserted, updated, and deleted data without significant maintenance.


Therefore, it was interesting to examine the DiskANN behavior in SQL Server, as it's a rare instance of a world-class algorithm available from the source that I can start looking at.

I must say I'm not impressed. I’m not talking about the actual implementation, but rather the choices that were made for this feature in general. As someone who has deeply explored this topic and understands its complexities, I believe using vector indexes in SQL Server 2025, as it currently appears, will be a significant hassle and only suitable for a small set of scenarios.

I tested the preview using this small Wikipedia dataset, which has just under 500,000 vectors and less than 2GB of data – a tiny dataset for vector search. On a Docker instance with 12 cores and 32 GB RAM, SQL Server took about two and a half hours to create the index!

In contrast, RavenDB will index the same dataset in under two minutes. I might have misconfigured SQL Server or encountered some licensing constraints affecting performance, but the difference between 2 minutes and 150 minutes is remarkable. I’m willing to let that one go, assuming I did something wrong with the SQL Server setup.

Another crucial aspect is that creating a vector index in SQL Server has other implications. Most notably, the source table becomes read-only and is fully locked during the (very long) indexing period.

This makes working with vector indexes on frequently updated data very challenging to impossible. You would need to copy data every few hours, perform indexing (which is time-consuming), and then switch which table you are querying against – a significant inconvenience.

Frankly, it seems suitable only for static or rarely updated data, for example, if you have documentation that is updated every few months. It's not a good solution for applying vector search to dynamic data like a support forum with continuous questions and answers.

I believe the design of SQL Server's vector search reflects a paradigm where all data is available upfront, as discussed in research papers. DiskANN itself is immutable once created. There is another related algorithm, FreshDiskANN, which can handle updates, but that isn’t what SQL Server has at this point.

The problem is the fact that this choice of algorithm is really not operationally transparent for users. It will have serious consequences for anyone trying to actually make use of this for anything but frozen datasets.

In short, even disregarding the indexing time difference, the ability to work with live data and incrementally add vectors to the index makes me very glad we chose HNSW for RavenDB. The entire problem just doesn’t exist for us.

time to read 7 min | 1271 words

RavenDB 7.0 introduces vector search using the Hierarchical Navigable Small World (HNSW) algorithm, a graph-based approach for Approximate Nearest Neighbor search. HNSW enables efficient querying of large vector datasets but requires random access to the entire graph, making it memory-intensive.

HNSW's random read and insert operations demand in-memory processing. For example, inserting 100 new items into a graph of 1 million vectors scatters them randomly, with no efficient way to sort or optimize access patterns. That is in dramatic contrast to the usual algorithms we can employ (B+Tree, for example), which can greatly benefit from such optimizations.

Let’s assume that we deal with vectors of 768 dimensions, or 3KB each. If we have 30 million vectors, just the vector storage is going to take 90GB of memory. Inserts into the graph require us to do effective random reads (with no good way to tell what those would be in advance). Without sufficient memory, performance degrades significantly due to disk swapping.

The rule of thumb for HNSW is that you want at least 30% more memory than the vectors would take. In the case of 90GB of vectors, the minimum required memory is going to be 128GB.

Cost reduction

You can also use quantization (shrinking the size of the vector with some loss of accuracy). Going to binary quantization for the same dataset requires just 3GB of space to store the vectors (768 dimensions at one bit each is 96 bytes per vector, so 30 million vectors fit in just under 3GB). But there is a loss of accuracy (perhaps around a 20% loss).

We tested RavenDB’s HNSW implementation on a 32 GB machine with a 120 GB vector index (and no quantization), simulating a scenario with four times more data than available memory.

This is an utterly invalid scenario, mind you. For that amount of data, you are expected to build the HNSW index on a machine with 192GB. But we wanted to see how far we could stress the system and what kind of results we would get here.

The initial implementation stalled due to excessive memory use and disk swapping, rendering it impractical. Basically, it ended up doing a page fault on each and every operation, and that stalled forward progress completely.

Optimization Approach

Optimizing for such an extreme scenario seemed futile - this is an invalid scenario almost by definition. But the idea is that improving this out-of-line scenario will also end up improving the performance for more reasonable setups.

When we analyzed the costs of HNSW in a severely memory-constrained environment, we found that there are two primary costs.

  1. Distance computations: Comparing vectors (e.g., cosine similarity) is computationally expensive.
  2. Random vector access: Loading vectors from disk in a random pattern is slow when memory is insufficient.

Distance computation is doing math on two 3KB vectors, and on a large graph (tens of millions), you’ll typically need to run between 500 - 1,500 distance comparisons. To give some context, adding an item to a B+Tree of the same size will have fewer than twenty comparisons (and highly localized ones, at that).

That means reading about 2MB of data per insert on average. Even if everything is in memory, you are going to be paying a significant cost here in CPU cycles. If the data does not reside in memory, you have to fetch it (and it isn’t as neat as having a single 2MB range to read, it is scattered all over the place, and you need to traverse the graph in order to find what you need to read).

To address this issue, we completely restructured how we go about inserting nodes into the graph. We avoid serial execution and instead run multiple insert processes at the same time. Interestingly enough, we remain single-threaded in this regard - we extract from each insert process the parts where it runs distance computations and loads vectors.

Each process will run the algorithm until it reaches the stage where it needs to run distance computation on some vectors. In that case, it will yield to another process and let it run. We keep doing this until we have no more runnable processes to execute.

We can then scan the list of nodes that we need to process (run distance computation on all their edges), and we can then:

  • Gather all vectors needed for graph traversal.
  • Preload them from disk efficiently.
  • Run distance computations across multiple threads simultaneously.

The idea here is that we save time by preloading data efficiently. Once the data is loaded, we throw all the distance computations per node to the thread pool. As soon as any of those distance computations are done, we resume the stalled process for it.

The idea is that at any given point in time, we have processes moving forward or we are preloading from the disk, while at the same time we have background threads running to compute distances.

This allows us to make maximum use of the hardware. Here is what this looks like midway through the process:

As you can see, we still have to go to the disk a lot (no way around that with a working set of 120GB on a 32GB machine), but we are able to make use of that efficiently. We always have forward progress instead of just waiting.

Don’t try this on the cloud - most cloud instances have a limited amount of IOPS to use, and this approach will burn through any amount you have quickly. We are talking about roughly 15K - 20K IOPS on a sustained basis. This is meant for testing in adverse conditions, on hardware that is utterly unsuitable for the task at hand. A machine with the proper amount of memory will not have to deal with this issue.

While still slow on a 32 GB machine due to disk reliance, this approach completed indexing 120 GB of vectors in about 38 hours (average rate of 255 vectors/sec). To compare, when we ran the same thing on a 64GB machine, we were able to complete the indexing process in just over 14 hours (average rate of 694 vectors/sec).

Accuracy of the results

When inserting many nodes in parallel to the graph, we risk a loss of accuracy. When we insert them one at a time, nodes that are close together will naturally be linked to one another. But when running them in parallel, we may “miss” those relations because the two nodes aren’t yet discoverable.

To mitigate that scenario, we preemptively link all the in-flight nodes to each other, then run the rest of the algorithm. If they are not close to one another, these edges will be pruned. It turns out that this behavior actually increased the overall accuracy (by about 1%) compared to the previous behavior. This is likely because items that are added at the same time are naturally linked to one another.

Results

On smaller datasets (“merely” hundreds of thousands of vectors) that fit in memory, the optimizations reduced indexing time by 44%! That is mostly because we now operate in parallel and have more optimized I/O behavior.

We are quite a bit faster, we are (slightly) more accurate, and we also have reasonable behavior when we exceed the system limits.

What about binary quantization?

I mentioned that binary quantization can massively reduce the amount of space required for vector search. We also ran tests with the same dataset using binary quantization. The total size of all the vectors was around 3GB, and the total index size (including all the graph nodes, etc.) was around 18 GB. That took under 5 hours to index on the same 32GB machine.

If you care to know what it is that we are doing, take a look at the Hnsw.Parallel file inside the repository. The implementation is quite involved, but I’m really proud of how it turned out in the end.

time to read 8 min | 1414 words

Let’s say I want to count the number of reachable nodes for each node in the graph. I can do that using the following code:


void DFS(Node start, HashSet<Node> visited) 
{
    if (start == null || visited.Contains(start)) return;
    visited.Add(start);
    foreach (var neighbor in start.Neighbors) 
    {
        DFS(neighbor, visited);
    }
}


void MarkReachableCount(Graph g)
{
   foreach(var node in g.Nodes)
   {
       HashSet<Node> visited = [];
       DFS(node, visited);
       node.ReachableGraph = visited.Count;
   }
}

A major performance cost for this sort of operation is the allocation cost. We allocate a separate hash set for each node in the graph, and then allocate whatever backing store is needed for it. If you have a big graph with many connections, that is expensive.

A simple fix for that would be to use:


void MarkReachableCount(Graph g)
{
   HashSet<Node> visited = [];
   foreach(var node in g.Nodes)
   {
       visited.Clear();
       DFS(node, visited);
       node.ReachableGraph = visited.Count;
   }
}

This means that we have almost no allocations for the entire operation, yay!

This function also performs significantly worse than the previous one, even though it barely allocates. The reason for that? The call to Clear() is expensive. Take a look at the implementation - this method needs to zero out two arrays, which end up being as large as required for the node with the most reachable nodes. Let’s say we have a node that can access 10,000 nodes. That means that for each node, we’ll have to clear an array of about 14,000 items, as well as another array that is as big as the number of nodes we just visited.

No surprise that the allocating version was actually cheaper. We use the visited set for a short while, then discard it and get a new one. That means no expensive Clear() calls.

The question is, can we do better? Before I answer that, let’s try to go a bit deeper in this analysis. Some of the main costs in HashSet<Node> are the calls to GetHashCode() and Equals(). For that matter, let’s look at the cost of the Neighbors array on the Node.

Take a look at the following options:


public record Node1(List<Node> Neighbors);
public record Node2(List<int> NeighborIndexes);

Let’s assume each node has about 10 - 20 neighbors. What is the cost in memory for each option? Node1 uses references (pointers), and will take 256 bytes just for the Neighbors backing array (32-capacity array x 8 bytes). However, the Node2 version uses half of that memory.

This is an example of data-oriented design, and saving 50% of our memory costs is quite nice. HashSet<int> is also going to benefit quite nicely from JIT optimizations (no need to call GetHashCode(), etc. - everything is inlined).

We still have the problem of allocations vs. Clear(), though. Can we win?

Now that we have re-framed the problem using int indexes, there is a very obvious optimization opportunity: use a bitmap (such as BitArray). We know upfront how many items we have, right? So we can allocate a single array and set the corresponding bit to mark that a node (by its index) is visited.

That dramatically reduces the costs of tracking whether we visited a node or not, but it does not address the costs of clearing the bitmap.

Here is how you can handle this scenario cheaply:


public class Bitmap
{
    private ulong[] _data;
    private ushort[] _versions;
    private int _version;


    public Bitmap(int size)
    {
        _data = new ulong[(size + 63) / 64];
        _versions = new ushort[_data.Length];
    }


    public void Clear()
    {
        if(++_version <= ushort.MaxValue)
            return;

        // the version counter overflowed the ushort range - do a real reset
        // (this happens only once every ~64K calls to Clear)
        _version = 1;
        Array.Clear(_data);
        Array.Clear(_versions);
    }


    public bool Add(int index)
    {
        int arrayIndex = index >> 6;
        if(_versions[arrayIndex] != _version)
        {
            _versions[arrayIndex] = (ushort)_version;
            _data[arrayIndex] = 0;
        }


        int bitIndex = index & 63;
        ulong mask = 1UL << bitIndex;
        ulong old = _data[arrayIndex];
        _data[arrayIndex] |= mask;
        return (old & mask) == 0;
    }
}

The idea is pretty simple: in addition to the bitmap, we also have another array that marks the version of each 64-bit range. To clear the array, we increment the version. That would mean that when adding to the bitmap, we reset the underlying array element if it doesn’t match the current version. Once every 64K calls to Clear(), we’ll need to pay the cost of actually resetting the backing stores, but that ends up being very cheap overall (and worth the space savings to handle the overflow).

The code is tight, requires no allocations, and performs very quickly.

time to read 8 min | 1458 words

We recently tackled performance improvements for vector search in RavenDB. The core challenge was identifying performance bottlenecks. Details of the specific changes are covered in a separate post, which is already written and will be published next week.

In this post, I don’t want to talk about the actual changes we made, but the approach we took to figure out where the cost is.

Take a look at the following flame graph, showing where our code is spending the most time.

As you can see, almost the entire time is spent computing cosine similarity. That would be the best target for optimization, right?

I spent a bunch of time writing increasingly complicated ways to optimize the cosine similarity function. And it worked, I was able to reduce the cost by about 1.5%!

That is something that we would generally celebrate, but it was far from where we wanted to go. The problem was elsewhere, but we couldn’t see it in the profiler output because the cost was spread around too much.

Our first task was to restructure the code so we could actually see where the costs were. For instance, loading the vectors from disk was embedded within the algorithm. By extracting and isolating this process, we could accurately profile and measure its performance impact.

This restructuring also eliminated the "death by a thousand cuts" issue, making hotspots evident in profiling results. With clear targets identified, we can now focus optimization efforts effectively.

That major refactoring had two primary goals. The first was to actually extract the costs into highly visible locations; the second had to do with how you address them. Here is a small example that scans a social graph for friends, assuming the data is in a file.


from collections import deque
from typing import List, Set


def read_user_friends(file, user_id: int) -> List[int]:
    """Read friends for a single user ID starting at indexed offset."""
    pass # redacted


def social_graph(user_id: int, max_depth: int) -> Set[int]:
    if max_depth < 1:
        return set()
    
    all_friends = set() 
    visited = {user_id} 
    work_list = deque([(user_id, max_depth)])  
    
    with open("relations.dat", "rb") as file:
        while work_list:
            curr_id, depth = work_list.popleft()
            if depth <= 0:
                continue
                
            for friend_id in read_user_friends(file, curr_id):
                if friend_id not in visited:
                    all_friends.add(friend_id)
                    visited.add(friend_id)
                    work_list.append((friend_id, depth - 1))
    
    return all_friends

If you consider this code, you can likely see that there is an expensive part of it, reading from the file. But the way the code is structured, there isn’t really much that you can do about it. Let’s refactor the code a bit to expose the actual costs.


def social_graph(user_id: int, max_depth: int) -> Set[int]:
    if max_depth < 1:
        return set()
    
    all_friends = set() 
    visited = {user_id} 
    
    with open("relations.dat", "rb") as file:
        work_list = read_users_friends(file, [user_id])
        while work_list and max_depth > 0:
            to_scan = set()
            for friend_id in work_list: # gather all the items to read
                if friend_id in visited:
                    continue
                
                all_friends.add(friend_id)
                visited.add(friend_id)
                to_scan.add(friend_id)


            # read them all in one shot
            work_list = read_users_friends(file, to_scan)
            # reduce for next call
            max_depth = max_depth - 1    
    
    return all_friends

Now, instead of scattering the reads whenever we process an item, we gather them all and then send a list of items to read all at once. The costs are far clearer in this model, and more importantly, we have a chance to actually do something about it.

Optimizing a lot of calls to read_user_friends(file, user_id) is really hard, but optimizing read_users_friends(file, users) is a far simpler task. Note that the actual costs didn’t change because of this refactoring, but the ability to actually make the change is now far easier.

Going back to the flame graph above, the actual cost profile changed dramatically as the size of the data rose, even though the profiler output remained the same. Refactoring the code allowed us to see where the costs were and address them effectively.

Here is the end result as a flame graph. You can clearly see the preload section that takes a significant portion of the time. The key here is that the change allowed us to address this cost directly and in an optimal manner.

The end result for our benchmark was:

  • Before: 3 minutes, 6 seconds
  • After: 2 minutes, 4 seconds

So almost exactly ⅓ of the cost was removed because of the different approach we took, which is quite nice.

This technique, refactoring the code to make the costs obvious, is a really powerful one. Mostly because it is likely the first step to take anyway in many performance optimizations (batching, concurrency, etc.).

time to read 1 min | 187 words

Orleans is a distributed computing framework for .NET. It allows you to build distributed systems with ease, taking upon itself all the state management, persistence, distribution, and concurrency.

The core aspect in Orleans is the notion of a “grain” - a lightweight unit of computation & state. You can read more about it in Microsoft’s documentation, but I assume that if you are reading this post, you are already at least somewhat familiar with it.

We now support using RavenDB as the backing store for grain persistence, reminders, and clustering. You can read the official announcement about the release here, and the docs covering how to use RavenDB & Microsoft Orleans.

You can use RavenDB to persist and retrieve Orleans grain states, store Orleans timers and reminders, as well as manage Orleans cluster membership.

RavenDB is well suited for this task because of its asynchronous nature, schema-less design, and the ability to automatically adjust itself to different loads on the fly.

If you are using Orleans, or even just considering it, give it a spin with RavenDB. We would love your feedback.

time to read 3 min | 440 words

RavenDB is moving at quite a pace, and there is actually more stuff happening than I can find the time to talk about. I usually talk about the big-ticket items, but today I wanted to discuss some of what we like to call Quality of Life features.

The sort of things that help smooth the entire process of using RavenDB - the difference between something that works and something polished. That is something I truly care about, so with a great sense of pride, let me walk you through some of the nicest things that you probably wouldn’t even notice that we are doing for you.


RavenDB Node.js Client - v7.0 released (with Vector Search)

We updated the RavenDB Node.js client to version 7.0, with the biggest item being explicit support for vector search queries from Node.js. You can now write queries like these:


const docs = session.query<Product>({ collection: "Products" })
    .vectorSearch(x => x.withText("Name"),
        factory => factory.byText("italian food"))
    .all();

This is the famous example of using RavenDB’s vector search to find pizza and pasta in your product catalog, utilizing vector search and automatic data embeddings.


Converting automatic indexes to static indexes

RavenDB has auto indexes. Send a query, and if there is no existing index to run the query, the query optimizer will generate one for you. That works quite amazingly well, but sometimes you want to use this automatic index as the basis for a static (user-defined) index. Now you can do that directly from the RavenDB Studio, like so:

You can read the full details of the feature at the following link.


RavenDB Cloud - Incidents History & Operational Suggestions

We now expose the operational suggestions to you on the dashboard. The idea is that you can easily and proactively check the status of your instances and whether you need to take any action.

You can also see what happened to your system in the past, including things that RavenDB’s system automatically recovered from without you needing to lift a finger.

For example, take a look at this highly distressed system:


As usual, I would appreciate any feedback you have on the new features.

time to read 26 min | 5029 words

One of the more interesting developments in terms of kernel API surface is the IO Ring. On Linux, it is called IO Uring and Windows has copied it shortly afterward. The idea started as a way to batch multiple IO operations at once but has evolved into a generic mechanism to make system calls more cheaply. On Linux, a large portion of the kernel features is exposed as part of the IO Uring API, while Windows exposes a far less rich API (basically, just reading and writing).

The reason this matters is that you can use IO Ring to reduce the cost of making system calls, using both batching and asynchronous programming. As such, most new database engines have jumped on that sweet nectar of better performance results.

As part of the overall re-architecture of how Voron manages writes, we have done the same. I/O for Voron is typically composed of writes to the journals and to the data file, so that makes it a really good fit, sort of.

An ironic aspect of IO Uring is that despite it being an asynchronous mechanism, it is inherently single-threaded. There are good reasons for that, of course, but that means that if you want to use the IO Ring API in a multi-threaded environment, you need to take that into account.

A common way to handle that is to use an event-driven system, where all the actual calls are generated from a single “event loop” thread or similar. This is how the Node.js API works, and how .NET itself manages IO for sockets (there is a single thread that listens to socket events by default).

The whole point of IO Ring is that you can submit multiple operations for the kernel to run in as optimal a manner as possible. Here is one such case to consider, this is the part of the code where we write the modified pages to the data file:


using (fileHandle)
{
    for (int i = 0; i < pages.Length; i++)
    {
        int numberOfPages = pages[i].GetNumberOfPages();


        var size = numberOfPages * Constants.Storage.PageSize;
        var offset = pages[i].PageNumber * Constants.Storage.PageSize;
        var span = new Span<byte>(pages[i].Pointer, size);
        RandomAccess.Write(fileHandle, span, offset);


        written += numberOfPages * Constants.Storage.PageSize;
    }
}

When this write path goes through the IO Ring, looking at the threads of the process shows entries like these:

PID     LWP TTY          TIME CMD
  22334   22345 pts/0    00:00:00 iou-wrk-22343
  22334   22346 pts/0    00:00:00 iou-wrk-22343
  22334   22347 pts/0    00:00:00 iou-wrk-22334
  22334   22348 pts/0    00:00:00 iou-wrk-22334
  22334   22349 pts/0    00:00:00 iou-wrk-22334
  22334   22350 pts/0    00:00:00 iou-wrk-22334
  22334   22351 pts/0    00:00:00 iou-wrk-22334
  22334   22352 pts/0    00:00:00 iou-wrk-22334
  22334   22353 pts/0    00:00:00 iou-wrk-22334
  22334   22354 pts/0    00:00:00 iou-wrk-22334
  22334   22355 pts/0    00:00:00 iou-wrk-22334
  22334   22356 pts/0    00:00:00 iou-wrk-22334
  22334   22357 pts/0    00:00:00 iou-wrk-22334
  22334   22358 pts/0    00:00:00 iou-wrk-22334

Actually, those aren’t threads in the normal sense. Those are kernel tasks, generated by the IO Ring at the kernel level directly. It turns out that internally, IO Ring may spawn worker threads to do the async work at the kernel level. When we had a separate IO Ring per file, each one of them had its own pool of threads to do the work.

The way it usually works is really interesting. The IO Ring will attempt to complete the operation in a synchronous manner. For example, if you are writing to a file and doing buffered writes, we can just copy the buffer to the page pool and move on, no actual I/O took place. So the IO Ring will run through that directly in a synchronous manner.

However, if your operation requires actual blocking, it will be sent to a worker queue to actually execute it in the background. This is one way that the IO Ring is able to complete many operations so much more efficiently than the alternatives.

In our scenario, we have a pretty simple setup, we want to write to the file, making fully buffered writes. At the very least, being able to push all the writes to the OS in one shot (versus many separate system calls) is going to reduce our overhead. More interesting, however, is that eventually, the OS will want to start writing to the disk, so if we write a lot of data, some of the requests will be blocked. At that point, the IO Ring will switch them to a worker thread and continue executing.

The problem we had was that when we had a separate IO Ring per data file and put a lot of load on the system, we started seeing contention between the worker threads across all the files. Basically, each ring had its own separate pool, so there was a lot of work for each pool but no sharing.

If the IO Ring is single-threaded, but many separate threads lead to wasted resources, what can we do? The answer is simple, we’ll use a single global IO Ring and manage the threading concerns directly.

Here is the setup code for that (I removed all error handling to make it clearer):


void *do_ring_work(void *arg)
{
  int rc;
  if (g_cfg.low_priority_io)
  {
    syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, 
        IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7));
  }
  pthread_setname_np(pthread_self(), "Rvn.Ring.Wrkr");
  struct io_uring *ring = &g_worker.ring;
  struct workitem *work = NULL;
  while (true)
  {
    do
    {
      // wait for any writes on the eventfd 
      // completion on the ring (associated with the eventfd)
      eventfd_t v;
      rc = read(g_worker.eventfd, &v, sizeof(eventfd_t));
    } while (rc < 0 && errno == EINTR);
    
    bool has_work = true;
    while (has_work)
    {
      int must_wait = 0;
      has_work = false;
      if (!work) 
      {
        // we may have _previous_ work to run through
        work = atomic_exchange(&g_worker.head, 0);
      }
      while (work)
      {
        has_work = true;


        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        if (sqe == NULL)
        {
          must_wait = 1;
          goto submit_and_wait; // will retry
        }
        io_uring_sqe_set_data(sqe, work);
        switch (work->type)
        {
        case workitem_fsync:
          io_uring_prep_fsync(sqe, work->filefd, IORING_FSYNC_DATASYNC);
          break;
        case workitem_write:
          io_uring_prep_writev(sqe, work->filefd, work->op.write.iovecs,
                               work->op.write.iovecs_count, work->offset);
          break;
        default:
          break;
        }
        work = work->next;
      }
    submit_and_wait:
      rc = must_wait ? 
        io_uring_submit_and_wait(ring, must_wait) : 
        io_uring_submit(ring);
      struct io_uring_cqe *cqe;
      uint32_t head = 0;
      uint32_t i = 0;


      io_uring_for_each_cqe(ring, head, cqe)
      {
        i++;
        // force another run of the inner loop, 
        // to ensure that we call io_uring_submit again
        has_work = true; 
        struct workitem *cur = io_uring_cqe_get_data(cqe);
        if (!cur)
        {
          // can be null if it is:
          // *  a notification about eventfd write
          continue;
        }
        switch (cur->type)
        {
        case workitem_fsync:
          notify_work_completed(ring, cur);
          break;
        case workitem_write:
          if (/* partial write */)
          {
            // queue again
            continue;
          }
          notify_work_completed(ring, cur);
          break;
        }
      }
      io_uring_cq_advance(ring, i);
    }
  }
  return 0;
}

What does this code do?

We start by checking if we want to use lower-priority I/O, this is because we don’t actually care how long those operations take. The purpose of writing the data to the disk is that it will reach it eventually. RavenDB has two types of writes:

  • Journal writes (durable update to the write-ahead log, required to complete a transaction).
  • Data flush / Data sync (background updates to the data file, currently buffered in memory, no user is waiting for that)

As such, we are fine with explicitly prioritizing the journal writes (which users are waiting for) in favor of all other operations.

What is this C code? I thought RavenDB was written in C#

RavenDB is written in C#, but for very low-level system details, we found that it is far easier to write a Platform Abstraction Layer to hide system-specific concerns from the rest of the code. That way, we can simply submit the data to write and have the abstraction layer take care of all of that for us. This also ensures that we amortize the cost of PInvoke calls across many operations by submitting a big batch to the C code at once.

After setting the IO priority, we start reading from what is effectively a thread-safe queue. We wait for eventfd() to signal that there is work to do, and then we grab the head of the queue and start running.
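If you haven't worked with eventfd() before: it is essentially a kernel-side counter. Writes add to it, and a read returns the accumulated value and resets it, blocking while the counter is zero. Here is a tiny standalone example of that handoff pattern (my own illustration, not RavenDB code):


#include <sys/eventfd.h>
#include <stdio.h>

int main(void)
{
    // counter starts at zero; reads block until someone writes
    int efd = eventfd(0, 0);

    // producer side: bump the counter (this is what wakes the worker)
    eventfd_write(efd, 1);
    eventfd_write(efd, 1);

    // consumer side: returns the accumulated count (2 here) and resets it
    eventfd_t v;
    eventfd_read(efd, &v);
    printf("woke up, %llu signals were coalesced\n", (unsigned long long)v);
    return 0;
}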

The idea is that we fetch items from the queue and write those operations to the IO Ring as fast as we can manage. The IO Ring size is limited, however, so we need to handle the case where there is more work than the ring can accept. When that happens, we go to the submit_and_wait label and wait for something to complete.

Note that there is some logic there to handle the case where the IO Ring is full: we submit everything already in the ring, wait for at least one operation to complete, and on the next iteration of the loop continue processing from where we left off.
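If you haven't used liburing before, the loop above boils down to a handful of calls: get an SQE, prep it, submit, consume CQEs. Here is a minimal, self-contained example (my own illustration, independent of the RavenDB code) that performs a single write and waits for its completion:


#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);        // a ring with 8 submission entries

    int fd = open("/tmp/demo.bin", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    char msg[] = "hello, io_uring";

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);   // grab a submission slot
    io_uring_prep_write(sqe, fd, msg, sizeof(msg), 0);    // describe the write
    io_uring_submit_and_wait(&ring, 1);                   // submit, wait for one CQE

    struct io_uring_cqe *cqe;
    io_uring_peek_cqe(&ring, &cqe);          // the completion is already there
    printf("wrote %d bytes\n", cqe->res);    // res = bytes written, or -errno
    io_uring_cqe_seen(&ring, cqe);           // mark the CQE as consumed

    io_uring_queue_exit(&ring);
    return 0;
}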

The rest of the worker code processes the completed operations and reports each result back to its origin via notify_work_completed(), a function I find absolutely hilarious (more on why in a bit). Now let's look at the other side of the queue, the function callers use to submit a batch of writes:


int32_t rvn_write_io_ring(
    void *handle,
    struct page_to_write *buffers,
    int32_t count,
    int32_t *detailed_error_code)
{
    int32_t rc = SUCCESS;
    struct handle *handle_ptr = handle;
    if (count == 0)
        return SUCCESS;


    if (pthread_mutex_lock(&handle_ptr->global_state->writes_arena.lock))
    {
        *detailed_error_code = errno;
        return FAIL_MUTEX_LOCK;
    }
    size_t max_req_size = (size_t)count * 
                      (sizeof(struct iovec) + sizeof(struct workitem));
    if (handle_ptr->global_state->writes_arena.arena_size < max_req_size)
    {
        // allocate arena space
    }
    void *buf = handle_ptr->global_state->writes_arena.arena;
    struct workitem *prev = NULL;
    int eventfd = handle_ptr->global_state->writes_arena.eventfd;
    for (int32_t curIdx = 0; curIdx < count; curIdx++)
    {
        int64_t offset = buffers[curIdx].page_num * VORON_PAGE_SIZE;
        int64_t size = (int64_t)buffers[curIdx].count_of_pages *
                       VORON_PAGE_SIZE;
        int64_t after = offset + size;


        struct workitem *work = buf;
        *work = (struct workitem){
            .op.write.iovecs_count = 1,
            .op.write.iovecs = buf + sizeof(struct workitem),
            .completed = 0,
            .type = workitem_write,
            .filefd = handle_ptr->file_fd,
            .offset = offset,
            .errored = false,
            .result = 0,
            .prev = prev,
            .notifyfd = eventfd,
        };
        prev = work;
        work->op.write.iovecs[0] = (struct iovec){
            .iov_len = size, 
            .iov_base = buffers[curIdx].ptr
        };
        buf += sizeof(struct workitem) + sizeof(struct iovec);


        for (size_t nextIndex = curIdx + 1; 
            nextIndex < count && work->op.write.iovecs_count < IOV_MAX; 
            nextIndex++)
        {
            int64_t dest = buffers[nextIndex].page_num * VORON_PAGE_SIZE;
            if (after != dest)
                break;


            size = (int64_t)buffers[nextIndex].count_of_pages *
                              VORON_PAGE_SIZE;
            after = dest + size;
            work->op.write.iovecs[work->op.write.iovecs_count++] = 
                (struct iovec){
                .iov_base = buffers[nextIndex].ptr,
                .iov_len = size,
            };
            curIdx++;
            buf += sizeof(struct iovec);
        }
        queue_work(work);
    }
    rc = wait_for_work_completion(handle_ptr, prev, eventfd,
                                  detailed_error_code);
    pthread_mutex_unlock(&handle_ptr->global_state->writes_arena.lock);
    return rc;
}

Remember that when we submit writes to the data file, we must wait until they are all done. The async nature of the IO Ring is meant to help us push the writes to the OS as soon as possible, as well as push writes to multiple separate files at once. For that reason, we use another eventfd() on which the submitter waits for the IO Ring to complete the operations. The part I find absolutely hilarious is that the worker uses the IO Ring itself to write to that eventfd (which is why notify_work_completed() is handed the ring), saving us an actual system call in most cases.
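The post doesn't include notify_work_completed() itself, but given that it receives the ring and that the submitter waits on its own eventfd, it presumably queues the eventfd write on the ring rather than issuing it directly. A rough sketch of what such a function might look like (the fallback path and the details here are my assumptions, not the actual RavenDB code):


void notify_work_completed(struct io_uring *ring, struct workitem *cur)
{
    // mark the item as done so the submitter's completion scan sees it
    atomic_store(&cur->completed, 1);

    // instead of calling write() on the submitter's eventfd (a system call),
    // queue that write on the ring we are about to submit anyway
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    if (sqe == NULL)
    {
        // ring is full - fall back to a plain write on the eventfd
        eventfd_write(cur->notifyfd, 1);
        return;
    }
    static const eventfd_t one = 1;
    io_uring_prep_write(sqe, cur->notifyfd, &one, sizeof(one), 0);
    // no user data: this is the NULL case the worker loop skips over
    io_uring_sqe_set_data(sqe, NULL);
}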

Here is how we submit the work to the worker thread:


void queue_work(struct workitem *work)
{
    // lock-free push onto the worker's intrusive list: keep retrying
    // until we manage to swap ourselves in as the new head
    struct workitem *head = atomic_load(&g_worker.head);
    do
    {
        work->next = head;
    } while (!atomic_compare_exchange_weak(&g_worker.head, &head, work));
}

Going back to rvn_write_io_ring(): this function handles the submission of a set of pages to write to a file. Note that we protect against concurrent work on the same file. That isn't actually needed, since the caller code already handles that, but an uncontended lock is cheap, and it means that I don't need to think about concurrency or worry about changes in the caller code in the future.

We ensure that we have sufficient buffer space, and then we create a work item. A work item is a single write to the file at a given location. However, we are using vectored writes, so we merge writes to consecutive pages into a single write operation; that is the purpose of the big inner for loop. The pages arrive already sorted, so a single scan & merge pass is all we need. For example, buffers covering pages 10, 11, and 12 end up as a single writev() with three iovec entries, while a buffer for page 40 starts a new work item.

Pay attention to the fact that the struct workitem actually belongs to two different linked lists. We have the next pointer, which is used to send work to the worker thread, and the prev pointer, which is used to iterate over the entire set of operations we submitted on completion (we’ll cover this in a bit).
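For reference, here is roughly what the two structures used above have to look like, reconstructed from the fields the code touches. This is my sketch, not the actual RavenDB definition; the exact types, field order, and anything not referenced in the code above are guesses:


#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/uio.h>

enum workitem_type { workitem_fsync, workitem_write };

struct page_to_write            // what the caller hands to rvn_write_io_ring
{
    int64_t page_num;           // position in the data file, in pages
    int64_t count_of_pages;
    void *ptr;                  // in-memory copy of those pages
};

struct workitem                 // one queued write or fsync
{
    enum workitem_type type;
    int filefd;                 // target file
    int notifyfd;               // submitter's eventfd, signaled on completion
    int64_t offset;             // byte offset in the file
    struct {
        struct {
            struct iovec *iovecs;   // merged, consecutive pages
            int iovecs_count;
        } write;
    } op;
    atomic_int completed;       // set by the worker, polled by the submitter
    bool errored;
    int result;
    struct workitem *next;      // intrusive list: submission queue to the worker
    struct workitem *prev;      // intrusive list: the submitter's batch, for waiting
};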

Waiting for the submitted work to complete is done using the following method:


int32_t
wait_for_work_completion(struct handle *handle_ptr, 
    struct workitem *prev, 
    int eventfd, 
    int32_t *detailed_error_code)
{
    // wake worker thread
    eventfd_write(g_worker.eventfd, 1);
    
    bool all_done = false;
    while (!all_done)
    {
        all_done = true;
        *detailed_error_code = 0;


        eventfd_t v;
        int rc = read(eventfd, &v, sizeof(eventfd_t));
        struct workitem *work = prev;
        while (work)
        {
            all_done &= atomic_load(&work->completed);
            work = work->prev;
        }
    }
    return SUCCESS;
}

The idea is pretty simple. We first wake the worker thread by writing to its eventfd(), and then we wait on our own eventfd() for the worker to signal us that (at least some) of the work is done.

Note that we check for the completion of multiple work items by iterating over them in reverse order, using the prev pointer. Only when all the work is done can we return to our caller.

The end result of all this is a completely new way to deal with background I/O operations (remember, journal writes are handled differently). We can control the volume of load we put on the system by adjusting the size of the IO Ring, as well as by changing its priority.
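The initialization of the global ring and its worker thread isn't shown in this post. Based on the structures the worker code uses (g_worker with a ring, an eventfd, and a work-list head), a minimal sketch of that setup might look like this; the queue depth, flags, and error handling are my assumptions, and registering the eventfd with the ring is my reading of the comment in the worker loop above:


#include <liburing.h>
#include <sys/eventfd.h>
#include <pthread.h>
#include <stdatomic.h>

struct workitem;                        // defined elsewhere in the PAL
void *do_ring_work(void *arg);          // the worker loop shown earlier

struct worker_state
{
    struct io_uring ring;
    int eventfd;
    _Atomic(struct workitem *) head;    // intrusive queue of pending work
};

static struct worker_state g_worker;

int init_global_ring(unsigned queue_depth)
{
    // one ring, shared by every data file in the process
    if (io_uring_queue_init(queue_depth, &g_worker.ring, 0) < 0)
        return -1;

    // the doorbell submitters ring to wake the worker thread
    g_worker.eventfd = eventfd(0, EFD_CLOEXEC);
    if (g_worker.eventfd < 0)
        return -1;

    // completions on the ring also signal the same eventfd, so the worker's
    // read() wakes up for both new work and finished operations
    if (io_uring_register_eventfd(&g_worker.ring, g_worker.eventfd) < 0)
        return -1;

    atomic_store(&g_worker.head, NULL);

    // the single worker thread that owns the ring
    pthread_t tid;
    return pthread_create(&tid, NULL, do_ring_work, NULL) == 0 ? 0 : -1;
}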

The fact that we have a single global IO Ring means that we can get much better usage out of the worker thread pool that IO Ring utilizes. We also give the OS a lot more opportunities to optimize RavenDB’s I/O.

The code in this post shows the Linux implementation, but RavenDB also supports IO Ring on Windows if you are running a recent enough version of Windows.

We aren’t done yet, mind; I still have more exciting things to tell you about how RavenDB 7.1 optimizes writes and overall performance. In the next post, we’ll discuss what I call the High Occupancy Lane vs. Critical Lane for I/O, and its impact on our performance.
