Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 2 min | 326 words

Take a look at this wonderful example of foresight (or hubris).

In a little over ten years, Let’s Encrypt root certificates are going to expire. There are already established procedures for how to handle this from other Certificate Authorities, and I assume that there will be a well-communicated plan for this in advance.

That said, I’m writing this blog post primarily because I want to put the URL in the notes for the meeting above. Because in 10 years, I’m pretty certain that I won’t be able to recall why this is such a concerning event for us.

RavenDB uses certificates for authentication, usually generated via Let’s Encrypt. Since those certificates expire every 3 months, they are continuously replaced. When we talk about trust between different RavenDB instances, that can cause a problem. If the certificate changes every 3 months, how can I trust it?

RavenDB trusts a certificate directly, as well as any later version of that certificate, assuming that the leaf certificate has the same key and that the two chains have at least one shared signer. That handles the scenario where you replace the intermediate certificate (you can go up to the root certificate for trust at that point).
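To make that rule concrete, here is a minimal sketch in C# (illustrative only, not RavenDB's actual implementation) of checking whether a renewed certificate should be trusted under those conditions:

using System.Linq;
using System.Security.Cryptography.X509Certificates;

public static class CertTrust
{
    // Sketch: trust a replacement certificate if the leaf has the same
    // public key as the certificate we already trust, and the two chains
    // share at least one signer (intermediate or root).
    public static bool IsTrustedReplacement(X509Certificate2 trusted, X509Certificate2 candidate)
    {
        // Same key on the leaf certificate
        if (trusted.GetPublicKeyString() != candidate.GetPublicKeyString())
            return false;

        using var trustedChain = new X509Chain();
        using var candidateChain = new X509Chain();
        if (!trustedChain.Build(trusted) || !candidateChain.Build(candidate))
            return false;

        // Collect the signers (everything above the leaf) of the trusted chain
        var trustedSigners = trustedChain.ChainElements.Cast<X509ChainElement>()
            .Skip(1)
            .Select(e => e.Certificate.Thumbprint)
            .ToHashSet();

        // Require at least one shared signer between the two chains
        return candidateChain.ChainElements.Cast<X509ChainElement>()
            .Skip(1)
            .Any(e => trustedSigners.Contains(e.Certificate.Thumbprint));
    }
}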

Depending on the exact manner in which the root certificate will be replaced, we need to verify that RavenDB is properly handling this update process. This meeting is set for over a year before the due date, which should give us more than enough time to handle this.

Right now, if they use the same key on the new root certificate, it will just work as expected. If they opt for cross-signing with another root certificate, we need to ensure that we can verify the signatures on both chains. That is hard to plan for, because things change.

In short, future Oren, be sure to double-check this in time.

time to read 3 min | 487 words

Corax is the new indexing and querying engine in RavenDB, which recently came out with RavenDB 6.0. Our focus when building Corax was on one thing: performance. I gave a full talk explaining how it works from the inside out, available here, as well as a couple of podcasts.

Now that RavenDB 6.0 has been out for a while, we've had the chance to complete a host of small Corax features that didn't make the cut for the big 6.0 release.

All these features are available in the 6.0.102 release, which went live in late April 2024.

The most important new feature for Corax is query plan visualization.

Let’s run the following query in the RavenDB Studio on the sample data set:


from index 'Orders/ByShipment/Location'
where spatial.within(ShipmentLocation,
                     spatial.circle(10, 49.255, 4.154, 'miles'))
  and (Employee = 'employees/5-A' or Company = 'companies/85-A')
order by Company, score()
include timings()

Note that we are using the include timings() feature. If you configure this index to use Corax, issuing the above query will also give you the full query plan. In this case, you can see it here:

You can see exactly how the query engine has processed your query and the pipeline it has gone through.
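If you are issuing the query from code rather than the Studio, you can ask for the same timing information through the client API. Here is a rough sketch with the C# client (assuming the Timings customization available in recent client versions, an already-initialized store, and a hypothetical Order class):

using System.Linq;
using Raven.Client.Documents.Queries.Timings;

// Sketch: request timings alongside the query results.
using var session = store.OpenSession();

var orders = session.Advanced
    .RawQuery<Order>(@"from index 'Orders/ByShipment/Location'
                       where Employee = 'employees/5-A' or Company = 'companies/85-A'
                       order by Company, score()")
    .Timings(out QueryTimings timings)
    .ToList();

// timings now holds the per-stage durations that the Studio
// renders in the query plan / timings view.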

We have incorporated many additional features into Corax, including phrase queries, scoring based on spatial results, and more complex sorting pipelines. For the most part, these are small features, but they fulfill specific needs and enable a wider range of scenarios for Corax.

More than six months after Corax went live with 6.0, I can say that it has been a successful feature. It performs its primary job well, being a faster and more efficient querying engine. And the best part is that it isn't even something that you need to be aware of.

Corax has been the default indexing engine for the Development and Community editions of RavenDB for over 3 months now, and almost no one has noticed.

It’s a strange metric, I know, for a feature to be successful when no one is even aware of its existence, but that is a common theme for RavenDB. The whole point behind RavenDB is to provide a database that works, allowing you to forget about it.

time to read 1 min | 103 words

A couple of months ago I had the joy of giving an internal lecture to our developer group about Voron, RavenDB's dedicated storage engine. In the lecture, I go over the design and implementation of our storage engine.

If you've ever had an interest in how RavenDB's transactional, high-performance storage works, this is the lecture for you. Note that it is aimed at our developers, so we go deep.

You can find the slides here and here is the full video.

time to read 3 min | 566 words

We got an interesting question in the RavenDB Discussion:

We have Polo (shirts) products. Some customers search for Polo and others search for Polos. The term Polos exists in only a few of the descriptions and marketing info so the results are different.

Is there a way to automatically generate singular and plural forms of a term or would I have to explicitly add those?

What is actually being requested here is a process known as stemming: turning a word into its root. That is a core concept in full-text search, and RavenDB allows you to make use of it.

The idea is that during indexing and queries, RavenDB will transform the search terms into a common stem and search on that. Let’s look at how this works, shall we?

The first step is to make an index named Products/Search with the following definition:


from p in docs.Products
select new { p.Name }

That is about as simple an index as you can get, but we still need to configure the indexing of the Name field on the index, like so:

You can see that I customized the Name field and marked it for full-text search using the SnowballAnalyzer, which is responsible for properly stemming the terms.
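For reference, here is roughly what the same setup looks like when the index is defined from code, as a sketch using the C# client's index-creation API (the Product class and the exact analyzer name are assumptions on my part; use whatever name you register the custom analyzer under):

using System.Linq;
using Raven.Client.Documents.Indexes;

public class Products_Search : AbstractIndexCreationTask<Product>
{
    public Products_Search()
    {
        Map = products => from p in products
                          select new { p.Name };

        // Mark the Name field for full-text search and
        // stem its terms with the custom Snowball analyzer
        Index(x => x.Name, FieldIndexing.Search);
        Analyze(x => x.Name, "SnowballAnalyzer");
    }
}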

However, if you try to create this index, you'll get an error. By default, RavenDB doesn't include the SnowballAnalyzer, but that isn't going to stop us, because RavenDB allows users to define custom analyzers.

In the database “Settings”, go to “Custom Analyzers”:

And there you can add a new analyzer. You can find the code for the analyzer in question in this Gist link.

You can also register analyzers by compiling them and placing the resulting DLLs in the RavenDB binaries directory. I find that having it as a single source file that we push to RavenDB in this manner is far cleaner.

Registering the analyzer via source means that you don’t need to worry about versioning, deploying to all the nodes in the cluster, or any such issues. It’s the responsibility of RavenDB to take care of this.

I produced the analyzer file by simply concatenating the relevant classes into a single file, basically creating a consolidated version containing everything required. That is usually done for C or C++ projects, but it is very useful in this case as well. Note that the analyzer in question must have a parameterless constructor. In this case, I just selected an English stemmer as the default one.

With the analyzer properly registered, we can create the index and start querying on it.
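For example, a query along these lines (the exact term is just for illustration) will now match both forms:

from index 'Products/Search'
where search(Name, 'Polos')

Both 'Polo' and 'Polos' are reduced to the same stem at indexing time and at query time, which is why either search term matches both variants.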

As you can see, we are able to find both plural and singular forms of the term we are searching for.

To make things even more interesting, this functionality is available with both Lucene and Corax indexes, as Corax is capable of consuming Lucene Analyzers.

The idea behind full-text search in RavenDB is that you have a full-blown indexing engine at your fingertips, but none of the complexity involved. At the same time, you can utilize advanced features without needing to move to another solution; everything is in a single box.

time to read 2 min | 270 words

RavenDB is typically accessed directly by your application, using an X.509 certificate for authentication. The same applies when you are connecting to RavenDB as a user.

Many organizations require that user authentication use not just a single factor (such as a password or a certificate) but multiple ones. RavenDB now supports defining Two Factor Authentication for access.

Here is what this looks like in the RavenDB Studio:

You can generate a certificate as well as register the authenticator code on your device.

When you first use the associated certificate, you won't be able to access RavenDB. Instead, you'll get an error message saying that you need to complete the Two Factor Authentication process. Here is what that looks like:

Once you complete the two factor authentication process, you can select how long we'll allow access with the given certificate, and whether to allow access only from the current browser window (because you are accessing it directly) or from any client (when you want to access RavenDB from another device or via code).

Once the session duration expires, you’ll need to provide the authentication code again, of course.
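Under the hood, the authenticator code is standard TOTP (RFC 6238), the same scheme your phone's authenticator app implements. For the curious, here is a minimal illustrative sketch of how such a code is computed (not RavenDB's actual code):

using System;
using System.Security.Cryptography;

public static class Totp
{
    // Sketch: RFC 6238 TOTP with a 30-second step and a 6-digit code.
    public static string Compute(byte[] sharedSecret, DateTimeOffset now)
    {
        // Number of 30-second intervals since the Unix epoch, big-endian
        long step = now.ToUnixTimeSeconds() / 30;
        var counter = BitConverter.GetBytes(step);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(counter);

        using var hmac = new HMACSHA1(sharedSecret);
        var hash = hmac.ComputeHash(counter);

        // Dynamic truncation (RFC 4226) down to 6 digits
        int offset = hash[^1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8)
                   | hash[offset + 3];

        return (binary % 1_000_000).ToString("D6");
    }
}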

This feature is meant specifically for certificates that are used by people directly. It is not meant for APIs or programmatic access. Those should either have a manual step to allow the certificate or utilize a secrets manager that can have additional steps and validations based on your actual requirements.

You can read more about this feature in the feature announcement.

time to read 1 min | 101 words

When Oren Eini originally developed RavenDB, he used the Lucene library to implement indexing. Eventually, his team encountered limitations with this strategy, so they created the Corax search engine, which improved query execution time significantly. Oren discusses the challenges involved in creating this engine and the approaches they took to overcome these challenges.

Part 1:

Part 2:

time to read 2 min | 399 words

One of the interesting components of RavenDB Cloud is status reporting. It turns out that when you offer X as a Service, people really care about your operational status.

For RavenDB Cloud, we have https://status.ravendb.net/, which will give you some insights into the overall health of the system. Here are some details from the status page:


The interesting thing about this page is that it shows global status, indicating issues affecting large swaths of users. For instance, Azure having issues in a whole region, as in the image above, is a great example of one such scenario. Regular maintenance, which we carry out over the span of days, is something that we report, but you'll usually never notice (due to the High Availability features of RavenDB).

It gets more complicated when we start talking about individual instances. There are many scenarios where the overall system health is great, but a particular database may suffer. The easiest example is if you run out of disk space. That affects that particular instance only.

For that scenario, we are reporting Production Monitoring Alerts within the RavenDB Cloud portal. Here is what this looks like:


As you can see, we report specific problems on those instances, raising them to your awareness. That was actually needed because, for the most part, RavenDB itself handles those sorts of things via High Availability, which means that even if there are issues, you're likely not to feel them for a while.

Resilience at the cluster level means that even pretty severe problems are papered over and the system moves on. But there is only so much limping that you can do. If you are running at the bare edge of capacity, eventually you’ll trip over the line.

Those Production Monitoring Alerts allow you to detect and act upon those issues when they happen, not when they bring down production.

This aligns with our vision for RavenDB: the kind of system that doesn't need a full-time babysitter monitoring it. Instead, if there is a problem that the database cannot solve on its own, it will explicitly notify you, in advance.

That leads to a system that is far healthier all around and means that you can focus on building your system, rather than managing database minutiae.

time to read 4 min | 792 words

RavenDB can run on the Raspberry Pi; in fact, it is an important use case for us, as our users deploy RavenDB as part of Internet of Things systems. We wanted to showcase RavenDB's performance and decided that instead of scaling up and showing you how well RavenDB handles ridiculous loads, we'll go the other way around. We'll go small, and let you directly experience how efficient RavenDB is.

You can look at the demo unit directly on this page.

We decided to dial it down yet further, and run RavenDB on the Raspberry Pi Zero.

This tiny computer is about the size of a cigarette lighter and is small enough to comfortably fit on your keychain. Most Raspberry Pis are impressive machines given their cost, more than powerful enough to power real applications.

Here is what this actually looks like, with me as a reference for size 🙂.

However, just installing RavenDB on the Zero isn't much of a challenge or particularly interesting, to be honest. We wanted to do something that would be both fun and useful. One of the features we want users to explore is the ability to run RavenDB in appliance mode. The question is, what sort of appliance should we build?

A key part of our thinking was that we wanted to show something that works with realistic data sizes. We wanted to have an actual use case for this, beyond just showing a toy. One of the things that I always find maddening about being disconnected is that I feel like half my brain has been cut away.

We set out to fix that: the project is to create a knowledge system inside the Pi Zero that is truly Plug & Play. That turned out to be quite a challenge, but I think we met it in a very nice manner.

We went to archive.org and got some of the Stack Exchange data sets, choosing the ones that are most interesting for DevOps scenarios: raspberrypi.stackexchange.com, unix.stackexchange.com, serverfault.com, and superuser.com.

I find it deliciously recursive that we can use the Raspberry Pi Zero to store the dataset about the Raspberry Pi itself. We loaded all those datasets into the Zero: about 7.5 GB in total, comprising over 4.2 million documents.

Note that this is using RavenDB’s document compression, which reduced the total size by over 50% over the original dataset size.
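Document compression is configured per database. As a rough sketch of enabling it from the C# client (the operation and property names here follow the documents-compression API in recent client versions; verify them against the version you run):

using Raven.Client.Documents.Operations.DocumentsCompression;

// Sketch: compress all collections (and revisions) on the database.
var config = new DocumentsCompressionConfiguration
{
    CompressAllCollections = true,
    CompressRevisions = true
};

store.Maintenance.Send(new UpdateDocumentsCompressionConfigurationOperation(config));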

Next was the time to actually make this accessible. Just working with RavenDB directly to query the data is cool, for sure, but we wanted to be useful.

So we built a portal to access the data. Here is what it looks like when you enter it for the first time: