Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 1 min | 194 words

While tracing a bug, I ended up with the following minimum reproduction:

The error you’ll get is:

Unable to sort because the IComparer.Compare() method returns inconsistent results. Either a value does not compare equal to itself, or one value repeatedly compared to another value yields different results.

The nasty thing about this is that if we had just 16 items (or fewer) in the array, this code would work. So it would appear to work most of the time, and then suddenly break.

The underlying issue is that Array.Sort will use different sorting algorithms based on the size of the array to be sorted. Under 16 items, it’ll use an insertion sort, but over that, an introspective sort will be used (up to a limit, and then heap sort. Go read the code.).

What is key here is that our comparison function is broken. It doesn’t understand that two values can be equal. Because of that, comparing two equal values results in each of them being considered smaller than the other. That inconsistency is what .NET is complaining about. When you know what went wrong, the fix is pretty easy:
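A minimal sketch of such a broken comparer, and of the fix, might look like this (illustrative types, not the original reproduction):

using System.Collections.Generic;

// A comparer that never reports equality: Compare(x, x) returns -1,
// so two equal values each appear to be smaller than the other.
public class BrokenComparer : IComparer<int>
{
    public int Compare(int x, int y) => y < x ? 1 : -1;
}

// The fix: report equality properly, so Array.Sort gets consistent answers.
public class FixedComparer : IComparer<int>
{
    public int Compare(int x, int y) => x.CompareTo(y);
}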

Now we properly handle this scenario, and everything will work.

time to read 6 min | 1023 words

Following my posts about search, I wanted to narrow my focus a bit and look into the details of implementing a persistent data structure inside Voron.

Voron is RavenDB’s storage engine and forms the lowest layers of RavenDB. It is responsible for speed, safety, transactions and much more. It is also a very low level piece of code, which has a lot of impact on the design and implementation.

Some of the things that we worry about when writing Voron code are:

  • Performance – reduce computation / allocations (ideally to zero) for writes.
  • Zero copies – no cost for reads.
  • Safety – concurrent transactions can operate without interfering with one another.
  • Applicability – we tend to implement low level features that enable us to do a lot more on the higher tiers of the code.
  • Scale – handling data that may be very large, millions and billions of results.

In this case, I want to look into what it would take to implement a persistent set. If I was working in memory, I would be using Set<Int64>, but when using a persistent data structure, things are more interesting. The set we use will simply record Int64 values. This is important for a bunch of reasons.

First, Int64 is big; such values are used as file pointers, artificial ids, etc. Even though it seems limiting, we can get a lot more functionality out of this than expected.

Second, if we are using a set of Int64, we can implement that using a bitmap. A set bit indicates that the value is in the set, which allows us to do set union, intersection and exclusion cheaply. The only problem here is that a bitmap over Int64 values is… a problem. Imagine that I have the following code:

set.Add(82_100_447_308);

We would need to use 76GB(!) of memory to hold a bitmap for this set. That is obviously not going to be a workable solution for us. Luckily, there are other alternatives. Roaring Bitmaps are efficient in both time and space, so that is great. We just need to have an implementation that can work with a persistent model.

In order to understand how I’m going to go about implementing this feature, you need to understand how Voron is built. Voron is composed of several layers: the paging layer, which manages transactions and ACID, and the data structure layer, which manages B+Trees, tables, etc.

In this case, we are implementing something at the data structure layer. And the first hurdle to jump through is deciding what the data should look like. On the face of it, this is a fairly simple decision; most of the decisions have already been made and outlined in the previous post. We are going to have a sorted array of segment metadata, which will host individual segments with the set bits. This works if we have a single set, but in our case, we expect lots of them.

If we are going to use this for storing the posting lists, we have to deal with the following scenarios (talking about the specific documents matching the terms in the index):

  1. Many such lists that have a single item (unique id, date, etc)
  2. Lots of lists that have just a few values (Customer’s field in an order, for example)
  3. Few lists that have many values ( OrderCompleted: true, for example, can be safely expected to be about 99% of the total results)
  4. Many lists that have a moderate number of values (each of the Tags options, for example)

That means that we have to think very carefully about each scenario. The third and fourth options are relatively similar and can probably be best served by the roaring bitmap that we discussed. But what about the first two?

To answer that, we need to compute the metadata required to maintain the roaring set. At a minimum, we are going to have one SegmentMetadata involved, but we’ll also need an offset for that segment’s data, so the minimum size involved has got to be 16 bytes (SegmentMetadata is 8 bytes, and a file offset is the same). There is also some overhead to store these values, which is 4 bytes each. So to store a single value using a roaring set we’ll need:

  • 16 bytes for the segment metadata and actual segment’s offset
  • 4 bytes storage metadata for the previous line’s data
  • 2 bytes (single short value) to mark the single flipped bit
  • 4 bytes storage metadata for the segment itself

In short, we get to 26 bytes of overhead if we just stored everything as a roaring set. Instead of doing that, we are going to try to do better and optimize the first two options (unique id and very few matches) as much as possible. We’ll set a limit of 28 bytes (which, together with the 4 bytes of storage metadata, rounds up to a nice 32 bytes). Up to that limit, we’ll simply store the document ids we have as delta-encoded varints.

Let’s say that we need to store the following document id lists:

List | Encoding
[12394] | [234, 96]
[319333, 340981, 342812] | [229, 190, 19, 144, 169, 1, 167, 14]

You can see that the first list, which is 8 bytes in size, was encoded using merely 2 bytes. The second list, composed of three 8-byte values (24 bytes), was encoded into merely 8 bytes. Without delta encoding, that list would be encoded as: [229, 190, 19, 245, 231, 20, 156, 246, 20], an additional byte. This is because we subtract from each number the previous one, hopefully allowing us to pack the values in a much more compact manner.
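To make the encoding concrete, here is a minimal sketch of delta + varint encoding for a sorted list of ids (an illustration, not the actual storage code):

using System.Collections.Generic;

public static class DeltaVarint
{
    // Delta + varint encoding of a sorted list of document ids.
    public static byte[] Encode(IReadOnlyList<long> sortedIds)
    {
        var output = new List<byte>();
        long previous = 0;
        foreach (var id in sortedIds)
        {
            var delta = (ulong)(id - previous); // ids are sorted, so the delta is never negative
            previous = id;
            while (delta >= 0x80)
            {
                output.Add((byte)(delta | 0x80)); // low 7 bits plus a continuation flag
                delta >>= 7;
            }
            output.Add((byte)delta);
        }
        return output.ToArray();
    }
}

// DeltaVarint.Encode(new long[] { 12394 })                  => [234, 96]
// DeltaVarint.Encode(new long[] { 319333, 340981, 342812 }) => [229, 190, 19, 144, 169, 1, 167, 14]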

With a size limit of 28 bytes, we can pack quite a few ids into the list. In my experiments, I could pack up to 20 document ids (so 160 bytes, without encoding) into that space in realistic scenarios. Of course, we may get a bad pattern, but that would simply mean that we have to build the roaring set itself.

I’m going to go ahead and do just that, and then write a post about the interesting tidbits of the code that I’ll encounter along the way.

time to read 7 min | 1202 words

I ran into the following Reddit question: My client's business sounds shady. Should I run away? And I thought it was very interesting. Here are the relevant details:

I have a client who wants me to finish developing a web application for him. Basically, his application has 2 types of users, buyers and sellers. The buyers can create an account and upload money to the website which goes into my clients bank account. The amount of money they have to spend is then logged in the database as their "available funds". The users can then purchase services from the sellers.

The sellers, when they sell a product, earn some of the funds from the buyers. So no money actually gets transferred. The database just updates to say that the buyer now has x$ less available, and the seller has an extra x$ available.

Eventually the seller can withdraw their money, at which point my client transfers it from his bank.

The core of the issue is in a comment:

Their funds that they have available is just represented as a number in a database, which himself and the developers have access to. If one wanted to, they could just log into the database, create their own seller account, load it up with funds, then withdraw from my clients bank account. Just one example.

There are a few interesting things here that I want to touch on, but this question was posted to a legal advice community, so I should probably preface anything I say with the fact that I’m not a lawyer; I have merely been doing professional software development for quite some time and have dealt numerous times with financial systems and projects.

Today, except for the money that you physically carry in your wallet or stuffed under a mattress, pretty much all money is represented as numbers in some database. And there have been cases where this has been used for theft. However, it is very rarely going to be something as obvious as merely changing numbers around in a database.

For example, let’s imagine that, as the admin of the web application above, I follow the doomsday scenario and create my own seller account. At this point, I don’t need to move money around, I can just put 10,000,000 in the Amount field and ask for a withdrawal. It will likely work, after all. There is no need to balance the numbers. Of course, this assumes quite a few things:

  • There is no governance on the outgoing money.
  • I don’t care what would happen after.

The fun thing about money transfers in the real world is that for the most part, they are reversible. I once paid a supplier, but I hit an extra 0 and didn’t notice that until I hit the Submit key. I called the bank and cancelled the order. Another time, I switched numbers in a wire transfer and sent money to the wrong account. It took a couple of months before we discovered the issue, but we were able to reconcile everything. It was annoying, but not too hard.

The threat model here is also fairly strange. We have someone that is able to modify the data directly in the database, but can’t also call: SendMoney(me, all) ?

All of the reasons above are partly why no real financial system works in this manner. There is no single Amount field per customer and money being shuffled off between accounts using + or –. Instead, you always have a record of transactions.

You might have heard about embezzlement by fractions, right? A programmer will direct the remainder of a calculation (typically fractions of a cent) to their own account. Why do we need such a thing? After all, if the programmer is already able to write code that moves money around, why bother with just the fractional cents?

The answer is that in every financial institution, you are going to have some sort of governance. You’ll run a query / report and the total amount of money in and the total amount of money out should match. If it doesn’t, you have a Big Problem and are going to have to do something about it.  This is why all these systems work based on moving money around and not creating it out of thin air.

Money transfers are easily tracked, monitored and carefully audited. Doing the sort of thing that was suggested in the Reddit question falls under: Embezzlement, Theft, Money Laundering, Tax Evasion and probably a host of other laws. It isn’t some weird trick that you can get away with.

Regardless, unless the system you build is intentionally meant to evade the law, I doubt that the developers building the system would have any issues. The people operating it, of course, need to be sure that they are operating within the law, but that shouldn’t be an issue for the developers. After all, a photo sharing website is a perfectly innocent project, but can be used for several nefarious and illegal purposes. But if someone took Lychee to put up a site for a pig named Napoleon hosted on eu-west-3 (Paris), I doubt that would put the Lychee project in any legal trouble.

The title of this post promised that I would talk about data modeling, so let’s get to it, shall we? If you are still worried about the shadiness of your client, and assuming that you aren’t worried about the possibility of the shady client not paying you, what can you do?

The answer is simple: don’t have anything like an Amount field in the system. Don’t allow it to happen. Instead, build the system using the notion of monetary transactions. That is, every movement of money between accounts (deposit from outside, withdrawal or shifting balance between accounts) must be explicitly recorded as such.

Here is a very simple example:

(image: a simple set of transaction records, where a transfer is recorded as a deposit plus a commission)

You can see the movement of the money, and the total amount that was deducted from the source account matches the deposit + commission.

With RavenDB, you can use a Map/Reduce index on top of this set of transactions to compute:

  • How much money each account has.
  • That the total amount of money in the system balances out (see the sketch below).
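As an illustration, a map/reduce index over such transactions might look like the following sketch (the Transaction class and its field names are hypothetical, not taken from the example above):

using System.Linq;
using Raven.Client.Documents.Indexes;

// Hypothetical transaction document; field names are assumptions for this sketch.
public class Transaction
{
    public string Source { get; set; }
    public string Destination { get; set; }
    public decimal Amount { get; set; }
}

// Every transaction debits the source and credits the destination; the reduce
// sums the entries per account to produce the computed balance.
public class Balance_PerAccount : AbstractIndexCreationTask<Transaction, Balance_PerAccount.Result>
{
    public class Result
    {
        public string Account { get; set; }
        public decimal Amount { get; set; }
    }

    public Balance_PerAccount()
    {
        Map = transactions => from tx in transactions
                              from entry in new[]
                              {
                                  new Result { Account = tx.Source, Amount = -tx.Amount },
                                  new Result { Account = tx.Destination, Amount = tx.Amount }
                              }
                              select entry;

        Reduce = results => from r in results
                            group r by r.Account into g
                            select new Result { Account = g.Key, Amount = g.Sum(x => x.Amount) };
    }
}

// Summing Amount across all the computed results should come out to zero
// (money in equals money out), which is exactly the governance check discussed above.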

Note that in this case, the numbers that you are working with are computed; there is no way to just move things around. Instead, you have to do that using an actual transaction, which will create a traceable record.

To make things more robust, you can take things further. Make sure that on each transaction, you’ll email all the clients involved. That will ensure that there is an external record of the transactions in the system, and they can recover their own account state independently. This will protect the customers in case the shady client tries to do things behind their back. They have their own separate record of all the goings-on in their account, separate from what is stored in the potentially vulnerable database.

These are fairly routine behaviors for most financial systems, and I can’t imagine a client not wanting them. They’ll also serve as a pretty good protection for the developers if there is shadiness involved.

time to read 11 min | 2033 words

I was pointed to this blog post which talks about the experience of using RavenDB in production. I want to start by saying that I love getting such feedback from our users, for a whole lot of reasons, not the least of which is that it is great to hear what people are doing with our database.

Alex has been using RavenDB for a while, so he had the chance to use both RavenDB 3.5 and 4.2. That is good from my perspective, because it means that he got to see what the changes were and how they impacted his routine usage of RavenDB. I’m going to call out (and discuss) some of the points that Alex raises in the post.

Speaking about .NET integration:

Raven’s .NET client API trumps MongoDB .NET Driver, CosmosDB + Cosmonaut bundle and leaves smaller players like Cassandra (with DataStax C# Driver), CouchDB (with MyCouch) completely out of the competition.

When I wrote RavenDB 0.x, way before the 1.0 release, it took two months to build the core engine, and another three months to build the .NET client. Most of that time went on Linq integration, by the way. Yes, it literally took more time to build the client than the database core. We put a lot of effort into that. I was involved for years in the NHibernate project and I took a lot of lessons from there. I’m very happy that it shows.

Speaking about technical support:

RavenDB has a great technical support on Google Groups for no costs. All questions, regardless of the obtained license, get meaningful answers within 24 hours and quite often Oren Eini responds personally.

Contrary to the Google Groups, questions on StackOverflow are often neglected. It’s a mystery why Raven sticks to a such archaic style of tech support and hasn’t migrated to StackOverflow or GitHub.

I care quite deeply about the quality of our support, to the point where I’ll often field questions directly, as Alex notes. I have an article on LinkedIn that talks about my philosophy in that regard, which may be of interest.

As to Alex’s point about Stack Overflow vs. Google Groups, the key difference is the way we can discuss things. On Stack Overflow, the focus is on an answer, but that isn’t our usual workflow when providing support. Here is a question that would fit Stack Overflow very well: there is a well defined problem with all the details, and we are able to provide an acceptable answer in a single exchange. That kind of interaction, on the other hand, is quite rare. It is a lot more common to have a lot more back and forth, and we tend to try to give a complete solution, not just answer the initial question.

Another issue is that Stack Overflow isn’t moderated by us, which means that we would be subject to rules that we don’t necessarily want to adhere to. For example, we get asked similar questions all the time, which are marked as duplicated and closed on Stack Overflow, but we want to actually answer people. 

GitHub issues are a good option for this kind of discussion, but they tend to push people toward raising issues, and one of the reasons that we have the Google group is to create discussion. I guess it comes down to the different communities that spring up around each communication medium.

Speaking about documentation:

RavenDB does have the official docs, which are easily navigable and searchable. Works well for beginners and provides good samples to start with, but there are gaps here and there, and it has far less coverage of the functionality than many popular free and open source projects.

Ultimately, as a developer, I want to google my question and it’s acceptable to have the answer on a non-official website. But if you’re having a deep dive with RavenDB, it’s unlikely to find it in the official docs, nor StackOverflow, nor GitHub.

Documentation has always been a chore for me. The problem is that I know what the software does, so it can be hard to even figure out what we need to explain. This year we have hired a couple more technical writers specifically to address the missing pieces in our documentation. I think we are doing quite well at this point.

What wasn’t available at the time of that post, and is available now, is the book. All of the details about RavenDB that you could care about, and more, are covered there. It is also visible to Google, so in many cases your Google search will point you to the right section in the book that may answer your question.


I hope that these actions covered the gaps that Alex noted in our documentation. And there is also this blog, of course.

Speaking about issues that he had run into:

It’s reliable and does work well. Unless it doesn’t. And then a fix gets promptly released (a nightly build could be available within 24 hours after reporting the issue). And it works well again.

All these bugs (and others I found) have been promptly fixed.

… stability of the server and the database integrity are sacred. Even a slight risk of losing it can keep you awake at night. So no, it’s a biggy, unless the RavenDB team convinces me otherwise.

I didn’t include the list of issues that Alex pointed to on purpose. The actual issues don’t matter that much, because he is correct: from Alex’s perspective, RavenDB ought to Just Work, and anything else is our problem.

We spend a lot of time on ensuring a high level of quality for RavenDB. I had a two-part conversation with Jeffery Palermo about just that, and you might be interested in this keynote that goes into some of the challenges that are involved in making RavenDB.

One of the issues that he raised was RavenDB crashing (or causing Windows to crash) because of a bug in the Windows Kernel that was deployed in a hotfix. The hotfix was quietly patched some time later by Microsoft, but in the meantime, RavenDB would crash.  And a user would blame us, because we crashed.

Another issue (a RavenDB upgrade failing) was an intentional choice on our part, however. We had a bug that could cause data corruption in some cases. We fixed it, but we had to deal with the potentially problematic state of existing databases. We chose to be conservative and ask the user to take an explicit action in this case, to prevent data loss. It isn’t ideal, I’m afraid, but I believe that we have done the best that we could after fixing the underlying issue. When in doubt, we pretty much always fall on the side of preventing data loss over availability.

Speaking about Linq & JS support:

let me give you a sense of how often you’ll see LINQ queries throwing NotSupportedException in runtime.

But in front of us a special case — a database written in the .NET! There is no need in converting a query to SQL or JavaScript.

I believe that I mentioned already that the initial Linq support for RavenDB literally took more time than building RavenDB itself, right? Linq is an awesome feature, for the consumer. For the provider, it is a mess. I’m going to quote Frans Bouma on this:

Something every developer of an ORM with a LINQ provider has found out: with a LINQ provider you're never done. There are always issues popping up due to e.g. unexpected expressions in the tree.

Now, as Alex points out, RavenDB is written in .NET, so we could technically use something like Serialize.Linq and support any arbitrary expression easily, right?

Not really, I’m afraid, and for quite a few reasons:

  • Security – you are effectively allowing a user to send arbitrary code to be executed on the server. That is never going to end up well.
  • Compatibility – we want to make sure that we are able to change our internals freely. If we are forced to accept (and then execute) code from the client, that freedom is limited.
  • Performance – issuing a query in this manner means that we’ll have to evaluate the query on each document in the relevant collection. A full table scan. That is not a feature that RavenDB even has, and for a very good reason.
  • Limited to .NET only – we currently have clients for .NET, JVM, Go, Python, C++ and Node.JS. Having features for just one client is not something that we want; it really complicates our lives.

We think about queries using RQL, which are abstract in nature and don’t tie us down with regards to how we implement them. That means that we can use features such as automatic indexes, build fast queries, etc.

Speaking about RQL:

Alex points out some issues with RQL as well. The first issue relates to the difference between a field existing and having a null value. RavenDB makes a distinction between these states: a field can have a null value or it can have a missing value, in a similar way to the behavior of NULL in SQL, which often creates similar confusion. The problem with RavenDB is that a schema isn’t required, so different documents can have different fields; in our case, there is an additional level. A field can have a value, be null, or not exist. And we reflect that in our queries. Unfortunately, while the behavior is well defined and documented, just like NULL behavior in SQL, it can be surprising to users.

Another issue that Alex brings up is that negation queries aren’t supported directly. This is because of the way we process queries, and it is one of the ways we ensure that users are aware of the impact of a query. With a negation query, we have to first match all documents, then exclude all those that match the negation. For a large number of documents, that can be expensive. Ideally, the user has a way to limit the scope of the matches that are being negated, which can really help performance.

Speaking about safe by default:

RavenDB is a lot less opinionated than it used to be. Alex rightfully points that out. As we got more users, we had to expand what you could do with RavenDB.  It still pains me to see people do things that are going to be problematic in the end (extremely large page sizes are one good example), but our users demanded that. To quote Alex:

I’d preferred a slowed performance in production and a warning in the server logs rather than a runtime exception.

Our issue with this approach is that no one looks at the logs, and that this usually comes to a head at 2 AM, resulting in a support call from the ops team about a piece of software that broke. Because of this, we have removed many such features, turning them into alerts, and the very few that remained (mostly just the number of requests per session) can be controlled globally by the admin directly from the Studio. This ensures that the ops team can do something if you hit the wall, and of course, you can also configure this from the client side, globally, easily enough.

As a craftsman, it pains me to remove those limits, but I have to admit that it significantly reduced the number of support calls that we had to deal with.

Conclusion:

Overall, I can say RavenDB is a very good NoSQL database for .NET developers, but the ”good” is coming with a bunch of caveats. I’m confident in my ability to develop any enterprise application with RavenDB applying the Domain-driven design (DDD) philosophy and practices.

I think that this is really the best one could hope for. I think that Alex’s review is honest and to the point. Moreover, it is focused and detailed. That makes it very valuable, because he got to his conclusions not from a brief tour of RavenDB but from actually holding it up in the trenches.

Thanks, Alex.

time to read 2 min | 302 words

We released RavenDB Cloud a few months ago, with great response from the community. Since the cloud offer was released, we have been silently working on more features and capabilities.

Many of them you’ll never even notice. We have integrated RavenDB’s monitoring with RavenDB Cloud’s capability to do live deploys, for example, so your RavenDB Cloud instance can do things like dynamically grow its disk size as needed, or alert that it is running out of IOPS and upgrade its IOPS reservation, etc.

Some of the new features, on the other hand, are quite visible. To increase the security of our users, you can now enable two factor authentication for your account. Just go to your Profile inside of the RavenDB Cloud Portal and you’ll see the option:

(screenshot: the two-factor authentication option in the Cloud Portal profile)

At this point, I don’t think that I need to really say anything more about why 2FA is such an important feature and I encourage all our users to enable it in their accounts.

Speaking of accounts, a hotly requested feature was the ability to have multiple users per account. We have implemented this and you can now invite users to your account.


The common scenario for this is when you need the ops team as a whole to have access to the organization’s RavenDB Cloud. This saves the need to have a shared user and gives a much better experience for larger organizations.

We are actively working on our cloud offering, but with a few months to look back, I’m really happy about where we are. If you have any suggestions or requests, I would love to hear them.

time to read 8 min | 1407 words

I care about usability and the user experience for our users. We spend a lot of time on making sure that things are running smoothly. When we created RavenDB Cloud, I knew that it was important to create a good experience for our cloud offering.

One of the most important things that I did was to go and look at other people’s offerings and see where they failed to meet customer expectations. I recently ran into this article about AWS Elastic. Similar issues have been raised about it for a while, and it shaped one of my explicit design goals: what not to do in our cloud offering.

Summarizing the discussion, the following major issues come up across the board.

  • Backups – Being able to have your own backup schedule and destinations. Retention policies based on your needs, etc. AWS Elastic uses hourly backups and you only get 14 days of that. Cosmos DB, to look at an Azure offering, is taking a backup every 4 hours and you have a maximum of two of them.

We have users that need to be able to go back weeks / months / years and look at the state of their database at a given point in time. The 8-hour backup window for Cosmos DB is really short, but even 14 days on AWS Elastic is short enough that you probably need to roll some other solution for that.

With RavenDB Cloud, you have automatic backups (hourly) with a retention period that defaults to 14 days. The key here, by the way, is defaults. You are absolutely free to define your own backup policies (per database or per cluster), that means that you can set your own destinations (want to do cross cloud backups, no problem) and your own retention policies.

  • Visibility – This is a very common complaint; it showed up in pretty much all the resources that I checked and has been a common cause of complaints among the peers I sampled when we did the background research for RavenDB Cloud. In particular, no logs or access to the debug endpoints is a killer for productivity. If you can’t tell what the problem is, you can’t fix it, so you’ll need to call support and have someone look things over. Here are a few choice quotes that I think go to the heart of things.

Here is Liz Bennett talking about things in 2017:

I feel equipped to deal with most Elasticsearch problems, given access to administrative Elasticsearch APIs, metrics and logging. AWS’s Elasticsearch offers access to none of that. Not even APIs that are read-only, such as the /_cluster/pending_tasks API.

Without access to logs, without access to admin APIs, without node-level metrics (all you get is cluster-level aggregate metrics) or even the goddamn query logs, it’s basically impossible to troubleshoot your own Elasticsearch cluster.

AWS’s Elasticsearch doesn’t provide access to any of those things, leaving you no other option but to contact AWS’s support team. But AWS’s support team doesn’t have the time, skills or context to diagnose non-trivial issues

And here is Nick Price, just a few days ago:

So your cluster resize job broke (on a service you probably chose so you wouldn’t have to deal with this stuff in the first place), so you open a top severity ticket with AWS support. Invariably, they’ll complain about your shard count or sizing and will helpfully add a link to the same shard sizing guidelines you’ve read 500 times by now. And then you wait for them to fix it. And wait. And wait. The last time I tried to resize a cluster and it locked up, causing a major production outage, it took SEVEN DAYS for them to get everything back online.

They couldn’t even tell if they’d fixed the problem and had to have me verify whether they had restored connectivity between their own systems.

I think that the reason this is the case is that AWS Elastic is a multi tenant service, so each instance is shared among multiple clients. That means that they have to limit the endpoints that you can access, to avoid leaking data from other clients.

With RavenDB Cloud, you get all the usual diagnostic features that you would normally get. We don’t want you to escalate things to support. The ideal scenario is that if you run into any trouble, you’ll be able to figure things out completely on your own. Hell, we do active monitoring and will contact our customers if we see outliers in the system behavior, to give them a heads up.

You can go to your RavenDB Cloud instance, pull the logs, watch ongoing traffic and even get a stack trace of the running production system. Everything is there, in the box.

  • Flexibility – The most commonly cited issue with AWS Elastic seems to be not being able to make changes in the environment. Or, rather, you can do that (increasing the node count or changing the instance types), but when you do, you are going to have a full blown migration step. That means that you’ll double the number of nodes you run and incur a really expensive operation to copy all the data. Given that Elastic already has this feature (adding a node to an existing cluster), the decision not to support it is likely related to constraints in the AWS Elastic multi tenancy layer.

I’m not really sure what to say here, RavenDB Cloud has no such issue. To be rather more exact, our multi tenant architecture was specifically designed so the outward facing differences between how you’ll operate RavenDB on your own systems vs. how you’ll operate RavenDB on the Cloud will be minimal.

Adding and removing nodes is certainly possible. And in fact, in my intro video to RavenDB Cloud I showed how I can upgrade a cluster along multiple axes, while it is being actively written to. All with exactly zero interruptions in service. Client code that was busy reading and writing from the cluster didn’t even notice. That was certainly not an easy feature to implement, but I considered this to be the baseline of what we had to offer.

  • Support – Both Nick and Liz had a poor experience with AWS Elastic support. You can read the quotes above, or read the full posts for the whole picture.

I don’t like support. We explicitly modeled the company so support is a cost center, not a source of revenue. That means that we want to close each support incident as soon as possible.  What this doesn’t mean is that we do an auto close of all issues on creation. That would give us fantastic closure rates, I imagine, but at a cost.

Instead, we have a multi layered system to deal with things. Consider the scenario Nick ran into, a full disk. That is certainly something that you might run into, no?

  1. Automatic recovery - With RavenDB,  a single full disk will simply cause that node to reject writes, nothing else. And the cluster as a whole will continue to operate normally. With RavenDB Cloud, we have a lot more control, and our monitoring systems will automatically alert on disk full and increase its size automatically behind the scenes with no impact on users.
  2. Diagnostics - Active monitoring means that you’ll be notified in the RavenDB Studio about suspicious issues (for example, you run out of IOPS) and be able to investigate them with full visibility. RavenDB does a lot of work to ensure that if something is broken, you’ll know about it.
  3. Front line support – if you need to call our support, the person answering the call is going to be able to help you. They would typically be an engineer that was involved either in the actual building / managing of RavenDB Cloud or (2nd tier) involved in the development of RavenDB itself.

My goal with a support call is to get you back to speed as soon as possible, and the usual metrics for that are measured in minutes, not days.

We are now several months post the launch of RavenDB Cloud and the pickup of customers has been great. What is more important from my end, however, is that we are seeing how this kind of investment in our architecture and setup is paying off.

time to read 5 min | 843 words

In the previous posts in this series, I explored a bit how to generate a full text index on top of the Enron data set. In particular, we looked at (rudimentary) analysis of text in the first post and looked into posting lists (list of matching documents for specific terms) in the second one. It occurred to me that we need to actually have a much better understanding of the kind of requirements that we have from posting lists in general, so let’s look at them, shall we?

  • Add to the list (increasing numbers only).
  • Iterate the list (all, or from starting point).
  • Reduce disk space and memory utilization as much as possible.

The fact that I want to be able to add to the list is interesting. The typical use case in full text search is to generate the full blown posting list from scratch every time. The typical model is to use LSM (Log Structured Merge) and take advantage of the fact that we are dealing with sorted lists to merge them cheaply.

Iterating the list is something you’ll frequently do, to find all the matches or to merge two separate lists. Here is the kind of API that I initially had in mind:
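The shape I had in mind is roughly the following (a sketch with hypothetical names):

using System.Collections.Generic;

// Roughly the posting list API implied by the requirements above.
public interface IPostingList
{
    // Values may only be added in increasing order.
    void Add(long documentId);

    // Iterate all matches, or start from a given document id.
    IEnumerable<long> Iterate(long from = 0);
}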

As you can see, there isn’t much there, which is intentional. I initially thought about using this as the baseline for a couple of test implementations using StreamVByte, FastPFor, as well as Gorilla compression. The problem is the need to balance compression ratio with the cost of actually going over the list. Given that my test cases showed a big benefit for using Roaring Bitmaps, I decided to look at them first and see what I can get out of it.

Roaring Bitmaps are a way to store (efficiently) a set of bits; they are very widely used in the industry. The default implementation isn’t entirely suitable for my purposes, though, mostly because it makes use of managed memory, and a hard requirement that I have placed on this series is that I want to be able to use persistent memory. In other words, I want to be able to write the data out, then be able to do everything on top of memory mapped data, without having to parse it.

Roaring Bitmaps work in the following manner: each 64K range of integers gets its own segment of up to 8KB. Given that I’m using Voron as a persistence library, these numbers don’t work for my needs. Voron uses an 8KB page size, so we’ll drop these numbers by half. Each range will cover 32K integers and take a maximum of 4KB of disk space. This allows me to store it much more efficiently inside of Voron. Each segment, in turn, has a type (a small selection sketch follows the list below). The types can be either:

  • Array – if the number of set bits in the segment is less than 2048, the data will use a simple sorted array implementation, with each value taking 2 bytes.
  • Bitmap – if the number of set bits in the segment is between 2048 and 30,720, the segment will use a total of 4096 bytes and be a standard bitmap.
  • Reversed array – if the number of set bits in the segment is higher than 30,720, we’ll store in the segment the unset bits as a sorted array.
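Putting those thresholds into code, the selection might look something like this (an illustrative sketch, not the actual Voron implementation):

public enum SegmentType { Array, Bitmap, ReversedArray }

public static class Segments
{
    // Choose a representation from the number of set bits, using the thresholds above.
    public static SegmentType Choose(int numberOfSetBits) =>
        numberOfSetBits < 2048 ? SegmentType.Array
        : numberOfSetBits <= 30_720 ? SegmentType.Bitmap
        : SegmentType.ReversedArray;
}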

This gives us quite a few advantages:

  • It is straightforward to build this incrementally (remember that we only ever add items in the end).
  • It is quite efficient in terms of space saving in the case of sparse / busy usage.
  • It is cheap (computationally) to work with and process.
  • It is very simple to use from memory mapped file without having to parse / create managed objects.

The one thing that we still need to take into account is how to deal with the segment metadata. How do we know which segment belongs to which range? In order to handle that, we’ll define the following:
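One possible way to pack it (the exact bit layout here is my assumption, not the actual Voron layout):

// Pack the segment's start (a multiple of 32K) and its number of set bits (at most 32K)
// into a single 64-bit value.
public readonly struct SegmentMetadata
{
    private readonly ulong _value;

    public SegmentMetadata(long start, int numberOfSetBits)
    {
        // The count needs at most 16 bits, leaving 48 bits for the start location.
        _value = (ulong)start | ((ulong)numberOfSetBits << 48);
    }

    public long Start => (long)(_value & 0x0000_FFFF_FFFF_FFFF);
    public int NumberOfSetBits => (int)(_value >> 48);
}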

The idea is that we need to store two important pieces of information: the start location (which is always going to be a multiple of 32K) and the number of set bits (which has a maximum of 32K). Therefore, we can pack both of them into a single int64. The struct is merely there for convenience.

In other words, in addition to the segments with the actual set bits, we are also going to have an array of all the segment’s metadata. In practice, we’ll also need another value here, the actual location of the segment’s data, but that is merely another int64, so that is still very reasonable.

As this is currently a mere exercise, I’m going to skip actually building the implementation, but it seems like it should be a fairly straightforward approach. I might do another post about how to actually implement this feature on Voron, because it is interesting. But I think that this is already long enough.

We still have another aspect to consider. So far, we talked only about the posting lists, but we also need to discuss the terms. But that is a topic for the next post in the series.

time to read 3 min | 551 words

In full text search terminology, a posting list is just a list of document ids. These are used to store and find matches for particular terms in the index.

I took the code from the previous post and asked it to give me the top 50 most frequent terms in the dataset and their posting lists. The biggest list had over 200,000 documents, and I intentionally use multiple threads to build things, so the actual list is going to be random from run to run (which adds a little more real-worldedness to the system*).

*Yes, I invented that term. It makes sense, so I’m sticking with it.

I took those posting lists and just dumped them to a file, in the simplest possible format. Here are the resulting files:

(image: the resulting posting list files, one file per term)

There are a few things to note here. As you can see, the file name is the actual term in the index, and the contents of the file are a sorted list of int64 document ids (as 8-byte little endian values).

I’m using int64 here because Lucene uses int32 and thus has the ~2.1 billion document limit, which I want to avoid. It also makes it more fun to work with the data, because of the extra challenge. The file sizes seem small, but the “from” file contains over 250,000 entries.

When dealing with posting lists, size matters, a lot. So let’s see what it would take to reduce the size here, shall we?

(image: the posting list file sizes after zip compression)

Simply zipping the file gives us a massive space reduction, so there is a lot left on the table, which is great.

Actually, I might have skipped a few steps:

  • Posting lists are sorted, because it helps do things like union / intersect queries.
  • Posting lists are typically only added to.
  • Removals are handled separately, with a merge step to clean this up eventually.

Because the values are sorted, the first thing I tried was to use a diff model with variable sized ints. Here is the core code:

Nothing really that interesting, I have to admit, but it did cut the size of the file to 242KB, which is nice (and better than ZIP). Variable sized integers are used heavily by Lucene, so I’m very familiar with them. But there are other alternatives.

  • StreamVByte is a new one, with some impressive perf numbers, but only gets us to 282 KB (but it is possible / likely that my implementation of the code is bad).
  • FastPFor compresses the (diffed) data down to 108KB.
  • RoaringBitmap gives us a total of 64KB.

There are other methods, but they tend toward the esoteric, and are not something that I can very quickly test directly.

It is important to note that there are several separate constraints here:

  • Final size on disk
  • Computational cost to generate that final format
  • Computation cost to go from the final format to the original values
  • How much (managed) memory is required during this process

That is enough for now, I believe. My next post will delve into the actual semantics that we need to implement to get good behavior from the system. This is likely going to be quite interesting.

time to read 5 min | 837 words

My previous post got an interesting comment:

I smell a Lucene.NET replacement for Raven 5 in the future :-)

I wanted to deal with this topic, but before we get to it, please note. I’m not committing to any features / plans for RavenDB in here, merely outlining my thinking.

I really want to drop Lucene. There are many reasons for this.

  • The current release version of Lucene.NET was released over 7 years ago(!) and it matches a Lucene (JVM) version that was out in 2010.
  • The active development of Lucene.NET is currently focused on releasing 4.8 (dated 2014) and has been stalled for several years.
  • Lucene’s approach to resource utilization greatly differs from how I would like to approach things in modern .NET Core applications.

There is a problem with replacing Lucene, however. Lucene is, quite simply, an amazing library. It is literally the benchmark that you would compare yourself against in its field. This isn’t the first time that I have looked into this field, mind. Leaving aside the fact that I believe that I reviewed most of the open source full text search libraries, I also wrote quite a few spikes around that.

Lucene is big, so replacing it isn’t something that you can do in an afternoon or a weekend. You can get to 80% of the functionality in a couple of weeks, for sure, but it is the remaining 20% that would take 300% of the time. For example, inverted indexes are simple, but then you need to be able to handle phrase queries, and that is a whole other ball game.

We looked into supporting the porting of Lucene.NET directly as well as developing our own option, and so far neither option has gotten enough viability to take action.

2015–2018 were very busy years for us. We rebuilt RavenDB from scratch, paying attention to many details that hurt us in the past. Some of the primary goals were to improve performance (10x) and to reduce support overhead. As a consequence of that, RavenDB now uses a lot less managed memory. Most of our memory utilization is handled by RavenDB directly, without relying on the CLR runtime or the GC.

Currently, our Lucene usage is responsible for most of the allocations in RavenDB. I can’t think of the last time that we had high managed memory usage that wasn’t directly related to Lucene. And yes, that means that when you do a memory dump of a RavenDB process, the top spot isn’t taken by System.String, which is quite exceptional, in my experience.

Make no mistake, we are still pretty happy with what Lucene can do, but we have a lot of experience with that particular version and are very familiar with how it works. Replacing it means a substantial amount of work and introduces risks that we’ll need to mitigate. Given these facts, we have the following alternatives:

  1. Write our own system to replace Lucene.
  2. Port Lucene 8 from JVM to .NET Core, adapting it to our architecture.
  3. Upgrade to Lucene.NET 4.8 when it is out.

The first option gives us the opportunity to build something that will fit our needs exactly. This is most important because we also have to deal with both feature fit and architectural fit. For example, we care a lot about memory usage and allocations. That means that we can build it from the ground up to reduce these costs. The risk is that we would be developing from scratch and it is hard to scope something this big.

The second option means that we would benefit from modern Lucene. But it means having to port a non trivial amount of code and adapt it both from the JVM to the CLR and to our own architectural needs. The benefit here is that we can likely skip porting stuff that we don’t need, but there is a lot of risk involved in such a big undertaking. We would also effectively be either forking the Lucene.NET project or maintaining our own version. That means that the maintenance burden would be on us.

Upgrading to Lucene.NET 4.8 is the third option, but we don’t know when that would be out (it has been stalled for a long time) and it moves us very far along in Lucene’s versions. It is also likely going to require a major change in how we behave. Not so much code-wise (I don’t expect that to be too hard), but in terms of operational properties that differ from what we are used to.

The latter two options are going to keep Lucene as the primary source of allocations in our system, while the first option means that we can probably reduce that significantly for both indexing and querying, but we’ll have to push our own path.

We have made no decisions yet, and we probably won’t for a while. We are playing around with all the options, as you can see on this blog, but that is also mostly because it is fun to explore and learn.

time to read 4 min | 645 words

Full text search is a really interesting topic, which I have been dipping my toes into again and again over the years. It is a rich area of research, and there have been quite a few papers, books and articles about the topic. I have read through a bunch of projects for doing full text search, and I have been using Lucene for a while.

I thought that I would write some code to play with full text search and see where that takes me. This is a side project, and I hope it will be an interesting one. The first thing that I need to do is to define the scope of work:

  • Be able to (eventually) do full text search queries
  • Compare and contrast different persistence strategies for this
  • Be able to work with multiple fields

What I don’t care about: Analysis process, actually implementing complex queries (I do want to have the foundation for them), etc.

Given that I want to work with real data, I went and got the Enron dataset. That is over 517,000 emails from Enron, totaling more than 2.2 GB. This is one of the more commonly used test datasets for full text search, so that is helpful. The first thing that we need to do is to get the data into a shape that we can do something with.

Enron is basically a set of MIME encoded files, so I’ve used MimeKit to speed up the parsing process. Here is the code of the algorithm I’m using for getting the relevant data into the system; these are the relevant bits:

As you can see, this is hardly a sophisticated approach. We are spawning a bunch of threads, processing all half million emails in parallel, selecting a few key fields and doing some very basic text processing. The idea is that we want to get to the point where we have enough information to do full text search, but without going through the real pipeline that this would take.
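As a rough illustration of that shape (a sketch, where the paths and the selected fields are my own assumptions, not the original code):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using MimeKit;

var emails = new ConcurrentBag<Dictionary<string, string>>();

// Process all the Enron emails in parallel, keeping just a few key fields per message.
Parallel.ForEach(
    Directory.EnumerateFiles(@"C:\enron", "*.*", SearchOption.AllDirectories),
    file =>
    {
        var message = MimeMessage.Load(file);
        emails.Add(new Dictionary<string, string>
        {
            ["From"] = message.From.ToString(),
            ["To"] = message.To.ToString(),
            ["Date"] = message.Date.ToString("O"),
            // "Very basic text processing": just lower case the body for now.
            ["Body"] = (message.TextBody ?? string.Empty).ToLowerInvariant(),
        });
    });

Console.WriteLine($"Parsed {emails.Count:N0} emails");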

Here is an example of the output of one of those dictionaries:

As you can see, this is bare bones (I forgot to index the Subject, for example), but on my laptop (8 cores Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz) with 16 GB of RAM, we can index this amount of data in under a minute and a half.

So far, so good, but this doesn’t actually get us anywhere; we need to construct an inverted index, so we can ask questions about the data and be able to find stuff out. We are already about half way there, which is encouraging. Let’s see how far we can stretch the “simplest thing that could possibly work”… shall we?

Here are the key data structures:

Basically, we have an array of fields, each of which holds a dictionary mapping each term to the list of documents containing that term.
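As a sketch of that shape (hypothetical names, not the linked code):

using System;
using System.Collections.Generic;

// In-memory inverted index: one dictionary per field, mapping a term to its posting list.
public class InvertedIndex
{
    private readonly Dictionary<string, List<long>>[] _fields;

    public InvertedIndex(int numberOfFields)
    {
        _fields = new Dictionary<string, List<long>>[numberOfFields];
        for (var i = 0; i < numberOfFields; i++)
            _fields[i] = new Dictionary<string, List<long>>(StringComparer.OrdinalIgnoreCase);
    }

    public void Add(int field, string term, long documentId)
    {
        if (_fields[field].TryGetValue(term, out var postings) == false)
            _fields[field][term] = postings = new List<long>();
        postings.Add(documentId);
    }

    // "Searching" is just a dictionary lookup, which is why it is so fast.
    public List<long> Search(int field, string term) =>
        _fields[field].TryGetValue(term, out var postings) ? postings : new List<long>();
}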

For the full code for this stage, look at the following link, it’s less than 150 lines of code.

Indexing the full Enron data set now takes 1 minute, 17 seconds, and takes 2.5 GB in managed memory.

The key is that with this in place, if I want to search for documents that contain the term “XML”, for example, I can do this quite cheaply. Here is how I can “search” over half a million documents to get all those that have the term HTML in them:

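With the sketch above, that search amounts to a dictionary lookup (hypothetical usage):

// Assumes the InvertedIndex sketch shown earlier, with a few fields already populated.
var index = new InvertedIndex(numberOfFields: 3);
// ... index all the emails (From, To, Body fields, for example) ...
var matches = index.Search(field: 2, term: "html");
Console.WriteLine($"{matches.Count:N0} documents contain 'html'");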

As you can imagine, this is actually quite fast.

That is enough for now, I want to start actually exploring persistence options now.

The final code bits are here; I ended up implementing stop words as well, so this is a really cool way to show off how you can do full text search in under 200 lines of code.
