Ayende @ Rahien

Oren Eini aka Ayende Rahien CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.


time to read 3 min | 441 words

The upgrade process from RavenDB 3.5 and earlier to RavenDB 4.x is not easy. This is because I made a conscious decision to not have backward compatibility between these versions. I made that decision because we had to be able to make massive changes internally in order to get to the targets we set for ourselves. I actually discussed that decision in detail in a previous blog post and a talk.

Four years later, I still stand by that decision, but I also regret the spanner that it threw into the works. Migrating RavenDB applications to 4.x from previous versions is harder than it should be. In retrospect, we probably should have invested the time in a compatibility layer that would make it easier to migrate.

I wanted to take a moment and talk about RavenDB 5.0, expected in 2020, and our plans for that release. We are going to be doing some minor cleanup of the API. Methods and classes that are marked as [Obsolete] will be removed. These tend to be at the very edge of the explored API and have been marked as such for quite some time. Beyond these changes (for which you’ll have a clear and obvious alternative), you aren’t going to need to do much at all.

Our goal for converting an application from RavenDB 4.x to 5.x is that for 90% of the projects the process is: update NuGet packages, compile, you are done. For the other 10%, it may mean that you need to make some minor changes. For example, change DisableEntitiesTracking to NoTracking if you are using the low level query API.
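For illustration, here is a hedged before/after sketch of that particular change. Only the DisableEntitiesTracking to NoTracking rename comes from the list above; the exact call site is an assumption:

using (var session = store.OpenSession())   // store: an initialized DocumentStore
{
    // RavenDB 4.x (low level query API):
    //   var users = session.Advanced.DocumentQuery<User>()
    //       .DisableEntitiesTracking()
    //       .ToList();

    // RavenDB 5.x:
    var users = session.Advanced.DocumentQuery<User>()
        .NoTracking()
        .ToList();
}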

We also intend to allow at least the vast majority of operations to just work between 4.x client and 5.x server. In other words, even when you upgrade server versions, you aren’t going to have to upgrade the client version unless you want to use the new features.

There are also additional considerations that we have to take into account:

  • RavenDB now has official clients for .NET, JVM, Go, Python, Node.js and C++, as well as a number of unofficial clients.
  • RavenDB Cloud instances are maintained by us, and will be upgraded to newer versions on a regular schedule.

The cost of making a backward incompatible change at this point is too high for us to take lightly, and we are going to try very hard to avoid it. The move from 3.5 to 4.x was a one-time thing that we had to do in order to continue evolving the product, not something we plan to do again anytime soon.

We are also offering migration services for clients who want to move their applications from 3.x to 4.x.

RavenDB Cloud

time to read 4 min | 763 words


TLDR

RavenDB now offers cloud hosting for RavenDB clusters. Manage your data with this awesome solution built by the RavenDB team. Access through cloud.ravendb.net.

A free option is available.

When I wrote the first few lines of code for RavenDB over a decade ago, Amazon Web Services still had the beta label on it and deploying to production meant a server in the basement. The landscape for server-side software has changed considerably. Nowadays you have to justify not running on the cloud. Originally, RavenDB’s features were driven by the kind of systems and setup found in a typical corporate data center. Now a lot of our features are directly impacted by the operating environment in the various cloud platforms.

The same team that develops RavenDB itself now offers a database as a service solution (DBaaS) that can be found at cloud.ravendb.net. Our service offering is available on all Amazon Web Services and Microsoft Azure regions, with Google Cloud Platform soon to follow.

What do you get?

You get a fully managed service. We take care of all backend chores of your databases while you focus on building your applications to deliver even more value to your business.

We have done everything possible to make sure that the only task you’ll need to do is come up with the data to put into RavenDB Cloud. Tasks such as monitoring, updating or managing the system, and even creating a default backup task per database, are the responsibility of our team and are handled without a fuss behind the scenes.

The DBaaS package includes encryption over the wire as well as encryption at rest -- you can also deploy encrypted databases with your own encryption keys.

The idea of dynamic scaling in managed systems has been a core tenet of cloud architecture, and RavenDB Cloud fits right into this model. In just a few clicks you can provision a cluster, deploy it to anywhere in the world, and start working. If you need more capacity, a simple click will provide you with more resources -- without your code or your customers even being aware.

If you have a Black Friday event or a special discount day coming, you can scale up your system ahead of time. If during that day you are pleasantly surprised with more than anticipated activity, you can power through the spike by scaling immediately and then reduce the capacity back to normal levels once the peak is traversed.

But RavenDB Cloud is not just about reducing the overhead of running databases -- we built RavenDB Cloud to save you money. Because RavenDB is a highly tuned system, you can handle your load with fewer resources, which also translates into more savings down the line.

Pricing Model

Our cloud offering has an on-demand subscription as well as discounts for yearly contracts. A 10% introductory discount is now available, lasting for the rest of this year.

Some hosted database solutions charge you per request or per maximum utilized capacity. Such solutions are complex to understand when it comes to billing time. I intensely dislike complexity -- especially when it comes to bills! Price predictability is important to us. With RavenDB Cloud, you pay for your resources at a flat and known rate to make sure there won’t be any surprises at the end of the month.

The RavenDB instances can be configured a-la-carte, according to your needs. A tailored solution with geo distributed clusters, support for on-premise & cloud integration, widely distributed deployments and custom instance types is only an email away.

RavenDB Cloud has several tiers of clusters available. If you are running a small to medium sized application, you can go with our basic instances and enjoy a reduced cost. For more demanding workloads, you can use more performant instances that have full access to the cloud resources to get maximum performance.

RavenDB Cloud also has a completely free option. Go to cloud.ravendb.net and select the free option. You’ll have your own secured, managed and hassle-free instance of RavenDB in moments. Go ahead and try it out right now.

What about RavenHQ?

RavenHQ has been providing managed RavenDB instances since 2012. It will continue offering RavenDB hosting for version 3.5 and earlier. RavenDB Cloud will provide managed clusters for RavenDB 4.2 and up.

Migrating from a RavenDB 4.x instance hosted on RavenHQ to RavenDB Cloud is simple and can be done in just a few clicks.

Not only will migrating to RavenDB Cloud not cost you anything extra, in most cases your new managed database service will be cheaper than what you had before.

time to read 6 min | 1051 words

A process running on your system is typically a black box. You don’t have a lot of insight into what is going on inside it. Oh, there are all sorts of tools you can use to infer things (looking at system calls, memory consumption, network connections, etc), but by default… it is a mystery.

RavenDB is a database. It is meant to run unattended for long durations and is designed to mostly run itself. That means that when you look at it, you want to be able to figure out exactly what is going on with the system as soon as possible. To that end, we have included a lot of features inside RavenDB that expose the internal state of the system. From tracing each I/O and its duration to providing detailed statistics about costs and amount of effort invested in various tasks.

These features are invaluable to figure out exactly what is going on in RavenDB at a particular point in time. Of course, nothing beats the ability to open a debugger and inspect the state of the system. But that is something that you can only really do in development. It is not something that can be done in production, obviously. Or can it?

Since RavenDB 3.0, we have actually had just this feature: the ability to ask RavenDB to capture and display its own state in a format that should be very familiar to developers. When we created RavenDB 4.0, we were able to carry this feature over on Windows (at some cost), but it was a complete non-starter on Linux.

On Windows, a process can debug another process if they belong to the same user (somewhat of an oversimplification, but good enough). On Linux, the situation is a lot more complex. A process can usually only debug another process if the debugger is running as root or is the parent process of the debuggee process.

Another complication was that we are using ClrMD, a wonderful library that allows us to introspect live processes (among many other things). It did not have support for Linux until about a month ago… as soon as we had the most basic of support there, we jumped into action, seeing how we could bring this feature to Linux as well. A lot of our users are running production systems on Linux, and the ability to look at the system, go: “Hmm. I wonder what this is doing”, and then be able to tell, is something that we consider a major boost to RavenDB.

It took a lot of fighting and learning a lot more about how debugging permissions work on Linux than I ever wanted to know. But we got it working (details below). You can see what this looks like on a live Linux server:

[Image: live stack traces from a production Linux server, showing an indexing thread working on spatial data]

As you can see, there is an indexing thread here doing some work on spatial data. We are going to enhance this view further with the ability to see CPU times as well as job names. The idea is that this is something that you will look at and get enough insight to not need to check the logs or try to infer what is going on. You could just tell.

Now, for the gory details of how this works. We changed the implementation on both Windows and Linux to use passive attach to the process, which is much faster. The first thing we tried, once we moved to passive attach, was to debug ourselves.
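Roughly, the self-debugging idea looks like this (a sketch using the ClrMD 1.x era API from memory; the exact overloads and property names may differ):

using System;
using System.Diagnostics;
using Microsoft.Diagnostics.Runtime;

static class SelfStackTraces
{
    static void Main()
    {
        int pid = Process.GetCurrentProcess().Id;

        // Passive attach: don't suspend or modify the target, just read its memory.
        using (var dataTarget = DataTarget.AttachToProcess(pid, 5000, AttachFlag.Passive))
        {
            ClrRuntime runtime = dataTarget.ClrVersions[0].CreateRuntime();
            foreach (ClrThread thread in runtime.Threads)
            {
                Console.WriteLine($"Thread {thread.OSThreadId}:");
                foreach (ClrStackFrame frame in thread.StackTrace)
                    Console.WriteLine($"    {frame.DisplayString}");
            }
        }
    }
}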

This is a nice enough feature, and quite elegant. We debug ourselves, pull the stack traces and display the data. Unfortunately, this doesn’t work on Linux. A process cannot debug itself. All debugging in Linux is based on the ptrace() system call, and the permissions for that are as described above. I can’t imagine there being security implications to letting a process debug itself. After all, it can already do anything the process can do, because it is the process. But I guess this is an esoteric enough scenario that no one noticed, and the reaction was: use a workaround.

The usual workaround is to have a process that would spawn RavenDB and then it would be able to debug it. That is… possible, but it would be a major shift in how we deploy, not something that I wanted to do. There is also the ptrace_scope flag, which is supposed to control this behavior. In my tests, at least, disabling the security checks via this flag did absolutely nothing.

Running as root worked just fine, of course. And then the process crashed. On Linux, when trying to debug your own process, there seems to be an interesting interaction between the debugger and debuggee if an exception is thrown. To the point where it will corrupt the CoreCLR state and kill the process. That was a fun bug to trace, sort of. Linux has an escape hatch in the form of the PR_SET_PTRACER option that can be used. However, you can’t designate your own process, unfortunately. That, combined with the hard crashes, made self-debugging a non-starter.

But I still want this feature, and without changing too much about how we are doing things.

Here is what we ended up doing. We have a separate process just to capture the stack trace. When you ask RavenDB to get its stack trace, it will spawn this process, but ask it to wait. It will then grant the new process the permissions necessary to debug RavenDB and signal it to continue. At this point, the debugger child process will capture the stack trace and send it back to RavenDB. RavenDB will reset the permission, enhance the stack trace with additional information that we can provide from inside the process and display it to the user.
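In code, the handshake looks more or less like this. This is a minimal sketch: the helper's name and arguments are hypothetical, and I'm assuming the permission grant is done via prctl(PR_SET_PTRACER), which is the Linux mechanism for allowing a specific process to ptrace you:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class StackTraceCapture
{
    private const int PR_SET_PTRACER = 0x59616d61; // 'Yama', from linux/prctl.h

    [DllImport("libc", SetLastError = true)]
    private static extern int prctl(int option, ulong arg2, ulong arg3, ulong arg4, ulong arg5);

    public static string CaptureOwnStackTraces()
    {
        // 1. Spawn the dedicated debugger tool and tell it to wait for a signal.
        var debugger = Process.Start(new ProcessStartInfo
        {
            FileName = "stack-trace-helper",   // hypothetical helper binary
            Arguments = $"--pid {Process.GetCurrentProcess().Id} --wait",
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true
        });

        // 2. Allow that specific child to ptrace() this process.
        prctl(PR_SET_PTRACER, (ulong)debugger.Id, 0, 0, 0);

        // 3. Signal the child to go ahead, then read the captured stack traces.
        debugger.StandardInput.WriteLine("go");
        string stackTraces = debugger.StandardOutput.ReadToEnd();
        debugger.WaitForExit();

        // 4. Reset the permission now that we are done.
        prctl(PR_SET_PTRACER, 0, 0, 0, 0);
        return stackTraces;
    }
}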

The actual debugger process is also marked with setcap to provide it with the additional permissions it needs. This separation means that we isolate these permissions to a single purpose tool that can be invoked and closed, without increasing the attack surface of RavenDB.

The end result is that you can walk to a production RavenDB server, running on Windows or Linux, and get better information than if you just attached to it with the debugger.

time to read 1 min | 173 words

This hit my email, and given the number of questions that our support team fields on the topic, I wanted to make sure it is widely known. You can now use RavenDB 4.2 as the backing store for your NServiceBus systems.

That, in turn, means that you can now use RavenDB 4.2 in your systems in general, which is going to be a much nicer experience overall and put you back on the supported path.

A note about support:

  • RavenDB 3.0 is no longer supported.
  • RavenDB 3.5 is supported until the end of 2020.

Many of the users who use RavenDB with NServiceBus tend to be larger enterprises, where updates to the technology stack may take a while. So it is better to start these things early.

And as an added incentive, based on over 18 months in the field, just by moving from RavenDB 3.5 to RavenDB 4.2 you are going to get a ten-fold performance increase.

time to read 1 min | 102 words

I started talking about this, and all of a sudden I got 45 minutes of pretty cool demo, even if I’m saying it myself.

In the video, I’m setting up RavenDB on a Raspberry Pi, connecting to it via the Python client API, connecting it to the cloud and starting to work with a widely distributed environment with bi-directional replication, ending up using subscriptions on the Raspberry Pi to process data coming from the cloud.

I would love your feedback on the video and the type of deployment that is outlined here.

time to read 8 min | 1502 words

One of the silent effects of moving to the cloud is that it makes the relation between performance and $$$ costs evident. In your own data center, it is easy to lose track of the connection between the yearly DC budget and the application’s performance. But on the cloud, you can track it quite easily.

That immediately leads people to try to optimize their costs and use the minimal amount of resources they can get away with. This is great for people who believe in efficient software.

One of the results of the focus on price and performance has been the introduction of burstable cloud instances. These instances allow you to host machines that do not need the full machine resources to run effectively. There are many systems whose normal runtime cost is minimal, with only occasional spikes. In AWS, you have the T series and Azure has the B series. Given how often RavenDB is deployed on the cloud, it shouldn’t surprise you that we are frequently being run on such instances. The cost savings can be quite attractive, around 15% in most cases. And in the case of small instances, that can be even more impressive.

RavenDB can run quite nicely on a Raspberry Pi, or a resource starved container, and for many workloads, that makes sense. Looking at AWS in this case, consider the t3a.medium instance with 2 cores and 4 GB at 27.4$ / month vs. a1.large (the smallest non burstable instance) with the same spec (but an ARM machine) at 37.2$ per month. For that matter, a t3a.small with 2 cores and 2 GB of memory is 13.7$. As you can see, the cost savings add up, and it makes a lot of sense to want to use the most cost effective solution.

Enter the problem with burstable instances. They are bursty. That means that you have two options when you need more resources. Either you end up with throttling (reducing the amount of CPU that you can use) or you are being charged for the additional extra CPU power you used.

Our own production systems, running all of our sites and backend systems, run on a cluster of 3 t3.large instances. As I mentioned, the cost savings are significant. But what happens when you go above the limit? In our production systems, for the past 6 months, we have never had an instance where RavenDB used more than the provided burstable performance. It helps that the burstable machines allow us to accrue CPU time when we aren’t using it, but overall it means that we are doing a good job of handling requests efficiently. Here are some metrics from one of the nodes in the cluster during a somewhat slow period (the range is usually 20 – 200 requests / sec).

[Image: request rate, latency and resource utilization metrics from one of the cluster nodes]

So we are pretty good in terms of latency and resource utilization; that’s great.

However, the immediate response to seeing that we aren’t hitting the system limits is… to reduce the machine size again, to pay even less. There is a lot of material on cost optimization in the cloud that you can read, but that isn’t the point of this post. One of the more interesting choices you can make with burstable instances is to ask to not go over the limit. If your system is using too much CPU, just take it away until it is done. Some workloads are just fine for this, because there is no urgency. Some workloads, such as servicing requests, are less kind to this sort of setup. Regardless, this is a common enough deployment model that we have to take it into account.

Let’s consider the worst case scenario from our perspective. We have a user that runs the cheapest possible instance, a t3a.nano with 2 cores and 512 MB of RAM, costing just 3.4$ a month. The caveat with this instance is that you have just 6 CPU credits / hour to your name. A CPU credit is basically 100% CPU for 1 minute. Another way of looking at this is that a t3a.nano instance has a budget of 360 CPU seconds per hour. If it uses more than that, it is charged a hefty fee (roughly ten times the machine’s hourly cost). So we have users that disable the ability to go over the budget.

Now, let’s consider what is going to happen to such an instance when it hits an unexpected load. In the context of RavenDB, it can be that you created a new index on a big collection. But something seemingly simple, such as generating a client certificate, can also have a devastating impact on such an instance. RavenDB generates client certificates with 4096-bit keys. On my machine (a nice, powerful dev machine), that can take anywhere from 300 – 900 ms of CPU time and cause a noticeable spike in one of the cores:

[Image: CPU spike on the dev machine while generating a client certificate]

On the nano machine, we have measured key creation time of over six minutes.
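If you want to see what this costs on your own hardware, here is a quick way to measure the key generation step, which is the expensive part (this is not RavenDB's certificate code):

using System;
using System.Diagnostics;
using System.Security.Cryptography;

static class KeyGenCost
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        using (var rsa = RSA.Create(4096))
        {
            // Exporting the parameters forces the key to actually be generated.
            rsa.ExportParameters(includePrivateParameters: false);
        }
        Console.WriteLine($"4096-bit RSA key generation took {sw.ElapsedMilliseconds} ms");
    }
}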

The answer isn’t that my machine is 800 times more powerful than the cloud machine. The problem is that this takes enough CPU time to consume all the available credits, which causes throttling. At this point, we are in a sad state. Any additional CPU credits (and time) that we earn go directly to the already existing computation. That means that any other CPU time is pushed back. Way back. This happens at a level below the operating system (at the hypervisor), so the OS isn’t even aware of it. What is happening from the point of view of the OS is that the CPU is suddenly much slower.

All of this makes sense and is expected given the burstable nature of the system. But what is the observed behavior?

Well, the instance appears to be frozen. All the cloud metrics will tell you that everything is green, but you can’t access the system (no CPU to accept new connections), you can’t SSH into it (same), and if you have an existing SSH connection, it appears to have frozen. Measuring performance from inside the machine shows that everything is cool. One CPU is busy (expected: generating the certificate, doing indexing, etc). Another is idle. But the system behaves as if it has no CPU available, which is exactly what is going on, except in a way that we can’t observe.

RavenDB goes to a lot of trouble to be a nice citizen and a good neighbor. This includes scheduling operations so the underlying OS will always have the required resources to handle additional load. For example, background tasks are run with lowered priority, etc. But when running on a burstable machine that is being throttled… well, from the outside it looked like certain trivial operations would just take the entire machine down, and it wouldn’t be recoverable short of a hard reboot.

Even when you know what is going on, it doesn’t really help you, because from inside the machine there is no way to access the cloud metrics with good enough precision to take action.

We have a pretty strong desire not to get into this kind of situation, so we implemented what I can only refer to as countermeasures. When you are running on a burstable instance, you can let RavenDB know what your CPU credits situation is, at which point RavenDB will actively monitor the machine and keep its own tally of the outstanding CPU credits. When we notice that the CPU credits are running short, we’ll start proactively halting internal processes to give the machine room to recover from the shortage.

We’ll move ETL processes and ongoing backups to other nodes in the cluster and even pause indexing in order to recover the CPU time we need. Here is an example of what you’ll see in such a case:

[Image: RavenDB pausing internal processes in order to recover CPU credits]

One of the nice things about the kind of promises RavenDB makes about its indexes is that it is able to make these kinds of decisions without impacting the guarantees that we provide.

If the situation is still bad, we’ll start proactively rejecting requests. The idea is that instead of getting to the point where we are being throttled (and effectively down, as far as the world is concerned), we’ll handle a request by simply sending a 503 Service Unavailable response back. This is going to be very cheap to do, so it won’t put additional strain on our limited budget. At the same time, the RavenDB client treats this kind of error as a trigger for failover and will use another node to service the request.
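To make the idea concrete, here is a minimal sketch of the kind of bookkeeping involved. The class name, thresholds and configuration values are all hypothetical; AWS grants a t3a.nano 6 credits per hour, and a credit is a full core for one minute:

using System;

class CpuCreditsBalancer
{
    private readonly double _creditsPerHour;    // e.g. 6 for a t3a.nano, provided by the operator
    private readonly double _maxAccruedCredits; // accrual cap for the instance type
    private double _credits;

    public CpuCreditsBalancer(double creditsPerHour, double maxAccruedCredits, double initialCredits)
    {
        _creditsPerHour = creditsPerHour;
        _maxAccruedCredits = maxAccruedCredits;
        _credits = initialCredits;
    }

    // Called periodically (say, once a second) with the CPU usage observed
    // since the last call, expressed in core-seconds.
    public void Update(double coreSecondsUsed, double elapsedSeconds)
    {
        _credits += _creditsPerHour * (elapsedSeconds / 3600.0); // earn
        _credits -= coreSecondsUsed / 60.0;                      // spend: 1 credit = 60 core-seconds
        _credits = Math.Clamp(_credits, 0, _maxAccruedCredits);
    }

    // Background work (indexing, ETL, backups) is moved or paused first...
    public bool ShouldPauseBackgroundWork => _credits < _maxAccruedCredits * 0.25;

    // ...and only if things are really tight do we start answering with 503,
    // so that the client fails over to another node in the cluster.
    public bool ShouldRejectRequests => _credits < _maxAccruedCredits * 0.05;
}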

This was a pretty long post to explain that RavenDB is going to give you a much nicer experience on burstable machines, even if you have disabled the ability to burst over your budget.

time to read 2 min | 324 words

A few days ago I talked about how you can use RavenDB’s query functions to compute distances during queries.  On the one hand, I’m really happy that you can do that without having to wait for us to update RavenDB, on the other hand, I felt that this should really be us doing this. So I spent some time on spatial this week and we got a whole bunch of nice features out of it.

The first new option is the ability to natively get the distance from a location as part of the query:

[Image: a query natively projecting the distance as part of the results]

You no longer need JS tricks to get this information.

As you can see, I’m just projecting the distance here, not sorting or filtering by it. This can be nice if you need to take into account multiple spatial operations. “Find me the nearest coffee, but also show me how far it is from work”, for example.

The next spatial feature is shown in the following query, and I’m going to bet that you wouldn’t notice it if I didn’t mark it up:

[Image: a query sorting by distance, rounded up to 5 km increments]

What does this do? Well, this tells RavenDB that we want to sort the movies by distance, but to round it up to 5 km increments. This, in turn, means that the secondary sort, by Rating, is now much more meaningful.

The output of this query is a set of movies that are up to 5 km from me, then 5 – 10 km, 10 – 15 km, etc. And inside each bucket, the results are sorted by rating, descending.
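Conceptually, the ordering is equivalent to the following plain C# (this is not RavenDB's implementation, just the semantics of the bucketed sort):

using System;
using System.Collections.Generic;
using System.Linq;

record Movie(string Name, double DistanceInKm, double Rating);

static class DistanceBucketSort
{
    public static IEnumerable<Movie> Sort(IEnumerable<Movie> movies) =>
        movies
            .OrderBy(m => Math.Floor(m.DistanceInKm / 5) * 5) // 0-5 km, 5-10 km, 10-15 km, ...
            .ThenByDescending(m => m.Rating);
}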

This handles the very common issue of users not caring whether something is 1.2 or 1.5 km away and wanting to sort by “close, medium, far” and then by other factors as well.

time to read 14 min | 2615 words

We got a few requests for some guidance on how to optimize RavenDB insert rate. Our current benchmark is standing at 135,000 inserts/sec on a sustained basis, on a machine that costs less than 1,000$. However, some users tried to write their own benchmarks and got far less (about 50,000 writes / sec). Therefore this post, in which I’m going to do a bunch of things and see if I can make RavenDB write really fast.

I’m sorry, this is likely to be a long post. I’m going to be writing this as I’m building the benchmark and testing things out. So you’ll get a stream of consciousness. Hopefully it will make sense.

Because of the size of this post, I decided to move most of the code snippets out. I created a repository just for this post, and I’m showing my steps as I go along.

Rules for this post:

  • I’m going to use the last stable version of RavenDB (4.2, at the time of writing)
  • Commodity hardware is hard to quantify, I’m going to use AWS machines because they are fairly standard metric and likely where you’re going to run it.
    • Note that this does mean that we’ll probably have less performance than if we were running on dedicated hardware.
    • Another thing to note (and we’ll see later) is that I/O rate on the cloud is… interesting topic.
  • No special system setup
    • Kernel config
    • Reformatting of hard disk
    • Changing RavenDB config parameters

The first thing to do is to figure out what we are going to write.

The test machine is:  t3a.xlarge with 4 cores, 16 GB RAM. This seemed like a fairly reasonable machine to test with. I’m using Ubuntu 18.04 LTS as the operating system.

The machine has an 8GB drive that I’m using to host RavenDB and a separate volume for the data itself. I created a 512GB gp2 volume (with 1536 IOPS) to start with. Here is what this looked like from inside the machine:

[Image: the disk layout as seen from inside the machine]

I’m including the setup script here for completeness; as you can see, there isn’t really anything here that matters.

Do note that I’m going quick & dirty here, without security; this is mostly so I can see what the impact of TLS on the benchmark is at a later point.

We are now pretty much ready, I think. So let’s take a look at the first version I tried. Writing 100,000 random user documents like the following:

[Image: a sample user document]

As you can see, that isn’t too big and shouldn’t really be too hard on RavenDB. Unfortunately, I discovered a problem: the write speed was horrible.

[Image: the initial, very slow, write speed results]

Oh wait, the problem exists between keyboard and chair: I was running that from my laptop, so every request actually had to travel about 10,000 km from client to server. That… is not a good thing.

Writing the data took almost 12 minutes. But at least this is easy to fix. I set up a couple of client machines in the same AZ and tried again. I’m using spot instances, so I got a t3.large instance and an m5d.large instance.

That gave me a much nicer number, although still far from what I wanted to have.

[Image: results from the cloud client machines]

On the cloud machines, this takes about 23 - 25 seconds. Better than 12 minutes, but nothing to write home about.

One of the reasons that I wanted to write this blog post is specifically to go through this process, because there are a lot of things that matter, and it sometimes can be hard to figure out what does.

Let’s make a small change in my code, like so:

[Image: the code change that removes the RavenDB call]

What this does is remove the call to RavenDB entirely. The only cost we have here is the cost of generating the data with the Bogus library. This time, the code completes in 13 seconds.

But wait, there are no RavenDB calls here, why does it take so long? Well, as it turns out, the fake data generation library has a non-trivial cost to it, which impacts the whole test. I changed things so that we generate 10,000 users and then use bulk insert to send them over and over again. That means that the time we measure is just the cost of sending the data over. With these changes, I got much nicer numbers:

[Image: results after pre-generating the documents]

While this is going on, by the way, there is an interesting observation to make about the node.

[Image: server resource utilization during the run]

You can see that while we have two machines trying to push data in as fast as they can, we have a lot of spare capacity. This is key, actually. The issue is where the bottleneck is, and we already saw that the problem is probably on the client. We improved our performance by over 300% simply by reducing the cost of generating the data, not by changing how we write to RavenDB. As it turns out, we are also leaving a lot of performance on the table because we are doing this single threaded. A lot of the time is actually spent on the client side, doing serialization, etc.

I changed the client code to use multiple threads and tried it again. By the way, you might notice that the client code is… brute forced, in a way. I intentionally did everything in the most obvious way possible, caring not at all about the structure of the code. I just wanted it to work, so there is no error handling, nothing sophisticated at all here.
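To give a sense of its shape, here is a minimal sketch of that client (the User fields, URLs and thread count are assumptions; the real code is in the repository mentioned above):

using System;
using System.Linq;
using System.Threading.Tasks;
using Bogus;
using Raven.Client.Documents;

public class User
{
    public string Name { get; set; }
    public string Email { get; set; }
}

static class InsertBenchmark
{
    static void Main()
    {
        using var store = new DocumentStore { Urls = new[] { "http://server:8080" }, Database = "bench" };
        store.Initialize();

        // Generate the fake data once, up front, so Bogus' cost doesn't pollute
        // the measurement - we only want to time sending the data over.
        var users = new Faker<User>()
            .RuleFor(u => u.Name, f => f.Name.FullName())
            .RuleFor(u => u.Email, f => f.Internet.Email())
            .Generate(10_000);

        // Brute force: a few threads, each pushing the same 10,000 users over
        // and over again through the bulk insert API (new instances, so each
        // gets its own id).
        var tasks = Enumerable.Range(0, 4).Select(_ => Task.Run(() =>
        {
            for (int round = 0; round < 10; round++)
            {
                using var bulkInsert = store.BulkInsert();
                foreach (var user in users)
                    bulkInsert.Store(new User { Name = user.Name, Email = user.Email });
            }
        })).ToArray();

        Task.WaitAll(tasks);
    }
}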

[Image: results with both client machines using 4 threads each]

This is with both client machines set up to use 4 threads each to send the data. It’s time to dig a bit deeper and see what is actually going on here. The t3.large machine has 2 cores, and looking into what it is doing while it has 4 threads sending data is… instructive…

[Image: CPU utilization on the t3.large client]

The m5d.large instance also has two cores, and is in a similar state:

[Image: CPU utilization on the m5d.large client]

Leaving aside exactly what is going on here (I’ll discuss this in more depth later in this post), it is fairly obvious that the issue here is on the client side: we are completely saturating the machine’s capabilities.

I created another machine to serve as a client, this time a c5.9xlarge, an instance that has 36 cores and is running a much faster CPU than the previous instances. This time, I used a single machine and just a single thread, and I got the following results:

[Image: results from a single thread on the c5.9xlarge client]

And at the same time, the server resource utilization was:

[Image: server resource utilization with a single client thread]

Note that this is when we have a single thread doing the work… what happens when we increase the load?

Given the disparity between the client (36 cores) and the server (just 4), I decided to start slow and told the client to use just 12 threads to bulk insert the data. The result:

[Image: results with 12 client threads]

Now we are talking, but what about the server’s resources?

[Image: server resource utilization with 12 client threads]

We got ourselves some spare capacity to throw around, it seems.

At this point, I decided to go all in and see what happens when I’m using all 36 cores for this. As it turns out, we can go faster, which is great, but the rise isn’t linear, unfortunately.

[Image: results when using all 36 client cores]

At this point, I mostly hit the limits. Regardless of how much load I put on the client, it wasn’t able to go any higher than this. I decided to look at what the server is doing. Write speed for RavenDB is almost entirely determined by the ACID nature of the database: we have to wait for the disk to confirm the write. Because this is such an important factor in our performance, we surface all of that information to you. In the database’s stats page, you can go into the IO Stats section, like so:

[Image: the IO Stats section in the database’s stats page]

The first glance might be a bit confusing, I’ll admit. We tried to pack a lot of data into a single view.

[Image: the IO Stats view]

The colors are important. Blue marks writes to the journal, which are the thing that would usually hold up the transaction commit. The green (data write / flush) and red (sync) are other types of disk operations, shown here to allow you to see if there is any correlation. For example, if you have a big sync operation, it may suck up all the I/O bandwidth, and your journal writes will be slow. With this view, you can clearly correlate that information. The brighter the color, the bigger the write; the wider the entry, the more time it took. I hope that this is enough to understand the gist of it.

Now, let’s zoom in. Here you can see a single write, for 124KB, that took 200ms.

[Image: a single 124KB journal write that took 200ms]

Here is another one:

[Image: another slow journal write]

These are problematic for us, because we are stalling. We can’t really do a lot while we are waiting for the disk (actually, we can, we start processing the next tx, but there is a limit to that as well). That is likely causing us to wait when we read from the network, and is likely the culprit. You might have noticed that both slow writes happened in conjunction with a sync (the red square below), which indicates that we might be seeing latency because both operations go to the same location at the same time.

On the other hand, here is another section where we have two writes very near one another, and both are very slow, without a concurrent sync. So interference from the sync is a theory, not a proven fact.

[Image: two slow writes with no concurrent sync]

We can go and change the gp2 drive we have to an io1 drive with provisioned IOPS (1536, same as the gp2). That would cost me 3 times as much, so let’s see if we can avoid this. Journals aren’t meant to be forever. They are used to maintain durability of the data; once we have synced the data to disk, we can discard them.

I created an 8 GB io2 drive with 400 IOPS and attached it to the server instance and then set it up:

Here is what this ended up as:

[Image: the new journal drive attached to the instance]

Now, I’m going to set up the journals’ directory for this database to point to the new drive, like so:

Now that we have a better separation of the journals and the data, let’s see what this gives us. Not much, it seems: I’m seeing roughly the same performance as before, and the IO stats tell the same story.

[Image: IO stats after moving the journals to the new drive]

Okay, time to see what we can do when we change instance types. As a reminder, so far, my server instance was t3a.xlarge (4 cores, 16 GB). I launched a r5d.large instance (2 cores, 16 GB) and set it up with the same configuration as before.

  • 512 GB gp2 (1536 IOPS) for data
  • 8GB io2 (400 IOPS) for journals

Here is what I got when I started hammering the machine:

[Image: results on the r5d.large server]

This is interesting, because you can see a few discrepancies:

  • The machine feels faster, much faster
  • We are now bottlenecked on CPU, but note the number of writes per second
  • This is when we reduced the number of cores by half!

That seems pretty promising, so I decided to switch instances again. This time to an i3en.xlarge instance (4 cores, 30 GB, 2 TB NVMe drive). To be honest, I’m mostly interested in the NVMe drive.

Here are the results:

[Image: results on the i3en.xlarge with the local NVMe drive]

As you can see, we are running pretty smoothly with 90K – 100K writes per second sustained.

On the same i3en.xlarge system, I attached the two volumes (512GB gp2 and 8GB io2) with the same setup (journals on the io2 volume), and I’m getting some really nice numbers as well:

[Image: results on the i3en.xlarge using the gp2 and io2 volumes]

And now, the hour is nearing 4AM, and while I had a lot of fun, I think this is the time to close this post. The deciding factor in write performance for RavenDB is the disk, but we care a lot more about latency than throughput for these kinds of operations.

A single t3a.xlarge machine was able to hit a peak of 77K writes/second, and by changing the instance type and getting better IO, we were able to push that to 100K writes/sec. Our current benchmark is sitting at 138,000 writes/second, by the way, but it isn’t running on a virtual machine, it is running on physical hardware. Probably the most important part of that machine is the fact that it has an NVMe drive (latency, again).

However, there is one question that still remains. Why did we have to spend so much compute power on generating the bulk insert operations? We had to hit the server from multiple machines, or use 36 concurrent threads, just to be able to push enough data to make the server break a sweat.

To answer this, I’m going to do the Right Thing and look at the profiler results. The problem is in the client side, so let’s profile the client and see what is taking so much computation horse power. Here are the results:

[Image: client-side profiler results]

Serialization is the major cost here. That is why we need to parallelize the work; otherwise, as we saw, RavenDB is basically going to sit idle.

The reason for this "issue" is that JSON.Net is a powerful library with many features, but it does have a cost. For bulk insert scenarios, you typically have a very well defined set of documents, and you don't need all this power. For this reason, RavenDB exposes an API that allows you to fully control how serialization works for bulk insert:

DocumentStore.Conventions.BulkInsert.TrySerializeEntityToJsonStream

You can use that to significantly speed up your insert processes.
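Here is a hedged sketch of how that convention might be wired up. Only the convention's name comes from the client API above; the delegate's exact signature is an assumption, and the hand-written JSON skips escaping for brevity:

// store is an initialized DocumentStore; User is the document type we bulk insert.
store.Conventions.BulkInsert.TrySerializeEntityToJsonStream = (entity, metadata, writer) =>
{
    if (entity is User user)
    {
        // Write the JSON by hand for this well known shape, bypassing the
        // general purpose (JSON.Net based) serializer.
        writer.Write("{\"Name\":\"");
        writer.Write(user.Name);
        writer.Write("\",\"Email\":\"");
        writer.Write(user.Email);
        writer.Write("\"}");
        return true;  // handled, skip the default serialization
    }
    return false;     // fall back to the default serialization
};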

time to read 3 min | 426 words

RavenDB has had support for spatial queries for a long time. In RavenDB 4.0 we did a whole bunch of work to make spatial queries better. In particular, we separated the concepts of searching and ordering for spatial queries. In most cases, if you are doing a spatial query, you’ll want to sort the results by their distance. The classic example is: “Give me the Pizza stores within 5 km of me”. I’ll usually also want to see them listed by their distance. But there are other ways to go about it. For example, if I want to see a concert by Queen, I want to sort them by distance, but I don’t want to do any spatial filtering.

It gets interesting when you combine different spatial operations in the same query. For example, consider the following query, which searches for a house in a particular school district while finding the one that is nearest to the office. The two spatial queries aren’t related to one another at all.

from index 'Houses/ByLocation'
where spatial.within(Location, spatial.wkt($schoolDistrict))
order by spatial.distance(Location, spatial.wkt($office))

One thing that we did not do, however, was report the distance back to the user. In other words, during the query we compute the distance to the provided location, but we aren’t actually returning it, just using it for sorting. This is mostly an oversight, and we’ll be fixing that. The thought at the time was that since you already get the document from the server, you already have the location data and can compute the distance on the client side. However, that isn’t ergonomic for users, who want to just get the data and be done with it. As I said, we’ll be fixing this, but in the meantime, you can use another of RavenDB’s capabilities to handle this, by defining a JS function and projecting it directly from the server.

Let’s take a look at the following query:
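(The index, field names and parameters here are illustrative; the point is declaring a JS function and projecting its result from the server.)

declare function distanceInKm(lat1, lng1, lat2, lng2) {
    var toRad = function (v) { return v * Math.PI / 180; };
    var R = 6371; // Earth radius in km
    var dLat = toRad(lat2 - lat1), dLng = toRad(lng2 - lng1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLng / 2) * Math.sin(dLng / 2);
    return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}
from index 'Houses/ByLocation' as h
where spatial.within(h.Location, spatial.wkt($schoolDistrict))
order by spatial.distance(h.Location, spatial.wkt($office))
select {
    Address: h.Address,
    Price: h.Price,
    DistanceToOffice: distanceInKm(h.Location.Latitude, h.Location.Longitude, $officeLat, $officeLng)
}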

This query allows us to project the details we need to render the results table to the user directly, without needing to do anything on the client side. It is a good enough solution for right now, and I’m really happy that RavenDB is flexible enough to provide this to users without us needing to do anything. That said, I don’t like the ergonomics, and we intend to basically make this kind of function into an intrinsic operation inside of RavenDB.
