Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 3 min | 487 words

RavenDB Cloud has a whole bunch of new features that were quietly launched over the past few months. I discuss them in this post. It turns out that the team keeps on delivering new stuff, faster than I can write about it.

The new auto-scaling feature is a particularly interesting one because it is simple to understand and has some useful implications for production.

You need to explicitly enable auto-scaling on your cluster. Here is what that looks like:

Once you've enabled auto-scaling (which usually takes under a minute), you can click the Configure button to set your own policies:

Here is what this looks like:

The idea is very simple: we routinely measure the load on the system, and if we detect CPU usage above a high threshold for a sustained period, we trigger scaling to the next tier (or maybe higher, see the Upscaling / Downscaling step options) to provide additional resources to the system. If there isn't enough load (as measured by CPU usage), we downscale back to the lowest instance type.

Conceptually, this is a simple setup. You use a lot of CPU, and you get a bigger machine that has more resources to use, until it all balances out.
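The decision loop behind that can be sketched in a few lines. This is purely illustrative and not RavenDB Cloud's actual implementation; the `ScalingPolicy` record, the tier numbers, and the thresholds are all invented for the example:

```csharp
using System;

// Hypothetical auto-scaling decision: scale up after sustained high CPU,
// drift back down after sustained low CPU, clamped to the allowed tier range.
var policy = new ScalingPolicy(HighCpu: 80, LowCpu: 20,
                               Sustained: TimeSpan.FromMinutes(5), Step: 1);

// 92% average CPU, held for 10 minutes, on tier 2 of 1..5: scale up one step.
Console.WriteLine(NextTier(2, 1, 5, avgCpu: 92,
                           heldFor: TimeSpan.FromMinutes(10), policy));

static int NextTier(int currentTier, int minTier, int maxTier,
                    double avgCpu, TimeSpan heldFor, ScalingPolicy policy)
{
    if (avgCpu >= policy.HighCpu && heldFor >= policy.Sustained)
        return Math.Min(currentTier + policy.Step, maxTier); // e.g. PB20 -> PB30
    if (avgCpu <= policy.LowCpu && heldFor >= policy.Sustained)
        return Math.Max(currentTier - policy.Step, minTier); // scale back down
    return currentTier; // load is in the normal band: stay put
}

record ScalingPolicy(double HighCpu, double LowCpu, TimeSpan Sustained, int Step);
```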

Now, let’s talk about the implications of this feature. To start with, it means you only pay based on your actual load, and you don’t need to over-provision for peak load.

The design of this feature and RavenDB in general means that we can make scale-up and scale-down changes without any interruption in service. This allows you to let auto-scaling manage the size of your instances.

In the image above, you may have noticed that I’m using the PB line of products (PB10 … PB50). That stands for burstable instances, which consume CPU credits when in use. How this interacts with auto-scaling is really interesting.

As you use more CPU, you exhaust your CPU credits, and your CPU usage stays high. At that point, auto-scaling kicks in and moves you to a higher tier, which gives you both a larger baseline of CPU credits and a faster credit accrual rate.

Together with zero downtime upscaling and downscaling, this means you can benefit from the burstable instances' lower cost without having to worry about running out of resources.
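To make the credit mechanics concrete, here is a toy model of a burstable instance draining its balance. Every number here is invented for illustration; real balances and accrual rates depend on the tier:

```csharp
using System;

// Toy model of burstable-instance CPU credits (illustrative numbers only):
// the instance earns credits at a fixed rate and spends them while bursting.
double credits = 60;          // assumed starting balance
double accrualPerMinute = 6;  // assumed earn rate for a small burstable tier
double burnPerMinute = 10;    // assumed spend rate while well above baseline

for (int minute = 0; minute < 20 && credits > 0; minute++)
    credits += accrualPerMinute - burnPerMinute; // net -4 credits per minute

// Credits exhausted: this is the point where auto-scaling would move you to a
// higher tier with a larger balance and a faster accrual rate.
Console.WriteLine(credits <= 0 ? "scale up" : "keep bursting");
```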

Note that auto-scaling only applies to instances within the same family. So if you are running on burstable instances, you'll scale within the burstable line, and if you are running on the P series (non-burstable), auto-scaling will use P instances.

We offer auto-scaling for development instances as well. However, a development instance contains only a single RavenDB node, so when auto-scaling triggers, the instance will be inaccessible for up to two minutes while it scales. That isn't an issue for the production tier.

time to read 4 min | 771 words

In RavenDB, we really care about performance. That means that our typical code does not follow idiomatic C#. Instead, we use everything the framework and the language give us to eke out additional performance. Recently we ran into a bug that was quite puzzling. Here is a simple reproduction of the problem:


using System.Runtime.InteropServices;

var counts = new Dictionary<int, int>();
var totalKey = 10_000;

ref var total = ref CollectionsMarshal.GetValueRefOrAddDefault(
    counts, totalKey, out _);

for (int i = 0; i < 4; i++)
{
    var key = i % 32;
    ref var count = ref CollectionsMarshal.GetValueRefOrAddDefault(
        counts, key, out _);
    count++;

    total++;
}

Console.WriteLine(counts[totalKey]);

What would you expect this code to output? We are using two important features of C# here:

  • Value types (in this case, an int, but the real scenario was with a struct)
  • CollectionsMarshal.GetValueRefOrAddDefault()

The latter method lets us avoid performing two dictionary lookups: one to get the value if it exists, and another to add or modify it.
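For example (the keys here are made up, but the `CollectionsMarshal` API is real): the classic read-then-write pattern hashes the key twice, while `GetValueRefOrAddDefault` does a single lookup and hands back a ref you can mutate in place:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

var counts = new Dictionary<string, int>();

// The idiomatic version performs two lookups: one for the read, one for the write.
if (counts.TryGetValue("reads", out var current))
    counts["reads"] = current + 1;
else
    counts["reads"] = 1;

// GetValueRefOrAddDefault performs one lookup: it adds a default(int) entry if
// the key is missing and returns a ref to the value slot.
ref var value = ref CollectionsMarshal.GetValueRefOrAddDefault(
    counts, "writes", out bool existed);
value++; // existed is false here, so this turns the default 0 into 1

Console.WriteLine($"{counts["reads"]} {counts["writes"]}");
```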

If you run the code above, it will output the number 2.

That is not expected, but when I sat down and thought about it, it made sense.

We are keeping track of the reference to a value in the dictionary, and we are mutating the dictionary.

The documentation for the method very clearly explains that this is a Bad Idea. It is an easy mistake to make, but still a mistake. The challenge here is figuring out why this is happening. Can you give it a minute of thought and see if you can figure it out?

A dictionary is basically an array that you access using an index (computed via a hash function); that is all. So if we strip everything away, the code above can be seen as:


var buffer = new int[2];
ref var total = ref buffer[0];

We simply have a reference to the first element in the array, that’s what this does behind the scenes. And when we insert items into the dictionary, we may need to allocate a bigger backing array for it, so this becomes:


var buffer = new int[2];
ref var total = ref buffer[0];

var newBuffer = new int[4];
buffer.CopyTo(newBuffer, 0);
buffer = newBuffer;

total = 1;                // writes into the old two-element array
var newTotal = buffer[0]; // reads from the new array: still 0

In other words, the total variable points to the first element of the two-element array, but we allocated a new array (and copied all the values over). That is why the code above gives the wrong result. Makes perfect sense, and yet it was quite puzzling to figure out.
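One way to fix the original snippet is to re-acquire the ref after any operation that may grow the dictionary, so the ref always points into the current backing array. This is a sketch of that fix applied to the reproduction above:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

var counts = new Dictionary<int, int>();
var totalKey = 10_000;

for (int i = 0; i < 4; i++)
{
    var key = i % 32;
    ref var count = ref CollectionsMarshal.GetValueRefOrAddDefault(
        counts, key, out _);
    count++;

    // Re-acquire the ref on every use: the insertion above may have resized
    // the dictionary's backing array, invalidating refs taken before it.
    ref var total = ref CollectionsMarshal.GetValueRefOrAddDefault(
        counts, totalKey, out _);
    total++;
}

Console.WriteLine(counts[totalKey]);
```

This trades one extra lookup per iteration for correctness; the ref is never held across a mutation, which is exactly what the documentation warns against.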
