time to read 15 min | 2973 words

RavenDB is a transactional database; we care deeply about ACID. The D in ACID stands for durability, which means that to acknowledge a transaction, we must write it to a persistent medium. Writing to disk is expensive; writing to the disk and ensuring durability is even more expensive.

After seeing some weird performance numbers on a test machine, I decided to run an experiment to understand exactly how durable writes affect disk performance.

A few words about the term durable writes. Disks are slow, so we use buffering & caches to avoid going to the disk. But a write to a buffer isn’t durable. A failure could cause it to never hit a persistent medium. So we need to tell the disk in some way that we are willing to wait until it can ensure that this write is actually durable.

This is typically done using either fsync or the O_DIRECT | O_DSYNC flags, so that is what we are testing in this post.
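
At the syscall level, those two options look roughly like the following sketch. This is my own illustration, not the benchmark code used below; it just shows the two durability mechanisms against the same /data/test path I use in the dd runs later.


#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    // Option 1: O_DIRECT | O_DSYNC, so every write() returns only after the
    // device has acknowledged the data as durable.
    int fd = open("/data/test", O_WRONLY | O_CREAT | O_DIRECT | O_DSYNC, 0666);
    if (fd < 0) return 1;

    // O_DIRECT requires an aligned buffer (and aligned sizes).
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) return 1;
    memset(buf, 0, 4096);
    if (write(fd, buf, 4096) < 0) return 1;
    close(fd);

    // Option 2: plain buffered writes, then fsync() to force them to the medium.
    fd = open("/data/test", O_WRONLY);
    if (fd < 0) return 1;
    if (write(fd, buf, 4096) < 0) return 1;
    if (fsync(fd) < 0) return 1;  // this is the durability point
    close(fd);
    free(buf);
    return 0;
}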

I wanted to test things out without any of my own code, so I ran the following benchmark.

I pre-allocated a file and then ran the following commands.

Normal writes (buffered) with different sizes (256 KB, 512 KB, etc.):


dd if=/dev/zero of=/data/test bs=256K count=1024
dd if=/dev/zero of=/data/test bs=512K count=1024

Durable writes (force the disk to acknowledge them) with different sizes:


dd if=/dev/zero of=/data/test bs=256k count=1024 oflag=direct,sync
dd if=/dev/zero of=/data/test bs=512k count=1024 oflag=direct,sync

Under the hood, the durable variant opens the file using:


openat(AT_FDCWD, "/data/test", O_WRONLY|O_CREAT|O_TRUNC|O_SYNC|O_DIRECT, 0666) = 3

I got myself an i4i.xlarge instance on AWS and started running some tests. That machine has a local NVMe drive of about 858 GB, 32 GB of RAM, and 4 cores. Let’s see what kind of performance I can get out of it.

Write size | Total writes | Buffered writes
256 KB     | 256 MB       | 1.3 GB/s
512 KB     | 512 MB       | 1.2 GB/s
1 MB       | 1 GB         | 1.2 GB/s
2 MB       | 2 GB         | 731 MB/s
8 MB       | 8 GB         | 571 MB/s
16 MB      | 16 GB        | 561 MB/s
2 MB       | 8 GB         | 559 MB/s
1 MB       | 1 GB         | 554 MB/s
4 KB       | 16 GB        | 557 MB/s
16 KB      | 16 GB        | 553 MB/s

What you can see here is that writes are really fast when buffered. But when I hit a certain size (above 1 GB or so), we probably start having to write to the disk itself (which is NVMe, remember). Our top speed is about 550 MB/s at this point, regardless of the size of the buffers I’m passing to the write() syscall.

I’m writing here using cached I/O, which, as a database vendor, isn’t something I really care about. What happens when we run with direct & sync I/O, the way I would with a real database? Here are the numbers for the i4i.xlarge instance for durable writes.

Write size | Total writes | Durable writes
256 KB     | 256 MB       | 1.3 GB/s
256 KB     | 1 GB         | 1.1 GB/s
16 MB      | 16 GB        | 584 MB/s
64 KB      | 16 GB        | 394 MB/s
32 KB      | 16 GB        | 237 MB/s
16 KB      | 16 GB        | 126 MB/s

In other words, when using direct I/O, the smaller the write, the more time it takes. Remember that we are talking about forcing the disk to write the data, and we need to wait for it to complete before moving to the next one.

For 16 KB writes, buffered writes achieve a throughput of 553 MB/s vs. 126 MB/s for durable writes. This makes sense, since those writes are cached, so the OS is probably sending big batches to the disk. The numbers we have here clearly show that bigger batches are better.

My next test was to see what would happen when I try to write things in parallel. In this test, we run 4 processes that write to the disk using direct I/O and measure their output.

I assume that I’m maxing out the throughput on the drive, so the total rate across all commands should be equivalent to the rate I would get from a single command.

To run this in parallel I’m using a really simple mechanism - just spawn processes that would do the same work. Here is the command template I’m using:


parallel -j 4 --tagstring 'Task {}' dd if=/dev/zero of=/data/test bs=16M count=128 seek={} oflag=direct,sync ::: 0 1024 2048 3072

This would write to 4 different portions of the same file, but I also tested that on separate files. The idea is to generate a sufficient volume of writes to stress the disk drive.

Write size | Total writes | Durable & parallel writes
16 MB      | 8 GB         | 650 MB/s
16 KB      | 64 GB        | 252 MB/s

I also decided to write some low-level C code to test out how this works with threads and a single program. You can find the code here.  I basically spawn NUM_THREADS threads, and each will open a file using O_SYNC | O_DIRECT and write to the file WRITE_COUNT times with a buffer of size BUFFER_SIZE.

This code just opens a lot of files and tries to write to them using direct I/O with 8 KB buffers. In total, I’m writing 16 GB (128 MB x 128 threads) to the disk. I’m getting a rate of about 320 MB/sec when using this approach.
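
For reference, here is a minimal sketch of that benchmark. It is not the actual code linked above, and the file paths are made up for illustration, but the structure is the same: NUM_THREADS threads, each writing WRITE_COUNT buffers of BUFFER_SIZE bytes to its own file opened with O_DIRECT | O_SYNC.


#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NUM_THREADS 128
#define WRITE_COUNT (16 * 1024)   /* 16K writes per thread            */
#define BUFFER_SIZE (8 * 1024)    /* 8 KB per write -> 128 MB per file */

static void *writer(void *arg)
{
    long id = (long)arg;
    char path[64];
    snprintf(path, sizeof(path), "/data/test-%ld", id);  /* hypothetical path */

    int fd = open(path, O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0666);
    if (fd < 0) { perror("open"); return NULL; }

    void *buf;
    if (posix_memalign(&buf, 4096, BUFFER_SIZE) != 0) { close(fd); return NULL; }
    memset(buf, 0, BUFFER_SIZE);

    for (int i = 0; i < WRITE_COUNT; i++) {
        // Each write is durable on its own, so we pay the full cost every time.
        if (write(fd, buf, BUFFER_SIZE) != BUFFER_SIZE) { perror("write"); break; }
    }

    free(buf);
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, writer, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

On the i4i.xlarge, this style of workload is what topped out at roughly 320 MB/sec, as noted above.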

As before, increasing the buffer size seems to help here. I also tested a version where we write using buffered I/O and call fsync every now and then, but I got similar results.
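
The buffered-plus-fsync variant I mention is roughly the following; again, this is a sketch rather than the exact code I ran, and the FSYNC_EVERY value is just an illustration.


#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFER_SIZE (8 * 1024)
#define WRITE_COUNT (16 * 1024)
#define FSYNC_EVERY 128           /* force durability once per 128 writes (~1 MB) */

int main(void)
{
    int fd = open("/data/test", O_WRONLY | O_CREAT, 0666);
    if (fd < 0) return 1;

    char *buf = calloc(1, BUFFER_SIZE);
    for (int i = 0; i < WRITE_COUNT; i++) {
        // Buffered writes land in the page cache; the OS can batch them.
        if (write(fd, buf, BUFFER_SIZE) != BUFFER_SIZE) return 1;
        if (i % FSYNC_EVERY == FSYNC_EVERY - 1 && fsync(fd) < 0)
            return 1;
    }
    if (fsync(fd) < 0) return 1;  /* final durability point */
    close(fd);
    free(buf);
    return 0;
}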

The interim conclusion I can draw from this experiment is that NVMe drives are pretty cool, but once you hit their limits, you can really feel it. There is another aspect to consider, though: I’m running this on a disk that is literally called ephemeral storage. I need to repeat those tests on real hardware to verify whether the cloud disk simply ignores the command to persist properly and always uses the cache.

That suspicion is supported by the fact that using direct I/O with small data sizes didn’t have a big impact (and I expected it would). The point of direct I/O in this case is to force the disk to properly persist the data (so it would be durable in the case of a crash), while at the same time an ephemeral disk is wiped if the host machine is restarted. That gives me good reason to believe that these numbers look the way they do because the hardware “lies” to me.

In fact, if I were in charge of those disks, lying about the durability of writes would be the first thing I would do. Those disks are local to the host machine, so we have two failure modes that we need to consider:

  • The VM crashed - in which case the disk is perfectly fine and “durable”.
  • The host crashed - in which case the disk is considered lost entirely.

Either way, there is no point in the disk actually enforcing durability, so we can’t trust those numbers.

The next step is to run it on a real machine. The economics of benchmarks on cloud instances are weird. For a one-off scenario, the cloud is a godsend. But if you want to run benchmarks on a regular basis, it is far more economical to just buy a physical machine. Within a month or two, you’ll already see a return on the money spent.

We got a machine in the office called Kaiju (a Japanese term for enormous monsters, think: Godzilla) that has:

  • 32 cores
  • 188 GB RAM
  • 2 TB NVMe for the system disk
  • 4 TB NVMe for the data disk

I ran the same commands on that machine as well and got really interesting results.

Write size | Total writes | Buffered writes
4 KB       | 16 GB        | 1.4 GB/s
256 KB     | 256 MB       | 1.4 GB/s
2 MB       | 2 GB         | 1.6 GB/s
2 MB       | 16 GB        | 1.7 GB/s
4 MB       | 32 GB        | 1.8 GB/s
4 MB       | 64 GB        | 1.8 GB/s

We are faster than the cloud instance, and we don’t have a drop-off point when we hit a certain size. We are also seeing higher performance when we throw bigger buffers at the system.

But when we test with small buffers, the performance is also great. That is amazing, but what about durable writes with direct I/O?

I tested the same scenario with both buffered and durable writes:

Mode                      | Buffered | Durable
1 MB buffers, 8 GB write  | 1.6 GB/s | 1.0 GB/s
2 MB buffers, 16 GB write | 1.7 GB/s | 1.7 GB/s

Wow, that is an interesting result, because it means that when we use direct I/O with 1 MB buffers, we lose about 600 MB/sec compared to buffered I/O. Note that this is actually a pretty good result; 1 GB/sec is amazing.

And if you use big buffers, then the cost of direct I/O is basically gone. What about when we go the other way around and use smaller buffers?

Mode                       | Buffered | Durable
128 KB buffers, 8 GB write | 1.7 GB/s | 169 MB/s
32 KB buffers, 2 GB write  | 1.6 GB/s | 49.9 MB/s
Parallel: 8, 1 MB, 8 GB    | 5.8 GB/s | 3.6 GB/s
Parallel: 8, 128 KB, 8 GB  | 6.0 GB/s | 550 MB/s

For buffered I/O - I’m getting simply dreamy numbers, pretty much regardless of what I do 🙂.

For durable writes, the situation is clear. The bigger the buffer we write, the better we perform, and we pay for small buffers. Look at the numbers for 128 KB in the durable column for both single-threaded and parallel scenarios.

169 MB/s in the single-threaded result, but with 8 parallel processes, we didn’t reach 1.3 GB/s (which is 169x8). Instead, we achieved less than half of our expected performance.

It looks like there is a fixed cost for making a direct I/O write to the disk, regardless of the amount of data that we write. When using 128 KB writes, we are not even breaking into the 200 MB/sec range, and with 32 KB writes, we barely reach 50 MB/sec. Run the numbers: 169 MB/sec at 128 KB per write is roughly 1,350 writes per second, and 49.9 MB/sec at 32 KB per write is roughly 1,600 writes per second, so the per-write cost stays nearly constant even as the payload shrinks to a quarter of the size.

Those are some really interesting results because they show a very strong preference for bigger writes over smaller writes.

I also tried using the same C code as before. As a reminder, we use direct I/O to write to 128 files in batches of 8 KB, writing a total of 128 MB per file. All of that is done concurrently to really stress the system.

When running iotop in this environment, we get:


Total DISK READ:         0.00 B/s | Total DISK WRITE:       522.56 M/s
Current DISK READ:       0.00 B/s | Current DISK WRITE:     567.13 M/s
    TID  PRIO  USER     DISK READ DISK WRITE>    COMMAND
 142851 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142901 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142902 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142903 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142904 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
... redacted ...

So each thread is getting about 4.09 MB/sec for writes, but we total 522 MB/sec across all writes. I wondered what would happen if I limited it to fewer threads, so I tried with 16 concurrent threads, resulting in:


Total DISK READ:         0.00 B/s | Total DISK WRITE:        89.80 M/s
Current DISK READ:       0.00 B/s | Current DISK WRITE:     110.91 M/s
    TID  PRIO  USER     DISK READ DISK WRITE>    COMMAND
 142996 be/4 kaiju-1     0.00 B/s    5.65 M/s ./a.out
 143004 be/4 kaiju-1     0.00 B/s    5.62 M/s ./a.out
 142989 be/4 kaiju-1     0.00 B/s    5.62 M/s ./a.out
... redacted ...

Here we can see that each thread is getting (slightly) more throughput, but the overall system throughput is greatly reduced.

To give some context: with 128 threads running, the process wrote 16 GB in 31 seconds, but with 16 threads, it took 181 seconds to write the same amount. In other words, total throughput here comes from the number of concurrent writers, not from per-thread speed. I also tested this with various levels of concurrency:

Concurrency (each thread: 8 KB x 16K writes = 128 MB) | Throughput per thread | Time / total written
1                                                     | 15.5 MB/sec           | 8.23 seconds / 128 MB
2                                                     | 5.95 MB/sec           | 18.14 seconds / 256 MB
4                                                     | 5.95 MB/sec           | 20.75 seconds / 512 MB
8                                                     | 6.55 MB/sec           | 20.59 seconds / 1024 MB
16                                                    | 5.70 MB/sec           | 22.67 seconds / 2048 MB

To give some context, here are a few attempts to write a couple of gigabytes to the disk:

Concurrency | Write per thread       | Throughput per thread | Total written | Total time
16          | 128 MB in 8 KB writes  | 5.7 MB/sec            | 2,048 MB      | 22.67 sec
8           | 256 MB in 16 KB writes | 12.6 MB/sec           | 2,048 MB      | 22.53 sec
16          | 256 MB in 16 KB writes | 10.6 MB/sec           | 4,096 MB      | 23.92 sec

In other words, we can see the impact of concurrent writes. There is absolutely some contention at the disk level when making direct I/O writes. The impact is related to the number of writes rather than the amount of data being written.

Bigger writes are far more efficient. And concurrent writes allow you to get more data overall but come with a horrendous latency impact for each individual thread.

The difference between the cloud and physical instances is really interesting, and I have to assume that this is because the cloud instance isn’t actually forcing the data to the physical disk (it doesn’t make sense that it would).

I decided to test that on an m6i.2xlarge instance with a 512 GB io2 disk with 16,000 IOPS.

The idea is that an io2 disk has to be durable, so it will probably have similar behavior to physical hardware.

Disk | Buffer size (KB) | Writes    | Durable | Parallel | Total (MB) | Rate (MB/s)
io2  | 256              | 1,024     | No      | 1        | 256        | 1,638.40
io2  | 2,048            | 1,024     | No      | 1        | 2,048      | 1,331.20
io2  | 4                | 4,194,304 | No      | 1        | 16,384     | 1,228.80
io2  | 256              | 1,024     | Yes     | 1        | 256        | 144.00
io2  | 256              | 4,096     | Yes     | 1        | 1,024      | 146.00
io2  | 64               | 8,192     | Yes     | 1        | 512        | 50.20
io2  | 32               | 8,192     | Yes     | 1        | 256        | 26.90
io2  | 8                | 8,192     | Yes     | 1        | 64         | 7.10
io2  | 1,024            | 8,192     | Yes     | 1        | 8,192      | 502.00
io2  | 1,024            | 2,048     | No      | 8        | 2,048      | 1,909.00
io2  | 1,024            | 2,048     | Yes     | 8        | 2,048      | 1,832.00
io2  | 32               | 8,192     | No      | 8        | 256        | 3,526.00
io2  | 32               | 8,192     | Yes     | 8        | 256        | 150.90
io2  | 8                | 8,192     | Yes     | 8        | 64         | 37.10

In other words, we are seeing pretty much the same behavior as on the physical machine, unlike the ephemeral drive.

In conclusion, it looks like the limiting factor for direct I/O writes is the number of writes, not their size. There appears to be some benefit from concurrency in this case, but there is also some contention. The best results we got were with big writes.

Interestingly, big writes are a win, period. For example, 16 MB writes, direct I/O:

  • Single-threaded - 4.4 GB/sec
  • 2 threads - 2.5 GB/sec x 2 - total 5.0 GB/sec
  • 4 threads - 1.4 GB/sec x 4 - total 5.6 GB/sec
  • 8 threads - ~590 MB/sec x 8 - total 4.6 GB/sec

Writing 16 KB, on the other hand:

  • 8 threads - 11.8 MB/sec x 8 - total 93 MB/sec
  • 4 threads - 12.6 MB/sec x 4 - total 50.4 MB/sec
  • 2 threads - 12.3 MB/sec x 2 - total 24.6 MB/sec
  • 1 thread - 23.4 MB/sec

This leads me to believe that there is a bottleneck somewhere in the stack that handles the durable write, and that it isn’t related to the actual amount of data we write. In short, fewer and bigger writes are more effective, even with concurrency.

As a database developer, I find that this raises some interesting design questions. It means I want to find a way to batch more writes to the disk, especially durable writes, because batching them matters so much; a rough sketch of that idea follows.
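
To make that concrete, here is a minimal sketch (my illustration, not RavenDB's actual implementation) of what such batching could look like: accumulate many small transaction payloads into one large aligned buffer and pay for a single durable write per batch.


#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BATCH_SIZE (1024 * 1024)   /* flush in 1 MB durable writes */

struct write_batch {
    int    fd;
    size_t used;
    char  *buf;                    /* 4 KB-aligned, as O_DIRECT requires */
};

static int batch_flush(struct write_batch *b)
{
    if (b->used == 0)
        return 0;
    /* Pad to the alignment O_DIRECT requires, then issue one durable write. */
    size_t padded = (b->used + 4095) & ~(size_t)4095;
    memset(b->buf + b->used, 0, padded - b->used);
    ssize_t n = write(b->fd, b->buf, padded);
    b->used = 0;
    return n < 0 ? -1 : 0;
}

/* Append one transaction's payload; flush only when the batch is full. */
static int batch_append(struct write_batch *b, const void *data, size_t len)
{
    if (b->used + len > BATCH_SIZE && batch_flush(b) < 0)
        return -1;
    memcpy(b->buf + b->used, data, len);
    b->used += len;
    return 0;
}

int main(void)
{
    struct write_batch b = { .used = 0 };
    b.fd = open("/data/journal", O_WRONLY | O_CREAT | O_DIRECT | O_DSYNC, 0666);
    if (b.fd < 0 || posix_memalign((void **)&b.buf, 4096, BATCH_SIZE) != 0)
        return 1;

    char payload[256] = "transaction record";       /* stand-in payload */
    for (int i = 0; i < 10000; i++)
        batch_append(&b, payload, sizeof(payload)); /* many small writes... */
    batch_flush(&b);                                /* ...few durable ones  */

    close(b.fd);
    free(b.buf);
    return 0;
}

The obvious trade-off is that a transaction can only be acknowledged once the batch it belongs to has been flushed, so latency is exchanged for the much better per-write economics shown above.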

Expect to hear more about this in the future.

time to read 7 min | 1357 words

When building RavenDB, we occasionally have to deal with some ridiculous numbers in both size and scale. In one of our tests, we ran into an interesting problem. Here are the performance numbers of running a particular query 3 times.

First Run: 19,924 ms

Second Run: 3,181 ms

Third Run: 1,179 ms

Those are not good numbers, so we dug into this to try to figure out what is going on. Here is the query that we are running:


from index 'IntFloatNumbers-Lucene' where Int > 0

And the key here is that this index covers 400 million documents, all of which have an Int value greater than 0. So this is actually a pretty complex task for the database to handle, mostly because of the internals of how Lucene works.

Remember that we return both the first page of the results and the total number of matches. So we have to go through the entire result set to find out how many items we have. That is a lot of work.

But it turns out that most of the time here isn’t actually processing the query, but dealing with the GC. Here are some entries from the GC log while the queries were running:


2024-12-12T12:39:40.4845987Z, Type: GC, thread id: 30096, duration: 2107.9972ms, index: 25, generation: 2, reason: Induced
2024-12-12T12:39:53.1359744Z, Type: GC, thread id: 30096, duration: 1650.9207ms, index: 26, generation: 2, reason: Induced
2024-12-12T12:40:07.5835527Z, Type: GC, thread id: 30096, duration: 1629.1771ms, index: 27, generation: 2, reason: Induced
2024-12-12T12:40:20.2205602Z, Type: GC, thread id: 30096, duration: 776.24ms, index: 28, generation: 2, reason: Induced

That sound you heard was me going: Ouch!

Remember that this query actually goes through 400M results. Here are the details about its Memory Usage & Object Count:

  • Number of objects for GC (under LuceneIndexPersistence): 190M (~12.63GB)
  • Managed Memory: 13.01GB
  • Unmanaged Memory: 4.53MB

What is going on? It turns out that Lucene handles queries such as Int>0 by creating an array with all the unique values, something similar to:


string[] sortedTerms = new string[190_000_000];
long[] termPostingListOffset = new long[190_000_000];

This isn’t exactly how it works, mind. But the details don’t really matter for this story. The key here is that we have an array with a sorted list of terms, and in this case, we have a lot of terms.

Those values are cached, so they aren’t actually allocated and thrown away each time we query. However, remember that the .NET GC uses a Mark & Sweep algorithm. Here is the core part of the Mark portion of the algorithm:


long _marker;
void Mark()
{
    var currentMarker = ++_marker;

    foreach (var root in GetRoots())
    {
        Mark(root);
    }

    void Mark(object o)
    {
        // already visited
        if (GetMark(o) == currentMarker)
            return;

        // mark the object, then recurse into everything it references
        SetMark(o, currentMarker);

        foreach (var child in GetReferences(o))
        {
            Mark(child);
        }
    }
}

Basically, start from the roots (static variables, items on the stack, etc.), scan the reachable object graph, and mark all the objects in use. The code above is generic, of course (and basically pseudo-code), but let’s consider what the performance will be like when dealing with an array of 190M strings.

It has to scan that entire array, which means the cost is proportional to the number of objects. And we have quite a lot of those.

The problem was the number of managed objects, so we pulled all of those out. We moved the term storage to unmanaged memory, outside the purview of the GC. As a result, we now have the following Memory Usage & Object Count:

  • Number of objects for GC (under LuceneIndexPersistence): 168K (~6.64GB)
  • Managed Memory: 6.72GB
  • Unmanaged Memory: 1.32GB

Looking at the GC logs, we now have:


2024-12-16T18:33:29.8143148Z, Type: GC, thread id: 8508, duration: 93.6835ms, index: 319, generation: 2, reason: Induced
2024-12-16T18:33:30.7013255Z, Type: GC, thread id: 8508, duration: 142.1781ms, index: 320, generation: 2, reason: Induced
2024-12-16T18:33:31.5691610Z, Type: GC, thread id: 8508, duration: 91.0983ms, index: 321, generation: 2, reason: Induced
2024-12-16T18:33:37.8245671Z, Type: GC, thread id: 8508, duration: 112.7643ms, index: 322, generation: 2, reason: Induced

So the GC time is now in the range of 100ms instead of several seconds. This change both reduces overall GC pause times and greatly reduces the amount of CPU spent on managing garbage.

Those are still big queries, but now we can focus on executing the query, rather than managing maintenance tasks. Incidentally, those sorts of issues are one of the key reasons why we built Corax, which can process queries directly on top of persistent structures, without needing to materialize anything from the disk.
