Ayende @ Rahien


On interns and hiring people at the first stages of their career

time to read 4 min | 738 words

When looking for candidates, there is an ideal candidate: one of the people already working for you, with all their domain knowledge and expertise, cloned. Hopefully multiple times. If you do this right, you can probably stick the clones in a basement with a bunch of computers, slide pizza under the door every so often and get a lot of work done for the price of pizza.

While this (dystopian) scenario is quite nice in terms of overall effort, I do believe that there are some issues with it. Naturally the biggest hurdle is the medical bills for cloning people; there is also some noise about this being inhumane. The real issue, of course, is the lack of feasible technology to accelerate the growth of the clones. I’m sure this will be solved at some point. My time machine comes back from the shop on Monday (and isn’t that ironic), so I’ll be investigating this further at that point.

Setting the clone wars option aside, there is the need to get new hires, and there are several ways to go about that. You can try getting people with some or all of the skills that you require. Or you can get someone who is a blank slate and train them internally. This post is about the latter option.

The question is really what you actually define as a blank slate. For example, hiring my three year old daughter as a software developer would be really nice. She is a blank slate, but given that we are currently teaching her to count to 20, I think that this might be premature.

To be perfectly honest, the amount of knowledge that is required to be an efficient developer is staggering. If I were to start the clock from scratch, I think that I would be sitting there twiddling my thumbs to this day, scared of all the things that I must understand to be effective. In some way, not knowing how much I didn’t know was really helpful. It allowed me to go out and learn without being overwhelmed. Just look at C# and compare the language from 1.0 to 7.3, for example. Each change made sense at the time and incrementally added to the language. Some of them were bigger than others (generics, LINQ), but they came in byte size chunks (typo intended). Trying to grok it all at once is much harder.

We actually hire fairly often directly from college, either immediately after completing the degree or even beforehand. We usually look for people who have gone beyond rote learning for a good grade and are actually able to understand why things are happening, not just what API to call. Our most junior hire ever had just finished high school and had a few months free before going to the army, effectively being an intern in the company for a short while.

The approach we take for onboarding a new employee (with no practical experience) and an intern is quite different. For a full time employee, my priority is to get them well situated and familiar with how we work and the overall codebase. That means that the typical first assignments will be things that are on the sidelines. Things that are okay if they take a little longer, since they are used to get the new developer familiar with the landscape of the code. Examples include writing new clients, building internal applications using RavenDB, benchmarking work and building diagnostics and debug tools for production analysis.

For an intern, however, the situation is different. Given that I’m only going to have the intern for a few short months, spending 2 – 3 months training them to the expected level of a full time employee is going to be a waste. Instead, we try to give the intern experimental and research projects. Things that we wished we could have done if we had the time, but typically do not. Some of them are pretty complex, but the key “feature” in this regard is that they are possible to approach without having a deep understanding of RavenDB. For example, SQL Migration, one of the main features of RavenDB 4.1, was actually initially developed by an intern.

Living in the foundations, missing all the amenities

time to read 2 min | 377 words

We talked to a candidate recently with a CV that included topics such as Assembly, SQL and JavaScript. The list of skills was quite eclectic, and we called the candidate to hear more about them.

The candidate completed a two year degree focused on the foundations of development, and it looked like whoever designed it was looking primarily to build a good foundation more than anything else. In other words, the end result is someone who can write SQL queries, but never built a data driven application; who knows (about? I’m not really clear at what level that was) assembly, but never wrote a real application. It doesn’t sound bad, I know, but it was like moving into a new house just after the contractor is done with the foundation. Sure, that is a really important part, but you don’t even have walls yet.

In 1999, I did a year long course that was focused on teaching me C and C++. I credit this course for much of my understanding of the basics of programming and how computers actually work. It was an eye opening experience. I wouldn’t hire my 1999 self, though. As I recall, that guy (can I deny knowing him?) wrote the following masterpieces:

  • sparse_matrix<T> in C++ templates that used five (5!) levels of pointer indirection!
  • The original single page application. I wrote an entire BBS system using a single .VBS script that used three levels of recursive switch statements and included inline HTML, JS and VB code!

These are horrible things to inflict on an innocent computer, but they got me started in actually working on software and understanding things beyond the basics of syntax and action. I usually take the other side, that people focus far too much on the high level stuff and do not pay attention to what is actually going on under the hood. This was an interesting reversal, because the candidate was the opposite. They had some knowledge about the basics, but nothing built upon that yet.

And until you actually build upon the foundation, it is just a hole in the ground that was covered in some cement.

Working with legacy embedded types inside documents

time to read 2 min | 338 words

Databases hold data for long periods of time. Very often, they keep the data for longer than a single application generation. As such, one of the tasks that RavenDB has to take care of is the ability to process data from older generations of the application (or even from a completely different application).

For the most part, there isn’t much to it, to be honest. You process the JSON data and can either conform to whatever is in the database or use your platform’s tooling to rename it as needed. For example:
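The original code sample was an image; a minimal sketch of the renaming approach, assuming the Json.NET serializer that the C# client uses and a hypothetical Customer class, might look like this:

using Newtonsoft.Json;

public class Customer
{
    // The documents in the database still carry the old property name,
    // so we map it to the new one during (de)serialization.
    [JsonProperty("FullName")]
    public string Name { get; set; }
}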

There are a few wrinkles still. You can use RavenDB with dynamic JSON objects, but for the most part, you’ll use entities in your application to represent the documents. That means that we need to store the type of the entities you use. At the top level, we have metadata elements such as:

  • Raven-Clr-Type
  • Raven-Java-Class
  • Raven-Python-Type
  • Etc…

This is something that you can control using the Conventions.FindClrType event. If you change the class name or assembly, you can use that to tell RavenDB how to treat the old values. This requires no changes to your documents and only a single modification to your code.
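As a sketch (hypothetical names, and assuming the 4.x C# client, where FindClrType receives the document id and the raw document), this could look something like:

using Raven.Client.Documents;
using Sparrow.Json;

var store = new DocumentStore
{
    Urls = new[] { "http://localhost:8080" },
    Database = "Shop"
};

store.Conventions.FindClrType = (id, doc) =>
{
    // Read the stored CLR type from the document metadata and rewrite
    // the legacy namespace / assembly to the one the application uses now.
    if (doc.TryGet("@metadata", out BlittableJsonReaderObject metadata) &&
        metadata.TryGet("Raven-Clr-Type", out string clrType))
    {
        return clrType.Replace("OldApp.Models", "Shop.Models")
                      .Replace(", OldApp", ", Shop");
    }
    return null;
};

store.Initialize();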

A more complex scenario happens when you are using polymorphic behavior inside your documents. For example, let’s imagine that you have an Order document whose internal property, called Payment, can be any of the following types (a sketch of such a document follows the list):

  • Legacy.CreditCardPayment
  • Legacy.WireTransferPayment
  • Legacy.PayPalPayment
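The document image isn’t reproduced here, but assuming Json.NET’s $type discriminator and hypothetical fields, such an Order might look roughly like:

{
    "Company": "companies/27-A",
    "Total": 118.0,
    "Payment": {
        "$type": "Legacy.CreditCardPayment, Legacy",
        "CardNumber": "****-1234",
        "Amount": 118.0
    }
}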

How do you load such a document? If you try to just deserialize it, you’ll get a deserialization error. The type information about the polymorphic property is encoded in the document, and you’ll need these legacy types to successfully load the document.

Luckily, there is a simple solution. You can customize the JSON serializer like so:
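The original snippet was an image; a sketch of the customization, assuming the 4.x C# client’s CustomizeJsonSerializer convention and a binder class we define below, might be:

store.Conventions.CustomizeJsonSerializer = serializer =>
{
    // Route all type name resolution through our binder, so legacy
    // type names can be mapped to the current classes.
    serializer.SerializationBinder = new LegacyPaymentsBinder();
};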

And the implementation of the binder is straightforward from that point:
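Again as a sketch (hypothetical class names, using Json.NET’s ISerializationBinder):

using System;
using Newtonsoft.Json.Serialization;

public class LegacyPaymentsBinder : ISerializationBinder
{
    private readonly DefaultSerializationBinder _default = new DefaultSerializationBinder();

    public Type BindToType(string assemblyName, string typeName)
    {
        // Map the legacy type names stored in old documents to the types
        // the application uses today (CreditCardPayment etc. are assumed
        // to be the current model classes).
        switch (typeName)
        {
            case "Legacy.CreditCardPayment":
                return typeof(CreditCardPayment);
            case "Legacy.WireTransferPayment":
                return typeof(WireTransferPayment);
            case "Legacy.PayPalPayment":
                return typeof(PayPalPayment);
            default:
                return _default.BindToType(assemblyName, typeName);
        }
    }

    public void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        // New documents are written with the current type names.
        _default.BindToName(serializedType, out assemblyName, out typeName);
    }
}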

In this manner, you can decide to keep the existing data as is or migrate it slowly over time.

Using GOTO in C#

time to read 2 min | 309 words

After talking about GOTO in C, I thought that I should point out some interesting use cases for using GOTO in C#. Naturally, since C# actually has proper mechanisms for resource cleanup (IDisposable and using), the situation is quite different.

Here is one usage of GOTO in RavenDB’s codebase:
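The actual code was shown as an image; here is a minimal sketch in the same spirit (hypothetical names, not the RavenDB code itself):

public class IntBuffer
{
    private int[] _items = new int[16];
    private int _count;

    public void Add(int value)
    {
        if (_count == _items.Length)
            goto Grow; // the rare path is moved out of the way

        // Hot path: just store the value.
        _items[_count++] = value;
        return;

    Grow:
        System.Array.Resize(ref _items, _items.Length * 2);
        _items[_count++] = value;
    }
}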

This is used for micro optimization purposes. The idea is that we put the hot spots of this code first, and only jump to the rare parts of the code if the list is full. This keeps the size of the method very small, allows us to inline it in many cases and can substantially improve performance.

Here is another example, which is a bit crazier:
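Here too the original is an image; a sketch of the shape of such a method (hypothetical code, assuming little-endian reads of 1, 2, 4 or 8 bytes) could be:

using System.Runtime.CompilerServices;

internal static unsafe class Parsing
{
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static ulong ReadNumber(byte* buffer, int size)
    {
        ulong val;
        if (size != 8)
            goto Four;
        val = *(ulong*)buffer;   // most common case first
        goto Done;
    Four:
        if (size != 4)
            goto Two;
        val = *(uint*)buffer;
        goto Done;
    Two:
        if (size != 2)
            goto One;
        val = *(ushort*)buffer;
        goto Done;
    One:
        val = *buffer;
    Done:
        return val;
    }
}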

As you can see, this is a piece of code that is full of gotos, and there is quite a bit of jumping around. The answer to why we are doing this is, again, performance. In particular, this method is located in a very important hot spot in our code, as you can imagine. Let’s consider a common usage of this:

var val = ReadNumber(buffer, 2);

What would be the result of this call? Well, we asked the JIT to inline the method, and it is small enough that it would comply. We are also passing a constant to the method, so the JIT can simplify it further by checking the conditions. Here is the end result in assembly:

Of course, this is the best (and, for us, pretty common) case, where we know what the size would be. If we have to pass a variable, we need to include the checks, but the result is still very small.

In other words, we use GOTO to direct the actual machine code output as much as possible, explicitly trying to be more friendly toward the machine at the expense of readability, in favor of performance.

Errors, exceptions and faults, oh my!

time to read 10 min | 1947 words

If we could code for the happy path only, I think that our lives would be much nicer. Errors are hard, because you keep having to deal with them, and even basic issues in error handling can take down systems that are composed of thousands of nodes.

I went out to look at research around error handling rates, and I found this paper. It says that about 3% of code (C#, mind) is error handling. However, it counts only the code inside catch / finally as error handling. My recent foray into C allows me another data point. The short version, with no error handling, is 30 lines of code; the long version, with error handling, is over a hundred.

If I had to guess, I would say that error handling is at least 10 – 15%, and I wouldn’t be surprised by 25 – 30%. In C# and similar languages, a centralized error handling strategy can help a lot in this regard, I think.

Anyway, let’s explore a few options for error handling:

The C way – return codes. This sucks. I think that this is universally known to suck. In particular, there is no rhyme or reason to return codes. Sometimes you need to check for INVALID_HANDLE_VALUE, sometimes for a value that is different from zero. Sometimes the return code is the error code; at other times you need to call a separate function to get it. It also forces you into a very localized error handling mode. All error handling must be done all the time, which can easily lead either to a single forgotten return code causing issues down the line (forgetting to check the fsync() return code caused data corruption in Postgres, for example) or to really bad code where you lose sight of what is actually going on, because there is so much error handling that the real functionality goes into hiding.

The return code model also doesn’t compose very well in the case of complex operations failing midway. It doesn’t provide contextual information or allow you to get stack traces easily. Each of these is important if you want to have a good error handling strategy (and a good debugging / troubleshooting experience).

So the C way of doing things is out. What are we left with? We have a few options:

  • Go with multiple return values
  • Rust with Option<T>, Result<T>
  • Node.js with callbacks
  • C# / Java with exceptions

Let’s talk about the Go approach for a bit. I think that this is universally loathed as being very similar to the C method, and it causes a lot of code repetition. On the other hand, at least we don’t have GetLastError() / errno to deal with. And one advantage of Go in this regard is that the defer command allows you to handle state much more cleanly (you can just return and any resource will be cleaned up). This means that the code may be repetitive to write, but it is much easier to review.

The problem with this approach is that it is hard to compose errors. Imagine a method that needs to read a string from the network, parse a number from the string and then update a value in a file. Without error handling, this looks like so:

I haven’t even written the file handling path, mostly because it got too tiring. In this case, there are so many things that can go wrong. The code above handles failure to make the request, failure to read the value from the server, failure to parse the string, etc. With a file, you need to handle failure to open the file, read its content, parse it, do something with the value from the server and the file value, and then serialize the value back to bytes to be written to the file. About every other word in that previous statement requires some form of error handling. And the problem is that when we have a complex system, we don’t just need to handle errors, we need to compose them so they make sense.

An EPERM error from somewhere is pretty useless, so having the file name is a huge help in figuring out what the problem was. But knowing that the error happened because we tried to save the data to the on-disk cache gives me the proper context for the error. The problem with errors is that they can happen very deeply in the code path, while the policy for handling such errors belongs much higher in the stack.
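To make this concrete, here is a minimal C# sketch (hypothetical names) of wrapping a low level error with the context that the raw error code lacks:

using System;
using System.IO;

public static class DiskCache
{
    public static void Save(string path, byte[] data)
    {
        try
        {
            File.WriteAllBytes(path, data);
        }
        catch (UnauthorizedAccessException e) // how EPERM typically surfaces in .NET
        {
            // Rethrow with the missing context: which file, and which
            // higher level operation we were performing at the time.
            throw new InvalidOperationException(
                $"Failed to save the on-disk cache entry to '{path}'", e);
        }
    }
}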

Rust’s approach to errors is cleaner than Go’s. You don’t have multiple return values; instead, the result is wrapped in a Result / Option value that you need to explicitly handle. Rust also contains some syntax sugar to make this pretty easy to write.

However, Rust error handling just plain sucks when you try to actually compose errors. Imagine the case where I want to do several operations, some of which may fail. I need to report success if all have passed, but an error if any has errored. For a bit more complexity, we need to provide good context for the error, so the error isn’t something as simple as “int parse failure” but comes with enough details to know that it was an int parse failure on the sixth line of a particular file that belongs to a certain operation.

The reason I say that Rust sucks for this is that consuming errors is pretty simple. But producing them? The suggestion to library authors is to implement your own Error type. That means that you need to implement the Display trait manually, and you need to write a separate From trait for each error that you want to compose up. If your code suddenly needs to handle a new error type, you deal with that by writing a lot of boilerplate code. Any change in the error enum requires touching multiple places in the code, violating SRP. You can use Box<Error>, it seems, but in that case you have just “an error occurred”, and it is complex to get back the real error and act on it.

A major complication of all the return-something options is the fact that they usually don’t provide you with a stack trace. I think that having a stack trace in the error is extremely helpful for actually analyzing a problem and being able to tell what actually happened.

Callbacks, such as were used in Node.js, are pretty horrible. On the one hand, it is much easier to provide the context, because you are called from the error site and can check your current state. However, there is only so much that you can do in such a case, and state management is a pain. Callbacks have proven to be pretty hard to program with, and the industry as a whole is moving to the async/await model instead. This gives you a sequential-like mechanism and a much better way to reason about the actions of the system.

Finally, we have exceptions. There are actually several different models for exceptions. You have Java with checked exceptions, with the associated baggage there (cannot change the interface, require explicit handling, etc.). There is the Pony language, which has “exceptions”. That is a really strange choice of implementation. Pony has exceptions for flow control, but it doesn’t give you any context about the actual error, just that one happened. The proper way of handling errors in Pony is to return a union of the result and the possible errors (similar to how Rust does it, although the syntax looks nicer and there is less work).

I’m going to talk about C#’s exceptions. Java’s exceptions, except for some of them being checked, are pretty much the same.

Exceptions have the nice property that they are easily composable; it is easy to decide to handle some errors and to pass some up the chain. Generic error handling is also easy. Exceptions are problematic because they break the flow of the code. An exception in one location can be handled somewhere completely different, and there is no way for you to see that when looking at the code. In fact, I’m not even aware of any IDE / tooling that can provide you this insight.

In languages with exceptions, you can also have exceptions at pretty much any location, which means that you need to write exception-safe code to make sure that an exception doesn’t leave your code in an inconsistent state. There is also a decidedly non-trivial cost to exceptions. To start with, many optimizations are mitigated by try blocks, and throwing exceptions is often very expensive. Part of that is the fact that we need to capture the oh so valuable stack trace, of course.

There is also another aspect of error handling to consider. There are many cases where you don’t care about errors: any time that you have generic framework code that calls into user code. An HTTP handler is a good example of that. You call the user’s code to handle the request, and you don’t care about errors; you simply catch the error and return a 500 / message to the client. Any error handling strategy must handle both scenarios: the “I really care about every single detail and a separate error handling code path for everything” and the “I just want to know if there is an error and print it, nothing else”.
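The second scenario is simple enough that a minimal sketch (hypothetical middleware, assuming ASP.NET Core) shows the whole policy:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class CatchAllMiddleware
{
    private readonly RequestDelegate _next;

    public CatchAllMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context); // the user's code, which may throw anything
        }
        catch (Exception e)
        {
            // We don't care what went wrong, only that something did.
            context.Response.StatusCode = 500;
            await context.Response.WriteAsync(e.ToString());
        }
    }
}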

In theory, I really love the Rust error handling mechanism, but the complexity of composability and generic handling means that it is a lot less convenient to actually consume and produce errors. Exceptions are great in terms of composability and the amount of detail they provide, but they also break the flow of the code and introduce separate and invisible code paths that are hard to reason about in many cases. On the other hand, exceptions allow you to bubble errors upward natively and easily, until you get to a location that can apply a particular error handling policy.

A good example comes from a recent issue we had to deal with. When running on a shared drive, a file delete isn’t going to be processed immediately; there is a gap of time in which the delete command seems to have succeeded, but attempting to re-create the file will fail with EEXISTS (and trying to open the file will give you ENOENT, so that’s fun). In this case, we throw the error up the stack. In our use case, we hit this situation only when dealing with temporary files, and given that they are temporary, we can detect this scenario and use another file name to avoid the issue. So we catch a FileNotFoundException and retry with a different file name. This goes through four or five layers of code and was pretty simple to figure out and implement.
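A minimal sketch of that policy (hypothetical names; the real code spans several layers) might look like:

using System;
using System.IO;

public static class TempFiles
{
    // Stands in for the several layers of code that eventually open the file.
    private static FileStream OpenFileDeepInTheStack(string path) =>
        new FileStream(path, FileMode.CreateNew);

    public static FileStream OpenTempFile(string directory)
    {
        for (var attempt = 0; ; attempt++)
        {
            var path = Path.Combine(directory, $"scratch-{Guid.NewGuid():N}.tmp");
            try
            {
                return OpenFileDeepInTheStack(path);
            }
            catch (FileNotFoundException) when (attempt < 5)
            {
                // The shared drive hasn't finished processing an old delete,
                // so this name is temporarily unusable; retry with a new one.
            }
        }
    }
}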

Doing that with error codes is hard, and adding another member to the Error type will likely have cascading implications for the rest of the code. On the other hand, throwing a new exception type from a method can also break the contract, explicitly in languages like Java and implicitly in languages like C#. In fact, with C#, the implied assumption is always: “can throw the following exceptions for known error cases, and other exceptions for unexpected ones”. This is similar to checked vs. runtime exceptions in Java, but in this case it is the implicit default, and it gives you more freedom overall when writing your code. Checked exceptions sound great, but they have proven to be a problem for developers in practice.

Oh well, I guess I won’t be able to solve the error handling problem perfectly in a single blog post.

Blind optimizations: Making MurmurHash3 faster

time to read 3 min | 512 words

I needed to use Bloom Filters, and I didn’t want to use the implementation we already have in RavenDB. That one is too tied up in our infrastructure to be easily used. So I found the Maybe.NET project. It is nice, but it doesn’t have a CoreCLR NuGet package. This meant that I had to go into the code and look at what is going on. And I started going, “nope, that isn’t the way I want it done”.

Now, to be clear, the code in the project is great. It is clear, obvious and idiomatic C#. It also raises every red flag for inefficient code that I have built up over the past few years of making RavenDB faster. Because this is such a small sample, I thought it would make a good blog post, where I can explain what the code is doing, what changes I’m making and why. Before I can get to the Bloom Filter implementation, I need to use the hash function, and that was just a full stop for me. Let me show you what I mean. The key parts are below, and you can find the full code here.


This is a hash method; it produces several hashes for the purpose of the bloom filter. I underlined in red every place where this code allocates. As you can see, this allocates a lot.

There is also the fact that this accepts a generic object as a parameter and serializes it to a byte[]. I’m ignoring the allocations in that part of the code, but I’m assuming that they are significant. So let’s talk about how we can optimize this function.

Well, to start with, I’m going to decide that accepting an object is too high level. This is a hash function, the caller should give us bytes. Now, let’s see what impact that has on us, shall we?
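The reworked method was shown as an image; a sketch of the direction (not the original code; the double hashing trick and the names are my assumption, relying on a Hash function over bytes, a sketch of which follows below) could be:

public static void ComputeHashes(ReadOnlySpan<byte> key, Span<uint> hashes, uint maxHashValue)
{
    // Two base hashes with different seeds are enough to derive as many
    // bloom filter indexes as we need: hash_i = h1 + i * h2.
    var h1 = Hash(key, seed: 0);
    var h2 = Hash(key, seed: 0x9747b28c);
    for (var i = 0; i < hashes.Length; i++)
        hashes[i] = (h1 + (uint)i * h2) % maxHashValue;
}

Note that the caller provides the output buffer, so nothing here allocates.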

Now this is much better. We don’t allocate anything in the ComputeHashes method and we give the compiler the chance to build really efficient code here. We can probably require that the maxHashValue be a power of two and avoid the mod operation in favor of bit shifting, but I’m not writing RavenDB here and worrying about every little thing. I’ll leave that part as an exercise for the reader.

Now, let’s look at the actual Hash function, shall we?
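Again the original is an image; here is a Murmur3-style sketch (assuming the x86 32-bit variant and a little-endian machine) of the structure described below:

public static unsafe uint Hash(ReadOnlySpan<byte> key, uint seed = 0)
{
    const uint c1 = 0xcc9e2d51, c2 = 0x1b873593;
    var h = seed;
    fixed (byte* start = key)
    {
        var current = start;
        var remaining = key.Length;
        // Body: mix one 32-bit block at a time.
        while (remaining >= sizeof(uint))
        {
            var k = *(uint*)current;
            k *= c1; k = (k << 15) | (k >> 17); k *= c2;
            h ^= k; h = (h << 13) | (h >> 19); h = h * 5 + 0xe6546b64;
            current += sizeof(uint);
            remaining -= sizeof(uint);
        }
        // Tail: fold in the remaining 1-3 bytes.
        uint tail = 0;
        switch (remaining)
        {
            case 3: tail ^= (uint)current[2] << 16; goto case 2;
            case 2: tail ^= (uint)current[1] << 8; goto case 1;
            case 1:
                tail ^= current[0];
                tail *= c1; tail = (tail << 15) | (tail >> 17); tail *= c2;
                h ^= tail;
                break;
        }
    }
    // Finalization: force the bits to avalanche.
    h ^= (uint)key.Length;
    h ^= h >> 16; h *= 0x85ebca6b;
    h ^= h >> 13; h *= 0xc2b2ae35;
    h ^= h >> 16;
    return h;
}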

There is quite a bit going on, but essentially, I’m using fixed to get the pointer from the span, then computing the hash 4 bytes at a time, then handling the remainder. There are no allocations, and this has far fewer instructions that actually need to run. Note that this would be a great place to stop and run unit tests to verify that I didn’t break something. I’m going to assume that I got it right and close this post; I still want to talk about the optimizations that are available for the bloom filter.

RavenDB 4.1 features: SQL Migration Wizard

time to read 2 min | 234 words

One of the new features coming up in 4.1 is the SQL Migration Wizard. Its purpose is very simple: to get you started faster and with less work. In many cases, when you start using RavenDB for the first time, you’ll need to first put some data in to play with. We have the sample data, which is great to start with, but you’ll want to use your own data and work with that. This is what the SQL Migration Wizard is for.

You start it by pointing it at your existing SQL database, like so:


The wizard will analyze your schema and suggest a document model based on that. You can see what this looks like here:


In this case, you can see that we are taking a linked table (employee_privileges) and turning it into an embedded collection. You also have additional options, and you’ll be able to customize it all.

The point of the migration wizard is not so much to actually do the real production migration but to make it easier for you to start playing around with RavenDB with your own data. This way, the first step of “what do I want to use it for” is much easier.

Looking for Go / Node.JS / Ruby developers for a short stint

time to read 1 min | 107 words

We are looking to finish work on our client API for RavenDB in Go, Node.JS and Ruby. The current state is that the code is mostly there, and we need help to give it the final push (and spit & polish) and drive it to release status.

For each of these, I estimate that there are about 6 – 8 weeks of work, after which we’ll be managing the rest of this internally. You can see the current state of each client API here:

If you are interested, please send an email to jobs@ravendb.net. This is applicable to both local (Hadera, Israel) and remote work.

Production postmortem: The unavailable Linux server

time to read 6 min | 1019 words

As part of our 4.0 deployment, we have a cluster made of mixed machines, some running Windows, some running Linux. In one particular configuration, we have two Windows machines and a single Linux machine. We set up the cluster and started directing production traffic to it, and all was good. We left for the weekend and came back the following week to see what the results were. Our Linux machine was effectively down. It was not responding to queries and seemed completely unresponsive.

That was good and bad. It was good because the rest of the cluster just worked, and there was no interruption in service. This is as designed, but it is always nice to see this in real life. It is bad because it isn’t supposed to be happening. What was really frustrating was that we were able to log into the machine and there was nothing wrong there.

No high CPU or memory, no outstanding I/O or anything of the kind that we are used to.

The first clue for us came from trying to diagnose the problem from inside the machine, where we were able to reproduce it by trying to access the server from the local machine, resulting in the exact same issue as observed externally. Namely, the server would accept the connection and just leave it hanging, eventually timing out.

That was an interesting discovery, since it meant that we could rule out anything in the middle. This was a problem on this machine. But the problem remained quite hard to figure out. We deployed to production in a manner similar to what we expect our users will do, so we used Let’s Encrypt as the certificate authority with auto generated certificates.

So we started by seeing where the problem is, whether it is on the TCP side or the SSL side. We issued the following command:

openssl s_client -connect b.cluster-name.dbs.local.ravendb.net:443

This command showed an immediate connection to the server and the client sending the ClientHello properly, but then it just hung there. What was really interesting is that if we waited about 2 minutes, the SSL connection would complete successfully. But we couldn’t figure out any reason why this would be the case. It occurred to me that it might be related to the system’s handling of reverse DNS lookup. The two minute timeout was very suspicious, and I assumed that it might be trying to look up the client certificate and somehow resolve that. That isn’t how it works in general, although the fact that some SSH (and not SSL/TLS) configuration directly relates to this led us on a merry chase.

Eventually we pulled out strace and looked into what was actually going on. We focused on the following calls:

sudo strace -fp 1017 -s 128 -Tfe open,socket,connect,sendto,recvfrom,write

The interesting bits from there are shown here:

As you can see, we are looking at some DNS resolution, as we can tell from the /etc/resolv.conf and /etc/hosts open() calls. Then we have a connect() to 172.31.0.2 which is an AWS name server. Note that this is done over UDP, as you can see from the SOCK_DGRAM option in the preceding socket() call.

We are getting some data back, and we can see identrust there. And then we see something really suspicious. We have a TCP socket call that goes to 192.35.177.64 on port 80. In other words, this is an HTTP call. What is an HTTP call doing in the middle of an SSL handshake?

As it turned out, our firewall configuration was blocking outbound connections to port 80. We tested removing that rule and everything came back online and the server was running just fine.

Further inspection revealed that we were calling to: http://apps.identrust.com/roots/dstrootcax3.p7c

And this is where things started to jell together. We are using Let’s Encrypt certificates, and in order to ensure trust, we need to send the full chain to the user. SSL certificates have the notion of Authority Information Access, which is basically a URL registered in the certificate that points to where you can find the certificate that signed this one.

Why is this using HTTP? Because the data that will be fetched is already signed, and it is not a secret. And trying to use HTTPS to fetch it might get us into a loop.

So whenever we had a new SSL connection, we would try to connect to IdenTrust to get the full chain to send to the client. The killer here is that if we fail to do so, we’ll send the certificate chain we have (without the missing root), but it will still work, since the other side already has this root installed (usually). On Windows, this certificate is installed, so we didn’t see it. On Linux, we didn’t have that certificate installed, so we had to look it up every single time.

The gory details, including dives into the source code, are in the GitHub issue. And I do think they are gory. In this case, once we realized what was going on, we were able to take steps to handle it. We needed to pre-register the entire chain on the local machine, so any lookup would be able to find it locally and not make a network call for each SSL connection.
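A sketch of that pre-registration (my assumption on the mechanics: adding the intermediate certificate to the .NET certificate store that chain building consults; the file name here is hypothetical):

using System.Security.Cryptography.X509Certificates;

using (var store = new X509Store(StoreName.CertificateAuthority, StoreLocation.CurrentUser))
{
    store.Open(OpenFlags.ReadWrite);
    // The Let's Encrypt intermediate that would otherwise be fetched
    // from IdenTrust over HTTP on every new connection.
    store.Add(new X509Certificate2("lets-encrypt-x3-cross-signed.cer"));
}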

But beyond mutating the global certificate store, there is no real way to prevent that remote call.

Note that this is also true for Windows, although there it seems to be implemented much deeper in the stack, and not in managed code, so I wasn’t able to trace where this is actually happening. The really bad thing here is that from the outside, there is no way for us to control or disable this. It is just something that you have to remember to do when you use certificates: make sure that the entire chain is registered on each machine. Otherwise you might have a remote call per connection, or a very long (and synchronous!) hang until the request times out if you are blocking outgoing access.

Getting fatal out of memory errors because we are managing memory too well

time to read 4 min | 621 words

We got into a serious situation in one of our test cases. We put the system through a lot, pushing it to the breaking point and beyond. And it worked, in fact, it worked beautifully. Up until the point where we started to use too many resources and crashed. While normally that would be expected, it really bugged us; we had provisions in place to protect us against that. Bulkheads were supposed to be blocked, operations rolled back, etc. We were supposed to react properly, reduce the cost of operations, prefer being up to being fast, the works.

That did not happen. From the outside, what happened is that we got to the point where we would trigger the “sky is about to fall, let’s conserve everything we can” mode, but we didn’t see the reaction that we expected from the system. Oh, we did start to use a lot less resources, but the resources that we weren’t using? They weren’t going back to the OS; they were still held.

It’s easiest to talk about memory in this regard. We hold buffers in place to handle requests, and in order to avoid fragmentation, we typically make them large buffers that are resident on the large object heap.

When RavenDB detects that there is a low memory situation, it starts to scale back. It releases any held buffers, completes ongoing work, starts working on much smaller batches, etc. We saw that behavior, and we certainly saw the slowdown as RavenDB was willing to take less upon itself. But what we didn’t see is the actual release of resources as a result of this behavior.

And as it turned out, that was because we were too good at managing ourselves. A large part of the design of RavenDB 4.0 was around reducing the cost of garbage collections by reducing allocations as much as possible. This means that we run very few GCs. In fact, Gen 2 collections are rare in our environment. However, we need these Gen 2 collections to be able to clean up stuff that is in the finalizer queue. In fact, we typically need two such runs before the GC can be certain that the memory is not in use and actually collect it.

In this particular situation, we were careful to code so we would get very few GC collections running, and that led us to crash because we would run out of resources before the GC could realize that we weren’t actually using them at that point.

The solution, by the way, was to change the way we respond to low memory conditions. We’ll be less good about keeping all the memory around: if it isn’t being used, we’ll start discarding it a lot sooner, so the GC has a better chance to actually realize that it isn’t being used and recover the memory. And instead of throwing the buffers away all at once when we have low memory and hoping that the GC will be fast enough in collecting them, we’ll keep them around and reuse them, avoiding the additional allocations that processing more requests would have required.

Since the GC isn’t likely to be able to actually free them in time, we aren’t affecting the total memory consumed in this scenario, but we are able to reduce allocations by serving the buffers that are already allocated. These two actions, being less rigorous about policing our memory and not freeing things when we get low on memory, confusingly enough both reduce the chance of getting into a low memory situation and reduce the chance of actually using too much memory in such a case.
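A minimal sketch of the reuse side of that policy (hypothetical names; the real pool is considerably more involved):

using System.Collections.Concurrent;

public static class BufferPool
{
    private static readonly ConcurrentQueue<byte[]> Free = new ConcurrentQueue<byte[]>();

    public static byte[] Rent(int size = 128 * 1024)
    {
        // Serve an already allocated buffer when we can, instead of
        // allocating a new one on the Large Object Heap.
        if (Free.TryDequeue(out var buffer) && buffer.Length >= size)
            return buffer;
        return new byte[size];
    }

    public static void Return(byte[] buffer) => Free.Enqueue(buffer);
}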
