Ayende @ Rahien


Concurrent max value

time to read 2 min | 324 words

For a feature in RavenDB, I need to figure out the maximum number of outputs per document an index has. Now, indexing runs in multiple threads at the same time, so we ended up with the following code:

var actualIndexOutput = maxActualIndexOutput;
if (actualIndexOutput < numberOfAlreadyProducedOutputs)
{
    // okay, now let's verify that this is indeed the case, in a thread safe manner.
    // This way, most of the time we don't need volatile reads, and other sync operations
    // in the code ensure we don't have too stale a view on the data (besides, a stale view
    // has to mean a smaller number, which we then verify).
    actualIndexOutput = Thread.VolatileRead(ref maxActualIndexOutput);
    while (actualIndexOutput < numberOfAlreadyProducedOutputs)
    {
        // if it changed, we don't care, it is just another max, and another thread probably
        // set it for us, so we only retry if this is still smaller
        actualIndexOutput = Interlocked.CompareExchange(
            ref maxActualIndexOutput,
            numberOfAlreadyProducedOutputs,
            actualIndexOutput);
    }
}

The basic idea is that this code path is hit a lot, once per document indexed per index. It also needs to be thread safe, so we first do a cheap, unsynchronized check, and only then a thread safe operation.

The idea is that we’ll quickly arrive at the actual max number of index outputs, but on most calls we don’t have to pay the price of volatile reads or thread synchronization.
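This pattern can be pulled out into a small reusable helper. Here is a minimal sketch (mine, not from the RavenDB code base; the names are invented) that isolates the compare-and-swap retry loop:

using System.Threading;

public static class InterlockedEx
{
    // Atomically raises 'location' so it is at least 'candidate'.
    // Returns true if this call performed the update.
    public static bool Max(ref int location, int candidate)
    {
        var current = Volatile.Read(ref location);
        while (current < candidate)
        {
            // CompareExchange returns the value it actually saw; if another thread
            // won the race we get the fresher value back and check again.
            var seen = Interlocked.CompareExchange(ref location, candidate, current);
            if (seen == current)
                return true;
            current = seen;
        }
        return false;
    }
}

Calling InterlockedEx.Max(ref maxActualIndexOutput, numberOfAlreadyProducedOutputs) would then replace the inline block above.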

Production postmortem: The case of the intransigent new database

time to read 3 min | 480 words

A customer called us to tell us that they had a problem with RavenDB. As part of their process for handling new customers, they would create a new database, set up indexes, and then direct all the queries for that customer to that database.

Unfortunately, this system that had worked so well in development died a horrible death in production. But, and this was strange, only for new customers, and only during the new customer creation stage. The problem was:

  • The user would create a new database in RavenDB. This just creates a db record and its location on disk. It doesn’t actually initialize the database.
  • On the first request, we initialize the db, creating it if needed. The first request will wait until this happens, then proceed.
  • On their production systems, that first request (which they used to create the indexes they require) would time out with an error.

Somehow, the creation of a new database would take way too long.

The first thought we had was that they were creating the database on the path of an already existing database, maybe a big one that had a long initialization period, or maybe one that required recovery. But the customer confirmed that they were creating the database in an empty folder.

We looked at the logs, and the logs just showed a bunch of time where there was no activity. In fact, we had a single method call to open the database that took over 15 seconds to run. Except that on a new database, this method just creates a bunch of files to start things out and is ready really quickly.

That is the point that led us to suspect that the issue was environmental. Luckily, as the result of many such calls, RavenDB comes with a pretty basic I/O Test tool. I asked the customer to run this on their production system, and I got the following:

[image: I/O test results]

And now everything was clear. They were running on an I/O constrained system (a cloud machine), and they were running into an interesting problem. When RavenDB creates a database, it pre-allocates some files for its transactional journal.

Those files are 64MB in size, and the total write for a new Esent RavenDB database with default configuration is just over 65MB. If your sustained write throughput is less than 1MB/sec, that will be problematic: just creating the journal files takes more than a minute, far longer than that first request is willing to wait.

I let the customer know about the configuration option to take less space at startup (Esent RavenDB databases can go as low as 5MB, Voron RavenDB starts at 256Kb), but I also gave them a hearty recommendation to make sure that their I/O rates improved, because this isn’t going to be the only case where slow I/O will kill them.

Exercise, learning and sharpening skills

time to read 2 min | 235 words

The major portion of the RavenDB core team is co-located in our main office in Israel. That means that we get to do things like work together, bounce ideas off one another, etc.

But one of the things that I found stifling in other workplaces is a primary focus on the current work. I mean, the work is important, but one of the best ways to stagnate is to keep doing the same thing over and over again. Sure, you are doing great and can spin that yarn very well, but there is always that power loom that is going to show up any minute. So we try to do some skill sharpening.

There are a bunch of ways we do that. We try to have a lecture every week (given by one of our devs). The topics so far range from graph theory to distributed gossip algorithms to new frameworks and features that we deal with to the structure of the CPU and memory architecture.

We also have mini debuggathons. I’m not sure if the name fits, but basically, we present a particular problem, split into pairs, and try to come up with a root cause and a solution. This post is being written while everyone else in the office is busy looking at WinDBG and hunting down a memory leak issue; in the meantime, I’m posting to the blog and fixing bugs.

Troubleshooting, when F5 debugging can’t help you

time to read 6 min | 1054 words

You might have noticed that we have been doing a lot of work on the operational side of things, to make sure that we give you as good a story as possible with regard to the care & feeding of RavenDB. This post isn’t about that. This post is about your applications and systems, and how you are going to react when !@)(*#!@(* happens.

In particular, the question is what do you do when this happens?

This situation can crop up in many disguises. For example, you might be seeing high memory usage in production, or experiencing growing CPU usage over time, or seeing request times go up, or any of a hundred and one different production issues that make for a hell of a night (somehow, they almost always happen at night).

Here is how people usually think about it.

The first thing to do is to understand what is going on. About the hardest thing to handle in these situations is when we have an issue (high memory, high CPU, etc.) and no idea why. Usually all the effort is spent just figuring out the what and the why. The problem with this process for troubleshooting issues is that it is very easy to jump to conclusions and have an utterly wrong hypothesis. Then you have to go through the rest of the steps to realize it isn’t right.

So the first thing that we need to do is gather information. And this post is primarily about the various ways that you can do that. In RavenDB, we have actually spent a lot of time exposing information to the outside world, so we’ll have an easier time figuring out what is going on. But I’m going to assume that you don’t have that.

The end-all tool for this kind of error is WinDBG. This is the low level tool that gives you access to pretty much anything you could want. It is also very archaic and not very friendly at all. The good thing about it is that you can load a dump into it. A dump is a capture of the process state at a particular point in time. It gives you the ability to see the entire memory contents and all the threads. It is an essential tool, but also the last one I want to use, because it is pretty hard to do so. Dump files can be very big, multiple GB are very common, because they contain the full memory contents of the process. There are also mini dumps, which are easier to work with, but they don’t contain the memory contents, so you can see the threads, but not the data.

The .NET Memory Profiler is another great tool for figuring things out. It isn’t usually so good for production analysis, because it uses the Profiler API to figure things out, but it has a wonderful feature of loading dump files (ironically, it can’t handle very large dump files because of memory issues) and gives you a much nicer view of what is going on there.

For high CPU situations, I like to know what is actually going on. And looking at the stack traces is a great way to do that. WinDBG can help here (take a few mini dumps a few seconds apart), but again, that isn’t so nice to use.

Stack Dump is a tool that takes away a lot of the pain of having to deal with that, because it just outputs all the thread information; we have used it successfully in the past to figure out what is going on.

For general performance issues (“requests are slow”), we need to figure out where the slowness actually is. We have had reports that run the gamut from “things are slow, the client machine is loaded” to “things are slow, the network QoS settings throttle us”. I like to start with Fiddler to figure those things out. In particular, the statistics window is very helpful:

[image: Fiddler statistics window]

The obvious things are the bytes sent & bytes received. We have had a few cases where a customer was actually sending hundreds of MB in either or both directions, and was surprised it took some time. If those values are fine, you want to look at the actual performance listing. In particular, look at things like TCP/IP connect time, the time from the client sending the request to the server starting to receive it, etc.

If you find that the problem is actually at the network layer, you might not be able to immediately handle it. You might need to go a level or two lower, and look at the actual TCP traffic. This is where something like Wireshark comes into play; it is useful for figuring out if you have specific errors at that level (for example, a bad connection that causes a lot of packet loss will impact performance, but things will still work).

Other tools that are very important include Resource Monitor, Process Explorer and Process Monitor. Those give you a lot of information about what your application is actually doing.

Once you have all of that information, you can form a hypothesis and try to test it.

If you own the application in question, the best way to improve your chances of figuring out what is going on is to add logging. Lots & lots of logging. In production, having the logs to support what is going on is crucial. I usually have several levels of logging. For example, what is the traffic in/out of my system. Next, there are the actual system operations, especially anything that happens in the background. Finally, there are the debug/trace endpoints that will expose internal state and allow you to tweak various things at runtime.
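To make that a bit more concrete, here is a rough sketch of what those layers can look like. It is illustrative only; the interface and class names are invented, and in practice you would substitute your actual logging framework:

using System;
using System.Collections.Generic;

public interface ILogger
{
    void Info(string message);
    void Debug(string message);
}

public class RequestPipeline
{
    private readonly ILogger _trafficLog;     // level 1: traffic in/out of the system
    private readonly ILogger _operationsLog;  // level 2: system operations, background work
    private readonly Queue<string> _pending = new Queue<string>();

    // level 3: debug/trace endpoints that expose internal state on demand
    private readonly Dictionary<string, Func<string>> _debugEndpoints =
        new Dictionary<string, Func<string>>();

    public RequestPipeline(ILogger trafficLog, ILogger operationsLog)
    {
        _trafficLog = trafficLog;
        _operationsLog = operationsLog;
        _debugEndpoints["pending-requests"] = () => _pending.Count.ToString();
    }

    public void Handle(string request)
    {
        _trafficLog.Info("IN " + request.Length + " bytes");
        _pending.Enqueue(request);
        _operationsLog.Debug("Queued request for background processing");
    }

    public string GetDebugValue(string name)
    {
        return _debugEndpoints[name]();
    }
}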

Having a good working knowledge of how to properly use the tools mentioned above is very important, and should be considered much more useful than learning a new API or language feature.

Async event loops in C#

time to read 4 min | 656 words

I’m designing a new component, and I want to reduce the amount of complexity involved in dealing with it. This is a networked component, and after designing several such, I wanted to remove one area of complexity, which is the use of explicitly concurrent code. Because of that, I decided to go with the following architecture:

[image: architecture diagram — a network reader feeding an in-memory message queue, consumed by a single threaded event loop]

The network code is just reading messages from the network, and putting them in an in-memory queue. Then we have a single threaded event loop that simply goes over the queue and processes those messages.

All of the code that is actually processing messages is single threaded, which makes it oh so much easier to work with.

Now, I can do this quite easily with a BlockingCollection<T>, which is how I have usually done this sort of thing so far. It is simple, robust and easy to understand. It also ties up a full thread for the event loop, which can be a shame if you don’t get a lot of messages.

So I decided to experiment with async approaches. In particular, using the BufferBlock<T> from the DataFlow assemblies.

I came up with the following code:

var q = new BufferBlock<int>(new DataflowBlockOptions
{
    CancellationToken = cts.Token,
});

This just creates the buffer block, but the nice thing here is that I can set up a “global” cancellation token for all operations on it. The problem is that this actually generates bad exceptions (InvalidOperationException, instead of TaskCanceledException). Well, I’m not sure if bad is the right term, but it isn’t the one I would expect here, at least. If you pass a cancellation token directly to the method, you get the behavior I expected.

At any rate, the code for the event loop now looks like this:

private static async Task EventLoop(BufferBlock<object> bufferBlock, CancellationToken cancellationToken)
{
    while (true)
    {
        object msg;
        try
        {
            msg = await bufferBlock.ReceiveAsync(TimeSpan.FromSeconds(3), cancellationToken);
        }
        catch (TimeoutException)
        {
            NoMessagesInTimeout();
            continue;
        }
        catch (Exception)
        {
            break; // cancelled, or the block was completed
        }
        ProcessMessage(msg);
    }
}

And that is pretty much it. We have a good way to handle timeouts, and processing messages, and we don’t take up a thread. We can also be easily cancelled. I still need to run this through a lot more testing, in particular, to verify that this doesn’t cause issues when we need to debug this sort of system, but it looks promising.
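For completeness, here is a rough sketch of how the pieces could be wired together. This is my illustration rather than code from the post; ReadMessageAsync stands in for whatever the network layer actually does, and the shutdown code lives in an async method:

var cts = new CancellationTokenSource();
var bufferBlock = new BufferBlock<object>();

// The network reader just pumps messages into the in memory queue...
var readerTask = Task.Run(async () =>
{
    try
    {
        while (true)
        {
            var msg = await ReadMessageAsync(cts.Token); // hypothetical network read
            bufferBlock.Post(msg);
        }
    }
    catch (OperationCanceledException)
    {
        // shutdown was requested
    }
});

// ...and the single threaded event loop drains it.
var loopTask = EventLoop(bufferBlock, cts.Token);

// Later, when shutting down:
cts.Cancel();
await Task.WhenAll(readerTask, loopTask);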

DSLs in Boo, and a look back

time to read 2 min | 268 words

About 6 years ago, I started writing the DSLs in Boo book. It came out in 2010, and today I got an email saying that it is now officially out of print. It was never a hugely popular book, so I’m not really surprised, but it really got me thinking.

I got to build several DSLs for production during the time I was writing this book, but afterward, I pretty much pivoted hard to RavenDB, and didn’t do much with DSLs since. However, the knowledge acquired during the writing of this book has actually been quite helpful when writing RavenDB itself.

I’m not talking about the design aspects of writing a DSL, or the business decisions that are involved with that, although that is certainly a factor. I’m talking about the actual technical details of working with a language, a parser, etc.

In fact, you probably won’t see it, but RavenDB indexes and transformers are actually DSLs, and they use a lot of the techniques that I talk about in the book. We start with something that looks like C# code, but what ends up running is actually something far different. The LINQ provider, too, relies heavily on those same techniques. We show you one thing but actually do something quite different under the covers.
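To make the “looks like C#” point concrete, a RavenDB map index is defined as a string that reads like a LINQ query, but it is parsed, rewritten and compiled by the server rather than executed as-is. The index below is invented for illustration, not taken from RavenDB’s samples:

// The map definition reads like C# LINQ, but the server treats it as a DSL:
// it is parsed, rewritten and compiled before it ever runs.
var map = @"
    from order in docs.Orders
    select new
    {
        order.Company,
        Total = order.Lines.Sum(l => l.Quantity * l.PricePerUnit)
    }";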

It is interesting to see how the actual design of RavenDB was influenced by my own history and the choices I made in various places. If I wasn’t well versed in abusing a language, I would probably have had to go with something like CouchDB’s views, for example.

Windows Overlapped I/O and TPL style programming

time to read 23 min | 4490 words

I really like the manner in which C# async tasks work. And while building Voron, I ran into a scenario in which I could really make use of the Windows async API. This is exposed via Overlapped I/O. The problem is that those are pretty different models, and they don’t appear to want to play together very nicely.

Since I don’t feel like having those two cohabitate in my codebase, I decided to see if I could write a TPL wrapper that would provide a nice API on top of the underlying Overlapped I/O implementation.

Here is what I ended up with:

public unsafe class Win32DirectFile : IDisposable
{
    private readonly SafeFileHandle _handle;

    public Win32DirectFile(string filename)
    {
        _handle = NativeFileMethods.CreateFile(filename,
            NativeFileAccess.GenericWrite, NativeFileShare.None, IntPtr.Zero,
            NativeFileCreationDisposition.CreateAlways,
            NativeFileAttributes.Write_Through | NativeFileAttributes.NoBuffering | NativeFileAttributes.Overlapped, IntPtr.Zero);

        if (_handle.IsInvalid)
            throw new Win32Exception();

        if (ThreadPool.BindHandle(_handle) == false)
            throw new InvalidOperationException("Could not bind the handle to the thread pool");
    }

Note that I create the file with overlapped enabled, as well as write_through & no buffering (I need them for something else, not relevant for now).

It is important to note that I bind the handle (which effectively issues a BindIoCompletionCallback under the covers, I think), so we won’t have to use events, but can use callbacks. This is a much more natural way to work when using the TPL.

Then, we can just issue the actual work:

public Task WriteAsync(long position, byte* ptr, uint length)
{
    var tcs = new TaskCompletionSource<object>();

    var nativeOverlapped = CreateNativeOverlapped(position, tcs);

    uint written;
    var result = NativeFileMethods.WriteFile(_handle, ptr, length, out written, nativeOverlapped);

    return HandleResponse(result, nativeOverlapped, tcs);
}

As you can see, all the actual details are handled in the helper functions, we can just run the code we need, passing it the overlapped structure it requires. Now, let us look at those functions:

private static NativeOverlapped* CreateNativeOverlapped(long position, TaskCompletionSource<object> tcs)
{
    var o = new Overlapped((int) (position & 0xffffffff), (int) (position >> 32), IntPtr.Zero, null);
    var nativeOverlapped = o.Pack((code, bytes, overlap) =>
    {
        try
        {
            switch (code)
            {
                case ERROR_SUCCESS:
                    tcs.TrySetResult(null);
                    break;
                case ERROR_OPERATION_ABORTED:
                    tcs.TrySetCanceled();
                    break;
                default:
                    tcs.TrySetException(new Win32Exception((int) code));
                    break;
            }
        }
        finally
        {
            Overlapped.Unpack(overlap);
            Overlapped.Free(overlap);
        }
    }, null);
    return nativeOverlapped;
}

private static Task HandleResponse(bool completedSynchronously, NativeOverlapped* nativeOverlapped, TaskCompletionSource<object> tcs)
{
    if (completedSynchronously)
    {
        Overlapped.Unpack(nativeOverlapped);
        Overlapped.Free(nativeOverlapped);
        tcs.SetResult(null);
        return tcs.Task;
    }

    var lastWin32Error = Marshal.GetLastWin32Error();
    if (lastWin32Error == ERROR_IO_PENDING)
        return tcs.Task;

    Overlapped.Unpack(nativeOverlapped);
    Overlapped.Free(nativeOverlapped);
    throw new Win32Exception(lastWin32Error);
}

The complexity here is that we need to handle 3 cases:

  • Successful completion (the write finished synchronously)
  • Error (the operation failed outright, with no pending work)
  • “Error” that is actually success (ERROR_IO_PENDING: the work will complete asynchronously)

But that seems to be working quite nicely for me so far.
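To show how this would be consumed, here is a rough usage sketch of mine (not from the post). Note that unsafe code can’t appear inside an async method, so the pointer handling lives in a synchronous method that hands back the Task; real NoBuffering writes also need sector-aligned buffers, offsets and lengths, which this sketch glosses over:

public static unsafe Task WritePageAsync(Win32DirectFile file, byte[] data, long position)
{
    // Copy the managed buffer to unmanaged memory so the pointer stays valid for
    // the duration of the overlapped write. (A real implementation would use a
    // sector-aligned buffer and length, because the file was opened with NoBuffering.)
    IntPtr buffer = Marshal.AllocHGlobal(data.Length);
    Marshal.Copy(data, 0, buffer, data.Length);

    Task write = file.WriteAsync(position, (byte*)buffer, (uint)data.Length);

    // Free the unmanaged memory once the I/O completes, whether it succeeded or failed.
    write.ContinueWith(_ => Marshal.FreeHGlobal(buffer),
        TaskContinuationOptions.ExecuteSynchronously);

    return write;
}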

Cache, it ain’t just remembering stuff

time to read 5 min | 984 words

I mentioned that this piece of code has an issue:

public class LocalizationService
{
    MyEntities _ctx;
    Cache _cache;

    public LocalizationService(MyEntities ctx, Cache cache)
    {
        _ctx = ctx;
        _cache = cache;
        Task.Run(() =>
        {
            foreach(var item in _ctx.Resources)
            {
                _cache.Set(item.Key + "/" + item.LanguageId, item.Text);
            }
        });
    }    

    public string Get(string key, string languageId)
    {
        var cacheKey = key +"/" + languageId;
        var item = _cache.Get(cacheKey);
        if(item != null)
            return item;

        item = _ctx.Resources.Where(x=>x.Key == key && x.LanguageId == languageId).SingleOrDefault();
        _cache.Set(cacheKey, item);
        return item;
    }
}

And I am pretty sure that many of you will be able to find additional issues that I’ve not thought about.

But there are at least three major issues in the code above. It doesn’t do anything to solve the missing value problem, it doesn’t handle expiring values well, and it has no way to handle changing values.

Look at the code above, and assume that I am making continuous calls to Get(“does not exist”, “nh-YI”), or something like that. The way the code is currently written, every such call will hit the database looking for that value.

The second problem is that if we have had a cache cleanup run, which expired some values, we will actually load them one at a time, in pretty much the worst possible way from the point of view of performance.

Then we have the problem of how to actually handle updating values.

Let us see how we can at least approach this. We will replace the Cache with a ConcurrentDictionary. That will mean that the data cannot just go away from under us, and since we expect the number of resources to be relatively low, there is no issue in holding all of them in memory.

Because we know we hold all of them in memory, we can be sure that if the value isn’t there, it isn’t in the database either, so we can immediately return null, without checking with the database.

Last, we will add a StartRefreshingResources task, which will do the actual refreshing in an async manner. In other words:

public class LocalizationService
{
    MyEntities _ctx;
    ConcurrentDictionary<Tuple<string,string>,string> _cache = new ConcurrentDictionary<Tuple<string,string>,string>();

    Task _refreshingResourcesTask;

    public LocalizationService(MyEntities ctx)
    {
        _ctx = ctx;
        StartRefreshingResources();
    }

    public void StartRefreshingResources()
    {
        _refreshingResourcesTask = Task.Run(() =>
        {
            foreach(var item in _ctx.Resources)
            {
                _cache[Tuple.Create(item.Key, item.LanguageId)] = item.Text;
            }
        });
    }

    public string Get(string key, string languageId)
    {
        var cacheKey = Tuple.Create(key, languageId);
        string item;
        _cache.TryGetValue(cacheKey, out item);
        if(item != null || _refreshingResourcesTask.IsCompleted)
            return item;

        item = _ctx.Resources
            .Where(x => x.Key == key && x.LanguageId == languageId)
            .Select(x => x.Text)
            .SingleOrDefault();
        _cache[cacheKey] = item;
        return item;
    }
}

Note that there is a very subtle thing going on here. As long as the async process is running, if we can’t find the value in the cache, we will go to the database to find it. This gives us a good balance between stopping the system entirely for startup/refresh and having the values immediately available.

Optimizing the space & time matrix

time to read 6 min | 1075 words

The following method comes from the nopCommerce project. Take a moment to read it.

public virtual string GetResource(string resourceKey, int languageId,
    bool logIfNotFound = true, string defaultValue = "", bool returnEmptyIfNotFound = false)
{
    string result = string.Empty;
    if (resourceKey == null)
        resourceKey = string.Empty;
    resourceKey = resourceKey.Trim().ToLowerInvariant();
    if (_localizationSettings.LoadAllLocaleRecordsOnStartup)
    {
        //load all records (we know they are cached)
        var resources = GetAllResourceValues(languageId);
        if (resources.ContainsKey(resourceKey))
        {
            result = resources[resourceKey].Value;
        }
    }
    else
    {
        //gradual loading
        string key = string.Format(LOCALSTRINGRESOURCES_BY_RESOURCENAME_KEY, languageId, resourceKey);
        string lsr = _cacheManager.Get(key, () =>
        {
            var query = from l in _lsrRepository.Table
                        where l.ResourceName == resourceKey
                        && l.LanguageId == languageId
                        select l.ResourceValue;
            return query.FirstOrDefault();
        });

        if (lsr != null) 
            result = lsr;
    }
    if (String.IsNullOrEmpty(result))
    {
        if (logIfNotFound)
            _logger.Warning(string.Format("Resource string ({0}) is not found. Language ID = {1}", resourceKey, languageId));
        
        if (!String.IsNullOrEmpty(defaultValue))
        {
            result = defaultValue;
        }
        else
        {
            if (!returnEmptyIfNotFound)
                result = resourceKey;
        }
    }
    return result;
}

I am guessing, but I assume that the intent here is to trade off between startup time and system responsiveness. If you have LoadAllLocaleRecordsOnStartup set to true, it will load all the data from the database and access it from there. Otherwise, it will load the data one piece at a time.

That is nice, but it exposes just a single tradeoff, and that isn’t a really good idea. Not only that, but look at how it uses the cache. There are separate cache entries for resources loaded via GetAllResourceValues() vs. loaded individually. That leaves the cache with far fewer options when it needs to evict something: if it decides it can remove that single “all resources” entry, rebuilding it means a very expensive and long query.

Instead, we can do it like this:

public class LocalizationService
{
    MyEntities _ctx;
    Cache _cache;

    public LocalizationService(MyEntities ctx, Cache cache)
    {
        _ctx = ctx;
        _cache = cache;
        Task.Run(() =>
        {
            foreach(var item in _ctx.Resources)
            {
                _cache.Set(item.Key + "/" + item.LanguageId, item.Text);
            }
        });
    }    

    public string Get(string key, string languageId)
    {
        var cacheKey = key +"/" + languageId;
        var item = _cache.Get(cacheKey);
        if(item != null)
            return item;

        item = _ctx.Resources.Where(x=>x.Key == key && x.LanguageId == languageId).SingleOrDefault();
        _cache.Set(cacheKey, item);
        return item;
    }
}

Of course, this has a separate issue, but I’ll discuss that in my next post.

Diagnosing RavenDB problem in production

time to read 2 min | 208 words

We have run into a problem in our production system. Luckily, this is a pretty obvious case of misdirection.

Take a look at the stack trace that we have discovered:

[image: the stack trace in question]

The interesting bit is that this is an “impossible” error. In fact, this is the first time that we have actually seen this error ever.

But looking at the stack trace tells us pretty much everything. The error happens in Dispose, but that one is called from the constructor. Because we are using native code, we need to make sure that an exception in the ctor will properly dispose all unmanaged resources.

The code looks like this:

[image: the constructor and Dispose code]

And here we can see the probable cause of the error. We try to open a transaction, an exception happens, and Dispose is called, but it isn’t ready to handle this scenario, so it throws.

The original exception is masked, and you are left with a hard debugging scenario.
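Since the screenshots aren’t available here, a minimal sketch of the shape of the problem (my reconstruction, with invented names, not the actual RavenDB code) looks roughly like this:

using System;
using System.IO;

public class TransactionalStorage : IDisposable
{
    private readonly IDisposable _journal;    // stands in for an unmanaged resource
    private IDisposable _currentTransaction;  // only assigned if we got that far

    public TransactionalStorage(string path)
    {
        try
        {
            _journal = File.OpenWrite(path);          // acquire something that must be released
            _currentTransaction = BeginTransaction(); // throws - the exception we actually care about
        }
        catch
        {
            Dispose(); // release unmanaged resources even when the ctor fails
            throw;
        }
    }

    private static IDisposable BeginTransaction()
    {
        throw new InvalidOperationException("Cannot begin transaction"); // simulates the original failure
    }

    public void Dispose()
    {
        // Bug: Dispose assumes a fully constructed object. When called from the failing
        // ctor, _currentTransaction is still null, so this throws a NullReferenceException
        // and masks the InvalidOperationException that actually caused the problem.
        _currentTransaction.Dispose();
        _journal.Dispose();
    }
}

The fix is to make Dispose tolerant of a partially constructed object (a simple null check before disposing _currentTransaction), so that the original exception is the one that bubbles up.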
