Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969


Protocol design implications: REST vs. TCP

time to read 3 min | 444 words

I was going over design documents today, and I noticed some common themes in the changes that we have between RavenDB 3.5 and RavenDB 4.0.

With RavenDB 3.5 (and all previous versions), we always had the communication layer between nodes as HTTP REST calls. When I designed RavenDB, REST was the thing to do, and that is reflected in the design of RavenDB itself. However, 8 years later, we sat down and considered whether this was really appropriate for everything. The answer was a resounding no. In fact, while over 95% of RavenDB is still pure REST calls, we have moved certain key functions to using TCP directly.

Note that this stands in direct contrast to this post of mine from 2012: Why TCP is evil and HTTP is king.

The concerns in that post are still valid, but we have found that there are a few major reasons why we want to switch to TCP for certain things. In particular, the basic approach is that a client will communicate with the server using HTTP calls, but servers communicate with one another using TCP. The great thing about TCP is that it is a stream oriented protocol, so I don’t need to carry state with me on every call.

With HTTP, each call is stateless, and I can’t assume anything about the other side. That means that I need to send the state, manage the state on the other side, and deal with potential issues such as concurrency within the same conversation, restarts of one side that the other side can’t easily detect, repeated validation on each call, etc.

With TCP, on the other hand, I can make a lot of assumptions about the conversation. I have state that I can carry between calls to the other side, and as long as the TCP connection is open, I can assume that it is valid. For example, if I need to know what the last item I sent to the remote end was, I can query that at the beginning of the TCP connection, as part of the handshake, and then I can just assume that whatever I sent to the other side has arrived (since otherwise I’ll eventually get an error, requiring me to create a new TCP connection and do another handshake). On the other side, I can verify the integrity of a connection once, without having to repeatedly verify our mutual state on each and every message being passed.
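To make that concrete, here is a minimal sketch of the pattern (illustrative only, not RavenDB’s actual wire protocol; the peer address, the message format and the ItemsAfter helper are all invented):

    // inside some async method; using System.IO and System.Net.Sockets
    using (var client = new TcpClient())
    {
        await client.ConnectAsync("node-b", 9090);
        var stream = client.GetStream();
        var reader = new StreamReader(stream);
        var writer = new StreamWriter(stream) { AutoFlush = true };

        // handshake: agree on the shared state a single time per connection
        await writer.WriteLineAsync("REPLICATION Northwind");
        long lastEtagTheyHave = long.Parse(await reader.ReadLineAsync());

        // from here on we just stream; as long as the connection stays open we
        // assume delivery, and any error forces a reconnect and a new handshake
        foreach (var item in ItemsAfter(lastEtagTheyHave)) // hypothetical helper
            await writer.WriteLineAsync(item);
    }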

This has drastically simplified a lot of code on both the sending and receiving ends, and reduced the number of network roundtrips by a significant amount.

Building a low level trie with Rust: Part I

time to read 3 min | 481 words

Before getting to grips with a distributed gossip system in Rust, I decided that it would be better to look at something a bit more challenging, but smaller in scope. I decided to implement the low level trie challenge in Rust.

This is interesting, because it is a complex enough problem to require thinking, even for experienced developers, but at the same time it isn’t really complex, it just has a lot of details. It also requires us to do a lot of low level stuff and manipulate memory directly, so it is an interesting test for a systems level programming language.

On the one hand, even with just a few hours with Rust, I can see some elegance coming out of certain pieces.  For example, take a look at the following code:
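The original snippet was an embedded image, so here is a rough sketch of the shape it described (Node’s internals here are invented for illustration):

    struct Node {
        key: Vec<u8>,
        children: Vec<Node>,
        value: Option<i64>,
    }

    enum FindResult<'a> {
        Found(&'a Node),
        // on a miss, hand back the closest node so the caller can continue
        // the operation directly from there
        Closest(&'a Node),
    }

    fn find_match<'a>(node: &'a Node, key: &[u8]) -> FindResult<'a> {
        for child in &node.children {
            if key.starts_with(&child.key) {
                return find_match(child, &key[child.key.len()..]);
            }
        }
        if key.is_empty() && node.value.is_some() {
            FindResult::Found(node)
        } else {
            FindResult::Closest(node)
        }
    }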

This is responsible for searching the trie for a value, and I like that the find_match function traverses the tree and allows me to return both an enum value and the closest match when it fails (so I can continue the process directly from there).

On the other hand, we have pieces of code like this:

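Something along these lines (a sketch; NodeHeader and the offset handling are invented):

    #[repr(C)]
    struct NodeHeader {
        key_size: u16,
        flags: u16,
    }

    fn node_at(buffer: &mut [u8], offset: u32) -> &mut NodeHeader {
        // four casts in a single line, just to get a typed view over raw bytes
        unsafe { &mut *((buffer.as_mut_ptr() as usize + offset as usize) as *mut u8 as *mut NodeHeader) }
    }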

And any line that has four casts in it is already suspect. And as I’m dealing with the raw memory, I have quite a bit of this.

And I am certainly feeling the pain of the borrow checker. Here is where I’m currently stumped.

This is a small and simple example that shows the issue. It fails to compile:

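A minimal reconstruction of the problem (MyTrie’s fields are placeholders; the find / delete_internal split follows the description below):

    struct MyTrie {
        nodes: Vec<u8>,
    }

    impl MyTrie {
        fn find(&self, key: &[u8]) -> &u8 {
            &self.nodes[key.len() % self.nodes.len()]
        }

        fn delete_internal(&mut self, node: &u8) {
            // mutate the trie based on what find returned...
        }

        fn delete(&mut self, key: &[u8]) {
            let node = self.find(key); // immutable borrow of *self starts here
            self.delete_internal(node); // and here we need *self mutably as well
        }
    }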

I have a method that takes a mutable MyTrie reference and passes it to a method that expects an immutable reference. This is fine, and would work. But I need to use the value from the find method in the delete_internal method, which again needs a mutable instance. And this fails with:

error[E0502]: cannot borrow `*self` as mutable because it is also borrowed as immutable

I understand the problem, but I am not really sure how to solve it. The problem is that I kinda want the find method to remain immutable, since it is also used in the read method, which can run on immutable instances. Technically speaking, I could copy the values that I want out of the node reference and use a lexical scope to force the immutable borrow to end, but I’m not sure yet what the best option would be.

It seems like a lot of work to get what I want in spite of, rather than with the help of, the compiler.

Initial design for strong encryption in RavenDB 4.0

time to read 7 min | 1322 words

The previous post generated some great discussion, and we have done a bit of research in the meantime about what is going to be required in order to provide strong encryption support in RavenDB.

Note: I’m still no encryption expert. I’m basing a lot of what I have here on reading libsodium code and docs.

The same design goals that we had before still hold. We want to encrypt the data at the page level, but it looks like it is going to be impossible to just encrypt the whole page. The reason behind that is that encryption is actually a pure mathematical operation, and given the same input text and the same key, it is always going to generate the same value. Using that, you can mount certain attacks on the data by exploiting the sameness of the data, even if you don’t actually know what it is.

In order to prevent that, you would use an initialization vector or nonce (they seem to be pretty similar, with the details being relevant only with regard to the randomness requirements they have). At any rate, while I initially hoped that I could just use a fixed value per page, that is a big “nope, don’t do that”. So we need some place to store that information.

Another thing that I ran into is the problem of modifying the encrypted text in order to generate data that can be successfully decrypted but is different from the original plain text. A nice example of that can be seen here (see the section: How to Attack Unauthenticated Encryption). So we probably want protection against that as well.

This is not possible with the current Voron format. Luckily, one of the reasons we built Voron is so we can get it to do what we want. Here is what a Voron page will look like after this change:

  Voron page: 8 KB in size, 64 bytes header
+-------------------------------------------------------------------------+
|Page # 64 bits|Page metadata up to 288 bits  |mac 128 bits| nonce 96 bits|
+-------------------------------------------------------------------------+
|                                                                         |
|  Encrypted page information                                             |
|                                                                         |
|       8,128 bytes                                                       |
|                                                                         |
|                                                                         |
+-------------------------------------------------------------------------+

The idea is that when we need to encrypt a page, we’ll do the following:

  • The first time we need to encrypt the page, we’ll generate a random nonce. Each time that we encrypt the page, we’ll increment the nonce.
  • We’ll encrypt the page information and put it in the page data section.
  • As well as encrypting the data, we’ll also sign both it and the rest of the page header, and place that in the mac field.

The idea is that modifying either the encrypted information or the page metadata will generate an error because the tampering will be detected.
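As a sketch, sealing a page might look something like this (crypto_aead_chacha20poly1305_encrypt_detached is the real libsodium entry point discussed below; the offsets, sizes and the IncrementNonce helper are assumptions based on the diagram above):

    [DllImport("libsodium")]
    private static unsafe extern int crypto_aead_chacha20poly1305_encrypt_detached(
        byte* c, byte* mac, ulong* macLen,
        byte* m, ulong mLen,
        byte* ad, ulong adLen,
        byte* nsec, byte* nPub, byte* key);

    private const int HeaderSize = 64;   // from the diagram above
    private const int DataSize = 8128;   // 8 KB page minus the header

    private static unsafe void EncryptPage(byte* page, byte* key, byte* mac, byte* nonce)
    {
        IncrementNonce(nonce); // assumed helper: fresh nonce value on every write
        ulong macLen = 0;
        // encrypt the data section in place; the page # and metadata go in as
        // additional data, so tampering with the header fails authentication too
        crypto_aead_chacha20poly1305_encrypt_detached(
            page + HeaderSize, mac, &macLen,
            page + HeaderSize, DataSize,
            page, HeaderSize - 28, // assumed: header minus the mac (16) and nonce (12) bytes
            null, nonce, key);
    }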

This is pretty much it as far as the design of the actual data encryption goes. But there is more to it.

Voron uses a memory mapped file to store the information (actually, several, with pretty complex interactions, but that doesn’t matter right now). That means that if we want to decrypt the data, we probably shouldn’t be doing that on the memory mapped file’s memory. Instead, each transaction is going to set aside some memory of its own, and when it needs to access a page, it will be decrypted from the mmap file into that transaction’s private copy. During the transaction run, the information will be available in plain text for that transaction. When the transaction is over, that memory is going to be zeroed. Note that transactions in RavenDB tend to be fairly short term affairs. Because of that, each read transaction is going to get a small buffer to work with, and if more pages are accessed than allowed, it will replace the least recently used page with another one.

That leaves us with the problem of the encryption key. One option would be to encrypt all pages within the database with the same key, using the randomly generated nonce per page and then just incrementing that. However, that does leave us with the chance that two pages will be encrypted using the same key/nonce. That has a low probability, but it should be considered. We can try deriving a new key per page from the master key, but that seems… excessive. But it looks like there is another option: use a block cipher, where we pass a different block counter for each page.

This would require a minimal change to crypto_aead_chacha20poly1305_encrypt_detached, allowing us to pass a block counter externally, rather than have it as a constant. I asked the question with more details so I can have a more authoritative answer about that. If this isn’t valid, we’ll probably use a nonce that is composed of the page # and the number of changes that the page has gone through. This would limit us to about 2^32 modifications of the same page, though. It would also limit a single database file to a mere 0.5 exabytes rather than 128 zettabytes, but somehow I think we can live with that.

This just leaves us with the details of key management. On Windows, this is fairly easy. We can use CryptProtectData / CryptUnprotectData to protect the key. A transaction will start by getting the key, doing its work, then zeroing all the pages it touched (and decrypted) and its key. This way, if there are no active transactions, there is no plaintext key in memory. On Linux, we can apparently use Libsecret to do this, although it seems to have a much higher cost.

Strong data encryption questions

time to read 3 min | 428 words

With RavenDB 4.0, we are looking to strengthen our encryption capabilities. Right now RavenDB is capable of encrypting document data and the contents of indexes at rest. That is, if you look at the disk, the data is securely encrypted. However, in memory, we keep quite a bit of information in plain text (mostly in caches of various kinds), and the document metadata isn’t encrypted, so document keys are visible.

With RavenDB 4.0 we are looking into making some stronger guarantees. That means that we want to keep all data encrypted on disk, and only decrypt it during a transaction, after which it will immediately be encrypted again.

Now, encryption and security in general are pretty big fields, and I’m by no means an expert, so I thought that I would outline the initial goals of our research and see if you have anything to add.

  • All encryption / decryption operations are done on data that is aligned on a 4KB boundary and is always in multiples of 4KB. It would be extremely helpful if the encryption did not change the size of the data. Given that the data is always in 4KB increments, I don’t think that this is going to be an issue.
  • We can’t use managed API to do so. Out data is actually residing in unmanaged memory, so ideally we would need something like this:

  • I also need to be able to call this from C#, and it needs to run on Windows, Linux and hopefully Mac OS.
  • I’ve been looking at stuff like this page, trying to understand what it means and hoping that this is actually using best practices for safety.

Another problem is that just getting the encryption code right doesn’t help without managing all the rest of it properly. Selecting the appropriate algorithm and mode, making sure that the library we use is both well known and respected, etc. How do we distribute / deploy / update it over multiple platforms?

Any recommendations?

You can see some sample code that I have made here: https://gist.github.com/ayende/13b206b9d83e7aa126df77d6b12711f3

This is basically the OpenSSL sample translated to C# with a bit of P/Invoke. Note that this is meant for our own use, so we don’t need padding, since we always pass a buffer that is a multiple of 4KB.

I'm assuming that since this is based on the example on the OpenSSL wiki, it is also a best practice sample. There is a chance that I am mistaken, however, which is why we have this post.

Business features vs. technical features

time to read 4 min | 613 words

A feature is something that your application/service does. Usually we don’t give it a lot more thought, but I recently had an interesting discussion about the exact distinctions between a business feature and a technical feature.

Let us imagine that we are talking about an application that allows sending snail mail; we have already seen it before. A user will call the API, and a few days later a physical letter will show up at your door. So far, it is pretty simple. The question is, what can you offer in addition, to expand the business?

For example, we might offer:

  • Mail tracking – providing a way to ensure that the recipient actually got the letter.
  • Snail mail to email – receiving physical mail on the customer’s behalf, and having it scanned and sent to the customer.

Those two are obvious extensions to the core business, and from the point of view of the business, that is great. From a technical perspective, it is a whole lot of complexity. You need to integrate with FedEx to handle the mail tracking, and you need to set up some sort of an automated system that will sort the mail, scan it and upload it to the customer’s account.

The problem is that at this point, you don’t really know what kind of reaction those features are going to have. They are both non-trivial, in some cases require major capital expenditure to implement, and are pretty hard to properly size upfront.

So you split it. Instead of doing this as a single feature, you have a business feature and a technical feature. A business feature means that your business offers this service; building that requires research to show that we can actually offer it, checking whether there are legal ramifications (some mail can be sensitive, privacy concerns, etc.), checking what kind of pricing we can charge, etc. The technical feature is actually implementing all of that.

But the key observation here is that you don’t actually do the technical implementation, at least not just yet. You do the work around the business end of the feature, and then you announce this feature’s availability. As in, right now you can track the snail mail, or right now you can get your mail scanned and uploaded. This is done with minimal technical work in the backend, and with the caveat that this is still experimental and pricing might change.

This isn’t cheating, mind you. Once you announced this feature, you wait to see what kind of reaction we’ll have. One of the options is that users will really love this feature, and start immediately using it. In this case, you have a good problem, people are flocking to give you money. In the meantime, you have Joe and Samantha, from the local high school working for minimum wage in the afternoon to manually do the work. So you can complete the customer expectations, as you are now working to complete the technical side and automate the whole thing (firing Joe and Samantha along the way).

The key here is that you don’t have to do any major upfront investment, in development or in facilities, before you can have this feature. And most of the time, even if it is a major feature, the ramp up time is enough for you to have a pretty good idea about what you actually need to do. And in the meantime, you have a microservice architecture, it is just that the services aren’t called FedExTrackingService and ScanAndSortPhysicalMailService but Joe and Samantha.

In other words, you Mechanical Turk the feature until you can teach your system to properly play chess.

Writing my own synchronization primitive: ReaderWriterLock

time to read 3 min | 433 words

We were running an investigation into our locking costs when we noticed how prominently ReaderWriterLockSlim showed up in the profiler.

And then we realized that we also need to hold the reader lock across async/await calls (which is not safe to do with ReaderWriterLockSlim). And as long as we need to do this, we might as well customize it for our needs.

We need something that has as little cost as possible, allows us to enter the read lock on one thread and exit it on another, and handles contention well. Our scenario in this case is producers holding the read lock while they generate work, and then a flusher that goes and cleans this up while holding the write lock.

This means that we also need to consider that we have only two states (readers & single writer), and that the writer will likely hold the lock for a significant amount of time.

Reading the code for ReaderWriterLockSlim is quite instructive, and I got some really cool ideas from it.

Here is the most important piece in my implementation:

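As a sketch (the exact bit layout in the real code may differ; see the linked implementation):

    private int _waiters; // packs the reader count and a writer flag together

    private const int WriterBit = 1 << 30;         // set while a writer wants or holds the lock
    private const int ReadersMask = WriterBit - 1; // the low bits count active readers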

The _waiters variable holds the key state for the lock. Here is how we take a reader lock:

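Sketched, the read path looks roughly like this (names are mine; the real method is in the linked implementation):

    public bool TryEnterReadLock()
    {
        // a single interlocked increment registers us as a reader...
        var state = Interlocked.Increment(ref _waiters);

        // ...then a mask and a comparison: no writer pending means we are in
        if ((state & WriterBit) == 0)
            return true;

        // a writer is pending; back out and retry after the next lock release
        Interlocked.Decrement(ref _waiters);
        return false;
    }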

We increment _waiters using an interlocked operation, and then check if we have a writer pending; if not, we are done and we have the read lock.

This means that the cost of taking a read lock is simply a single interlocked instruction, a mask and a comparison. That is pretty cheap.

To take a write lock is just a bit more complex:

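Roughly (again a sketch, assuming a single writer candidate, which matches the single-flusher scenario above):

    public void EnterWriteLock()
    {
        // declare intent: once the writer bit is set, future readers back off
        var state = Interlocked.Add(ref _waiters, WriterBit);

        // now wait for the readers that are already inside to drain out
        while ((state & ReadersMask) != 0)
        {
            Thread.Yield(); // stand-in for the real wait-for-release mechanism
            state = Volatile.Read(ref _waiters);
        }
    }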

We declare the fact that we want to be a writer using Interlocked.Add, which will block future readers from taking the lock, and then we try to get the lock for the writer. If we fail at either step, we just wait and try again after the next lock release.

In other words, the cost for uncontended read is 1 interlocked operation, and the cost for uncontended write is two interlocked operations. Which is pretty cool, all around.

The actual implementation can be found here, and it handles cases such as too many reader locks, and all the rest of the actual details.

10x speedup utilizing Nagle Algorithm in business application

time to read 4 min | 793 words

The Nagle algorithm is a pretty annoying thing. Basically, it means that when we write to the network, the TCP stack will wait a bit to see if we have more stuff to send to that destination before actually emitting the packet. Most people run into this when they wonder why the minimum time for a remote operation is 200ms, even on the local network. Then you figure out how to disable the Nagle algorithm, and you are happy again.

The Nagle algorithm was designed for remote terminals, where the speed difference between a human typing and the machine sending packets was big enough that each single letter you typed would be sent as a separate packet. That led to really high overhead (40 bytes of overhead to send just a single byte to the server), and the number of packets sent might be high enough to choke the pipe. So you buffer it. The basic algorithm goes like this: if we don’t have enough data to send a full packet, and 200 ms haven’t passed since the first buffered data, wait up to 200 ms for more data. In this manner, if you type fast enough, you will send more than a single character to the server, dramatically reducing the cost of talking with the server and speeding up everything significantly.

So that is the Nagle algorithm. It is a pretty low level TCP detail, and often overlooked by people. If you don’t study networks, you’ll typically only find out about it when you have a perf problem that you can’t figure out. So how does this relate to business applications?

Imagine that you work on a system that does snail mail sending. A user will call the API and then a few days later a physical letter will show up at your door. Nothing really special about that, right? Except that we charge users based on the plan they choose. For simplicity’s sake, we’ll say that we have two plans:

  • Pay as you go – can send as many letters as they want, and we charge a certain amount for each.
  • Budgeted plan – can send up to a certain number of letters per month.

In either case, it is pretty important to us to record that the mail was sent, and if the user is on a budget plan (or has spending alerts on his account), we need to respond to it in certain ways.

Overall, there is nothing really surprising here, and the code to handle this is pretty simple, right?
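Something along these lines (a sketch; the service and type names are invented):

    public async Task RecordMail(string customerId, MailDetails mail)
    {
        // every single letter pays a full roundtrip to the billing service
        var result = await _billingService.RecordMailAsync(customerId, mail);

        if (result.OverBudget || result.SpendingAlertTriggered)
            await NotifyCustomerAsync(customerId, result);
    }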

The problem is that under load, we’re going to have quite a few requests going to the billing service, and that is likely to be a bottleneck for us. Note that this is the case because while processing a single RecordMail request is pretty fast, we are actually going to have to wait for both the actual processing and the back and forth between the machines. If the cost of processing a RecordMail request is 1 ms, and the cost of going over the network is 0.5 ms, that adds up quickly.

Now, let us see how we can do it better. We’ll start with moving from making the call immediately to placing the details in a queue.

We use a task completion source to wait for the result of the call. Next, we need to actually handle the queued requests; this is done in a flush loop, along these lines:
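Here is a rough sketch of both sides (RecordMailRequest, IBillingService and RecordBatchAsync are invented names; the real code differs):

    // using System.Collections.Concurrent and System.Threading.Tasks
    private readonly ConcurrentQueue<(RecordMailRequest Req, TaskCompletionSource<bool> Tcs)> _pending =
        new ConcurrentQueue<(RecordMailRequest Req, TaskCompletionSource<bool> Tcs)>();

    public Task RecordMailAsync(RecordMailRequest req)
    {
        var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending.Enqueue((req, tcs));
        return tcs.Task; // completes when the batch containing this request is acknowledged
    }

    private async Task FlushLoopAsync(IBillingService billing)
    {
        var batch = new List<(RecordMailRequest Req, TaskCompletionSource<bool> Tcs)>();
        while (true)
        {
            // Nagle-style: gather up to 256 requests, or wait at most 200 ms
            var deadline = DateTime.UtcNow.AddMilliseconds(200);
            while (batch.Count < 256 && DateTime.UtcNow < deadline)
            {
                if (_pending.TryDequeue(out var item))
                    batch.Add(item);
                else
                    await Task.Delay(1); // simplified; a real loop would use a proper signal
            }

            if (batch.Count == 0)
                continue;

            try
            {
                // one roundtrip for the whole batch, instead of one per request
                await billing.RecordBatchAsync(batch.ConvertAll(x => x.Req));
                batch.ForEach(x => x.Tcs.TrySetResult(true));
            }
            catch (Exception e)
            {
                batch.ForEach(x => x.Tcs.TrySetException(e));
            }
            batch.Clear();
        }
    }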

What this ends up doing is reducing the network costs. Instead of going to the server once for every request, we’ll go to the server once per 200 ms or once every 256 requests. That dramatically reduces network traffic. And the cost of actually sending 256 requests instead of just one isn’t really that different. Note that this gives us a higher latency per request, since we may have to wait longer for the request to go out to the server, but it also gives us much better throughput, because we can process more requests in a shorter amount of time.

A change like that can push your performance up by an order of magnitude or more. It also gives you economy of scale benefits on the other side. Instead of having to do a fixed amount of work around processing each request, we can now do it over a whole bunch of them.

For example, if I need to load the customer entity, instead of doing it once per mail, I can do that once per batch, and it is likely that this customer will be in the batch more than once, reducing the amount of work I have to do.

Optimizing read transaction startup time: Racy data structures

time to read 5 min | 849 words

Finding the appropriate image for this post was hard; try searching for “racy pictures” in Google Image Search, but you might not want to do it from work.

Anyway, today at lunch we had a discussion about abstractions and at what level you should be working. The talk centered on the difference between working in low level C and working with a high level framework like C#, and the relative productivity associated with each.

At one point the following argument was raised: “Well, consider the fact that you never need to implement List, for example”. To which my reaction was: “I did just that last week”.

Now, to forestall the nitpickers, pretty much any C developer will have an existing library of common data structures already in place, I know. And no, you shouldn’t be implementing basic data structures unless you have a really good reason.

In my case, I think I did. The issue is very simple. I need to have a collection of items that are safe for multi threaded reads, but they are mostly only ever accessed from a single thread, and are only ever modified by a single thread. Oh, and they are also extremely performance sensitive.

The reason we started looking into replacing them is that the concurrent data structures that we were using (ConcurrentDictionary & ConcurrentStack, in those cases) were too expensive. And a large part of that was because they gave us a lot more than what we actually needed (fully concurrent access).

So, how do we build a simple list that allows for the following:

  • Only one thread can write.
  • Multiple threads can read.
  • No synchronization on either end.
  • Stale reads allowed.

The key part here is the fact that we allow stale reads.

Here is the concrete scenario: we need to track all active transactions. A transaction is single threaded, but we allow thread hopping (because of async). So we define:

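Roughly (a sketch; the field names are my guesses):

    public class Node
    {
        public Transaction Transaction; // null means this slot is free for reuse
    }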

And then we have:

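Again a sketch rather than the real code:

    public class DynamicArray
    {
        public Node[] Items = new Node[4]; // grows by doubling, never shrinks
        public int Length;
    }

    // one array per thread; trackAllValues lets other threads enumerate them all
    private readonly ThreadLocal<DynamicArray> _activeTransactions =
        new ThreadLocal<DynamicArray>(() => new DynamicArray(), trackAllValues: true);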

DynamicArray is just a holder for an array of Nodes. Whenever we need to add an item to the active transactions, we’ll get the local thread value, and do a linear search through the array. If we find a node that has a null Transaction value, we’ll use it. Otherwise, we’ll add a new Node value to the end of the array. If we run out of room in the array, we’ll double the array size.  All pretty standard stuff, so far. Removing a value from the array is also simple, all you need to do is to null the Transaction field on the relevant node.

Why all of this?

Well, only a single thread can ever register a transaction for a particular DynamicArray instance. That means that we don’t have to worry about concurrency here. However, we do need to worry about transactions that need to remove themselves from the list from other threads. That is why we don’t have any concurrency control here. Instead, removing the transaction is done by setting the node’s Transaction field to null. Since only the owning transaction can do that, this is safe.

Other threads, however, need to read this information. They do that by scanning through all the thread values, and then accessing the DynamicArray directly. Now, that means that we need to be safe for concurrent reading. This is done by having the array be more or less static in most scenarios. After it gets full enough, it will never grow again, and the values will remain there, so effectively other threads will be reading an array of Nodes. We do need to be careful when we expand the array to give more room. We do this by first creating the new array, copying the values to the new array, and only then setting it in the threaded instance.
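The grow path, sketched (Volatile.Write supplies the publish-after-copy ordering; the actual code may do this differently):

    private static void Grow(DynamicArray array)
    {
        var newItems = new Node[array.Items.Length * 2];
        Array.Copy(array.Items, newItems, array.Items.Length);

        // publish only after the copy completes; readers see either the old
        // array or the fully initialized new one, never a torn state
        Volatile.Write(ref array.Items, newItems);
    }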

This way, concurrent code may see either the old array or the new one, but never needs to traverse both. And when traversing, it goes through the nodes and checks their Transaction value.

Remember that the Transaction is only ever set from the original thread, but it can be reset from another thread if the transaction moved between threads. We don’t really care; the way it works, we read the node’s Transaction field and then check its value (once we have a stable reference). The idea is that we don’t worry about data races. The worst that can happen is that we’ll see an older view of the data, which is perfectly fine for our purposes.

This is pretty complex, but the code itself is simple enough, and the performance benefit justifies it several times over.

RavenDB Retrospective: The governors

time to read 5 min | 860 words

RavenDB’s core philosophy is that It Just Works, and that means that we try very hard to get things right. Conversely, that means that we are also trying to make it hard to do the wrong thing. Basically, we want to push you hard into the pit of success.

Part of that approach is what we call the governors. It is a set of features that will detect and abort known bad behavioral patterns. I have already talked about Unbounded Result Sets, and I recently ran into this post, which shows how nasty a problem that can be, and how invisible.

Another governor we have in place is the session’s maximum request limit. A session is meant to be a scope; it has a very short duration and is typically used for a single request / processing a single message, etc. It is supposed to live as long as the business transaction. Because the session is scoped, we can reason that a single session making a lot of database operations is probably doing something pretty bad.

For example, it might be calling the database in a loop. Those kinds of issues can be truly insidious. Consider the shape of the code from the post linked above:

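It is the classic N+1 queries pattern; a sketch (not the exact code from that post):

    var orders = session.Query<Order>()            // one request...
        .Where(o => o.Status == OrderStatus.Pending)
        .ToList();

    foreach (var order in orders)
    {
        // ...and then one more request *per order*
        var customer = session.Load<Customer>(order.CustomerId);
        Console.WriteLine($"{customer.Name}: {order.Total}");
    }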

This kind of thing is a silent performance killer. No one is likely to see it happening, and it will silently increase the number of database operations that your application makes, leading to increased DB load, higher page load times and all sorts of problems associated with that.

In one particular case, I saw a single page load generate 17,000 queries to the database. The software in question grew over time, and people assumed that this was just what it took to run the software. Their database server was a true monster (this was about a decade ago), with dedicated RAM disks, a high CPU count and a truly ridiculous amount of memory, and it had quite a bit of space.

Now, this kind of beast can do 500K IOPS (I’m drooling just thinking about it), but it is damn expensive. Just to put things in perspective, I spent several weeks at that company working on this particular problem, and the cost of those weeks of work didn’t even cover the cost of the drive on that machine.

And on that monster, we were seeing page load times in the tens of seconds, and extremely high system load. I was able to bring it down to about 70 queries per page load, and their database server has pretty much idled ever since (IIRC, they turned that machine into a VM host for all the rest of their software, actually).

This is something that can bite.

To avoid that, we have the maximum number of requests in the session, which will abort a session with an excessive amount of database chatter. This has two important effects:

  • It follows the “better to let one bad request die rather than take down the entire application” principle.
  • It puts a budget on the number of calls that you can make.

Now, that budget is actually really interesting. Because we have it, we need to think about how we can reduce the number of database calls needed to process the request. That led to a whole bunch of features around that: lazy requests, includes and transformers, to name just a few.

That had a positive unintended consequence. RavenDB is fast, really fast, but it is also typically deployed as a network database. That means that each database call actually goes over the network, and we all remember the fallacies of distributed computing, right?


In our profiling, we found that most often, the real cost in a RavenDB application was the back & forth chatter with the database. Reducing the number of requests we make to the server has an immediate benefit. And RavenDB allows you to do that by pipelining requests with Lazy, predicting requests with Includes or running the whole thing on the server side with Transformers.
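For example (sketches from memory of the client API of that era, not exact code):

    // Lazy: several pending operations go to the server in a single roundtrip
    var lazyUser = session.Advanced.Lazily.Load<User>("users/1");
    var lazyOrders = session.Query<Order>().Lazily();

    // Includes: predict the follow-up load and piggyback it on the first request
    var order = session.Include<Order>(o => o.CustomerId).Load("orders/1");
    var customer = session.Load<Customer>(order.CustomerId); // served from the session, no request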

And, like all the governors, you can control it: RavenDB allows you to decide what the limit should be (on that particular session, or globally), based on your actual needs and environment.
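Setting it is a one-liner (property names from memory of the client API; check the docs for your version):

    session.Advanced.MaxNumberOfRequestsPerSession = 50;  // this session only
    store.Conventions.MaxNumberOfRequestsPerSession = 50; // the global default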

RavenDB Retrospective: Explicit indexes & auto indexes

time to read 3 min | 492 words

RavenDB doesn’t provide any way for queries to do table scans*.

* That isn’t actually true. We have Data Exploration, which does just that, but we don’t provide an explicit API for it, and it is more of a DBA driven feature (I wanna get this report with a minimum of fuss, without regard to how much it is going to cost me) than an exposed API.

What this means is that the cost of query operations in RavenDB is always going to be O(logN), instead of O(N). How does this relate to the topic of RavenDB retrospectives?

One of the things that I kept seeing over and over as a database consultant was that databases are complex, and that it is easy to write a query that works perfectly fine for a period of time, then falls over completely as the size of the data goes over a certain threshold. Queries that use table scans are particularly vulnerable to this issue.

One of the design goals for RavenDB was to avoid that completely. We did it by simply forbidding any query that doesn’t have an index. Initially, that was a pretty annoying requirement, because every time you needed a new query, you had to go ahead and create an index. But early on we got the auto indexes feature.

Basically, it means that you can query RavenDB without specifying which index you want to use, at which point the query optimizer will inspect the query and decide which index can serve it. The most interesting point here is that if there isn’t an index that can serve the query, the query optimizer is going to create one on the fly. See the previous post about BASE indexes and how we can afford to do that.
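For example, a query like this (a sketch) needs no index declared up front; the optimizer will find or create something like Auto/Users/ByCity behind the scenes:

    // no index named anywhere; the query optimizer picks or creates one
    var londoners = session.Query<User>()
        .Where(u => u.City == "London")
        .ToList();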

The fun part here is that the query optimizer is actually learning over time, and it will shape its indexes to best fit the kind of queries you are doing. It also makes RavenDB much more robust against New Version Degradation effects. NVD is what happens when you push out a new version which has slightly different queries, making previously used indexes ineffective and forcing all your queries to become full table scans. Here is an example of the kind of subtle issues that this can cause. With RavenDB, when you use auto indexes (in other words, when you don’t explicitly state which index to use), the query optimizer will take care of that, and it will create all the appropriate indexes (and retire the unused ones) for you.

This in particular is a feature that I’m really proud of. It requires very little from the user to work with, and it gets the Right Thing Done.
