Ayende @ Rahien

Oren Eini aka Ayende Rahien CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.

time to read 1 min | 187 words

Following my previous post, which mentioned that you can save significantly on disk space if you store a plain text attachment using gzip, we got a feature request:

Perhaps in future attachments could have built-in compression as well?

The answer to that is no, but I thought that it is worth a post to explain why not.

Let’s consider the typical types of attachments that you’ll store in RavenDB. Based on experience, we usually see:

  • PDF files
  • Word / Excel / PowerPoint
  • Images (JPEG, PNG, GIF, etc)
  • Videos
  • Designs (floor plans, CAD / DWG, etc)
  • Text files

Aside from the text files, pretty much all the data you’ll store as an attachment is already compressed. In fact, you’ll be hard pressed today to find any file format that does not already have built-in compression.

Compressing already compressed data is… suboptimal. It will not usually lead to significant space savings and can actually make the file size larger. It also burns CPU cycles unnecessarily.
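To see why, here is a quick C# sketch (plain GZipStream, nothing RavenDB specific) that compresses highly repetitive text versus random bytes, the latter being a reasonable stand-in for a JPEG or PDF that is already compressed:

using System;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

class CompressionDemo
{
    static byte[] Gzip(byte[] input)
    {
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
        {
            gzip.Write(input, 0, input.Length);
        }
        return output.ToArray();
    }

    static void Main()
    {
        // Plain text: highly repetitive, compresses very well.
        var text = Encoding.UTF8.GetBytes(
            string.Concat(Enumerable.Repeat("It is a truth universally acknowledged... ", 2000)));

        // Random bytes have no redundancy left to exploit, which is roughly what
        // already compressed data (images, video, PDF) looks like to gzip.
        var alreadyCompressed = new byte[text.Length];
        Random.Shared.NextBytes(alreadyCompressed);

        Console.WriteLine($"Text:               {text.Length:N0} -> {Gzip(text).Length:N0} bytes");
        Console.WriteLine($"Already compressed: {alreadyCompressed.Length:N0} -> {Gzip(alreadyCompressed).Length:N0} bytes (slightly larger)");
    }
}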

It is better to shift the responsibility to the users in this case, since they have a lot more information about what they actually put into RavenDB and won’t have to guess.

time to read 3 min | 478 words

In distributed systems, the term Byzantine fault tolerance refers to working in an environment where the other nodes in the system are going to violate the invariants held by the system. Sometimes that is because of a bug, sometimes because of a hardware issue, and sometimes it is a malicious action.

A user called us to let us know about a serious issue: they had 100 documents in their database, but the index reported that there were 105 documents indexed. That was… puzzling. None of the avenues of investigation we tried helped us. There wasn’t any fanout on the index, for example.

That was… strange.

We looked at the index in more detail and we noticed something really strange. There were document ids in the index that weren’t in the database, and they were all at the end. So we had something like:

  • users/1 – users/100 – in both the database and the index
  • users/101 – users/105 – in the index, but not in the database

What was even stranger was that the values for the documents that were already in the index didn’t match the values from the documents. Neither did the last indexed etag, which is how RavenDB knows which documents it still needs to index.

Overall, this is a really strange thing, and none of that is expected to happen. We asked the user for more details, and it turns out that they don’t have much. The error was reported in the field, and as they described their deployment scenario, we were able to figure out what was going on.

In certain cases, their end users will want to “reset” the system. The manner in which they do that is to shut down their application and then delete the RavenDB folder. Since all their state is in RavenDB, that brings them back up in a clean state. Everything works, and this is a documented manner in which they are working.

However, the deletion instruction amounts to “delete the database files”, and the actual task is carried out manually by the end user.

A RavenDB database is internally composed of a few directories, for data and indexes. What would happen in the case where you deleted just the database files, but kept the index files? In that case, RavenDB will just accept that this is the case (soft delete is a thing, so we have to accept that) and open the index file. That index file, however, came from a different database, so a lot of invariants are broken. I’m actually surprised that it took this long to find out that this is problematic, to be honest.

We explained the issue, and then we spent some time ensuring that this will never be an invisible error. Instead, we will validate that the index is coming from the right source, or error explicitly. This is one bug that we won’t have to hunt ever again.
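The actual fix is internal to RavenDB, but the shape of it is easy to sketch. Something along these lines (hypothetical names, not the real RavenDB code): stamp the index with the identity of the database it was created for, and refuse to open it if that doesn’t match:

using System;

public sealed class IndexState
{
    // Written when the index is created, read back when it is opened.
    public string SourceDatabaseId { get; init; }
}

public static class IndexOpener
{
    public static void ValidateSource(IndexState state, string currentDatabaseId)
    {
        if (state.SourceDatabaseId != currentDatabaseId)
        {
            // Better to fail loudly than to silently serve results that
            // belong to a database that no longer exists.
            throw new InvalidOperationException(
                $"Index belongs to database '{state.SourceDatabaseId}', " +
                $"but the current database is '{currentDatabaseId}'.");
        }
    }
}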

time to read 3 min | 451 words

A user called us with a strange bug report. He said that the SQL ETL process inside of RavenDB was behaving badly. It would write the data from the RavenDB server to the MySQL database, but then it would immediately delete it.

From the MySQL logs, the user showed:

2021-10-12 13:04:18 UTC:20.52.47.2(65396):root@ravendb:[1304]:LOG: execute <unnamed>: INSERT INTO orders ("id") VALUES ('Tab/57708-A')
2021-10-12 13:04:19 UTC:20.52.47.2(65396):root@ravendb:[1304]:LOG: execute <unnamed>: DELETE FROM orders WHERE "id" IN ('Tab/57708-A')

As you can imagine, that isn’t an ideal scenario for ETL processes. It is also something that should absolutely not happen. RavenDB issues the delete before the insert, obviously. In fact, for your viewing pleasure, here is the relevant piece of code:

image

It doesn’t get any clearer than that, right? We issue any deletes we have, then we send the inserts. But the user insisted that they were seeing this behavior, and I tend to trust them. We couldn’t reproduce the issue, however, and the code in question dates to 2017, so I was pretty certain it was correct.

Then I noticed the user’s configuration. To define a SQL ETL process in RavenDB, you need to define which tables it will write to, as well as decide what data to send to those tables. Here is what this looked like:

image

And here is the script:

image

Do you see the problem? It might be better to use an overload, which makes it clearer:

image

In the table listing, we had an “orders” table, but the script sent the data to the “Orders” table.

RavenDB is a case insensitive database, but in this case, what happened behind the scenes is that the code used a case sensitive dictionary to keep track of the tables that we are working with. That meant that what we did was roughly:
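The original post shows the actual code as an image; a stripped down C# sketch of the failure mode (hypothetical names, the post calls the dictionary changes_by_table) looks like this:

using System;
using System.Collections.Generic;

// The table list in the ETL definition said "orders", the script wrote to "Orders".
var changesByTable = new Dictionary<string, List<string>>(); // case SENSITIVE by default

// Seeded from the table definitions; this is where the deletes get queued:
changesByTable["orders"] = new List<string>();

// The script's loadTo call creates a second, separate entry; this is where the inserts go:
if (!changesByTable.TryGetValue("Orders", out var inserts))
{
    inserts = new List<string>();
    changesByTable["Orders"] = inserts;
}
inserts.Add("Tab/57708-A");

// Iterating the dictionary now processes two "different" tables, and the relative
// order of "orders" and "Orders" is not guaranteed:
foreach (var (table, changes) in changesByTable)
    Console.WriteLine($"{table}: {changes.Count} pending changes");

// The fix: construct the dictionary with StringComparer.OrdinalIgnoreCase.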

And basically, the changes_by_table dictionary had two tables in it. One from the table definitions and one from the script. When we validate that the tables from the script are fine, we do that using case insensitive comparison, so that passed properly.

To make it worse, the order of items in a dictionary is not predictable. If the iteration order had been the other way around, everything would have appeared to be working just fine.

We fixed the bug, but I found it really interesting that three separate people (very experienced with the codebase) had a look and couldn’t figure out how this bug could happen. It wasn’t in the code, it was in what wasn’t there.

time to read 4 min | 791 words

A scenario came up from a user that was quite interesting to explore.

Let’s assume that we want to put the Gutenberg Project inside of RavenDB. An initial attempt at doing that would look like this:

image

I’m skipping a lot of the details, but the most important field here is the Content field. That contains the actual text of the book.

At last count, the size of the book was around 708KB. When we store that as a single field inside of RavenDB, it will notice that this is a large field and compress it. Here is what this looks like:

image

The 738.55 KB is the size of the actual JSON, the 674.11 KB is the size after a quick compression cycle, and the 312KB is the actual size that this takes on disk. RavenDB is actively trying to help us.

But let’s take the next step: we want to be able to query, using full text search, on the content of the book. Here is what this will look like:

image

Everything works, which is great. But what is going on behind the scenes?

Even a single large text field (hundreds of KB or many MB) puts a unique strain on RavenDB. We need to manage it as a single unit, it significantly bloats the size of the parent document and makes it more expensive to work with.

This is interesting because usually, we don’t actually work all that much with the field in question. In the case of the Pride and Prejudice book, the content is immutable and not really relevant for the day to day work with the document. We are better off moving this elsewhere.

An attachment is a natural way to handle this. We can move the content of the book to an attachment. In this way, the text is retained, we can still work with and process it, but it is sitting on the side, not making our life harder on each interaction with the document.
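Here is a sketch of what storing the content as an attachment can look like with the RavenDB C# client (the Book class, document id and attachment name are made up for illustration):

using System.IO;
using System.Text;
using Raven.Client.Documents;

public class Book
{
    public string Title { get; set; }
    public string Author { get; set; }
}

public static class BookLoader
{
    public static void Store(IDocumentStore store, string textFilePath)
    {
        using var session = store.OpenSession();

        var book = new Book { Title = "Pride and Prejudice", Author = "Jane Austen" };
        session.Store(book, "books/1342");

        // The full text goes into an attachment instead of a document field,
        // so the document itself stays tiny.
        using var content = new MemoryStream(Encoding.UTF8.GetBytes(File.ReadAllText(textFilePath)));
        session.Advanced.Attachments.Store(book, "content", content, "text/plain");

        session.SaveChanges();
    }
}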

Here is what this looks like; note that the size of the document is tiny. Operations on a document of that size will be much faster than on a multi-MB document:

image

Of course, there is a disadvantage here: how can we index the book’s contents now? We still want that. RavenDB supports that scenario explicitly; let’s define an index to do just that:

image

You can see that I’m loading the content attachment and then accessing its content as string, using UTF8 as the encoding mechanism. I tell RavenDB to use full text search on this field, and I’m off to the races.
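The index itself appears as a screenshot in the post; assuming the LoadAttachment and GetContentAsString helpers that recent RavenDB versions expose to indexes, a C# definition might look roughly like this:

using System.Linq;
using System.Text;
using Raven.Client.Documents.Indexes;

public class Books_ByContent : AbstractIndexCreationTask<Book>
{
    public Books_ByContent()
    {
        Map = books => from book in books
                       let content = LoadAttachment(book, "content")
                       select new
                       {
                           // Materializes the whole attachment as a single string.
                           Content = content.GetContentAsString(Encoding.UTF8)
                       };

        // Full text search on the extracted content.
        Index("Content", FieldIndexing.Search);
    }
}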

image

Of course, we could stop here, but why? We can do even better. When working with large text fields, an index such as the one above will force us to materialize the entire field as a single value. For very large values, that can put a lot of pressure on memory usage.

But RavenDB supports more than that. Instead of processing a very large string in one shot, we can do that in an incremental fashion, avoiding big value materialization and the memory pressure associated with that. Here is what you can write:

image

That tells RavenDB that we should process the field in a streaming fashion.

Here is why it matters:

Value materialized vs. streaming value (comparison images)

When we are working on a document that has a < 1 MB attachment, it probably doesn’t matter all that much (although using 25% of the memory is nice), but it matters a lot more when you are working with larger texts.

We can take this one step further still! Instead of storing the attachment text as is, we can compress it, like so:

image

And then in the index, we’ll decompress on the fly:

image

Note that throughout all of that, the queries that you send are exactly the same, we are just taking 20% of the disk space and 25% of the memory that we used to.

time to read 6 min | 1163 words

After spending so much time building my own protocol, I decided to circle back a bit, go back to TLS itself, and see if I can get the same things from it that I built on my own. As a reminder, here is what we achieved:

Trust is established between nodes in the system via a back channel, not a Public Key Infrastructure (PKI). For example, I can have:

On the client side, I can define something like this:

Server=northwind.database.local:9222;Database=Orders;Server Key=6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=;Client Secret Key=daZBu+vbufb6qF+RcfqpXaYwMoVajbzHic4L0ruIrcw=

Can we achieve this using TLS? At first glance, that doesn’t seem to be possible. After all, TLS requires certificates, but we don’t have to give up just yet. One of the (new) options for certificates is Ed25519, which is a key pair scheme that uses 256 bit keys. That is also similar to what I have used in my previous posts, behind the covers. So the plan is to do the following:

  • Generate key pairs using Ed25519 as before.
  • Distribute the knowledge of the public keys as before.
  • Generate a certificate using those keys.
  • During the TLS handshake, trust only the keys that we were explicitly told to trust, disabling any PKI checks.

That sounds reasonable, right?  Except that I failed.

To be rather more exact, I couldn’t generate a valid X509 certificate from an Ed25519 key pair. Using .NET, you can use the CertificateRequest class to generate certificates, but it only supports RSA and ECDsa keys. Safe sizes for those types are probably:

  • RSA – 4096 bits (2048 bits might also be acceptable) – key size on disk: 2,348 bytes.
  • ECDsa – 521 bits  - key size on disk: 223 bytes.

The difference between those and the 32 byte key for Ed25519 is pretty big. It isn’t much in the grand scheme of things, for sure, but it matters. The key issue (pun intended) is that this is large enough to make it awkward to use the value directly. Consider the connection string I listed above. The keys we use here are small enough that we can just write them inline (the simplest and most obvious thing to do). The keys for either of the more commonly used RSA and ECDsa are too big for that.

Here is a ECDsa key, for example:

MIHcAgEBBEIBuF5HGV5342+1zk1/Xus4GjDx+FR
rbOPrC0Q+ou5r5hz/49w9rg4l6cvz0srmlS4/Ysg
H/6xa0PYKnpit02assuGgBwYFK4EEACOhgYkDgYY
ABABaxs8Ur5xcIHKMuIA7oedANhY/UpHc3KX+SKc
K+NIFue8WZ3YRvh1TufrUB27rzgBR6RZrEtv6yuj
2T2PtQa93ygF761r82woUKai7koACQZYzuJaGYbG
dL+DQQApory0agJ140T3kbT4LJPRaUrkaZDZnpLA
oNdMkUIYTG2EYmsjkTg==

And here is an RSA key:

MIIJKAIBAAKCAgEApkGWJc+Ir0Pxpk6affFIrcrRZgI8hL6yjXJyFNORJUrgnQUw
i/6jAZc1UrAp690H5PLZxoq+HdHVN0/fIY5asBnj0QCV6A9LRtd3OgPNWvJtgEKw
GCa0QFofKk/MTjPimUKiVHT+XgZTnTclzBP3aSZdsROUpmHs2h4eS9cRNoEnrC1u
YUzaGK4OeQNLCNi1LyB6I33697+dNLVPoMJgfDnoDBV12KtpB6/pLjigYgIMwFx/
Qyx9DhnREXYst/CLQs8S/dmF+opvghhdhiUUOUwqGA/mIIbwtnhMQFKWCQXEk7km
5hNg/fyv/qwqvTkqQTZkJdj0/syPNhqnZ9RurFPkiOwPzde8I/QwOkEoOXVMboh4
Ji3Y6wwEkWSwY/9rzUK2799lzTmZlvUu2ZxNZfKxQ84vmPUCvP288KXOCU4FxIUX
lujBu7aXUORtQE9oZxBSxqCSqmCEb7jGwR3JOpFlUZymK7W0jbY4rmfZL8vcDYdG
r0msuXD+ggVjYzpHI7EH5MtQXYJZ2aKan5ZpSL/Lb0HsjkDLrsvMi+72FcwXH+5P
Q1E30uxs5y9xOTSqff9T9x6KPAOwIpmrv4Bc3J0NgEgWiKxG9nM1+f8FkKlCRino
rrF9ZrC+/l/vc67xye+Pr1tLvEFT5ARu/nR1JH/Lv/CsAU9y51wOPqD6dQUCAwEA
AQKCAgBJseTWWcnitqFU8J62mM94ieCL8Q3WYZlP7Zz38lfySeCKeZRtWa/zsozm
XEQY0t7+807pHPLs0OhMHlFv1GQKj09Wg4XvWWgqvLOSucC7QZ6cLfNUoUNhCxGp
dbnAKGuXN9wwx7NBBljl5V4Ruf//UgxRw7YuklWk0ZjoUSrGGDX3siOtaZ17Nxwf
NAB8qWKWwzSgquUmEH+kr4HeZorSRfC/+ntEUaa6y5T28g7Vosb4NYgLxJqiN3te
3B0yY6O3N4bZkyQ6TEblSdua7LCsPUCjbdi6LlZg664RDQqIcVATkwzVC14A95Mj
tjkzqzU5ttxpkmP21cHdX6847QcpERgQ7NzAbjrU5UH8aBOsetaZo/1yDr5U13ah
YcAq9XX6tLeAA0rUsnXKAWBQswtWIU0jXBuRRSE7xDXv+82SWEoPqZMSAv77p+uc
AeogN+zzZPPet/AOERKLcGC9WoC/HT7q/H3zFAsRPoKY6qMfLFntdosc0lmRxvHv
b9NXBzKdDuOiUXhdRMhL5Yld8ivvHuwRnPfcZycplSFrA9E5xo/S3RQj+Re9L0yR
8tNzjl+lcgtk8Q0CSJl6eW2Fjja5ZrvDD8qL97+WFqHR7LTTqZ7TmiT7u1MXW1Il
wTuccWCQ85BzxpRbyzPXLdsxMgPCmjicX/23+srOXAk2z42bOQKCAQEAzM1Mocnd
w0uoETHZH0VX29WaKVqUAecGtrj+YNujzmjLy2FPW10njgBfZgkVQETjFxUS9LBZ
xv/p6fCio3NgXh3q7O/kWLxojuR8JB7n4vxoKGBwinwzi1DHp37gzjp/gGdr4mG9
8b7UeFJY8ZPz0EoXcPr3TL+69vOoLieti/Ou9W7HbpDHXYLKclFkJ0d/0AtDNaM7
kCNvI7HgC5JvCCOdGatmbB09kniQjtvE4Wh4vOg/TtH1KoKGXbC8JnjHNRjJtgqU
1mhbq36Eru8iOVME9jyHAkSPqphqeayEUdeP3C1Bc2xmrlxCQALZrAfH37ZWcf44
UuOO5TMnf5HTLwKCAQEAz9F59/xlVDHaaFpHK6ZRTQQWh6AVBDKUG2KDqRFAGQik
6YqQwJFGSo1Z+FjXzidGEHkqH6KyGtSxS6dTgqqfTC96P1rdrBab5vgdXpfSa/0S
Qke2sH3eZ1vWJe95AD7AuVfsN/6IXIBHP5fWjXthGuo6U3vkkNjdjJGNxjfuMuug
SbxjjVV6kZI6gwX2gfTQDKUT+yRjEnqGAyCcFeZXwWGryF1IseOFaNB2ATVKSqn9
oXI7AaI3ZRX3SyfOfyo3TaZEXabS1tfEg4JwIGNpx8WvRxb/X7WZi46be8u0ya4L
BDJ6ZIOBf7lpvaI1Dr3dzCuPqjGQ3V/xPwGy5D8+CwKCAQEAuFVUUw6pjn0LIabX
QQEd6hzgq7X+H5Q8A7yQIMewMTk7rKvCTH6U+oe1VdZ5DSazqvPp4tjThXyTol9X
U3ymUS/mYiotQf0asvpODgjPOAttCGJ9CPhvQEaN3WEioBwg5IaxoMnOt8bF4CJm
MdG0ElaNsMACVE8BzgJS7nACEURcxkVWNVsURkNRSgGd/oipLqzkamOoWby67MrN
2DyNuSqs3QzbnBXZdHsVya9fDm8EtSroyF3Lp95hZ/SJ9KqiylSsQTBW9IBrefjf
HcDY8fWaMrMZ5V2mXarfsvInCq7VqhwFnAkGhos9ifXGy8MZEG9CcUmakmiFFiCr
vXOYOwKCAQADL9Yr/F3dbapIwWGoBLPod3CVAdpwpwnoZZlZRV9zQtOslShlG5U1
XXeMvGgKzEVhyUnhFFCg4rQZUeaQ8Wbh9zRrtkwB8JLRduqUYcWjTE00YP8nM7bu
ZNUi3cpAO7Ye4X9I2Ilkyb7N9dkfcE3r6L2ePB8kLX8wQacn7AGmHEDoAJCSQUZQ
5yooijXehk+OchWdW1B9nw1hDOX33AFqgMHun6eWusN3+QJmQFf0TykJicPn4YHx
9eVF7MVY49/XO/5+ZSmEi+iCj8SCaqPboWdvsqWV5SYGotg1jMkn8phOpyuDURTy
TXiWpN8la7n0AJMCbCIpkugTLEZ/A41DAoIBAAr73RhOZWDi40D6g+Z2KLHMtLdn
xHMEkT0bzRZYlr0WGQpP/GPKJummDHuv/fRq2qXhML7yh7JK8JFxYU94fW2Ya1tx
lYa5xtcboQpBLfDvvvI4T4H1FE4kXeOoO46AtZ6dFZyg3hgKlaJkR+pFPLr5Aeak
w9+6UCK8v72esoKzCMxQzt3L2euYRt4zTKL3NnrgS7i5w56h2UvP1rDo3P0RVoqc
knS1ToamVL2JaPnf/g+gUUVZyya9pyu9RP8MIcd1cvnxZec8JaN89WWnsA2JJbPw
stYBnWMvLFabPtPXVcsLrWMEmLFI2yn+fU4YTviwRSs/SrprXDdsqZO2xd8=

Note that in both cases, we are looking at the private key only. As you can imagine, this isn’t really something viable. We will need to store that separately, load it from a file, etc.

I tried generating Ed25519 keys using the built-in .NET API as well as the Bouncy Castle one. Bouncy Castle is a well known cryptographic library that is very useful. It also supports Ed25519. I spent quite some time trying to get it to work. You can see the code here. Unfortunately, while I’m able to generate a certificate, it doesn’t appear to be valid. Here is what this looks like:

image

Using RSA, however, did generate viable certificates, and didn’t take a lot of code at all:
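The snippet in the post is an image; a minimal sketch of the same idea with .NET’s CertificateRequest (file name and subject chosen for illustration) might look like this:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Load the long term RSA key from a file (or create it on first run)...
using var rsa = RSA.Create(4096);
if (File.Exists("server.key"))
    rsa.ImportFromPem(File.ReadAllText("server.key"));
else
    File.WriteAllText("server.key", rsa.ExportRSAPrivateKeyPem()); // .NET 7+

// ...and mint a throwaway self signed certificate for it on startup.
var request = new CertificateRequest(
    "CN=northwind.database.local", rsa,
    HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

var certificate = request.CreateSelfSigned(
    notBefore: DateTimeOffset.UtcNow.AddDays(-1),
    notAfter: DateTimeOffset.UtcNow.AddYears(1));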

We store the actual key in a file, and we generate a self signed certificate on the fly. Great. I did try to use the ECDsa option, which generates a much smaller key, but I ran into severe issues there. I could generate the key, but I couldn’t use the certificate; I ran into a host of issues around permissions, somehow.

You can try to figure out more details from this issue; what I took from that is that in order to use ECDsa on Windows, I would need to jump through hoops. And I don’t know if Ed25519 will even work, or how to make it work.

As an aside, I posted the code to generate the Ed25519 certificates, if you can show me how to make it work, it would be great.

So we are left with using RSA, with the largest possible key. That isn’t fun, but we can make it work. Let’s take a look at the connection string again, what if we change it so it will look like this?

Server=northwind.database.local:9222;Database=Orders;Server Key Hash=6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=;Client Key=client.key

I marked the pieces that were changed. The key observation here is that I don’t need to hold the actual public key here, I just need to recognize it. That I can do by simply storing the SHA256 hash of the public key, which ensures that I always have the same length, regardless of what key type I’m using. For that matter, I think that this is something that I want to do regardless, because if I do manage to fix the other key types, I could still use the same approach. All values will hash to the same length with SHA256, obviously.

After all of that, what do we have?

We generate a key pair and store it, and we let the other side know the public key hash as the identifier. Then we dynamically generate a certificate with the stored key. Let’s say that we do that once per startup. That certificate is going to be different each run, but we don’t actually care; we can safely authenticate the other side using the (persistent) key pair by validating the public key hash.

Here is what this will look like in code from the client perspective:
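The client code in the post is shown as an image; here is a rough sketch of the idea using SslStream, with the host name and the pinned hash parameter made up for illustration:

using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

public static class PinnedClient
{
    // Sketch only: pin the server by the SHA256 hash of its public key instead of PKI validation.
    public static async Task<SslStream> ConnectAsync(
        string host, int port, string expectedServerKeyHash, X509Certificate2 clientCertificate)
    {
        bool ValidateServer(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors)
        {
            if (cert is null)
                return false;
            var hash = Convert.ToBase64String(SHA256.HashData(cert.GetPublicKey()));
            return hash == expectedServerKeyHash; // deliberately ignore the usual chain / name checks
        }

        var tcp = new TcpClient();
        await tcp.ConnectAsync(host, port);

        var ssl = new SslStream(tcp.GetStream(), leaveInnerStreamOpen: false, ValidateServer);
        await ssl.AuthenticateAsClientAsync(new SslClientAuthenticationOptions
        {
            TargetHost = host,
            // The self signed certificate generated from the stored client key (client.key).
            ClientCertificates = new X509CertificateCollection { clientCertificate }
        });
        return ssl;
    }
}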

And here is what the server is doing:
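Again, the real code is an image in the post; a matching sketch for the server side (names made up) requires a client certificate and checks its public key hash against the set of known clients:

using System;
using System.Collections.Generic;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

public static class PinnedServer
{
    public static async Task<SslStream> AcceptAsync(
        TcpClient client, X509Certificate2 serverCertificate, HashSet<string> knownClientKeyHashes)
    {
        var ssl = new SslStream(client.GetStream(), leaveInnerStreamOpen: false);

        await ssl.AuthenticateAsServerAsync(new SslServerAuthenticationOptions
        {
            ServerCertificate = serverCertificate,
            ClientCertificateRequired = true,
            // The trust decision is ours, based on the key hash, not on a CA chain.
            RemoteCertificateValidationCallback = (_, cert, _, _) =>
            {
                if (cert is null)
                    return false;
                var hash = Convert.ToBase64String(SHA256.HashData(cert.GetPublicKey()));
                return knownClientKeyHashes.Contains(hash);
            }
        });
        return ssl;
    }
}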

As you can see, this is very similar to what I ended up with in my secured protocol, but it utilizes TLS and all the weight behind it to achieve the same goal. A really important aspect of this is that we can actually connect to the server using something like openssl s_client -connect, which can be really nice for debugging purposes.

However, the weight of TLS is also an issue. I failed to create Ed25519 certificates, which was my original goal. I couldn’t get it to work using ECDsa certificates and had to use RSA ones with the biggest keys. It was obvious that a lot of those issues are because we are running on a particular operating system, which means that this protocol is still subject to the whims of the environment. I have also not done everything that is required to ensure that there will not be any remote calls as part of the TLS handshake; that can actually be quite complex to ensure, to be honest. Given that these are self signed (and pretty bare bones) certificates, there shouldn’t be any, but you know what they say about assumptions.

The end goal is that we are now able to get roughly the same experience using TLS as the underlying communication mechanism, without dealing with certificates directly. We can use standard tooling to access the server, which is great.

Note that this doesn’t address something like browser access, which will not be trusted, obviously.  For that, we have to go back to Let’s Encrypt or some other trusted CA, and we are back in PKI land.

time to read 5 min | 826 words

One of the things that I find myself paying a lot of attention to is the error handling portion of writing software. This is one of the cases where I’m sounding puffy even to my own ears, but from over two decades of experience, I can tell you that getting error handling right is one of the most important things that you can do for  your systems. I spend a lot of time on getting errors right. That doesn’t just mean error handling, but error reporting and giving enough context that the other side can figure out what we need to do.

In a secured protocol, that is a bit harder, because we need to safeguard ourselves from eavesdroppers, but I spent significant amounts of time thinking on how to do this properly. Here are the ground rules I set out for myself:

  • The most common scenario is client failing to connect to the server.
  • We need to properly report underlying issues (such as TCP errors) while also exposing any protocol level issues.
  • There is an error during the handshake and errors during processing of application messages. Both scenarios should be handled.

We already saw in the previous post that there is the concept of data messages and alert messages (of which there can only be one). Let’s look at how that works for the handshake scenario. I’m focusing on the server side here, because I’m assuming that this one is more likely to be opaque. A client side issue can be troubleshot much more easily. And the issue isn’t error handling inside the code, it is distributed error handling. In other words, if the server has an issue, how does it report that to the client?

The other side, where the client wants to report an issue to the server, is of no interest to us. From our perspective, a client can cut off at any point (the TCP connection broke, etc.), so there is no point in trying to do that gracefully or give more data to the server. What would the server do with that?

Here is the server portion of establishing a secured connection:

I’m using Zig to write this code and you can see any potential error in the process marked with a try keyword. Looking at the code, everything up to line 24 (the completeAuth() call) is mechanically sending and receiving data. Any error up to that point is something that is likely network related (so the connection is broken). You can see that the protocol call challenge() can fail as does the call to generateKey() – in both cases, there isn’t much that I can do about it. If the generateKey() call fails, there is no shared secret (for that matter, it doesn’t look like that can fail, but we’ll ignore that). As for the challenge() call, the only way that can fail is if the server has failed to encrypt its challenge properly. That is not something that the client can do much about. And anyway, there isn’t a failing codepath there either.

In other words, aside from network issues, which will break the connection (meaning we cannot send the error to the client anyway), we have to wait until we process the challenge from the client to have our first viable failure. In the code above, I’m just calling try, which means that we’ll fail the connection attempt, close the socket and basically just hang up on the client. That isn’t nice to do at all. Here is what I replaced line 24 with:

What is going on here is that by the time I got the challenge response from the client, I have enough information to derive the shared key. I can use that to send an alert to the other side, letting them know what the failure was. A client will complete the challenge, and if there is a handshake failure, we proceed to fail gracefully with a meaningful error.

But there is another point to this protocol: an alert message doesn’t have to show up only in the handshake part. Consider a long running response that runs into an error. Here is how you’ll usually handle that in TCP / HTTP scenarios; assume that we are streaming data to the client and suddenly run into an issue:

How do you send an error midstream? Well, you don’t. If you are lucky, you’ll have the error output and have some way to get the full message and manually inspect it. That is a distressingly common issue, by the way, and a huge problem for proper error reporting with long running responses.

With the alert model, we have effectively multiple channels in the same TCP stream that we can utilize to send a clear and independent error for the client. Much nicer overall, even if I say so myself.

And it just occurred to me that this mimics quite nicely the approach that Zig itself uses for error handling.

time to read 7 min | 1248 words

We have now managed to do a proper handshake, and both client and server have a shared key. The client has also verified that the server is who they thought it should be, and the server knows who the client is and can look up whatever authorization such a client ought to get. The next step is to actually start sending data over the wire. I mentioned earlier that while conceptually we are dealing with a stream of data, in practice we have to send the data as independent records. That is done so we can properly verify that they weren’t meddled with along the way (either via cosmic radiation or malicious intent).

We’ll start with writing data, which is simple. We initiate the write side of the connection using CryptoWriter:

We allocate a buffer that is 32KB in size (16KB x 2). The record size we selected is 16KB. Unlike TLS, this is an inclusive size, so the entire thing must fit in 16KB. We need to allocate 32KB because the API we use does not support in-place encryption. You’ll note that we reserved some space in the header (5 bytes, to be exact) for our own needs, and that we initialize the stream and send the stream header to the other side here; that is the only reference to cryptography in the initialization. The actual writing isn’t really that interesting: we push all the data to the buffer until we run out of space, then we call flush(). I’ve written this code in plenty of languages, and it is pretty straightforward, if tedious.

There isn’t anything interesting happening here until we call flush(RecordTypes.Data) – that is an indication to the other side that this is application data, rather than some protocol level message. The flush() method is where things get really interesting.

There is a lot of code here, I know. Let’s see if I can take it all in. There are some preconditions that should be fairly obvious, then we write the size of the plain text value as well as the record type to the header (that part of the header will be encrypted, mind). The next step is interesting: we invoke a callback to get an answer about how much padding we should use. There is a lot to say about padding. In general, just looking at the size of the data can tell you a lot about what is going on, even if there is nothing else you can figure out. If you know that “Attack At Dawn” is 14 characters long, and with the encryption overhead that turns into a 37 byte message, that alone can tell you much.

Assume that you can’t figure out the contents, but can sniff the sizes. That can be a problem. There are certain attacks that rely on leaking the size of messages to work; the BREACH attack, for example, relies on being able to send text that would collide with secret pieces of the message. Analyzing the size of the data that is sent will tell us when we managed to find a match (because the size will be reduced). To solve that, you can define a padding policy. For example, all messages are always exactly 16KB in size, and you’ll send an empty message every second if there is no organic traffic. Alternatively, you may choose to randomize the message size (to further confuse things). At any rate, this is a pretty complex topic, and not something that I wanted to get too much into. Being able to let the user decide gives me the best of both worlds. This matches SSL_CTX_set_record_padding_callback() in OpenSSL.

The rest is just calling libsodium to do the actual encryption, setting the encrypted envelope size and sending it to the other side. Note that we use the other half of the buffer here to store the encrypted portion of the data.
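The actual flush() is Zig + libsodium; as a rough illustration of the record layout (length prefix in the clear, record type and padding inside the encrypted envelope), here is a sketch using .NET’s AesGcm as a stand-in cipher:

using System;
using System.IO;
using System.Security.Cryptography;

public sealed class RecordWriter
{
    private const int MaxRecord = 16 * 1024;  // inclusive record size, as in the post
    private readonly AesGcm _cipher;          // stand-in for libsodium's encrypted stream
    private readonly Stream _inner;
    private long _counter;                    // used to build a unique nonce per record

    public RecordWriter(byte[] key, Stream inner) => (_cipher, _inner) = (new AesGcm(key), inner);

    public void Flush(byte recordType, ReadOnlySpan<byte> plainText, int padding)
    {
        // Inner header (encrypted along with the payload): record type + real plain text length.
        var payload = new byte[1 + 2 + plainText.Length + padding];
        payload[0] = recordType;
        BitConverter.TryWriteBytes(payload.AsSpan(1, 2), (ushort)plainText.Length);
        plainText.CopyTo(payload.AsSpan(3)); // the padding bytes stay zeroed

        var nonce = new byte[12];
        BitConverter.TryWriteBytes(nonce, _counter++); // never reuse a nonce with the same key

        var cipherText = new byte[payload.Length];
        var tag = new byte[16];
        _cipher.Encrypt(nonce, payload, cipherText, tag);

        // Only the envelope length travels in the clear; everything must fit in one record.
        int envelope = 2 + nonce.Length + cipherText.Length + tag.Length;
        if (envelope > MaxRecord)
            throw new InvalidOperationException("Record exceeds the 16KB limit");

        _inner.Write(BitConverter.GetBytes((ushort)(envelope - 2)));
        _inner.Write(nonce);
        _inner.Write(cipherText);
        _inner.Write(tag);
    }
}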

In addition to sending application data, we can send alerts to the other side. That is a protocol level error message. I’ll actually have a separate post to talk about error handling, but for now, let’s see what sending an alert looks like:

Basically, we overwrite whatever there is in the buffer, and we flush it immediately to the other side. We also set the alert_raised flag, which will prevent any further usage of the stream. Once an error has been sent, we are done. We aren’t closing the stream because that is the job of the calling code, which will get an error and close us during normal cleanup procedures.

The reading process is a bit more involved, on the other hand. We start by mirroring the write, pulling the header from the network and initializing the stream:

The real fun starts when we need to actually read things. Let’s take a look at the code and then I’ll explain it in detail:

We first check if an alert was raised; if it was, we immediately abort, since the stream is now dead. If there are any plain text bytes, we can return them directly from the buffer. We’ll look into that, as well as how we read from the network, shortly. For now, let’s focus on what we are doing here.

We read enough from the network to know the envelope length that we have to read. That value, if you’ll remember, is the first value that we send for a record and is not encrypted (there isn’t much point, you can look at the packet information to get that if you wanted to). We then make sure that we read the entire record into the buffer. We decrypt the data from the incoming buffer into the plain_text buffer (that is what the read_buffer() function will use to actually return results).

The rest of the code is figuring out what we actually got. We check the actual size of the data we received. We may have received a zero length value, so we have to handle that. We check whether we got a data record or an alert. If the latter, we mark it as such and return an error. If this is just data, we set up the plain text buffer properly and go to the read_buffer() call to return the values. That is a lot of code, but not a lot of functionality. Simple code is best, and this matches that scenario.

Let’s see how we handle the actual buffer and network reads:

Not much here, just need to make sure that we handle partial reads as well as reading multiple records in one shot.

We saw that when we get an alert, we return an error. But the question is, how do we get the actual alert? The answer is that we store the message in the plain text buffer and record the alert itself. All future calls will fail with an error. You can then call the alert() function to get the actual details:

This gives us a nice API to use when there are issues with the stream. I think that matches well with the way Zig handles errors, but I can’t tell whether this is idiomatic Zig.

That is long enough for now; you can go and read the actual code, of course. And I will welcome any comments. In the next (and likely last) post in the series, I’m going to go over error handling at the protocol level.

time to read 6 min | 1159 words

After figuring out the design, let’s see what it would take to actually write a secured communication channel, sans PKI, in code. I’m going to use Zig as the language of choice here. It is as low level as C, but so much nicer to work with. To actually implement the cryptographic details, I’m going to lean on libsodium to do all the heavy lifting. It took multiple iterations of the code to get to this point, but I’m pretty happy with how it turned out.

I’ll start from the client code, which connects to a remote server and establishes a secured TCP channel. Here is what this looks like:

The function connects to a server, expecting it to use a particular public key, and will authenticate using a provided key pair. The bulk of the work is done in the crypto.clientConnection() call, where we follow the handshake I outlined here. The result of the call is an AuthenticatedConnection structure, containing both the encrypted stream as well as the public key of the other side. Note that from the client side, if the server doesn’t authenticate using the expected key, the call will fail with an error, so for clients it is usually not important to check the public key; that is already something that we checked.

The actual stream we return exposes reader and writer instances that you can use to talk to the other side. Note that we are using buffered data, so writing to the stream will not do anything until the buffer is full (about 16KB) or flush() is called.

The other side is the server, of course, which looks like this:

On the server side, we have the crypto.serverConnection() call. It accepts a new connection from a listening socket and starts the handshake process. Note that this code, unlike the client, does not verify that the other side is known to us. Instead, we return that to the caller, which can then check the public key of the client. This is intentional, because at this point we have a secure channel, but not yet authentication. The server can then safely tell the other side that it authorizes them (or not) using the channel, with no one being able to peek at what is going on there.

Let’s dig a bit deeper into the implementation. We’ll start from the client code, which is simpler:

The handshake protocol itself is handled by the protocol.Client. The way I have coded it, we are reading known lengths from the network into in-memory structures and using them directly. I can do that because the structures are basically just a bunch of packed []u8 (char arrays), so the in-memory and network representations are one and the same. That makes things simpler. You can see that I’m calling readNoEof on the structures as bytes. That ensures that I get the whole message from the network, and then the actual operations that I need to make are handled.

Here is the sequence of operations:

pki

After sending the hello, the server will respond with a challenge, the client replies, and both sides now know that the other side is who they say they are.

Let’s dig a bit deeper, shall we, and see what we have in the hello message:

There isn’t much here: we set the version field to a known value, we copy our own session public key (which was just generated and tells no one anything about us) and then we copy the expected server public key, but we aren’t sending that over the wire in the clear. Instead, we encrypt it. We encrypt it with the client session public key (which we just sent over) as well as the expected middlebox key (remember, those might be different). The idea is that the server on the other end may decide to route the request, but at the same time, we want to ensure that we are never revealing any information to 3rd parties.

The actual encryption is handled via the EncryptedBoxBuffer structure. You can see that I’m using Zig’s comptime support to generate a structure with a compile time variant size. That makes it trivial to do certain things without really needing to think about the details. It used to be more complex, and able to support arbitrary embedded structures, but I simplified it to a single buffer. For that matter, for most of the code here, the size I’m using is fixed (32 bytes / 256 bits). The key here is that all the details of nonce generation, MAC validation, etc. are hidden and handled. I also don’t really need to think about the space for that, since this is directly part of the structure.

It gets more interesting when we look at how the client responds to the challenge from the server:

We copy the server’s session public key to our own state, then we decrypt the server’s long term public key using the public key that we were sent alongside the client’s own secret key. Without both of them, we cannot decrypt the information that was sealed using the server’s secret key and the client’s public key. Remember that we have a very important distinction here:

  • Session key pair – generated per connection, transient, meaningless. If you know what the session public key is, you don’t get much.
  • Long term key pair – used for authentication of the other side. If you know the long term public key, you may be able to figure out who the client or server is.

Because of that, we never send the long term public keys in the clear. However, just getting the public key isn’t enough; we need to ensure that the other side actually holds the full key pair, not just claims that it does.

We handle that part by asking the server to encrypt the client’s public session key using its long term secret key. Because the public session key is something that the client controls, the fact that the server can produce a value that decrypts to it using the stated public key ensures that it holds the secret portion as well. To answer the challenge, we do much the same thing in reverse. In other words, we are encrypting the server’s public session key with our own long term key and sending that to the server.

The final step is actually generating the symmetric keys for the channel, which is done using:

We are using the client’s session key pair as well as the server’s public key to generate a shared secret. Actually, a pair of secrets, one for sending and one for receiving. On the other side, you do pretty much the same in reverse.
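In the actual code this is libsodium’s key exchange (crypto_kx), which hands back separate receive and transmit keys. Purely to illustrate the idea with built-in .NET types (NIST P-256 and HKDF standing in for Curve25519 and libsodium), a sketch might look like this:

using System.Security.Cryptography;
using System.Text;

public static class SessionKeys
{
    // Derive a shared secret, then split it into two directional keys,
    // so client->server and server->client traffic never share a key.
    public static (byte[] Send, byte[] Receive) Derive(
        ECDiffieHellman mySessionKey, ECDiffieHellmanPublicKey otherSessionKey, bool isClient)
    {
        byte[] sharedSecret = mySessionKey.DeriveKeyMaterial(otherSessionKey);

        byte[] clientToServer = HKDF.DeriveKey(HashAlgorithmName.SHA256, sharedSecret, 32,
            info: Encoding.ASCII.GetBytes("client->server"));
        byte[] serverToClient = HKDF.DeriveKey(HashAlgorithmName.SHA256, sharedSecret, 32,
            info: Encoding.ASCII.GetBytes("server->client"));

        // Each side labels the same two keys from its own point of view.
        return isClient ? (clientToServer, serverToClient) : (serverToClient, clientToServer);
    }
}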

You can see the full source code here.

This is only partial work, of course; we still need to deal with the issue of actually sending data after the handshake. I’ll deal with that in my next post.

time to read 5 min | 898 words

In the previous post, I talked a lot about the manner in which both client and server authenticate one another safely and securely. The reason for all the trouble is that we want to ensure that we are talking to the entity we believe we are, protect ourselves from man-in-the-middle attacks, etc. The entire purpose of the handshake exchange is to establish that the party on the other side is the right one and not a malicious actor (like the coffee shop router or the corporate firewall). Once we establish who is on the other side, the rest is pretty easy. Each side of the connection generated a key pair specifically for this connection. They then sent each other the public key of that pair, as well as proved that they own another key pair (trust in which was established separately, in an offline manner).

In other words, on each side, we have:

  • My key pair (public, secret)
  • Other side public key

With those, we can use key exchange to derive a shared secret key. The gist of this is that we know that this statement holds:

op(client_secret, server_public) == op(server_secret, client_public)

The details on the actual op() aren’t important for understanding, but I’m using sodium, so this is scalar multiplication over curve 25519. If this tells you anything, great. Otherwise, you can trust that people who do understand the math says that this is safe to do. Diffie-Hellman is the search term to use to understand how this works.
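The claim is easy to check with whatever library is at hand; here is a quick sketch using .NET’s built-in ECDiffieHellman (over NIST P-256 rather than Curve25519, purely for illustration):

using System;
using System.Linq;
using System.Security.Cryptography;

// Each side generates its own key pair...
using var client = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);
using var server = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);

// ...and combines its secret key with the other side's public key.
byte[] fromClient = client.DeriveKeyMaterial(server.PublicKey);
byte[] fromServer = server.DeriveKeyMaterial(client.PublicKey);

// op(client_secret, server_public) == op(server_secret, client_public)
Console.WriteLine(fromClient.SequenceEqual(fromServer)); // True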

Now that we have a shared secret key, we can start sending data to one another, right? It would appear that the answer to that is… no. Or at least, not yet. The communication channel that we build here is built on top of TCP, providing two way communication for client and server. TCP uses the stream abstraction to send data over the wire. That does not work with modern cryptographic algorithms.

How can that be? There is literally a thing called a stream cipher, after all. If you cannot use a stream cipher for a stream, what is it for?

A stream cipher is a basic building block for modern cryptography. However, it also has a serious problem. It doesn’t protect you from modification of the ciphertext. In other words, you will “successfully” decrypt the value and use it, even though it was modified. Here is a scary scenario of how you can abuse that badly.

Because of such issues, all modern cryptographic algorithms use Authenticated Encryption. In other words, to successfully complete their operation, they require that the ciphertext matches its authentication code. Conceptually, the first thing that a modern cipher will do on decryption is something like:

That isn’t quite how it actually looks, but it is close enough to understand what is going on. If you want to look at how a real implementation does it, you can look here. The Python code is nicer, but this is basically the same concept.
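To make this concrete, here is a small demonstration using .NET’s AesGcm (an AEAD cipher, standing in for what libsodium does): flipping a single bit of the ciphertext makes decryption fail outright instead of returning garbage.

using System;
using System.Security.Cryptography;
using System.Text;

var key = RandomNumberGenerator.GetBytes(32);
var nonce = RandomNumberGenerator.GetBytes(12);
var plaintext = Encoding.UTF8.GetBytes("Attack At Dawn");
var ciphertext = new byte[plaintext.Length];
var tag = new byte[16];

using var aes = new AesGcm(key);
aes.Encrypt(nonce, plaintext, ciphertext, tag);

ciphertext[0] ^= 1; // an attacker (or a flaky router) flips one bit

var decrypted = new byte[ciphertext.Length];
try
{
    // Authenticated encryption checks the tag first; this throws instead of
    // handing back a silently corrupted message.
    aes.Decrypt(nonce, ciphertext, tag, decrypted);
}
catch (CryptographicException)
{
    Console.WriteLine("Tampering detected, message rejected");
}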

So, why does that matter for us? How does this relate to having to deal with streams?

Consider the following scenario: in this model, in order to successfully decrypt anything, we first need to validate the MAC (message authentication code) for the encrypted value. But in order to do that, we have to have the whole value, not just part of it. In other words, we cannot use a real stream; instead, we need to send the data in chunks. The TLS protocol has the same issue, which is handled via the notion of records, with a maximum size of about 16KB. So a TLS stream is actually composed of records that are processed independently from one another. That also means that before you get to TLS, a buffered stream is a must, otherwise we’ll send just a few plain text bytes with a lot of cryptographic envelope. In other words, if you call tls.Write(buffer[0..4]) without buffering, this will send a packet with a cryptographic envelope that is much bigger than the actual plain text value that you sent.

Looking at the TLS record layer, I think that I’ll adopt many of the same behaviors. Let’s consider a record:

So each record is composed of an envelope that simply contains the length, and then we have the cipher text itself. I intend to use libsodium’s encrypted stream, because it lets me handle things like re-keying on the fly transparently, etc. We read the record from the network, decrypt it and then need to decide what to do.

If this is an alert, we raise it to the user (this is critical for good error reporting). Note that in this way I can send (an encrypted) stream to the other side to give a good error for the caller. For data, we just pass it to the caller. Note that there is one very interesting aspect here. We have two Len fields. This is because we allow padding, that can help avoid attacks such as BREACH and mitigate traffic analysis.  We ensure that the padding is always set to zero, similar in reason to the TLS model, to avoid mistakes and to force implementation correctness.

I think that this is enough theory for now. In my next post, I want to get to actually implementing this.

As usual, I would love to hear your feedback and comments.
