time to read 6 min | 1163 words

After spending so much time building my own protocol, I decided to circle back a bit and go back to TLS itself, to see if I can get the same behavior from it that I built on my own. As a reminder, here is what we achieved:

Trust established between nodes in the system via a back channel, not a Public Key Infrastructure (PKI). For example, I can have:

On the client side, I can define something like this:

Server=northwind.database.local:9222;Database=Orders;Server Key=6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=;Client Secret Key=daZBu+vbufb6qF+RcfqpXaYwMoVajbzHic4L0ruIrcw=

Can we achieve this using TLS? At first glance, that doesn't seem to be possible. After all, TLS requires certificates, but we don't have to give up just yet. One of the (new) options for certificates is Ed25519, which is a key pair scheme that uses 256-bit keys. That is also similar to what I have used in my previous posts, behind the covers. So the plan is to do the following:

  • Generate key pairs using Ed25519 as before.
  • Distribute the knowledge of the public keys as before.
  • Generate a certificate using those keys.
  • During the TLS handshake, trust only the keys that we were explicitly told to trust, disabling any PKI checks.

That sounds reasonable, right?  Except that I failed.

To be rather more exact, I couldn't generate a valid X509 certificate from an Ed25519 key pair. Using .NET, you can use the CertificateRequest class to generate certificates, but it only supports RSA and ECDsa keys. Safe sizes for those types are probably:

  • RSA – 4096 bits (2048 bits might also be acceptable) – key size on disk: 2,348 bytes.
  • ECDsa – 521 bits  - key size on disk: 223 bytes.

The difference between those and the 32 byte key for Ed25519 is pretty big. It isn't much in the grand scheme of things, for sure, but it matters. The key issue (pun intended) is that this is large enough to make it awkward to use the value directly. Consider the connection string I listed above. The keys we use here are small enough that we can just write them inline (the simplest and most obvious thing to do). The keys for either of the more commonly used RSA and ECDsa are too big for that.

Here is an ECDsa key, for example:

MIHcAgEBBEIBuF5HGV5342+1zk1/Xus4GjDx+FR
rbOPrC0Q+ou5r5hz/49w9rg4l6cvz0srmlS4/Ysg
H/6xa0PYKnpit02assuGgBwYFK4EEACOhgYkDgYY
ABABaxs8Ur5xcIHKMuIA7oedANhY/UpHc3KX+SKc
K+NIFue8WZ3YRvh1TufrUB27rzgBR6RZrEtv6yuj
2T2PtQa93ygF761r82woUKai7koACQZYzuJaGYbG
dL+DQQApory0agJ140T3kbT4LJPRaUrkaZDZnpLA
oNdMkUIYTG2EYmsjkTg==

And here is an RSA key:

MIIJKAIBAAKCAgEApkGWJc+Ir0Pxpk6affFIrcrRZgI8hL6yjXJyFNORJUrgnQUw
i/6jAZc1UrAp690H5PLZxoq+HdHVN0/fIY5asBnj0QCV6A9LRtd3OgPNWvJtgEKw
GCa0QFofKk/MTjPimUKiVHT+XgZTnTclzBP3aSZdsROUpmHs2h4eS9cRNoEnrC1u
YUzaGK4OeQNLCNi1LyB6I33697+dNLVPoMJgfDnoDBV12KtpB6/pLjigYgIMwFx/
Qyx9DhnREXYst/CLQs8S/dmF+opvghhdhiUUOUwqGA/mIIbwtnhMQFKWCQXEk7km
5hNg/fyv/qwqvTkqQTZkJdj0/syPNhqnZ9RurFPkiOwPzde8I/QwOkEoOXVMboh4
Ji3Y6wwEkWSwY/9rzUK2799lzTmZlvUu2ZxNZfKxQ84vmPUCvP288KXOCU4FxIUX
lujBu7aXUORtQE9oZxBSxqCSqmCEb7jGwR3JOpFlUZymK7W0jbY4rmfZL8vcDYdG
r0msuXD+ggVjYzpHI7EH5MtQXYJZ2aKan5ZpSL/Lb0HsjkDLrsvMi+72FcwXH+5P
Q1E30uxs5y9xOTSqff9T9x6KPAOwIpmrv4Bc3J0NgEgWiKxG9nM1+f8FkKlCRino
rrF9ZrC+/l/vc67xye+Pr1tLvEFT5ARu/nR1JH/Lv/CsAU9y51wOPqD6dQUCAwEA
AQKCAgBJseTWWcnitqFU8J62mM94ieCL8Q3WYZlP7Zz38lfySeCKeZRtWa/zsozm
XEQY0t7+807pHPLs0OhMHlFv1GQKj09Wg4XvWWgqvLOSucC7QZ6cLfNUoUNhCxGp
dbnAKGuXN9wwx7NBBljl5V4Ruf//UgxRw7YuklWk0ZjoUSrGGDX3siOtaZ17Nxwf
NAB8qWKWwzSgquUmEH+kr4HeZorSRfC/+ntEUaa6y5T28g7Vosb4NYgLxJqiN3te
3B0yY6O3N4bZkyQ6TEblSdua7LCsPUCjbdi6LlZg664RDQqIcVATkwzVC14A95Mj
tjkzqzU5ttxpkmP21cHdX6847QcpERgQ7NzAbjrU5UH8aBOsetaZo/1yDr5U13ah
YcAq9XX6tLeAA0rUsnXKAWBQswtWIU0jXBuRRSE7xDXv+82SWEoPqZMSAv77p+uc
AeogN+zzZPPet/AOERKLcGC9WoC/HT7q/H3zFAsRPoKY6qMfLFntdosc0lmRxvHv
b9NXBzKdDuOiUXhdRMhL5Yld8ivvHuwRnPfcZycplSFrA9E5xo/S3RQj+Re9L0yR
8tNzjl+lcgtk8Q0CSJl6eW2Fjja5ZrvDD8qL97+WFqHR7LTTqZ7TmiT7u1MXW1Il
wTuccWCQ85BzxpRbyzPXLdsxMgPCmjicX/23+srOXAk2z42bOQKCAQEAzM1Mocnd
w0uoETHZH0VX29WaKVqUAecGtrj+YNujzmjLy2FPW10njgBfZgkVQETjFxUS9LBZ
xv/p6fCio3NgXh3q7O/kWLxojuR8JB7n4vxoKGBwinwzi1DHp37gzjp/gGdr4mG9
8b7UeFJY8ZPz0EoXcPr3TL+69vOoLieti/Ou9W7HbpDHXYLKclFkJ0d/0AtDNaM7
kCNvI7HgC5JvCCOdGatmbB09kniQjtvE4Wh4vOg/TtH1KoKGXbC8JnjHNRjJtgqU
1mhbq36Eru8iOVME9jyHAkSPqphqeayEUdeP3C1Bc2xmrlxCQALZrAfH37ZWcf44
UuOO5TMnf5HTLwKCAQEAz9F59/xlVDHaaFpHK6ZRTQQWh6AVBDKUG2KDqRFAGQik
6YqQwJFGSo1Z+FjXzidGEHkqH6KyGtSxS6dTgqqfTC96P1rdrBab5vgdXpfSa/0S
Qke2sH3eZ1vWJe95AD7AuVfsN/6IXIBHP5fWjXthGuo6U3vkkNjdjJGNxjfuMuug
SbxjjVV6kZI6gwX2gfTQDKUT+yRjEnqGAyCcFeZXwWGryF1IseOFaNB2ATVKSqn9
oXI7AaI3ZRX3SyfOfyo3TaZEXabS1tfEg4JwIGNpx8WvRxb/X7WZi46be8u0ya4L
BDJ6ZIOBf7lpvaI1Dr3dzCuPqjGQ3V/xPwGy5D8+CwKCAQEAuFVUUw6pjn0LIabX
QQEd6hzgq7X+H5Q8A7yQIMewMTk7rKvCTH6U+oe1VdZ5DSazqvPp4tjThXyTol9X
U3ymUS/mYiotQf0asvpODgjPOAttCGJ9CPhvQEaN3WEioBwg5IaxoMnOt8bF4CJm
MdG0ElaNsMACVE8BzgJS7nACEURcxkVWNVsURkNRSgGd/oipLqzkamOoWby67MrN
2DyNuSqs3QzbnBXZdHsVya9fDm8EtSroyF3Lp95hZ/SJ9KqiylSsQTBW9IBrefjf
HcDY8fWaMrMZ5V2mXarfsvInCq7VqhwFnAkGhos9ifXGy8MZEG9CcUmakmiFFiCr
vXOYOwKCAQADL9Yr/F3dbapIwWGoBLPod3CVAdpwpwnoZZlZRV9zQtOslShlG5U1
XXeMvGgKzEVhyUnhFFCg4rQZUeaQ8Wbh9zRrtkwB8JLRduqUYcWjTE00YP8nM7bu
ZNUi3cpAO7Ye4X9I2Ilkyb7N9dkfcE3r6L2ePB8kLX8wQacn7AGmHEDoAJCSQUZQ
5yooijXehk+OchWdW1B9nw1hDOX33AFqgMHun6eWusN3+QJmQFf0TykJicPn4YHx
9eVF7MVY49/XO/5+ZSmEi+iCj8SCaqPboWdvsqWV5SYGotg1jMkn8phOpyuDURTy
TXiWpN8la7n0AJMCbCIpkugTLEZ/A41DAoIBAAr73RhOZWDi40D6g+Z2KLHMtLdn
xHMEkT0bzRZYlr0WGQpP/GPKJummDHuv/fRq2qXhML7yh7JK8JFxYU94fW2Ya1tx
lYa5xtcboQpBLfDvvvI4T4H1FE4kXeOoO46AtZ6dFZyg3hgKlaJkR+pFPLr5Aeak
w9+6UCK8v72esoKzCMxQzt3L2euYRt4zTKL3NnrgS7i5w56h2UvP1rDo3P0RVoqc
knS1ToamVL2JaPnf/g+gUUVZyya9pyu9RP8MIcd1cvnxZec8JaN89WWnsA2JJbPw
stYBnWMvLFabPtPXVcsLrWMEmLFI2yn+fU4YTviwRSs/SrprXDdsqZO2xd8=

Note that in both cases, we are looking at the private key only. As you can imagine, this isn’t really something viable. We will need to store that separately, load it from a file, etc.

I tried generating Ed25519 keys using the built-in .NET API as well as the Bouncy Castle one. Bouncy Castle is a well known cryptographic library that is very useful. It also supports Ed25519. I spent quite some time trying to get it to work. You can see the code here. Unfortunately, while I’m able to generate a certificate, it doesn’t appear to be valid. Here is what this looks like:

[screenshot: the generated certificate is reported as invalid]

Using RSA, however, did generate viable certificates, and didn’t take a lot of code at all:
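Something along these lines works (a minimal sketch rather than the exact code from the post; the PEM helpers assume .NET 7 or later):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

static class SelfSignedCert
{
    // Load the persistent RSA key from disk (or create it on first run),
    // then wrap it in a throwaway self-signed certificate for this process.
    public static X509Certificate2 LoadOrCreate(string keyPath, string subject)
    {
        var rsa = RSA.Create(4096);
        if (File.Exists(keyPath))
            rsa.ImportFromPem(File.ReadAllText(keyPath));
        else
            File.WriteAllText(keyPath, rsa.ExportRSAPrivateKeyPem());

        var request = new CertificateRequest(
            $"CN={subject}", rsa, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

        // The certificate is regenerated on every startup; trust is anchored on the key, not the cert.
        return request.CreateSelfSigned(
            DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow.AddYears(1));
    }
}
```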

We store the actual key in a file, and we generate a self signed certificate on the fly. Great. I did try to use the ECDsa option, which generates a much smaller key, but I ran into severe issues there. I could generate the key, but I couldn't use the certificate; I ran into a host of issues around permissions, somehow.

You can try to figure out more details from this issue. What I took from that is that in order to use ECDsa on Windows, I would need to jump through hoops. And I don't know if Ed25519 will even work, or how to make it work.

As an aside, I posted the code to generate the Ed25519 certificates, if you can show me how to make it work, it would be great.

So we are left with using RSA, with the largest possible key. That isn’t fun, but we can make it work. Let’s take a look at the connection string again, what if we change it so it will look like this?

Server=northwind.database.local:9222;Database=Orders;Server Key Hash=6HvG2FFNFIifEjaAfryurGtr+ucaNgHfSSfgQUi5MHM=;Client Key=client.key

I marked the pieces that were changed. The key observation here is that I don't need to hold the actual public key here, I just need to recognize it. That I can do by simply storing the SHA256 hash of the public key, which ensures that I always get the same length, regardless of what key type I'm using. For that matter, I think that this is something that I want to do regardless, because if I do manage to fix the other key types, I could still use the same approach. All values hashed with SHA256 come out at the same length, obviously.
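In .NET terms, producing that identifier can be as simple as this (a sketch; it hashes the certificate's public key, which is really the public portion of the key pair):

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// SHA-256 over the certificate's public key, Base64 encoded - a fixed length
// identifier regardless of whether the underlying key is RSA, ECDsa or anything else.
static string PublicKeyHash(X509Certificate2 certificate) =>
    Convert.ToBase64String(SHA256.HashData(certificate.GetPublicKey()));
```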

After all of that, what do we have?

We generate a key pair and store it, and we let the other side know the public key hash as the identifier. Then we dynamically generate a certificate with the stored key. Let's say that we do that once per startup. That certificate is going to be different on each run, but we don't actually care; we can safely authenticate the other side using the (persistent) key pair by validating the public key hash.

Here is what this will look like in code from the client perspective:
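A minimal sketch of the idea (not the exact code from the post), using SslStream and ignoring the PKI verdict entirely in favor of the public key hash from the connection string:

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

static async Task<SslStream> ConnectAsync(
    string host, int port, string serverKeyHash, X509Certificate2 clientCertificate)
{
    byte[] expected = Convert.FromBase64String(serverKeyHash);

    var tcp = new TcpClient();
    await tcp.ConnectAsync(host, port);

    var ssl = new SslStream(tcp.GetStream(), false,
        // Ignore the PKI verdict entirely; trust is decided by the public key hash alone.
        (sender, certificate, chain, errors) =>
            certificate is not null &&
            CryptographicOperations.FixedTimeEquals(
                SHA256.HashData(certificate.GetPublicKey()), expected));

    await ssl.AuthenticateAsClientAsync(
        host, new X509CertificateCollection { clientCertificate }, false /* no revocation check */);
    return ssl;
}
```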

And here is what the server is doing:
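And a matching sketch for the server side, under the same assumptions, where authorization is just a lookup of the client's public key hash in a list we were given out of band:

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

static async Task<SslStream> AcceptAsync(
    TcpListener listener, X509Certificate2 serverCertificate, string[] authorizedClientKeyHashes)
{
    TcpClient client = await listener.AcceptTcpClientAsync();

    var ssl = new SslStream(client.GetStream(), false,
        // No PKI here either: a client is authorized if the SHA-256 of its public key is on the list.
        (sender, certificate, chain, errors) =>
            certificate is not null &&
            Array.IndexOf(authorizedClientKeyHashes,
                Convert.ToBase64String(SHA256.HashData(certificate.GetPublicKey()))) >= 0);

    await ssl.AuthenticateAsServerAsync(
        serverCertificate, true /* require a client certificate */, false /* no revocation check */);
    return ssl;
}
```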

As you can see, this is very similar to what I ended up with in my secured protocol, but it utilizes TLS and all the weight behind it to achieve the same goal. A really important aspect of this is that we can actually connect to the server using something like openssl s_client -connect, which can be really nice for debugging purposes.

However, the weight of TLS is also an issue. I failed to successfully create Ed25519 certificates, which was my original goal. I couldn't get it to work using ECDsa certificates and had to use RSA ones with the biggest keys. It was obvious that a lot of those issues arise because we are running on a particular operating system, which means that this protocol is still subject to the whims of the environment. I have also not done everything that is required to ensure that there will not be any remote calls as part of the TLS handshake in this case; that can actually be quite complex to ensure, to be honest. Given that these are self-signed (and pretty bare-bones) certificates, there shouldn't be any, but you know what they say about assumptions.

The end goal is that we are now able to get roughly the same experience using TLS as the underlying communication mechanism, without dealing with certificates directly. We can use standard tooling to access the server, which is great.

Note that this doesn’t address something like browser access, which will not be trusted, obviously.  For that, we have to go back to Let’s Encrypt or some other trusted CA, and we are back in PKI land.

time to read 5 min | 826 words

One of the things that I find myself paying a lot of attention to is the error handling portion of writing software. This is one of the cases where I’m sounding puffy even to my own ears, but from over two decades of experience, I can tell you that getting error handling right is one of the most important things that you can do for  your systems. I spend a lot of time on getting errors right. That doesn’t just mean error handling, but error reporting and giving enough context that the other side can figure out what we need to do.

In a secured protocol, that is a bit harder, because we need to safeguard ourselves from eavesdroppers, but I spent significant amounts of time thinking on how to do this properly. Here are the ground rules I set out for myself:

  • The most common scenario is client failing to connect to the server.
  • We need to properly report underlying issues (such as TCP errors) while also exposing any protocol level issues.
  • There is an error during the handshake and errors during processing of application messages. Both scenarios should be handled.

We already saw in the previous post that there is the concept of data messages and alert messages (of which there can only be one). Let's look at how that works for the handshake scenario. I'm focusing on the server side here, because I'm assuming that this one is more likely to be opaque. A client side issue can be troubleshot much more easily. And the issue isn't error handling inside the code, it is distributed error handling. In other words, if the server has an issue, how does it report that to the client?

The other side, where the client wants to report an issue to the server, is of no interest to us. From our perspective, a client can cut off at any point (TCP connection broke, etc), so there is no meaning to trying to do that gracefully or give more data to the server. What would the server do with that?

Here is the server portion of establishing a secured connection:

I’m using Zig to write this code and you can see any potential error in the process marked with a try keyword. Looking at the code, everything up to line 24 (the completeAuth() call) is mechanically sending and receiving data. Any error up to that point is something that is likely network related (so the connection is broken). You can see that the protocol call challenge() can fail as does the call to generateKey() – in both cases, there isn’t much that I can do about it. If the generateKey() call fails, there is no shared secret (for that matter, it doesn’t look like that can fail, but we’ll ignore that). As for the challenge() call, the only way that can fail is if the server has failed to encrypt its challenge properly. That is not something that the client can do much about. And anyway, there isn’t a failing codepath there either.

In other words, aside from network issues, which will break the connection (meaning we cannot send the error to the client anyway), we have to wait until we process the challenge from the client to have our first viable failure. In the code above, I'm just calling try, which means that we'll fail the connection attempt, close the socket and basically just hang up on the client. That isn't nice to do at all. Here is what I replaced line 24 with:

What is going on here is that by the time I got the challenge response from the client, I have enough information to derive the shared key. I can use that to send an alert to the other side, letting them know what the failure was. A client will complete the challenge, and if there is a handshake failure, we proceed to fail gracefully with a meaningful error.

But there is another point to this protocol: an alert message doesn't have to show up only in the handshake part. Consider a long running response that runs into an error. Here is how you'll usually handle that in TCP / HTTP scenarios; assume that we are streaming data to the client and suddenly run into an issue:

How do you send an error midstream? Well, you don’t. If you are lucky, you’ll have the error output and have some way to get the full message and manually inspect it. That is a distressingly common issue, by the way, and a huge problem for proper error reporting with long running responses.

With the alert model, we have effectively multiple channels in the same TCP stream that we can utilize to send a clear and independent error for the client. Much nicer overall, even if I say so myself.

And it just occurred to me that this mimics quite nicely the same approach that Zig itself uses for error handling.

time to read 7 min | 1248 words

We have now managed to do a proper handshake, and both client and server have a shared key. The client has also verified that the server is who they thought it should be, and the server knows who the client is and can look up whatever authorization such a client ought to get. The next step we have to take is actually starting to send data over the wire. I mentioned earlier that while conceptually we are dealing with a stream of data, in practice we have to send the data as independent records. That is done so we can properly verify that they weren't meddled with along the way (either via cosmic radiation or malicious intent).

We'll start with writing data, which is simple. We initiate the write side of the connection using CryptoWriter:

We allocate a buffer that is 32KB in size (16KB x 2). The record size we selected is 16KB. Unlike TLS, this is an inclusive size, so the entire thing must fit in 16KB. We need to allocate 32KB because the API we use does not support in-place encryption. You'll note that we reserved some space in the header (5 bytes, to be exact) for our own needs. Note also that we initialize the stream and send the stream header to the other side here; that is the only reference to cryptography in the initialization. The actual writing isn't really that interesting, we are pushing all the data to the buffer until we run out of space, then we call flush(). I've written this code in plenty of languages, and it is pretty straightforward, if tedious.

Nothing really happens here until we call flush(RecordTypes.Data) – that is an indication to the other side that this is application data, rather than some protocol level message. The flush() method is where things get really interesting.

There is a lot of code here, I know. Let's see if I can take it all in. There are some preconditions that should be fairly obvious, then we write the size of the plain text value as well as the record type to the header (that part of the header will be encrypted, mind). The next step is interesting: we invoke a callback to get an answer about how much padding we should use. There is a lot to say about padding. In general, just looking at the size of the data can tell you a lot about what is going on, even if there is nothing else you can figure out. If you know that "Attack At Dawn" is 14 characters long, and that with the encryption overhead it turns into a 37 byte message, that alone can tell you much.

Assume that you can't figure out the contents, but can sniff the sizes. That can be a problem. There are certain attacks that rely on leaking the size of messages to work; the BREACH attack, for example, relies on being able to send text that would collide with secret pieces of the message. Analyzing the size of the data that is sent will tell us when we managed to find a match (because the size will be reduced). To solve that, you can define a padding policy. For example, all messages are always exactly 16KB in size, and you'll send an empty message every second if there is no organic traffic. Alternatively, you may choose to randomize the message size (to further confuse things). At any rate, this is a pretty complex topic, and not something that I wanted to get too deep into. Being able to let the user decide gives me the best of both worlds. This is a match for SSL_CTX_set_record_padding_callback() in OpenSSL.

The rest is just calling libsodium to do the actual encryption, setting the encrypted envelope size and sending it to the other side. Note that we use the other half of the buffer here to store the encrypted portion of the data.
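To make the record layout concrete, here is a rough C# analogue of that flush path (AES-GCM standing in for libsodium's secretstream, and the exact field sizes are illustrative):

```csharp
using System;
using System.Buffers.Binary;
using System.IO;
using System.Security.Cryptography;

static class Records
{
    public const byte Data = 1, Alert = 2;          // record types (names are illustrative)

    // Encrypt a single record and write it to the stream. Plain-text layout:
    // [2 bytes payload length][1 byte record type][payload][zero padding]
    // Wire layout: [2 bytes envelope length][12 bytes nonce][cipher text][16 bytes tag]
    public static void WriteRecord(Stream stream, AesGcm aes, byte recordType,
                                   ReadOnlySpan<byte> payload, int padding)
    {
        byte[] plain = new byte[3 + payload.Length + padding];  // padding bytes stay zero
        BinaryPrimitives.WriteUInt16LittleEndian(plain, (ushort)payload.Length);
        plain[2] = recordType;
        payload.CopyTo(plain.AsSpan(3));

        byte[] wire = new byte[2 + 12 + plain.Length + 16];
        Span<byte> nonce = wire.AsSpan(2, 12);
        RandomNumberGenerator.Fill(nonce);                      // fresh nonce per record
        aes.Encrypt(nonce, plain, wire.AsSpan(14, plain.Length), wire.AsSpan(14 + plain.Length, 16));

        BinaryPrimitives.WriteUInt16LittleEndian(wire, (ushort)(wire.Length - 2));
        stream.Write(wire, 0, wire.Length);
    }
}
```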

In addition to sending application data, we can send alerts to the other side. That is a protocol level error message. I'll actually have a separate post to talk about error handling, but for now, let's see what sending an alert looks like:

Basically, we overwrite whatever there is on the buffer, and we flush it immediately to the other side. We also set the alert_raised flag, which will prevent any further usage of the stream. Once an error was sent, we are done. We aren’t closing the stream because that is the job for the calling code, which will get an error and close us during normal cleanup procedures.

The reading process is a bit more involved, on the other hand. We start by mirroring the write, pulling the header from the network and initializing the stream:

The real fun starts when we need to actually read things, let’s take a look at the code and then I’ll explain it in details:

We first check if an alert was raised, if it was, we immediately abort, since the stream is now dead. If there are any plain text bytes, we can return them directly from the buffer. We’ll look into that as well as how we read from the network shortly. For now, let’s focus on what we are doing here.

We read enough from the network to know what is the envelope length that we have to read. That value, if you’ll remember, is the first value that we send for a record and is not encrypted (there isn’t much point, you can look at the packet information to get that if you wanted to). We then make sure that we read the entire record to the buffer. We decrypt the data from the incoming buffer to the plain_text buffer (that is what the read_buffer()  function will use to actually return results).

The rest of the code is figuring out what we actually got. We check what the actual size of the data we received is. We may have received a zero length value, so we have to handle that. We check whether we got a data record or an alert. If the latter, we mark it as such and return an error. If this is just data, we set up the plain text buffer properly and go to the read_buffer() call to return the values. That is a lot of code, but not a lot of functionality. Simple code is best, and this matches that scenario.

Let’s see how we handle the actual buffer and network reads:

Not much here, just need to make sure that we handle partial reads as well as reading multiple records in one shot.
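The matching read side, in the same sketchy C# terms (again with AES-GCM standing in for libsodium, and reusing the record type constants from the writer sketch above):

```csharp
using System;
using System.Buffers.Binary;
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class RecordReader
{
    // Read one full record from the stream, decrypt it, and fail loudly on an alert.
    // Returns the payload (without the record header or the padding).
    public static byte[] ReadRecord(Stream stream, AesGcm aes)
    {
        Span<byte> lenBuf = stackalloc byte[2];
        stream.ReadExactly(lenBuf);                              // envelope length travels in the clear
        int envelope = BinaryPrimitives.ReadUInt16LittleEndian(lenBuf);

        byte[] wire = new byte[envelope];
        stream.ReadExactly(wire);                                // ReadExactly deals with partial reads

        byte[] plain = new byte[envelope - 12 - 16];
        aes.Decrypt(wire.AsSpan(0, 12), wire.AsSpan(12, plain.Length),
                    wire.AsSpan(12 + plain.Length, 16), plain);  // throws if the record was tampered with

        int payloadLen = BinaryPrimitives.ReadUInt16LittleEndian(plain);
        byte recordType = plain[2];
        byte[] payload = plain.AsSpan(3, payloadLen).ToArray();

        if (recordType == Records.Alert)
            throw new IOException("Remote alert: " + Encoding.UTF8.GetString(payload));
        return payload;
    }
}
```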

We saw that when we get an alert, we return an error. But the question is, how do we get the actual alert? The answer is that we store the message in the plain text buffer and record the alert itself. All future calls will fail with an error. You can then call to the alert()  function to get the actual details:

This gives us a nice API to use when there are issues with the stream. I think that matches well with the way Zig handles errors, but I can't tell whether this is idiomatic Zig.

That is long enough for now. You can go and read the actual code, of course, and I will welcome any comments. In the next (and likely last) post in the series, I'm going to go over error handling at the protocol level.

time to read 6 min | 1159 words

After figuring out the design, let’s see what it would take to actually write a secured communication channel, sans PKI, in code. I’m going to use Zig as the language of choice here. It is as low level as C, but so much nicer to work with. To actually implement the cryptographic details, I’m going to lean on libsodium to do all the heavy lifting. It took multiple iterations of the code to get to this point, but I’m pretty happy with how it turned out.

I'll start from the client code, which connects to a remote server and establishes a secured TCP channel. Here is what this looks like:

The function connects to a server, expecting it to use a particular public key, and will authenticate using a provided key pair. The bulk of the work is done in the crypto.clientConnection() call, where we follow the handshake I outlined here. The result of the call is an AuthenticatedConnection structure, containing both the encrypted stream as well as the public key of the other side. Note that on the client side, if the server doesn't authenticate using the expected key, the call will fail with an error, so for clients it is usually not important to check the public key; that is already something that we checked.

The actual stream we return exposes reader and writer instances that you can use to talk to the other side. Note that we are using buffered data, so writing to the stream will not do anything until the buffer is full (about 16KB) or flush() is called.

The other side is the server, of course, which looks like this:

On the server side, we have the crypto.serverConnection() call. It accepts a new connection from a listening socket and starts the handshake process. Note that this code, unlike the client, does not verify that the other side is known to us. Instead, we return that to the caller, which can then check the public key of the client. This is intentional, because at this point we have a secure channel, but not yet authentication. The server can then safely tell the other side whether it authorizes them (or not) using the channel, with no one being able to peek at what is going on there.

Let’s dig a bit deeper into the implementation. We’ll start from the client code, which is simpler:

The handshake protocol itself is handled by the protocol.Client. The way I have coded it, we are reading known lengths from the network into in-memory structures and using them directly. I can do that because the structures are basically just a bunch of packed []u8 (char arrays), so the in-memory and network representations are one and the same. That makes things simpler. You can see that I'm calling readNoEof on the structures as bytes. That ensures that I get the whole message from the network, and then the actual operations that I need to make are handled.
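A rough C# analogue of that "read the whole fixed-size message, then operate on it" approach, with field sizes taken from the hello message layout in the design post and little-endian assumed:

```csharp
using System;
using System.Buffers.Binary;
using System.IO;

// The hello message is a fixed 108 bytes: version (4) + client session public key (32)
// + sealed expected-server key (32) + MAC (16) + nonce (24).
static (uint Version, byte[] SessionPublicKey, byte[] SealedServerKey, byte[] Mac, byte[] Nonce)
    ReadHello(Stream stream)
{
    byte[] hello = new byte[4 + 32 + 32 + 16 + 24];
    stream.ReadExactly(hello);                       // either we get the whole message or we fail
    return (BinaryPrimitives.ReadUInt32LittleEndian(hello),
            hello[4..36], hello[36..68], hello[68..84], hello[84..108]);
}
```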

Here is the sequence of operations:

Client -> Server: hello
Server -> Client: challenge
Client -> Server: challenge response

After sending the hello, the server will respond with a challenge, the client replies, and both sides now know that the other side is who they say they are.

Let's dig a bit deeper, shall we, and see how we build the hello message:

There isn't much here. We set the version field to a known value, we copy our own session public key (which was just generated and tells no one anything about us), and then we copy the expected server public key, but we aren't sending that over the wire in the clear. Instead, we encrypt it. We encrypt it with the client session public key (which we just sent over) as well as the expected middlebox key (remember, those might be different). The idea is that the server on the other end may decide to route the request, but at the same time, we want to ensure that we are never revealing any information to 3rd parties.

The actual encryption is handled via the EncryptedBoxBuffer structure. You can see that I'm using Zig's comptime support to generate a structure with a compile time variant size. That makes it trivial to do certain things without really needing to think about the details. It used to be more complex, and able to support arbitrary embedded structures, but I simplified it to a single buffer. For that matter, for most of the code here, the size I'm using is fixed (32 bytes / 256 bits). The key here is that all the details of nonce generation, MAC validation, etc. are hidden and handled. I also don't really need to think about the space for that, since this is directly part of the structure.

It gets more interesting when we look at how the client responds to the challenge from the server:

We copy the server’s session public key to our own state, then we decrypt the server’s long term public key using the public key that we were sent alongside the client’s own secret key. Without both of them, we cannot decrypt the information that was sealed using the server’s secret key and the client’s public key. Remember that we have a very important distinction here:

  • Session key pair – generated per connection, transient, meaningless. If you know what the session public key is, you don’t get much.
  • Long term key pair – used for authentication of the other side. If you know what the long term public key is, you may figure out who the client or server is.

Because of that, we never send the long term public keys in the clear. However, just getting the public key isn’t enough, we need to ensure that the other side actually holds the full keypair, not just saying that it does.

We handle that part by asking the server to encrypt the client's public session key using its long term secret key. Because the public session key is something that the client controls, the fact that the server can produce a value that decrypts to it using the stated public key ensures that it holds the secret portion as well. To answer the challenge, we do much the same thing in reverse. In other words, we are encrypting the server's public session key with our own long term key and sending that to the server.

The final step is actually generating the symmetric keys for the channel, which is done using:

We are using the client’s session key pair as well as the server’s public key to generate a shared secret. Actually, a pair of secrets, one for sending and one for receiving. On the other side, you do pretty much the same in reverse.
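The original relies on libsodium's key exchange API; as a rough C# stand-in (NIST P-256 ECDH plus HKDF, since the BCL doesn't expose X25519), deriving a separate key per direction could look like this:

```csharp
using System.Security.Cryptography;

// Derive two directional keys from the ECDH shared secret. The client's "send" key
// is the server's "receive" key and vice versa, so the labels are swapped on each side.
static (byte[] SendKey, byte[] ReceiveKey) DeriveSessionKeys(
    ECDiffieHellman mySessionKey, ECDiffieHellmanPublicKey otherSessionPublicKey, bool isClient)
{
    byte[] shared = mySessionKey.DeriveKeyMaterial(otherSessionPublicKey);

    byte[] clientToServer = HKDF.DeriveKey(HashAlgorithmName.SHA256, shared, 32,
        salt: null, info: "client-to-server"u8.ToArray());
    byte[] serverToClient = HKDF.DeriveKey(HashAlgorithmName.SHA256, shared, 32,
        salt: null, info: "server-to-client"u8.ToArray());

    return isClient ? (clientToServer, serverToClient) : (serverToClient, clientToServer);
}
```

The client passes isClient = true; the server passes false and ends up with the mirrored pair.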

You can see the full source code here.

This is only part of the work, of course. We still need to deal with the issue of actually sending data after the handshake; I'll deal with that in my next post.

time to read 5 min | 898 words

In the previous post, I talked a lot about the manner in which both client and server will authenticate one another safely and securely. The reason for all the trouble is that we want to ensure that we are talking to the entity we believe we are, protect ourselves from man-in-the-middle attacks, etc. The entire purpose of the handshake exchange is to establish that the person on the other side is the right one and not a malicious actor (like the coffee shop router or the corporate firewall). Once we establish who is on the other side, the rest is pretty easy. Each side of the connection generated a key pair specifically for this connection. They then managed to send each other the other side's public key, as well as prove that they own another key pair (trust in which was established separately, in an offline manner).

In other words, on each side, we have:

  • My key pair (public, secret)
  • Other side public key

With those, we can use key exchange to derive a shared secret key. The gist of this is that we know that this statement holds:

op(client_secret, server_public) == op(server_secret, client_public)

The details of the actual op() aren't important for understanding, but I'm using sodium, so this is scalar multiplication over curve 25519. If this tells you anything, great. Otherwise, you can trust that the people who do understand the math say that this is safe to do. Diffie-Hellman is the search term to use to understand how this works.
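If you want to see that equality with your own eyes, here is a tiny C# demonstration (with NIST P-256 standing in for Curve25519, which the BCL doesn't expose):

```csharp
using System;
using System.Security.Cryptography;

using var client = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);
using var server = ECDiffieHellman.Create(ECCurve.NamedCurves.nistP256);

// op(client_secret, server_public) == op(server_secret, client_public)
byte[] fromClientSide = client.DeriveKeyMaterial(server.PublicKey);
byte[] fromServerSide = server.DeriveKeyMaterial(client.PublicKey);

Console.WriteLine(Convert.ToBase64String(fromClientSide) ==
                  Convert.ToBase64String(fromServerSide));   // True
```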

Now that we have a shared secret key, we can start sending data to one another, right? It would appear that the answer to that is… no. Or at least, not yet. The communication channel that we build here is built on top of TCP, providing two way communication for client and server. TCP uses the stream abstraction to send data over the wire. That does not work with modern cryptographic algorithms.

How can that be? There is literally a thing called a stream cipher, after all. If you cannot use a stream cipher for a stream, what is it for?

A stream cipher is a basic building block for modern cryptography. However, it also has a serious problem. It doesn’t protect you from modification of the ciphertext. In other words, you will “successfully” decrypt the value and use it, even though it was modified. Here is a scary scenario of how you can abuse that badly.

Because of such issues, all modern cryptographic algorithms use Authenticated Encryption. In other words, to successfully complete their operation, they require that the cipher text matches a cryptographic authentication code. In other words, conceptually, the first thing that a modern cipher will do on decryption is something like:

That isn’t quite how this looks like, but it is close enough to understand what is going on. If you want to look at how a real implementation does it, you can look here. The python code is nicer, but this is basically the same concept.
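To see that behavior in practice, here is a hedged C# example (AES-GCM, not the cipher used in this protocol): flip a single bit in the cipher text and decryption throws instead of returning garbage.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] key = RandomNumberGenerator.GetBytes(32);
byte[] nonce = RandomNumberGenerator.GetBytes(12);
byte[] plaintext = Encoding.UTF8.GetBytes("Attack At Dawn");
byte[] ciphertext = new byte[plaintext.Length];
byte[] tag = new byte[16];

using var aes = new AesGcm(key);
aes.Encrypt(nonce, plaintext, ciphertext, tag);

ciphertext[0] ^= 1;                                   // the man in the middle "helpfully" edits a bit

byte[] decrypted = new byte[plaintext.Length];
try
{
    aes.Decrypt(nonce, ciphertext, tag, decrypted);   // MAC check happens before you see any data
}
catch (CryptographicException)
{
    Console.WriteLine("tampering detected, nothing was decrypted");
}
```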

So, why does that matter for us? How does this relate to having to deal with streams?

Consider the following scenario: in this model, in order to successfully decrypt anything, we first need to validate the MAC (message authentication code) for the encrypted value. But in order to do that, we have to have the whole value, not just part of it. In other words, we cannot use a real stream; instead, we need to send the data in chunks. The TLS protocol has the same issue, which is handled via the notion of records, with a maximum size of about 16KB. So a TLS stream is actually composed of records that are processed independently from one another. That also means that before you get to TLS, a buffered stream is a must, otherwise we'll send just a few plain text bytes with a lot of cryptographic envelope. In other words, if you call tls.Write(buffer[0..4]) without buffering, this will send a packet with a cryptographic envelope that is much bigger than the actual plain text value that you sent.

Looking at the TLS record layer, I think that I’ll adopt many of the same behaviors. Let’s consider a record:

So each record is composed of an envelope, that simply contains the length, then we have the cipher text itself. I’m intending to use the libsodium’s encrypted stream, because it lets me handle things like re-keying on the fly transparently, etc. We read the record from the network, decrypt and then need to decide what to do.

If this is an alert, we raise it to the user (this is critical for good error reporting). Note that in this way I can send (an encrypted) stream to the other side to give a good error for the caller. For data, we just pass it to the caller. Note that there is one very interesting aspect here. We have two Len fields. This is because we allow padding, that can help avoid attacks such as BREACH and mitigate traffic analysis.  We ensure that the padding is always set to zero, similar in reason to the TLS model, to avoid mistakes and to force implementation correctness.

I think that this is enough theory for now. In my next post, I want to get to actually implementing this.

As usual, I would love to hear your feedback and comments.

time to read 17 min | 3332 words

Following our recent hiccup with certificate expiration, I spent some time thinking about what we could do better. One of the paths that this led me to was to consider how I would design the underlying communication channel for RavenDB if I had a blank slate. Currently, RavenDB uses TLS over TCP and HTTPS (which is the same thing) as the sole communication mechanism between servers and clients and between the servers in the cluster. That relies on TLS to ensure the safety of the information, as well as client certificates for authentication. TLS, of course, requires the use of server certificates, which means that we have mutual authentication between clients and servers. However, the PKI infrastructure that is required to support that is freaking complex. It is mostly invisible, except when it isn't, when something fails.

The idea in this design exercise is to consider how I would do things differently. This is a thought exercise only, not something that we intend to put into any kind of system at this point in time. The use of TLS has proven itself to be very successful and has been greatly beneficial. I consider such design exercises to be vital to the overall health of a project (and my own mind), because it allows me to dive deeply into a topic and consider it from a different viewpoint. Therefore, I'm going to proceed based on RavenDB's set of requirements, even though this is all theoretical.

That disclaimer aside, what do we actually need from a secure communication channel?

  • Build on top of TCP – nothing else would do; UDP is nice to consider, but it isn't relevant for RavenDB's scenario, so not worth considering. RavenDB makes a lot of use of the streaming nature of TCP connections. It allows us to make a lot of assumptions about the state of the other side. The key aspect we take advantage of is the fact that for a given connection, if I send you a document, I can assume that you already got (and processed successfully) all previous documents. That saves a lot of back & forth to maintain distributed state.
  • Encrypted over the wire – naturally that means that we need to satisfy the same level of security as TLS.
  • Provide mutual authentication of clients and servers – including in a hostile network environment.

Let's consider what we want to achieve here. The situation is not deployment of servers and clients by many independent organizations (each distrusting all others). Instead, we are setting up a cluster of RavenDB nodes that will talk to one another, as well as any number of clients that will talk to those servers. That means that we can safely assume that there is a background channel which we trust. That removes the need to set up PKI and have a trusted third party that we'll talk to. Instead, we are going to use public key cryptography to do authentication between nodes and clients.

Here is how it is going to look like. When setting up a cluster, the admin will generate a key pair, like so:

  • Server Secret: I_lfn5vna3p1OxyJ_kCJzRaBOWD-vio6hvpL6b2qYs8
  • Server Public: oXQJcrZfMNoDDl1ZVSuJlKbREsd5yoprViQOTqmSSCk

The secret portion is going to remain written to the server’s configuration file, and the public portion will be used when connecting to the server, to ensure that we are talking to the right one. In the same sense, we’ll have the client generate a key pair as well:

  • Client Secret: TVwQXoiYfvuToz5NY8D27bIeJR-LgR4y8gCM4UE3ZSc
  • Client Public: 5nNpLTSQmqzh3yttyD1DyM2a2caLORtecPj5LQ2tIHs

With those in place, we can now setup the following configuration on the server side:

Note that the settings.json contains the key pair of the server, but only the public key of the authorized clients. Conversely, the connection string for RavenDB would be:

Server=crypto.protocol.ravendb.example;
ServerPublicKey=oXQJcrZfMNoDDl1ZVSuJlKbREsd5yoprViQOTqmSSCk;
ClientSecretKey=TVwQXoiYfvuToz5NY8D27bIeJR-LgR4y8gCM4UE3ZSc;
ClientPublicKey=5nNpLTSQmqzh3yttyD1DyM2a2caLORtecPj5LQ2tIHs;

In this case, the client connection string has the key pair of the client, and just the public key of the server. The idea is that we’ll use these to validate that either end is actually who we think they are.

The details of public key cryptography are beyond the scope of this blog post (or indeed, my own understanding, if you get down to it), but the best metaphor that I found was the color mixing one. I'll remind you that in public key cryptography, we have:

  • Client Secret Key (CSK), Client Public Key (CPK)
  • Server Secret Key (SSK), Server Public Key (SPK)

We can use the following operations:

  • Encrypt(CPK, SSK) -> Decrypt(SPK, CSK)
  • Encrypt(SPK, CSK) -> Decrypt(CPK, SSK)

In other words, we can use a public / secret from both ends to encrypt and decrypt the data. Note that so far, everything I did was pretty bog standard Intro to Cryptography 101. Let’s see how we take those idea and turn them into an actual protocol. The details are slightly more involved, and instead of using just two key pairs, we actually need to use five(!), let’s look at them in turn.

The first couple of key pairs are the ones that we are already familiar with: the server's and the client's. However, we are going to tag them as the long term key pairs.


The problem with using those keys is that we have to assume that they will leak at some point. In fact, one of the threat models that TLS has is dealing with adversaries that can record all network communication between parties for arbitrary amounts of time. Given that this is encrypted, and assuming that no one can break the encryption algorithm itself, we need to worry about key leakage after the fact. In other words, if we use a pair of keys to communicate securely, but the communication was recorded, it is enough to capture a single key (from either server or client) to be able to decrypt past conversations. That is not ideal. In order to handle that, we introduce the notion of session keys. Those are keys that are in no way related to the long term keys. They are generated using a secure cryptographic method and are used for a single connection. Once that connection is closed, they are discarded.


The idea is that even if you manage to lay your hands on the long term keys, the session keys, which are actually used to encrypt the communication, are long gone (and were never kept) anyway. For more details, the Wiki article on Perfect Forward Secrecy does a great job explaining the details.

I'm counting four pairs of keys so far, but I mentioned that we'll use five in this protocol. What is that about? I'm going to introduce the idea of a middlebox key. A middlebox is a server that the client will connect to; the client wants to provide just enough information to the middlebox to route the request to the right location, but without providing any external observer with any idea of what the client's final destination is. In essence, this is ESNI (Encrypted Server Name Indication). A key aspect of this is that the client does not trust the middlebox, and the only thing a malicious middlebox can do is record what the final destination of the connection is. It cannot eavesdrop on the details or modify them in any way.


With all of that in place, and hopefully clear, let’s talk about the handshake that is required to make both sides verify that the other one is legit. The connection starts with a hello message, with the following details:

  • Client –> Server
  • Overall size: 108 bytes
  • Algorithm – crypto_box (sodium). Key exchange: X25519. Encryption: XSalsa20 stream cipher. Authentication: Poly1305 MAC.

Field # | Size (bytes) | Content                     | Encrypted using
1       | 4            | Version                     | Plain text
2       | 32           | Client's session public key | Plain text
3       | 32           | Expected server public key  | Middlebox's public key + client's session secret key
4       | 16           | MAC for field 3             |
5       | 24           | Nonce for field 3           |

This requires some explanation. I know enough to know my limitations with cryptography. I'm going to lean on a well known and tested library, libsodium, for the actual cryptographic details and try to do as little as possible on my own. The hello message contains just three actual fields, but the third field is encrypted. Modern encryption practices are meant to make misuse as hard as possible. That means that pretty much any encryption algorithm that you are likely to use will use Authenticated Encryption. This is to ensure that any modification to the cipher text will fail the decryption process, rather than give corrupted results.

To handle that scenario, we need to send a MAC (message authentication code), which you can see as field 4 of the message. The last field is a random value (a nonce) that is used to ensure that when we encrypt the same data with the same keys, we will not output the same value; producing the same output would have a catastrophic impact on the safety of your system. You can think of the last two fields as part of the encryption envelope we need to properly encrypt the data.

As the first field, we have the protocol version, which allows us to change the protocol over time. Note that this is the only choice that we have; there is no negotiation involved here at all. If we want to change the cryptographic details of the protocol, we'll need to create a new version for that. This is in contrast with how TLS works, where both clients and servers offer their supported options and have to pick which one to use. That ends up being complex, so it is simpler to tie it down. Wireguard works in a similar manner, for example.

You’ll notice that the client’s session public key is sent in the clear. That is fine, it is the public key, after all, and we ensure that each separate connection will generate a new key pair, there is nothing that can be gleaned from this data.

Now, let’s go back to the fields that are actually meaningful, the client’s session public key and the expected server public key. What is that about?

The client will first generate a key pair and send the public portion of that key pair to the server. Along with another key pair, we'll be able to establish communication. However, what other key pair? In order to trust the remote server, we need to know its public key in advance. The administrator will be able to tell us that, of course, but requiring this is a PITA. We may want to implement TUFU (Trust Upon First Use), like SSH does, or we may want to tie ourselves to a particular key. In any event, at the protocol level, we cannot require that the public key for the server be known before the first message, not if we want to support those options.

To solve this issue, we have to consider why we have this expected server public key in the message in the first place. It is there to give the middlebox a secure way to discover what server the client wants to connect to. How the client discovers the public key of the middlebox is intentionally left blank here. You can use the same manner as ESNI and grab the public key from a DNS entry, for example. Regardless, a key aspect of this is that the expected server public key is meant to be advisory only. If we are able to successfully decrypt it, then we know what server public key the client is looking for. We can look it up in some table and route the connection directly, without being able to figure out anything else about the contents of any future traffic.

If we cannot successfully decrypt this, we can just ignore this and assume that the client is expecting any key (at any rate, the client itself will do its own validation down the line). In many cases, by the way, I expect that the middlebox and the end server will be one and the same, this middlebox feature is meant for some advanced scenarios, likely never to be relevant here.

The server will reply to the hello message with a challenge. Here is what it looks like:

  • Server –> Client
  • Overall size: 168 bytes
  • Algorithm – crypto_box (sodium)

Field # | Size (bytes) | Content                       | Encrypted using
1       | 32           | Server's session public key   | Plain text
2       | 32           | Server's long term public key | Client's session public key + server's session secret key
3       | 16           | MAC for field 2               |
4       | 24           | Nonce for field 2             |
5       | 32           | Client's session public key   | Client's session public key + server's long term secret key
6       | 16           | MAC for field 5               |
7       | 24           | Nonce for field 5             |

Here we are starting to see some more interesting details. The server is sending its session public key, to complete the key exchange between the client and server. As before, this is a transient value, generated on a per connection basis, and has no relation to the actual long term key pair. There is nothing that you can figure out from the plain text public key, so we don't mind sending it.

We send the long term key on field 2, on the other hand, encrypted. Why are we encrypting this? To prevent an outside observer from figuring out what server we are using (if we are using a middlebox).

The idea is that once we exchange the public keys for the session key pairs for both sides, we’ll encrypt the long term public key using this and let the client know. We’ll also encrypt the client’s session’s public key. This time, however, we’ll encrypt using the server long term key as well as the client’s session public key. The idea is that the server is encrypting a value that the client chose (the client’s session public key, which is also transient) and encrypt that with Authenticated Encryption. If the client can successfully decrypt that, we know that the session’s public key was encrypted using the long term secret key. In this manner, we prove that we own the long term key pair.

The client, upon receiving this message, will do the following:

  • Decrypt field 2 – verifying their authenticity using the MAC in field 3.
  • Decrypt field 5 – using the public key we got from the server.

Assuming that those two decryption procedures were successful, we can compare the plain text value of field 5 with our own session public key. If they are the same, we know that the server has the long term key pair (both public and secret). If it didn't have the secret portion of the key, the server would be unable to encrypt the value in such a way that we'd be able to read it. The fact that it does this encryption with the client's session key (which differs on each call) means that you can't do replay / caching or any such tricks.

The last thing that the client needs to do now is figure out whether the long term public key it got from the server matches the public key that it expects. That can be part of a TUFU system, or we can reject the connection if the public key does not match.

  • Client –> Server
  • Overall size: 136 bytes
  • Algorithm – crypto_box (sodium)

Field # | Size (bytes) | Content                       | Encrypted using
1       | 32           | Client's long term public key | Server's session public key + client's session secret key
2       | 16           | MAC for field 1               |
3       | 24           | Nonce for field 1             |
4       | 24           | Server's session public key   | Server's session public key + client's long term secret key
5       | 16           | MAC for field 4               |
6       | 24           | Nonce for field 4             |

At this point, the same pattern applies. The server will decrypt the client's long term public key from field 1 using the session keys. It will then use its own secret session key in conjunction with the client's long term public key to decrypt the value in field 4. The act of successfully decrypting the value in field 4 serves as proof that the client indeed holds the secret key for the long term value. At the end of processing this message, the server knows who the client is and has verified that they possess the relevant key pair.

From there, we are left with the simple act of doing key exchange using the session keys. Now both client and server know who the other side is and have agreed on the cryptographic keys that they will use to communicate with one another.

I mentioned that I'm not an expert cryptographer, right? The design of this protocol isn't innovative in any way. It takes heavily from the design of TLS 1.3, the most successful cryptographic protocol on the planet, which was designed by people who actually know their craft. What I'm mostly doing here is making assumptions, because I can:

  • I don’t need PKI infrastructure, the communicating nodes all have a separate channel to establish trust by distributing the public keys.
  • There is no need for negotiation between the client & server, we fixed all the parameters at the protocol version.
  • The messages exchanged are all pretty small, that means that we can put them all on a single packet.

Most importantly of all, the entire system relies on local state; there is absolutely nothing here that relies on or uses any external party. That is kind of amazing, when you think about it, and obviously one of the major reasons why I'm doing this exercise.

The tables and descriptions above show exactly what is going on, even if they give all the details. I find that code samples often make more sense. Here is some sample code, showing how the server works:

The server will read the first message and then send a reply, the client will respond to the challenge, and the server will read the data and validate it. This is meant to be pseudo code, mind you, not real code. Just to get you to figure out how this interacts. Here is the client side of things:

I hope that the code samples make it clearer what is going on. I haven't mentioned the key generation for the follow up communication. All I talked about here is the ability to set up a key exchange after validating the keys from both sides. At the same time, the long term keys aren't used for anything except authentication, so we get perfect forward secrecy. The idea with the middlebox key also allows us to natively support more complex routing and topologies, which is nice (but also probably YAGNI for this exercise).

I would love to get your feedback and thoughts about this idea.
