Building a serverless secured dead drop

time to read 33 min | 6568 words

I ran into this fascinating article (I wrote another blog post discussing it) and that got me thinking. How would I approach building a dead-drop implementation? For that matter, what do we need from a dead-drop system?

I think that the following are reasonable (loosely based on what SecureDrop aims for):

  • Completely anonymous:
      • No accounts, no registrations.
      • No server-side state about users.
  • Prevent metadata tracking:
      • It’s not just message contents that are hidden.
      • Cannot tell if A talks to B or C.
      • Accessible via Tor to protect against traffic analysis.
      • Cannot tell who is talking to me at all.
  • Assume that the server may be compromised by a malicious entity:
      • No special trust should be granted to the server.

Reasoning

(You can skip this part to read the actual implementation below)

Let’s consider an actual usage scenario. We have Whistleblower Will (WW from now on) who wants to send sensitive information to Journalist Jane (JJ from now on). Let’s assume that the adversary in this case is a Big Bad Behemoth. I believe that the current villain du jour is Boeing, which is a huge company and has had a couple of strange incidents with whistleblowers recently.

If WW wants to send stuff to JJ, why can’t he just email jj@nice.journalist from his personal email ww1994@aol.com? Practically speaking, email today is sent over TLS anyway, so we can assume that no one can read the message in transit. Moreover, even pretty sophisticated traffic analysis would find it difficult to track the email, as WW is talking to AOL (or Gmail, or whoever their email provider is), and then AOL is communicating with the nice.journalist server (which is likely hosted on Exchange, Gmail, etc.). In other words, any such message would likely be lost in the noise of regular traffic.

However, while that is likely, it isn’t guaranteed. Given the risk to life & limb in our scenario, we would like to make sure of it. I’m writing this post because it is an interesting scenario, not because I actually have a use case. As usual in my encryption posts, these are merely my musings on the matter; don’t take me as an authority on the subject. That said, I’m actually quite interested in realistic threat models here. Please provide any feedback, I would love to know more about this.

One thing to pay attention to with regard to this scenario, however: if my threat model is a Bad Company, that is one thing, but what if my threat model includes a nation-state? I would point you to this wonderful article: This World of Ours by James Mickens, which manages to be both hilarious and informative. The level of capability you face when your opponent is a nation-state is very high.

When thinking about nation-states and whistleblowers… Snowden is the first name that comes to mind. In this case, the issue isn’t whether there is some plaintext being sent over the wire for everyone who cares to listen. Assume you are sending such an email from AOL to Gmail. Both companies will provide any data they have, including the full contents of any messages, if provided with an appropriate court order.

Moving from legal (but dubious) actions to the other side, I’m fairly certain that it would take very little investigative work to find an appropriate person who:

  • Works at an email provider.
  • Has access to the email contents.
  • Is able to make use of an appropriate cash infusion in their life.

I would also further assume that the actual amount required is surprisingly low.

Another thing to consider is that our whistleblower may want to provide the information to the journalist, but may absolutely not want to be identified. That includes being identified by the journalist.

In short, the whole mindset when building something like a dead drop is extremely paranoid. With that in mind, let’s see how we can build such a system.

Implementation

The concept behind this system is that a journalist will publish their public key in some manner, probably in their newspaper. That is a fairly simple and obvious step, of course. The issue with needing a dead drop isn’t about being able to hide the contents of the messages. If that were the case, a PGP encrypted message would suffice. The real issue is that we want to hide the fact that we are even communicating at all.
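
For illustration, here is how a journalist might generate such a key pair with PyNaCl (the libsodium binding used throughout this post). This is a minimal sketch; the variable names are mine:

import base64
from nacl.public import PrivateKey

# done once, ideally on an offline machine
journalist_key = PrivateKey.generate()

# the public part is what gets published (newspaper, website, etc.)
print(base64.b64encode(bytes(journalist_key.public_key)).decode('ascii'))

# the private part never leaves the journalist's machine
print(base64.b64encode(bytes(journalist_key)).decode('ascii'))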

Note that this isn’t about any form of instant messaging. This is about dropping a message (text, files, etc.) and having it sent securely to the other side. The key aspect is that not only are the contents hidden, but also the fact that we even sent it in the first place. And even if you are watching the source or the destination, you can’t tell who the other side is. For reference, think of spycraft techniques in the Cold War era.

Given a journalist’s public key, the whistleblower will package all the data they want to send in a zip file and encrypt it using the public key. That is easy enough; the problem now is that we need to push that file somewhere and then have the journalist get it. That is where our system comes in.

In the design of the system, I tried to intentionally reduce the amount of information that we provide to the server. That way, even if the server is malicious, there isn’t much that they can do with what they have.

I decided to use a serverless architecture here, for two primary reasons. The first reason is that I’m currently teaching a Cloud Development course and that was a nice project to play with. The second reason is that by running everything as a serverless function, I reduced the amount of data that I can easily aggregate since invocations are independent of one another.

From a client perspective, here is how I send my sensitive information to a journalist. We need two pieces of information: the address of the dead drop (in this case I’m using a dummy .onion address) and the journalist’s public key. The public key is used to encrypt the information so only the journalist can see it.

Let’s look at the code first, and then discuss what is going on there:


import base64
from nacl.public import PublicKey, SealedBox
from torpy.http.requests import tor_requests_session

BASE_URL = "https://deaddrop0j22dp4vl2id.onion" # example only
JOURNALIST_PUB_KEY = base64.b64decode('GVT0GzjFRvMxcDh9c6jpmXkHoGB5KoIp9vyU3RozT2A=')


data = open('secrets.zip', 'rb').read() # file to send
sealer = SealedBox(PublicKey(JOURNALIST_PUB_KEY))
enc_file = sealer.encrypt(data)


with tor_requests_session() as s:
    res = s.get(BASE_URL + "/upload-url").json()
    # raw body PUT, the pre-signed URL expects the object contents as-is
    s.put(res.get('url'), data=enc_file).raise_for_status()
    file_id = res.get('id') + '=' # restore the stripped base64 padding
    enc_id = sealer.encrypt(base64.urlsafe_b64decode(file_id))
    s.put(BASE_URL + "/register-id", data=enc_id).raise_for_status()

The first step is to encrypt the data to send (in this case, the file secrets.zip) using SealedBox. That is the act of encrypting the data using a public key so only the corresponding private key can open it.

The next step is to use Tor to call GET /upload-url to get a JSON object back, with url and id properties. The output of this request looks something like this:


{
  "id": "ArMvWgDBEXyVap-O7VbD-ELzDJ0ZB_2ir9E51RVv9-4",
  "url": "https://cloud-dead-drop.s3.amazonaws.com/uploads/ArMvWgDBEXyVap-O7VbD-ELzDJ0ZB_2ir9E51RVv9-4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIARAM4NNC5DUOXEDI7%2F20240522%2Fil-central-1%2Fs3%2Faws4_request&X-Amz-Date=20240522T184127Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEE-redacted-Z1BIyzgmj%2F9NhhNqdIPwnSV%2F6nvRhWrthEz0H8jRNU6U%2BoPh7zZTtQIrU5ahNmpWjLNGUnqNMYfNCNU%2FRX%2BUyERFwlMT7yrYIbxUyWUDwde1IXOHjkTns07kXmBlLG1uBvt6RDrE0xjFs%3D&X-Amz-Signature=c9498fb-redacted-9150ce4cb85"
}

The output is basically a random file name and an S3 pre-signed URL. Note that the file id is a base64 value with the padding removed, which is why I have to add back the = character when decoding the value.
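
A quick sanity check on the padding math: token_urlsafe(32) produces 43 characters, and base64 decoding wants a multiple of 4:

import base64, secrets

file_id = secrets.token_urlsafe(32) # 32 random bytes -> 43 chars, padding stripped
assert len(file_id) == 43
raw = base64.urlsafe_b64decode(file_id + '=') # 44 chars, a multiple of 4
assert len(raw) == 32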

We can use that URL to upload the file to S3 (or a compatible service), and then we PUT /register-id with the encrypted file id, again using the journalist public key.

In other words, we have broken the upload process into four separate stages:

  • Encrypt the file
  • Request an upload url and get a random file id
  • Upload the encrypted file
  • Encrypt the file id and register it

The entire process is done over Tor, ensuring that our physical location is not leaked.

It’s interesting to note that just using Tor isn’t enough. A Harvard student making bomb threats was arrested because he used Tor via the campus WiFi. It was easy enough to narrow down “all Tor users in a particular time frame from location” and go through them one at a time.

SecureDrop has a whole list of steps to take in order to increase your safety at this stage. Tor is basically just the start here, and physical security is of paramount importance.

This looks like a really convoluted way to do things, I know, but there is some logic behind everything. Let’s look at the actual implementation and see how all those pieces look from the other side. In terms of the architectural diagram, here is what we use to generate the upload URL:

The following code implements the logic for the upload-url endpoint. As you can see, there is really nothing here. We generate a random 32-byte token, which will serve as the file id, generate the pre-signed URL, and hand it over to the caller.


import os
import json
import secrets
import boto3

upload_bucket = os.environ.get('UPLOAD_BUCKET')
s3 = boto3.client('s3')

def generate_upload_url(event, context):
    # 32 random bytes, url-safe base64 encoded (43 chars, no padding)
    id = secrets.token_urlsafe(32)
    resp = s3.generate_presigned_url('put_object',
      Params={'Bucket': upload_bucket, 'Key': 'uploads/' + id },
      ExpiresIn=3600)
    body = json.dumps({'id': id, 'url': resp})
    return {'statusCode': 200, 'body': body}

The caller is free to make one or more such calls and use (or not) the pre-signed URLs that it got. The backend doesn’t have any say in this, nor any way to influence the caller.

The intent here is that we split the responsibilities, instead of having an upload that the server can gather information about. If we assume a malicious server, then requesting an upload URL doesn’t provide much information, just the Tor exit node IP that was used.

Proper setup would mean that the S3 bucket we use is not logging anything. Even if we assume that it is doing so, the only data in the log would be the Tor exit node IP. A malicious server would also have access to the actual uploaded file, but that is not usable without the private key of the journalist.

At this point we have an uploaded file, but how do we actually get the journalist to know about it? This is the part where the real fun happens. First, let’s look at the architectural diagram, then we’ll discuss how it works in detail.

There are a lot of moving pieces here. The design is intentionally meant to be hard to pierce since we are trying to ensure that even if the system is operated by a malicious entity, it will still retain much of its secured capabilities. (Again, this is a mental exercise for me, something to do for fun. If your life/liberty is at stake here, you probably want to get a second opinion on this design).

How do we let the journalist know that we have a new file for them (and along the way, not let anyone else know about it)? The whistleblower has the file id from the server, the one used as the file name for the upload.

I’m using Libsodium SealedBox. SealedBox is a way to encrypt a value given a public key in such a way that only the owner of the associated private key can access it.

SealedBox adds an envelope of 48 bytes (an ephemeral public key plus an authentication tag). In other words, encrypting a 32-byte id plus the 48-byte envelope gives us exactly 80 bytes.
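
You can verify that directly:

import secrets
from nacl.public import PrivateKey, SealedBox

key = PrivateKey.generate()
box = SealedBox(key.public_key)
ciphertext = box.encrypt(secrets.token_bytes(32))
assert len(ciphertext) == 80 # 32-byte id + 48-byte envelope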

The whistleblower will use SealedBox to encrypt the file name, and then call the register id endpoint. Here is the associated lambda backend for the register-id endpoint:


import os
import boto3

queue_url = os.environ.get('NOTIFICATIONS_QUEUE')
sqs = boto3.client('sqs')


def register_id(event, context):
    return register_id_internal(event['body'])

def register_id_internal(msg):
    # 32 bytes payload + SealedBox = 80 bytes -> base 64 == 108 bytes
    if len(msg) != 108:
        return {'statusCode': 400, 'body': 'Invalid ID'}
    sqs.send_message(QueueUrl=queue_url, MessageBody=msg)
    return {'statusCode': 204}

It… doesn’t do much (you may have noticed a theme here): we simply verify that the size of the value matches, and then we send it to an SQS queue.

By the way, if we are sending 80 bytes, why are we receiving 108 bytes? This is because we are using Lambda, and the binary data is base64’ed by the Lambda infrastructure.
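
The arithmetic, as a quick check (base64 encodes every 3 bytes as 4 characters, padding the final group):

import base64, secrets

payload = secrets.token_bytes(80) # what actually goes over the wire
wire = base64.b64encode(payload)  # what the lambda handler sees
assert len(wire) == 108           # 80 bytes -> 27 groups of 4 chars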

Just registering the (encrypted) file id in the queue isn’t really helpful. It is just… sitting there; who is going to read or operate on it? That is where the two timers come into play. Note that the architecture diagram has events that happen every minute and every 5 minutes. Every minute, we have a lambda running to maybe publish a decoy value. Its code looks like this:


import os
import base64
import secrets
import boto3

queue_url = os.environ.get('NOTIFICATIONS_QUEUE')
sqs = boto3.client('sqs')


def maybe_publish_decoy(event, context):
    if secrets.randbelow(4) != 0:
        return # 75% of the time, do nothing
    # 25% of the time, generate a decoy message
    return register_id_internal(
        base64.urlsafe_b64encode(secrets.token_bytes(80)).decode('ascii')
    )

The maybe_publish_decoy() lambda will usually do nothing, but 25% of the time it will register a decoy value in an SQS queue. Note that the actual message we generate is just random bytes, nothing meaningful.

Remember that when a user registers a file id, that id also ends up in the queue. Because both user ids and decoy ids land in the same queue, and both look like random bits, there is no way for an external observer to tell which is which.

For that matter, there is no way to tell for the system itself. Once a decoy is posted to the queue, there is no way to know whether it is a decoy value or a real one.
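
For completeness, the two timers can be wired up as EventBridge schedule rules. A minimal sketch with boto3 (the rule names are mine, and attaching the lambdas via put_targets and the matching permissions is omitted):

import boto3

events = boto3.client('events')

# fire the decoy lambda every minute
events.put_rule(Name='maybe-publish-decoy', ScheduleExpression='rate(1 minute)')

# fire the publishing lambda every 5 minutes
events.put_rule(Name='publish-ids', ScheduleExpression='rate(5 minutes)')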

The last component in our system is actually working with the messages in the queue. Those are handled by the publish_ids() lambda, which is invoked every 5 minutes. Let’s look at the code:


import os
import base64
import secrets
import datetime
import boto3

MAX_MSGS = 8
queue_url = os.environ.get('NOTIFICATIONS_QUEUE')
upload_bucket = os.environ.get('UPLOAD_BUCKET')
sqs = boto3.client('sqs')
s3 = boto3.client('s3')

def publish_ids(event, context):
    while True:
        result = sqs.receive_message(QueueUrl=queue_url,
                                     MaxNumberOfMessages=MAX_MSGS)
        msgs = result.get('Messages', [])
        ids = [msg['Body'] for msg in msgs]
        # pad with decoys so every file holds exactly MAX_MSGS ids
        while len(ids) < MAX_MSGS:
            rnd = secrets.token_bytes(80)
            fake_id = base64.urlsafe_b64encode(rnd).decode('ascii')
            ids.append(fake_id)

        # shuffle so real ids and padding are indistinguishable
        ids = sorted(ids, key=lambda _: secrets.randbelow(1024))
        output = bytes('\n'.join(ids), 'ascii')
        now = datetime.datetime.now(datetime.timezone.utc).isoformat()
        s3.put_object(Bucket=upload_bucket, Key='ids/' + now, Body=output)

        if len(msgs) == 0:
            break
        sqs.delete_message_batch(QueueUrl=queue_url, Entries=[
           {'Id': msg['MessageId'], 'ReceiptHandle': msg['ReceiptHandle']}
           for msg in msgs
        ])

Every 5 minutes, the publish_ids() lambda runs. It starts by reading messages from the queue, in batches of up to 8 messages at a time, and pads with made-up ids if there are not enough real messages to fill the file. Here is what such a file looks like:


K7VruHnGhlpzWssB92OpMUjAw0-FoDyED_6p4w2LMgcV7JrsVB4SQdH7VNQzAT-jywYZsVhHM8lNF-JiWWUgXONK_Qb2DJw29aLVqw9rvIs=
HG3vHyYVCC42gzeZqgugwIciPqzeQEdNcQrFdqcpUcY5dMRInZKA_ZSFBuyvPdAfJnZm8wkS-jE0cdXZUZmp1wx2CZYWcPGu1uXocdWn2D4=
OpJWZRfQkMGuXRci8x8YrHx0REE4PdBZctj27gXjH0JvRtaSFMweL47q9nB9r6XomGnOfu5632JbEuMKPOEkkdYiVvst-1Qpw1TNzTPcQmY=
drjhGt6d-aV3h8_BjC81cE5kayXiWikgD8qxWEPYL0T4l8BrW-MadhanXcr465vIs7eBzK-DdwrmtqO8rQsrHN60-f2KirpN-qHpdlpxSbk=
rDKdp0CHSm4-Dvf8BOToLQSv79GpfqLnV3fXLECwUK9HdVEDeRK-T3SycyDmwvjUgjkH0vNMB9Yx_AeaHIS87hD2mCpyEGYKNpGMsnWZlHg=
CQ-5sgobc29-1x6adr09tOgk2yb4WNirzZ2dflQOHkXKDY0uk5B9pq_KKDjNoyWZVsRazgvqRPz3mqan2yKb3P0xAQDmF2CjyN6hMR3bjsQ=
44DYBoGFiPeN8dP7FGn579W7vFgUp8-lblI7nfFP3a0TUqo5sjCnV_Ozr4aPXbdVam6kpyhkpqkQeSeroQNP_x7iq2dpNskjx2x4WO8ezJ0=
TMC5ralR9BjHwTf0xk36kuUcbseD6HVkZgK3e1bpckyk62O_trNINa7FMNLLEwUZeQRvUBuj1CRhNWiz0wRjBvv_hxpbi9ToFymJXkz1ocA=

We then write the file to another S3 folder. Note that we use the ISO format for the file name, which gives us lexical sorting of the values by time. Here is what this looks like on the bucket itself:

As you can see, roughly every 5 minutes, we have some values being written out. The file size is always the same, and we can’t tell based on the presence of a file if there were messages posted at a given time.

In the time frame shown here, I didn’t post any messages, but you can see that we still have two files written at 04:15, likely because there were enough messages in the queue to force us to write a second file (basically, a race condition between the decoy lambda and the publish lambda). That is a desirable outcome, by the way.

This is… pretty much it, I have to say.  There aren’t any additional behaviors or things to explore. This Rube Goldberg machine is meant to create a system that breaks apart the different sections and loses information as we move forward.

I’ll cover the impact of this design in the case of a malicious server later on in the post. For now, I want to cover how the journalist can read the data. The server currently holds two folders:

  • uploads/ - allows anonymous GET for files, auto-expires in 14 days, uploads require a pre-signed URL
  • ids/ - allows anonymous GET and LIST, auto-expires in 14 days, only written to by the backend (from the queue)
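
The 14-day expiry can be expressed as S3 lifecycle rules. A minimal sketch with boto3 (the bucket name is an example):

import boto3

s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket='cloud-dead-drop',
    LifecycleConfiguration={'Rules': [
        {'ID': 'expire-uploads', 'Status': 'Enabled',
         'Filter': {'Prefix': 'uploads/'},
         'Expiration': {'Days': 14}},
        {'ID': 'expire-ids', 'Status': 'Enabled',
         'Filter': {'Prefix': 'ids/'},
         'Expiration': {'Days': 14}},
    ]})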

A journalist is going to be running a check every 5 - 10 minutes, using Tor, on the contents of the ids/ bucket, like so:


import io
import time
import base64
import xmltodict

def read_messages(session, base_url, reveal, last_file = ""):
    while True:
        res = session.get(base_url, params={
            'prefix': 'ids/',
            'start-after': last_file,
            'list-type': 2})
        res.raise_for_status()
        dict_data = xmltodict.parse(res.content)
        contents = dict_data.get('ListBucketResult').get('Contents') or []
        if isinstance(contents, dict): # xmltodict unwraps single items
            contents = [contents]
        if len(contents) == 0:
            time.sleep(5 * 60)
            continue
        for file in contents:
            last_file = file.get('Key')
            data = session.get(base_url + '/' + last_file)
            for line in io.BytesIO(data.content).readlines():
                id = base64.urlsafe_b64decode(line)
                try:
                    # decrypt yields the raw 32-byte id; re-encode it to
                    # get the file name used in the uploads/ folder
                    raw = reveal.decrypt(id)
                    yield base64.urlsafe_b64encode(raw).rstrip(b'=').decode('ascii')
                except Exception:
                    pass # not for us (or a decoy), skip it

What we are doing here is scanning the ids/ folder, getting all the id files that we haven’t seen yet. For each of those files, we fetch it and try to decrypt each of the lines in it. When done processing all the files in the folder, we’ll wait 5 minutes and then scan again.

This relies on the fact that S3 buckets return items in lexical sort order, and our publish_ids() lambda generates the file names in the ids/ folder using lexically sorted timestamps.
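
A quick illustration of why ISO-8601 timestamps work as keys here:

# ISO-8601 timestamps sort lexically in chronological order, and S3's
# ListObjectsV2 returns keys in lexical order, so 'start-after' with the
# last key we processed yields exactly the files written since then
a = '2024-05-22T04:15:07.123456+00:00'
b = '2024-05-22T04:20:11.654321+00:00'
assert a < b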

If there are many journalists listening on the system, each one of them will be making 1 - 2 remote calls every 5 minutes, seeing if there are any ids in there that they can decrypt. Note that this gives the server absolutely no information about what data each journalist is able to access. It also drastically reduces the amount of information that you need to deal with and distribute.

Each journalist will need to go through roughly 250 KB per day of ids to scan for messages aimed at them. That assumes a low-load system, with < 8 messages every 5 minutes. Note that this is still over 2,300 messages/day.

Assuming that we have a very high load and have to push 100,000 messages a day, the amount of data that each journalist will have to scan is on the order of 10 MB. Those are good numbers, especially since we don’t intend this to be a high-traffic system.
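
The numbers work out like this (each published line is 108 base64 characters plus a newline):

line = 108 + 1                     # one id per line, plus the newline
runs_per_day = 24 * 60 // 5        # publish_ids runs every 5 minutes -> 288
low_load = line * 8 * runs_per_day # ~251 KB/day, 2,304 ids
high_load = line * 100_000         # ~10.9 MB for 100,000 messages/day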

With the file id in hand, the journalist can now download and decrypt the data, like so:


import base64
from nacl.public import PrivateKey, SealedBox
from torpy.http.requests import tor_requests_session

BASE_URL = "https://deaddrop0j22dp4vl2id.onion" # example only
PRIVATE_KEY = 'Iei28jYsIl5E/Kks9BzGeg/36CKsrojEh65IUE2eNvA='
key = PrivateKey(base64.b64decode(PRIVATE_KEY))
reveal = SealedBox(key)
with tor_requests_session() as s:
    for msg in read_messages(s, BASE_URL, reveal):
        res = s.get(BASE_URL + "/uploads/" + msg)
        try:
            print(reveal.decrypt(res.content))
        except Exception:
            pass # damaged, or not actually ours

We get the file ids, download them from the S3 bucket, and decrypt them. And now the journalist can start actually looking at the data. To reply, the whistleblower would need to send their own public key to the journalist, and subscribe in the same manner to the updates.

Note that this is not meant to be an online protocol, and you can scan the data once a day or once a week, without any real problems. That makes this system attractive since you can schedule a weekly trip to a remote location where you can anonymously check if you have anything new in your “mailbox”, without anyone being able to tell.

With the raw technical details out of the way, let’s consider some of the implications of this sort of system design.

The cautious playbook

A journalist would publish their public key, likely in their paper or website. They are interested in getting such information from anonymous sources. At the same time, they need to ensure that no one can tell if someone sent them information. A whistleblower wants to send information but be protected from anyone knowing that they sent it. Ideally, we should even limit the knowledge that any information was sent at all.

SecureDrop recommends that access to the system be done only via Tor, and usually from a location that isn’t near your usual haunts, one you traveled to without a phone, paying in cash, using a live-CD Tails instance.

The idea of adding additional layers beyond the system encryption is that even with a powerful adversary, the number of hurdles they have to go through is very high. You have to break the system encryption, and then Tor, and then you reach some coffee shop in the middle of nowhere and need to go over footage to try to identify someone.

The crazy thing is that this is actually viable. It isn’t science fiction today to obtain the security footage (which we can safely assume exists), run it through a facial recognition system and get a list of people to check. So we need to have multiple layers of defense in place.

From the point of view of this dead drop system, all the data is encrypted; the only information available is traffic patterns. A good way to alleviate even that is to not run everything at once.

The whole point of breaking it into discrete steps is that you can execute it in isolation. You can get the upload url, upload the file, and then actually register the id a day later, for example. That makes timing analysis a lot harder.

Or consider a few bots that would (via different Tor exit nodes for each operation):

  • Request upload urls on an ongoing basis
  • Occasionally upload random files to those urls
  • Read the ids/ folder files
  • Download some of the uploaded files

If we have a few of those bots, and they send each other “messages” that generate decoy traffic, it is going to be much harder to track who and what is happening, since you’ll have activity (anonymized via Tor) that hides the real actions.

I think that those bots are likely to be another important layer of security, defeating traffic analysis and masking actual usage by people.
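
A minimal sketch of such a bot (the pacing and payload size are assumptions; a real one would randomize its behavior much more aggressively, and also poll the ids/ folder and fetch uploads):

import time
import secrets
from torpy.http.requests import tor_requests_session

BASE_URL = "https://deaddrop0j22dp4vl2id.onion" # example only

def decoy_bot():
    while True:
        # a fresh session per operation, so operations don't share a circuit
        with tor_requests_session() as s:
            res = s.get(BASE_URL + "/upload-url").json()
            if secrets.randbelow(2) == 0:
                # sometimes upload a random blob of a plausible size
                s.put(res['url'], data=secrets.token_bytes(64 * 1024))
        time.sleep(secrets.randbelow(3600)) # jitter between operations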

Consequences of a malicious takeover

Let’s consider for a moment what will happen if the system is taken over by a malicious party. In this case, we assume that there is a valid system that was taken over. Therefore, we need to figure out the impact of old messages that were sent and new messages that will be sent from now on.

The design of the system calls for the following important configurations:

  • Disabling logging on access (S3, endpoints, etc).
  • All files (ids, uploads, etc) are deleted within 14 days.
  • We are routing data in such a way that information is lost (registered ids are merged with decoys, etc.)

Assuming that a proper system was taken over by a malicious party, the only additional information that they now have is all the contents of the uploads/ folder, which aren’t visible to other parties (you have to know the file name to download).

Given that the files are encrypted, there isn’t much that is leaked. And the bots I talked about will mask the real traffic with dummy files.

Once the system has been taken over, we can assume the following: there is now correlation between calls to upload-url and the actual uploaded files. You can also correlate a registered id with an upload, assuming sufficiently low traffic (which is likely).

Those are the only additional bits of information that you gain from having access to the system. When we register an id to be published, the whistleblower sends the encrypted value, which the server has no way of correlating to the recipient.

The server can now do traffic analysis, for example, noticing that someone is reading the ids/ folder and then downloading a file from the uploads/ folder. But that is of limited utility, since such analysis would be:

  • Masked by the bots traffic.
  • Only give you the Tor exit node.

A malicious party could also disable expiration and retain long-term all the uploaded files, in case they get the keys at a later time, but beyond that, I can’t think of anything else that they will get from actually having control over the system.

In particular, for the whistleblower, there is no data leakage about who they are or who they are talking to. Even with the collaboration of the journalist, if the whistleblower didn’t provide that information, they remain anonymous.

Side channels

Another aspect of this system is that we don’t have to go through the id registration and publication to journalists at all. The fact that the file is stored (and encrypted) means that you don’t need to pass potentially a lot of data to a journalist; you can just give them the id itself. That is 108 bytes in base64 format, and it doesn’t convey any additional information beyond that.

The question is, of course: if you can pass the id, why not just pass the encrypted file directly?

Attacks and mitigations

The design of the system makes certain attacks impossible to execute or impossible to hide. For example, you can perform denial of service attacks on a journalist by sending many messages that they would have to go through (you have the public key, after all). But that is obviously detectable.

In general, denial of service attacks on the system can be mitigated by requiring proof of work to submit the files.
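
The post doesn’t fix a particular scheme, but a hashcash-style puzzle is the obvious candidate; a sketch, with the difficulty value being an assumption:

import hashlib
import secrets

DIFFICULTY = 20 # leading zero bits required; tune to taste

def check_pow(payload: bytes, nonce: bytes) -> bool:
    # the server spends a single hash to verify
    digest = hashlib.sha256(payload + nonce).digest()
    return int.from_bytes(digest, 'big') >> (256 - DIFFICULTY) == 0

def solve_pow(payload: bytes) -> bytes:
    # the sender burns CPU searching for a nonce (~2^DIFFICULTY hashes)
    while True:
        nonce = secrets.token_bytes(16)
        if check_pow(payload, nonce):
            return nonce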

What about blocking access from a source or to a particular journalist? There is no identifying the source, so you cannot block based on that. Nor can you block a particular journalist, since the server has no idea who the destination of any message is.

You can block access to the system entirely, simply by ignoring id registrations. From the outside world, it will look like no one sent us any messages. However, that is easily detectable: a journalist can send a message to herself and, upon not receiving it, detect that the system is not operational.

Key leakage

What happens if the key pair of a journalist leaks? That would be catastrophic for the journalist and the sources because it would enable decryption of any messages that were sent. This is mitigated by only keeping messages for a maximum of 14 days, but we must assume that an adversary has copies of all the messages that were ever sent.

In this scenario, key leakage would compromise all communications intended for the journalist. Technically speaking, we can try to do a key exchange using this system, and have a temporary key assigned for a particular conversation. The problem is that this sort of system is mostly offline, with days or weeks between interactions.

That means that we need to persist the keys (and can thus assume that a key pair leak will also leak any “temporary” keys). Something that you can do is publish not just one public key but several over time, with built-in expiry. Let’s say that you publish your key in January and replace it with a new key in February. In March, you destroy the January key, so it can no longer leak.

Having rotating keys is a cute idea, but I think in practice this is too complex. People have a hard enough time remembering things like their passwords; requiring them to remember (and change) multiple passphrases is too much. On a yearly basis, however, that makes a lot more sense. But then again, what does “destroy the old key” mean, exactly?

Summary

Well, this post has gone on entirely too long. I actually started writing it to play around with serverless system architecture, but I got sidetracked into everything else. It is long enough that I won’t try to dive into the serverless aspect in this post. Maybe in a future one.

As a reminder, this is a nice design, and the blog post and research consumed quite a few very enjoyable evenings, but I’m not a security or cryptography expert. If you require a system like this, I would recommend consulting an actual professional.