Ayende @ Rahien

Oren Eini, aka Ayende Rahien, CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.

You can reach me by:

oren@ravendb.net

+972 52-548-6969


Posts: 6,840 | Comments: 49,138

time to read 1 min | 81 words

Yesterday I talked about the design of the security system of RavenDB. Today I re-read one of my favorite papers ever about the topic.

This World of Ours by James Mickens

This is both one of the most hilarious papers I have ever read (I had someone check up on me while I was reading it, because of suspicious noises coming from my office) and a great insight into threat modeling and the kind of operating environment that your system will run in.

time to read 4 min | 671 words

A decade(!) ago I wrote that you should avoid soft deletes. Today I ran into a question on the mailing list and remembered writing about this; it turns out that there was quite a discussion on it at the time.

The context of the discussion at the time was deleting data from relational systems, but the same principles apply. The question I just fielded asked how you can translate a Delete() operation inside the RavenDB client to a soft delete (IsDeleted = true) operation. The RavenDB client API supports a few ways to customize how we talk to the underlying database, including some pretty interesting hooks deep in the pipeline.

What it doesn’t offer, though, is a way to turn a Delete() operation into an update (or an update into a delete). We do have facilities in place that allow you to detect (and abort) invalid operations. For example, invoices should never be deleted. You can tell the RavenDB client API that it should throw whenever an invoice is about to be deleted, but you have no way of saying that we should take the Delete(invoice) call and turn it into a soft delete operation.

This is quite intentional, by design.

Having a way to transform basic operations (like delete –> update) is a good way to end up pretty confused about what is actually going on in the system. It is better to allow the user to enforce the required behavior (invoices cannot be deleted) and let the calling code handle this differently.

The natural response here, of course, is that this places a burden on the calling code. Surely we want to follow DRY and not write conditionals when the user clicks the delete button. But this isn’t a case of extra duplicated code.

  • An invoice is never deleted, it is cancelled. There are tax implications here; you need to get this right.
  • A payment is never removed, it is refunded.

You absolutely want to block deletions of these types of documents, and you need to treat them (very) differently in code.
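For illustration, here is a minimal sketch (in Python, with invented names; the post's real context is the RavenDB C# client) of what handling this differently in the calling code looks like: explicit Cancel and Refund operations instead of a delete that gets silently rewritten.

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    id: str
    total: float
    status: str = "Open"

    def cancel(self, reason: str) -> None:
        # An invoice is never deleted; cancelling keeps the record
        # (and its tax trail) while marking it void.
        if self.status == "Cancelled":
            raise ValueError("invoice already cancelled")
        self.status = "Cancelled"
        self.cancellation_reason = reason


@dataclass
class Payment:
    id: str
    amount: float
    status: str = "Completed"

    def refund(self) -> "Payment":
        # A payment is never removed; a refund is a new, offsetting record.
        self.status = "Refunded"
        return Payment(id=self.id + "/refund", amount=-self.amount)
```

The point is that the business operation (cancel, refund) is spelled out at the call site, instead of hiding behind a generic Delete() that means different things for different documents.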

In the ensuing decade since the blog posts linked at the top of this post were written, there have been a number of changes. Some of them are architecturally minor, such as the database technology of choice or the guiding principles for maintainable software development. Some of them are pretty significant.

One such change is the GDPR.

“Huh?!” I can imagine you thinking. How does the GDPR apply to an architectural discussion of soft deletes vs. business operations? It turns out that it is very relevant. One of the things that the GDPR mandates (and there are similar laws elsewhere, such as the CCPA) is the right to be forgotten. So if you are using soft deletes, you might actually run into real problems down the line. “I asked to be deleted, they told me they did, but they secretly kept my data!” The one thing that I keep hearing about the GDPR is that no one ever finds it humorous. Not with the kind of penalties that are attached to it.

So when thinking about deletes in your system, you need to consider quite a few factors:

  • Does it make sense, from a business perspective, to actually lose that data? Deleting a note from a customer’s record is probably just fine. Removing the customer’s record entirely? Probably not.
  • Do I need to keep this data? Invoices are one thing that pops to mind.
  • Do I need to forget this data? That is the other direction, and what you can forget, and how, can be really complex.

At any rate, for all but the simplest scenarios, just marking IsDeleted = true is likely not going to be sufficient. And all the other arguments that were raised (which I’m not going to repeat; read the posts, they are good ones) are still in effect.

time to read 2 min | 394 words

RavenDB allows you to tune, per transaction, what level of safety you want your changes to have. At the most basic level, you can select between a single node transaction and a cluster wide transaction.

We get questions from customers about the usage scenario for each mode. It seems obvious that we really want to always have the highest level of safety for our data, so why not make sure that all the data is using cluster wide transactions at all times?

I like to answer this question with the lottery example. In the lottery example, there are two very distinct phases for the system. First, you record lottery tickets as they are purchased. Second, you run the actual lottery and select the winning numbers. (Yes, I know that this isn’t exactly how it works, bear with me for the sake of a clear example).

While I’m recording purchased lottery tickets, I always want to succeed in recording my writes. Even if there is a network failure of some kind, I never want to lose a write. It is fine if only one node accepts this write, since it will propagate the data to the rest of the cluster once communication is restored. In this case, you can use the single node transaction mode and rely on RavenDB replication to distribute the data to the rest of the cluster. This is also the most scalable approach, since we can operate on each node separately.

However, for selecting the winning numbers (and tickets), you never want any doubt or any possibility of concurrency issues. In this case, I want to ensure that only one lottery winner selection transaction actually commits, and for that purpose, I’m going to use the cluster transaction mode. In this way, we ensure that a quorum of the cluster has to confirm the transaction for it to go through. This is the right thing to do for high value, low frequency operations.
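As a sketch of the decision, not the actual RavenDB client API (the enum and function names here are illustrative), the rule of thumb from the lottery example looks like this:

```python
from enum import Enum


class TransactionMode(Enum):
    SINGLE_NODE = "single-node"    # fast; relies on replication to spread the write
    CLUSTER_WIDE = "cluster-wide"  # waits for a cluster quorum to confirm


def mode_for(operation: str) -> TransactionMode:
    # High-frequency writes that must never be lost: accept on any
    # single node and let replication propagate them later.
    if operation == "record-ticket":
        return TransactionMode.SINGLE_NODE
    # Low-frequency, high-value decisions that must be unique: require
    # a quorum so only one such transaction can ever commit.
    if operation == "select-winner":
        return TransactionMode.CLUSTER_WIDE
    return TransactionMode.SINGLE_NODE
```

In the real client you would set the equivalent option per session when opening it, so the safety level is a per-transaction decision, not a global one.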

We also have additional settings available, beyond the single node / full cluster quorum choice: write to a single node and wait for the transaction to propagate to some additional nodes. I don’t really have a good analogy for this use case using the lottery example, though. Can you think of one?

time to read 4 min | 735 words

About a month ago I wrote about a particular issue that we wanted to resolve. RavenDB is using X509 certificates for authentication. These are highly secure and are a good answer for our clients who need to host sensitive information or are working in highly regulated environments. However, certificates have a problem: they expire. In particular, if you are following common industry best practices, you’ll replace your certificates every 2 – 3 months. In fact, the common setup of using RavenDB with Let’s Encrypt will do just that. Certificates will be replaced on the fly by RavenDB without the need for administrator involvement.

If you are running inside a single cluster, that isn’t something you need to worry about. RavenDB will coordinate the certificate update between the nodes in such a way that it won’t cause any disruption in service. However, it is pretty common in RavenDB to have multi cluster topologies, either because you are deployed in a geo-distributed manner or because you are running complex topologies (edge processing, multiple cooperating clusters, etc). That means that when cluster A replaces its certificate, we need to have a good story for cluster B still allowing it access, even though the certificate has changed.

I outlined our thinking in the previous post, and I got a really good hint: 13xforever suggested that we look at HPKP (HTTP Public Key Pinning) as another way to handle this. HPKP is a security technology that was widely used, ran into issues and was replaced (mostly by certificate transparency). With this hint, I started to investigate further. Here is what I learned:

  • A certificate is composed of some metadata, the public key and the signature of the issuer (skipping a lot of stuff here, obviously).
  • Keys for certificates can be either RSA or ECDSA. In both cases, there is a 1:1 relationship between the public and private keys (in other words, each public key has exactly one private key).

Given these facts, we can rely on that to avoid the issues with certificate expiration, distributing new certificates, etc.

Whenever a cluster needs a new certificate, it will use the same private/public key pair to generate the new certificate. Because the public key is the same (and we verify that the client holds the private key during the handshake), even if the certificate itself has changed, we can verify that the other side knows the actual secret, the private key.

In other words, we slightly changed the trust model in RavenDB: instead of trusting a particular certificate, we trust that certificate’s key pair. That is what grants access to RavenDB. In this way, when you update the certificate, as long as you keep the same key pair, we can still authenticate you.
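A sketch of the check, assuming the DER-encoded public key (SubjectPublicKeyInfo) has already been extracted from each certificate; a real implementation would pull it out with an X509 library rather than receive raw bytes:

```python
import hashlib


def same_key_pair(old_cert_pubkey_der: bytes, new_cert_pubkey_der: bytes) -> bool:
    # Two certificates are interchangeable for authentication purposes
    # if they wrap the same public key, regardless of metadata or expiry.
    # Since each public key corresponds to exactly one private key,
    # comparing a hash of the encoded public key is sufficient.
    old = hashlib.sha256(old_cert_pubkey_der).digest()
    new = hashlib.sha256(new_cert_pubkey_der).digest()
    return old == new
```

This is essentially public key pinning: the pin survives certificate rotation as long as the key pair does.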

This feature drastically reduces the amount of work that an admin has to do and leads to a system that you set up once and that just keeps working.

There are some fine details that we still had to deal with, of course. An admin may issue a certificate and want it to expire, so just having the user re-generate a new certificate with the same private key isn’t really going to work for us. Instead, RavenDB validates that the chain of signatures on the certificate is the same. To be rather more exact, it verifies that the chain of signatures that signed the original (trusted by the admin) certificate and the new certificate that was just presented to us are signed by the same chain of public key hashes.

In this way, if the original issuer gave you a new certificate, it will just work. If you generate a new certificate on your own with the same key pair, we’ll reject that. The model that we have in mind here is trusting a driver’s license. If you have an updated driver’s license from the same source, that is considered just as valid as the original one on file. If the driver license is from Toys R Us, not so much.

Naturally, all such automatic certificate updates are going to be logged to the audit log, and we’ll show the updated certificates in the management studio as well.

As usual, we welcome your feedback; the previous version of this post got us a great feature, after all.

time to read 4 min | 655 words

This post really annoyed me. Feel free to go ahead and read it, I’ll wait. The gist of the post, titled “WAL usage looks broken in modern Time Series Databases?”, is that time series databases that use a Write Ahead Log are broken, and that their system, which isn’t using a WAL (but uses a Log-Structured Merge tree, LSM), is also broken, but no more than the rest of the pack.

This post annoyed me greatly. I’m building databases for a living, and for over a decade I have been focused primarily on building a distributed, transactional (ACID) database. A key part of that is actually knowing what is going on in the hardware beneath my software and how to best utilize it. This post was annoying because it makes quite a few really bad assumptions and then builds upon them. I particularly disliked the outright dismissal of direct I/O, mostly because they seem to be doing that on very partial information.

I’m not familiar with Prometheus, but doing fsync() every two hours basically means that it isn’t on the same plane of existence as far as ACID and transactions are concerned. Cassandra is usually deployed in cases where you either don’t care about some data loss or, if you do, you use multiple replicas and rely on that. So I’m not going to touch that one either.

InfluxDB is doing the proper thing and calling fsync after each write. Because fsync is slow, they very reasonably recommend batching writes. I consider this to be something that the database should do, but I do see where they are coming from.

Postgres, on the other hand, I’m quite familiar with, and the description on the post is inaccurate. You can configure Postgres to behave in this manner, but you shouldn’t, if you care about your data. Usually, when using Postgres, you’ll not get a confirmation on your writes until the data has been safely stored on the disk (after some variant of fsync was called).

What really got me annoyed was the repeated insistence on “data loss or corruption”, which shows a remarkable lack of understanding of how a WAL actually works. Because of the very nature of a WAL, the people who build them all have to consider the possibility of a partial WAL write, and there are mechanisms in place to handle it (usually by considering that particular transaction invalid and rolling it back).

The solution proposed in the post is to use an SSTable (sorted strings table), which is usually a component in LSM systems. Basically, buffer the data in memory (they use 1 second intervals to write it to disk) and then write it in one go. I’ll note that they make no mention of actually writing to disk safely, so no direct I/O or calls to fsync. In other words, a system crash may leave you a lot worse off than merely 1 second of lost data. In fact, it is possible that you’ll have some data there and some not, and not necessarily in the order of arrival.

A proper database engine will:

  • Merge multiple concurrent writes into a single disk operation. In this way, we can handle > 100,000 separate writes per second (document writes, so significantly larger than the typical time series drops) on commodity hardware.
  • Ensure that if any write was confirmed, it actually hit durable storage and can never go away.
  • Properly handle partial writes or corrupted files, in such a way that none of the invariants of the system are violated.
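The first two points can be sketched in a few lines. This illustrative Python version takes the simplest correct approach: append the record, fsync, then acknowledge; a production engine would coalesce all the writers queued behind the lock into a single disk operation (group commit) instead of one fsync per record.

```python
import os
import threading


class WriteAheadLog:
    """Minimal durability sketch: no acknowledgement before fsync."""

    def __init__(self, path: str) -> None:
        # O_APPEND ensures records land at the end of the log file.
        self._fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self._lock = threading.Lock()

    def commit(self, record: bytes) -> None:
        # Only return (acknowledge) after the record has hit durable
        # storage. A crash before fsync loses an unacknowledged write,
        # which is fine; a crash after it cannot lose the write.
        with self._lock:
            os.write(self._fd, record + b"\n")
            os.fsync(self._fd)
```

A partial trailing record after a crash is detected on recovery (typically via a length prefix or checksum, omitted here) and the unfinished transaction is simply rolled back, which is exactly why "data loss or corruption" is not the natural state of a WAL.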

I’m leaving aside major issues with LSM and SSTables, such as write amplification and the inability to handle sustained high loads (because there is never a break in which you can do the bookkeeping). Pointing at broken and inefficient WAL usage to justify another broken implementation is quite enough for me.

time to read 2 min | 246 words

One of the primary reasons why businesses choose to use workflow engines is that they get pretty pictures that explain what is going on and look like they are easy to deal with. The truth is anything but, but pretty sells.

My recommended solution for workflow has a lot going for it, if you are a developer. But if you’ll try to show a business analyst this code, they are likely to just throw their hands up in the air and give up.  Where are the pretty pictures?

One of the main advantages of this kind of approach is that it is very rigid. You are handling things in the event handlers, registering the next step in the workflow, etc. All of which is very regimented. This is so for a reason. First, it makes it very easy to look at the code and understand what is going on. Second, it allows us to process the code in additional ways.

Consider the following AST visitor, which operates over the same code.

This took me about twenty minutes to write, mostly to figure out the Graphviz notation. It takes advantage of the fact that the structure of the code is predictable to generate the actual flow of actions from the code.
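A minimal sketch of the idea, in Python rather than the original language, might look like this. It walks the AST of a workflow script and emits Graphviz dot edges from each event handler to the steps it registers; the next_step call convention is my assumption, not the original API.

```python
import ast


def workflow_to_dot(source: str) -> str:
    # Assumes the workflow script registers its continuations via
    # calls like next_step("name") inside top-level event handlers.
    lines = ["digraph workflow {"]
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            handler = node.name
            # Find every next_step("...") call inside this handler and
            # turn it into an edge in the flow graph.
            for call in ast.walk(node):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Name)
                        and call.func.id == "next_step"
                        and call.args
                        and isinstance(call.args[0], ast.Constant)):
                    lines.append(f'  "{handler}" -> "{call.args[0].value}";')
    lines.append("}")
    return "\n".join(lines)
```

Feed the resulting dot text to Graphviz and you get the pretty picture, straight from the real code.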

You get to use readable code and maintainable practices and show pretty pictures to the business people.

time to read 3 min | 407 words

In my previous post, I talked about the driving forces toward a scripting solution to workflow behavior, and I presented the following code as an example of such a solution. In this post, I want to focus on the non-obvious aspects of such a design.

The first thing to note about this code is that it is very structured. You are working in an event based system, and as such, the inputs and outputs of the system are highly visible. It also means that we have straightforward ways to deal with complexity. We can break some part of the behavior into a different file or even a different workflow that we’ll call into.

The second thing to note is that workflows tend to be long running processes. In the code above, we have a pretty obvious way to handle state. We get passed a state object, which we can freely modify. Changes to the state object are persisted between event invocations. That is actually a pretty important point, because if we store that state inside RavenDB, we also get the ability to do a bunch of other really interesting stuff:

  • You can query ongoing workflows and check their state.
  • You can use the revisions feature inside of RavenDB and be able to track down the state changes between invocations.

The input to the events is also an object, and that means that you can also store it natively, which means that you have full tracing capabilities.
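To make the state handling concrete, here is a minimal Python sketch; the real thing would be a script hosted by the workflow engine, and the handler and field names here are my own invention.

```python
import json


def on_payment_received(state, event):
    # The state object is passed in, freely modified, and persisted
    # between event invocations (e.g. as a document in RavenDB).
    state.setdefault("payments", []).append(event["amount"])
    state["balance"] = state.get("balance", 0) - event["amount"]
    if state["balance"] <= 0:
        # Register the next step in the workflow.
        state["next_step"] = "close_account"
    return state


def persist(state):
    # Storing the state as a document buys us querying over ongoing
    # workflows and, with revisions enabled, a history of the state
    # changes between invocations.
    return json.dumps(state, sort_keys=True)
```

Because both the state and the event input are plain documents, every invocation leaves a queryable, auditable trail.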

The third important thing to note is that the script is just code, and even in complex cases, it is going to be pretty small. That means that you can run version resistant workflows. What do I mean by that?

Once a workflow process has started, you want to keep it on the same workflow script that it started with. This makes versioning decisions much nicer, and it is very easy for you to deal with changes over time. On the other hand, sometimes you need to fix the script itself (there was a bug that allowed a negative APR), in which case you can change it for the ongoing workflows as well.

Actual storage of the script can be in Git, or as a separate document inside the database. Alternatively, you may actually want to include the script itself in every workflow. That is usually reserved for industries where you have to be able to reproduce exactly what happened and I wouldn’t recommend doing this in general.

time to read 6 min | 1018 words

I got a great comment on my previous post about using Map/Reduce indexes in RavenDB for event sourcing. The question was how to handle time sensitive or ordered events in this manner. The simple answer is that you can’t; RavenDB intentionally doesn’t expose anything about the ordering of the documents to the index. In fact, given the distributed nature of RavenDB, even the notion of ordering documents by time becomes really hard.

But before we close the question as “cannot do that by design”, let’s see why we would want to do something like that. Sometimes, this really is just the developer wanting to do things the way they are used to, and there is no need to actually enforce the ordering of documents. But in other cases, you want to do this because there is a business meaning behind these events. In those cases, however, you need to handle several things that are a lot more complex than they appear, because you may be informed of an event long after it actually happened, and you need to handle that.

Our example for this post is going to be mortgage payments. This is a good example of a system where time matters. If you don’t make your payments on time, that matters. So let’s see how we can model this as an event based system, shall we?

A mortgage goes through several stages, but the only two that are of interest for us right now are:

  • Approval – when the terms of the loan are set (how much money, what is the collateral, the APR, etc).
  • Withdrawal – when money is actually withdrawn, which may happen in installments.

Depending on the terms of the mortgage, we need to compute how much money should be paid on a monthly basis. This depends on a lot of factors. For example, if the principal is tied to some baseline, changes to the baseline will change the amount of the principal. The same goes if only some of the amount was withdrawn, if there are late fees, a balloon payment, etc. Because of that, on a monthly basis, we are going to run a computation for the expected amount due for the next month.

And, obviously, we have the actual payments that are being made.

Here is what the (highly simplified) structure looks like:


This includes all the details about the mortgage, how much was approved, the APR, etc.

The following is what the expected amount to be paid looks like:


And here we have the actual payment:


All pretty much bare bones, but sufficient to explain what is going on here.

With that in place, let’s see how we can actually make use of it, shall we?

Here are the expected payments:


Here are the mortgage payments:


The first thing we want to do is aggregate the relevant operations on a monthly basis, since this is how mortgages usually work. I’m going to use a map reduce index to do so, and as usual in this series of posts, we’ll use JavaScript indexes to do the deed.

Unlike previous examples, we now have real business logic in the index. Most specifically, funds allocation for partial payments. If the amount of money paid is less than the expected amount, we first apply it to the interest, and only then to the principal.
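The allocation rule itself is small. Here is a sketch of it in Python (the original lives in the JavaScript index; the field names here are my own):

```python
def allocate_payment(paid, interest_due, principal_due):
    # Funds allocation for partial payments: interest is covered first,
    # and only the remainder reduces the principal.
    to_interest = min(paid, interest_due)
    to_principal = min(paid - to_interest, principal_due)
    return {
        "interest_paid": to_interest,
        "principal_paid": to_principal,
        "interest_remaining": interest_due - to_interest,
        "principal_remaining": principal_due - to_principal,
    }
```

So a payment of 500 against 200 of interest and 400 of principal clears the interest and leaves 100 of principal outstanding for the month.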

Here are the results of this index:


You can clearly see the mistakes that were made in the payments. In March, the amount due for the loan increased (another installment was taken from the mortgage) but the payments were made based on the old amount.

We aren’t done yet, though. So far we have the status of the mortgage on a monthly basis, but we want a global view of the mortgage. In order to do that, we need to take a few steps. First, we need to define an Output Collection for the index, which will allow us to further process the results of this index.

In order to compute the current status of the mortgage, we aggregate both the mortgage status over time and the amount paid by the bank for the mortgage, so we have the following index:

Which gives us the following output:


As you can see, we have a PastDue marker on the loan. At this point, we can make another payment on the mortgage, to close the missing amount, like so:


This will update the monthly mortgage status and then the overall status. Of course, in a real system (I mentioned this is highly simplified, right?) we’ll need to take into account payments made at one time but applied to a different time (which we can handle with an AppliedTo property), and a lot of the actual core logic isn’t in indexes. Please don’t do mortgage logic in RavenDB indexes; that stuff deserves its own handling, in your own code. And most certainly don’t do that in JavaScript. The idea behind this post is to explore how we can handle non trivial event projections using RavenDB. The example was chosen because I assume most people will be familiar with it, and it wasn’t immediately obvious how to go about actually solving it.

If you want to play with this, you can import the following file (Settings > Import Data) to get the documents and index definitions.

time to read 3 min | 500 words

In the previous post I talked about how to use a map reduce index to aggregate events into a final model. This is an interesting use case of indexing, and it can consolidate a lot of complexity into a single place, at which point you can utilize additional tooling available inside of RavenDB.

As a reminder, you can get the dump of the database that you can import into your own copy of RavenDB (or our live demo instance) if you want to follow along with this post.

Starting from the previous index, all we need to do is edit the index definition and set the Output Collection, like so:


What does this do? It tells RavenDB that in addition to indexing the data, it should also take the output of the index and create new documents from it in the ShoppingCarts collection. Here is what these documents look like:


You can see at the bottom that this document is flagged as artificial and as coming from an index. The document id is a hash of the reduce key, so changes to the same cart will always go to the same document.

What is important about this feature is that once the result of the index is a document, we can operate on it using all the usual tools for indexes. For example, we might want to create another index on top of the shopping carts, like the following example:

In this case, we are building another aggregation: taking all the paid shopping carts and computing the total sales per product from them. Note that we are now operating on top of our event streams but are able to extract a second level aggregation from the data.
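As a sketch of what this second index computes, here is plain Python standing in for the JavaScript map/reduce, with assumed field names; the input dictionaries stand in for the artificial ShoppingCarts documents produced by the first index:

```python
from collections import defaultdict


def total_sales_per_product(shopping_carts):
    # Second-level aggregation: only paid carts count, and each line
    # item contributes its quantity and line total to its product.
    totals = defaultdict(lambda: {"quantity": 0, "total": 0.0})
    for cart in shopping_carts:
        if cart.get("Status") != "Paid":
            continue
        for line in cart.get("Lines", []):
            t = totals[line["Product"]]
            t["quantity"] += line["Quantity"]
            t["total"] += line["Quantity"] * line["Price"]
    return dict(totals)
```

The difference in RavenDB is that this runs incrementally as an index, so the aggregation is kept up to date as new carts flow in, instead of being recomputed over the whole collection.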

Of course, normal indexes on top of the artificial ShoppingCarts collection allow you to answer queries like: “Show me my previous orders”. In essence, you are using the events for your writes, defining the aggregation to the final model in an index, and then letting RavenDB take care of the read model.

Another option to pay attention to is not doing the read model and the full work on the same database instance as your events. Instead, you can output the documents to a collection and then use RavenDB’s native ETL capabilities to push them to another database (which can be another RavenDB instance or a relational database) for further processing.

The end result is a system that is built on dynamic data flow. Add an event to the system, and the index will go through it, aggregate it with other events on the same root and output it to a document, at which point more indexes will pick it up and do further work, ETL will push it to other databases, subscriptions can start operating on it, etc.

time to read 4 min | 679 words

RavenDB uses X509 certificates for many purposes. One of them is to enable authentication using client certificates. This creates a highly secure authentication method with quite a lot to recommend it. But it does create a problem: certificates, by their very nature, expire. Furthermore, certificates usually have relatively short expiration times. For example, Let’s Encrypt certificates expire in 3 months. We don’t have to use the same cert for server authentication and for client authentication, but doing so does create a nice symmetry and simplifies the job of the admin.

Except that with every cert replacement (3 months, remember?), the admin will now need to go to every system that we talk to and update the list of allowed certificates whenever we update the Let’s Encrypt certificate. One of the reasons behind this 3 month deadline is to ensure that you’ll automate the process of cert replacement, so it is obvious that we need a way to automate the process of updating third parties about cert replacements.

Our current design goes like this:

  • This design applies only to the nodes for which we authenticate using our own server certificate (thus excluding Pull Replication, for example).
  • Keep track of all the 3rd parties RavenDB instances that we talk to.
  • Whenever we have an updated certificate, contact each of those instances and let them know about the cert change. This is done using a request that authenticates using the old certificate and provides the new one.
  • The actual certificate replacement is delayed until all of those endpoints have been reached or until the expiration of the current certificate is near.

Things to consider:

  • Certificate updates are written to the audit log. And you can always track the chain of updates backward.
  • Obviously, a certificate can only register a replacement as long as it is active.
  • The updated certificate will have the exact same permissions as the current certificate.
  • A certificate can only ever replace itself with one other certificate. We allow this to be done multiple times, but the newly updated cert will replace the previously updated cert.
  • A certificate cannot replace a certificate that it updated if that certificate has registered an updated certificate of its own.

In other words, consider certificate A that is registered in a RavenDB instance:

  • Cert A can ask the RavenDB instance to register updated certificate B, at which point users can connect to the RavenDB instance using either A or B, until certificate A expires. This is to ensure that during the update process, we won’t see some nodes that we need to talk to using cert A and some nodes that we need to talk to using cert B.
  • Cert A can ask the RavenDB instance to register updated certificate C, at which point, certificate B is removed and is no longer valid. This is done in case we failed to update the certificate and need to update with a different certificate.
  • Cert C can then ask the RavenDB instance to register updated certificate D. At this point, certificate A becomes invalid and can no longer be used. Only certs C and D are now active.
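The rules above can be sketched as a small state machine. This is an illustrative Python model of the design as described, not RavenDB code; the class and method names are my own.

```python
class CertificateRegistry:
    """Sketch of the replacement rules: a cert may register one update,
    may swap that update out (unless it has chained its own update),
    and a grandparent cert dies once the chain moves past it."""

    def __init__(self, root: str) -> None:
        self._active = {root}
        self._replacement = {}   # cert -> its registered update
        self._predecessor = {}   # cert -> the cert it replaced

    def register_update(self, current: str, new: str) -> None:
        if current not in self._active:
            raise PermissionError("certificate is not active")
        old = self._replacement.get(current)
        if old is not None:
            if old in self._replacement:
                # The previous update already registered its own
                # update; it can no longer be swapped out.
                raise PermissionError("previous update already chained")
            # A new update replaces the previously registered one.
            self._active.discard(old)
        self._replacement[current] = new
        self._predecessor[new] = current
        self._active.add(new)
        # Once a replacement registers its own update, the grandparent
        # certificate becomes invalid; only the last two stay active.
        grandparent = self._predecessor.get(current)
        if grandparent is not None:
            self._active.discard(grandparent)

    def is_valid(self, cert: str) -> bool:
        return cert in self._active
```

Walking the A/B/C/D scenario from the bullets through this model reproduces the described behavior: A+B active, then B dropped in favor of C, then A invalidated once C registers D.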

More things to consider:

  • Certain certificates, such as the ones exposing Pull Replication, are likely going to be used by many clients. I’m not sure if we should allow certificate replacement there. Given that we usually won’t use the server cert for authentication in Pull Replication, I don’t see that as a problem.
  • The certificate update process will be running on only a single node in the cluster, to avoid concurrency issues.
  • We’ll provide a way for the admin to purge all expired certificates (although, with one update every 3 months, I don’t expect there to be many).
  • We are considering limiting this to non admin certificates only, so you would not be able to update a certificate with admin privileges in an automated manner. I’m not sure if this is a security feature or a feel-good feature.
  • We’ll likely provide an administrator notification that this update has happened on the destination node, and that might be enough to allow updating of admin certificates.

Any feedback you have would be greatly appreciated.
