Ayende @ Rahien

Oren Eini, aka Ayende Rahien, CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.

You can reach me by:

oren@ravendb.net

+972 52-548-6969


time to read 2 min | 213 words

I was asked what the meaning of default(object) is in the following piece of code:
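The snippet itself didn't survive in this copy of the post, but based on the discussion that follows, it looked roughly like this (the field names are illustrative, not the original code):

```csharp
using System;

class Program
{
    static void Main()
    {
        // default(object) produces the default value of the object type: null.
        // In an anonymous object, it gives the compiler the type information
        // that a bare null cannot.
        var projection = new
        {
            Name = "products/11-A",
            Discontinued = default(object) // a null whose static type is object
        };

        Console.WriteLine(projection.Discontinued == null); // prints True
    }
}
```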

The code is something that you’ll see a lot in RavenDB indexes, but I understand why it is a strange construct. The default(object) is a way to express null. It asks the C# compiler to emit the default value of the object type, which is null.

So why not simply say null there?

Look at the code: we aren’t setting a field here, we are creating an anonymous object. When we set a field to null, the compiler can tell what the type of the field is from the class definition and check that the value is appropriate. You can’t set a Boolean property to null, for example.

With anonymous objects, the compiler needs to know what the type of the field is, and null doesn’t provide this information. You can use the (object)null construct, which has the same meaning as default(object), but I find the latter syntactically more pleasant to read.

It may make more sense if you look at the following code snippet:
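The snippet is also missing from this version of the post; a hedged reconstruction of the point it made:

```csharp
using System;

class Program
{
    static void Main()
    {
        // This does not compile: the compiler cannot infer a member type
        // for an anonymous object from a bare null.
        //     var bad = new { Address = null };

        // Either of these works, because the null now carries a type:
        var a = new { Address = (object)null };
        var b = new { Address = default(object) };

        Console.WriteLine(a.Address == b.Address); // both null, prints True
    }
}
```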

This technique is probably only useful if you deal with anonymous objects a lot. That is something you do frequently with RavenDB indexes, which is how I ran into this syntax.

time to read 3 min | 541 words

One of the distinguishing features of RavenDB is its ability to process large aggregations very quickly. You can ask questions on very large data sets and get the results in milliseconds. This is interesting, because RavenDB isn’t an OLAP database, and the kind of questions that we ask can be quite complex.

For example, we have the Products/Recommendations index, which allows us to ask:

For any particular product, find me how many times it was sold, what other products were sold with it and in what frequency.

The index to manage this is here:
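The index definition itself didn't survive in this copy of the post. Here is a sketch of what such an index can look like, assuming a Northwind-style Order with Lines; the class and field names are illustrative, not necessarily the shipped sample:

```csharp
using System.Collections.Generic;
using System.Linq;
using Raven.Client.Documents.Indexes;

public class Order { public List<OrderLine> Lines; }
public class OrderLine { public string Product; }

public class Products_Recommendations
    : AbstractIndexCreationTask<Order, Products_Recommendations.Result>
{
    public class Result
    {
        public string Product;
        public long Count;
        public RelatedProduct[] Together;
    }

    public class RelatedProduct
    {
        public string Product;
        public long Count;
    }

    public Products_Recommendations()
    {
        // Map: one projection per product in each order, carrying the
        // other products that were sold alongside it.
        Map = orders => from order in orders
                        from line in order.Lines
                        select new
                        {
                            Product = line.Product,
                            Count = 1L,
                            Together = order.Lines
                                .Where(l => l.Product != line.Product)
                                .Select(l => new { Product = l.Product, Count = 1L })
                        };

        // Reduce: group by product, sum the sales, and merge the
        // related-products lists, summing their frequencies.
        Reduce = results => from result in results
                            group result by result.Product into g
                            select new
                            {
                                Product = g.Key,
                                Count = g.Sum(x => x.Count),
                                Together = g.SelectMany(x => x.Together)
                                    .GroupBy(t => t.Product)
                                    .Select(t => new
                                    {
                                        Product = t.Key,
                                        Count = t.Sum(i => i.Count)
                                    })
                            };
    }
}
```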

The way it works is that we map the orders, emitting a projection for each product, and then we add the other products that were sold with the current one. In the reduce, we group by the product and aggregate the related products together.

But I’m not here to talk about the recommendation engine. I wanted to explain how RavenDB processes such indexes. All the information that I’m talking about can be seen in the Map/Reduce visualizer in the RavenDB Studio.

Here is a single entry for this index. You can see that products/11-A was sold 544 times and 108 times with products/69-A.

[image]
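The screenshot showed the aggregated entry as a JSON document; its shape was roughly this (the values come from the text above, the field names are assumptions):

```json
{
    "Product": "products/11-A",
    "Count": 544,
    "Together": [
        { "Product": "products/69-A", "Count": 108 }
    ]
}
```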

Because of the way RavenDB processes Map/Reduce indexes, when we query, we run over the already precomputed results, and there is very little computation cost at query time.
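In other words, a query against such an index is just a filter over the precomputed entries. A sketch of such a query (the index name and the Product field match the example above; treat them as assumptions about the concrete schema):

```csharp
using System.Linq;
using Raven.Client.Documents;
using Raven.Client.Documents.Session;

public static class RecommendationQuery
{
    // Fetch the precomputed recommendation entry for a single product.
    public static T For<T>(IDocumentStore store, string productId)
    {
        using (IDocumentSession session = store.OpenSession())
        {
            return session.Advanced
                .RawQuery<T>("from index 'Products/Recommendations' where Product = $p")
                .AddParameter("p", productId)
                .ToList()
                .FirstOrDefault();
        }
    }
}
```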

Let’s see how RavenDB builds the index. Here is a single order, where three products were sold. You can see that each of them has a very interesting tree structure.

[image]

Here is how it looks when we zoom into a particular product. You can see how RavenDB aggregates the data. First, in the bottommost page on the right (#596). We aggregate that with the other 367 pages and get intermediate results at page #1410. We then aggregate that again with the intermediate results in page #105127 to get the final tally. In this case, you can see that products/11-A was sold 217,638 times, mostly with products/16-A (30,603 times) and products/72-A (20,603 times).

[image]

When we have a new order, all we need to do is update the bottommost page and then recurse upward in the tree. In the case we have here, there is a pretty big reduce value and we are dealing with tens of millions of orders. We have three levels to the tree, which means that we’ll need to do three update operations to account for new or updated data. That is cheap, because it means that we have to do very little work to maintain the index.

At query time, of course, we don’t really have to do much; all the hard work was already done.

I like this example because it showcases a non-trivial scenario and how RavenDB handles it with ease. This kind of non-trivial work tends to be very hard to get working properly, and with RavenDB it is part of my default “let’s do this on the fly” demo.

time to read 2 min | 364 words

RavenDB 5.0 has been released and is now available for download and on the cloud. There are many new changes in this version, but the highlights are:

  • Time Series support – allows you to store time series data and run queries on time series of any size in milliseconds.
  • Documents compression – will usually reduce your disk utilization by over 50%.
  • Many indexing improvements, especially with regard to indexing and querying date and time data.

As I mentioned, you can upgrade to the new version right now on your own instances and you can deploy new cloud instances with RavenDB 5.0 immediately.

RavenDB 5.0 is backward compatible with RavenDB 4.2, you can shut down your RavenDB 4.2 instance, update the binaries and start RavenDB 5.0 and everything will work. The other way around will not work, mind you. Once you have run RavenDB 5.0, you cannot go back to RavenDB 4.2.

A RavenDB 5.0 instance can be part of a cluster running other RavenDB 4.2 nodes (and vice versa), which allows you to do rolling migrations and test the RavenDB 5.0 in production without committing all the way in.

An application using a RavenDB 4.x client will be able to just continue working with a RavenDB 5.0 server, with no change in behavior. That enables you to switch over to the new version without needing to modify the whole stack at once.

For users running RavenDB 4.2, I’ll remind you that this is a Long Term Support (LTS) release and it is perfectly fine to remain on that edition for the next year or so.

Users on the cloud that are running RavenDB 4.2 can request an upgrade to RavenDB 5.0 via our support, but for the foreseeable future, we are going to keep users on RavenDB 4.2 unless they ask to be upgraded to RavenDB 5.0.

I’m really happy that we got to this milestone, and I wanted to take this opportunity to congratulate the team behind RavenDB 5.0 for an excellent job. Under non-trivial circumstances, we have a pretty amazing product shipping, and I am very proud of what we have shipped.

time to read 1 min | 177 words

RavenDB 5.0 is scheduled to release this week.

It was supposed to be released a week or so ago, but we have a phase in the release which I guess should be called “release the monkeys”. In that phase, we basically gather around the servers and push. The idea is that we try to generate a lot of abuse on the system and see if it will survive the storm of the monkeys.

The version we deployed didn’t survive that test, unfortunately. We had an improperly bounded transaction scope that caused a slow resource leak. Utterly unnoticeable in the grand scheme of things, but enough to cause degradation in resource usage over the course of several days of hard load.

The fix itself was simple enough, but that meant that we had to go back to square one and get a new bunch of monkeys to muck around in the servers.

Barring any new surprises, we expect to be able to certify the current 5.0 build as monkey resistant and set it free to the world.

time to read 3 min | 452 words

It should come as no surprise that our entire internal infrastructure is running on RavenDB. I wholly believe in the concept of dogfooding, and it has served us very well over the years.

I was speaking to a colleague just now and it occurred to me that it is surprising that we do certain things wrong, intentionally. It is fair to say that we know what the best practices for using RavenDB are, the things that you can do to get the most out of it.

In some of our internal systems, we are doing things in exactly the wrong way. We are doing things that are inefficient in RavenDB. We take the expedient route to implement things. A good example: we have a set of documents that can grow to be multiple MB in size. They are also some of the most commonly changed documents in the system. Proper design would call for breaking them apart to make things easier for RavenDB.

We intentionally modeled things this way. Well, I gave the modeling task to an intern with no knowledge of RavenDB, and then I made things worse for RavenDB in a few cases where he didn’t get it out of shape enough for my needs.

Huh?! I can hear you thinking. Why on earth would we do something like that?

We do this because it serves as an excellent proving ground for misuse of RavenDB. It shows us how the system behaves under non-ideal conditions. Not just when the user is able to match everything to the way RavenDB would like things to be, but when they build their system the way they are likely to: unaware of what is going on behind the scenes and what the optimal solution would be. We want RavenDB to be able to handle that scenario well.

An example that pops to mind was having all the uploads in the system be attachments on a single document. That surfaced an O(N^2) algorithm very deep in the bowels of RavenDB for placing a new attachment. It was completely invisible in the normal case, because it was fast enough under any normal or abnormal situation that we could think of. But when we started getting high latency from uploads, we realized that adding the 100,002nd attachment to a document required us to scan through the whole list; it was obvious that we needed a fix. (And please, don’t put hundreds of thousands of attachments on a document. It will work, and it is fast now, but it isn’t nice.)

Doing the wrong thing on purpose means that we can be sure that when users do the wrong thing accidentally, they get good behavior.

time to read 6 min | 1084 words

In my previous post, I wrote about the case of a medical provider that has a cloud database to store its data, as well as a whole bunch of doctors making house calls. There is the need to have the doctors have (some) information on their machine as well as push updates they make locally back to the cloud.

[image]

However, given that their machines are in the field, and that we may encounter a malicious doctor, we aren’t going to fully trust these systems. We still want the system to function, though. The question is how will we do it?

Let’s try to state the problem in more technical terms:

  • The doctor needs to pull data from the cloud (list of patients to visit, patient records, pharmacies and drugs available, etc.).
  • The doctor needs to be able to create patient records (exams made, checkup results, prescriptions, recommendations, etc.).
  • The doctor’s records need to be pushed to the cloud.
  • The doctor should not be able to see any record that is not explicitly made available to them.
  • The same applies to documents, attachments, time series, counters, revisions, etc.

Enforcing Distributed Data Integrity

The requirements are quite clear, but they do bring up a bit of a bother. How are we going to enforce it?

One way to do that would be to add some metadata rule to the document, deciding if a doctor should or should not see that document. Something like this:

[image]
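The screenshot expressed the idea as tags in the document metadata; a hypothetical shape (the field and tag names here are illustrative, not an actual RavenDB feature):

```json
{
    "Name": "Oren",
    "@metadata": {
        "@collection": "Patients",
        "Access-Tags": ["doctors/abc", "clinics/33-conventry-rd"]
    }
}
```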

In this model, a doctor will be able to get this document if they have any of the tags associated with the document.

This can work, but it has a bunch of non-trivial problems and a huge problem that may not be obvious. Let’s start with the non-trivial issues:

  • How do you handle non-document data? Based on the owner document, probably. But that means that we have to have a parent document, and that isn’t always the case.
  • What happens if the owner document was deleted, or is in a conflicted state?
  • What do you do with revisions if the access tags have changed? Which version do you follow?

There are other issues, but as you can imagine, they are all around managing the fact that this model allows you to change the tags of a document and expects the system to handle this properly.

The huge problem, however, is what should happen when a tag is removed? Let’s assume that we have the following sequence of events:

  • patients/oren is created, with access tag of “doctors/abc”
  • That access tag is then removed
  • Doctor ABC’s machine is then connected to the cloud and sets up replication.
  • We need to remove patients/oren from the machine, so we send a tombstone.

So far, so good. However, what about Doctor XYZ’s machine? At this time, we don’t know what the old tags were, and that machine may or may not have the document. It shouldn’t have it now, so should we send a tombstone there as well? That leaks information, by revealing document ids that the machine isn’t authorized for.

We need a better option.

Using the Document ID as the Basis for Data Replication

We can define that once created, the access tags are immutable, and that would help considerably. But that is still fairly complex to manage, and it opens up issues regarding conflicts, deletion and re-creation of a document, etc.

Instead, we are going to use the document’s id as the source for the decision to replicate the document or not. In other words, when we register the doctor’s machine, we set it up so it will allow:

Incoming paths:

  • doctors/abc/visits/*
  • tasks/doctors/abc/*

Outgoing paths:

  • patients/clinics/33-conventry-rd/*
  • pharmacies/*
  • tasks/doctors/abc/*
  • doctors/abc
  • laboratories/*

In this case, incoming and outgoing are defined from the point of view of the cloud cluster. So this setup allows the doctor’s machine to push updates to any document whose id starts with “doctors/abc/visits/” or “tasks/doctors/abc/”. And the cloud will send all pharmacies and laboratories data. The cloud will also send all the patients for the doctor’s clinic, as well as the tasks for this doctor. Finally, we have the doctor’s own record. Everything else will be filtered.
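Since this was written before the feature shipped, the exact API wasn't public yet. A purely hypothetical sketch of what registering such a filtered edge could look like (every name here is illustrative, not the actual RavenDB client API):

```csharp
// Hypothetical shape only - not the actual RavenDB client API.
public class ReplicationAccess
{
    public string Name;
    public string[] IncomingPaths; // what the edge may push to the cloud
    public string[] OutgoingPaths; // what the cloud will send to the edge
}

public static class Example
{
    // The path lists mirror the configuration described above.
    public static ReplicationAccess DoctorAbc() => new ReplicationAccess
    {
        Name = "doctors/abc-laptop",
        IncomingPaths = new[]
        {
            "doctors/abc/visits/*",
            "tasks/doctors/abc/*"
        },
        OutgoingPaths = new[]
        {
            "patients/clinics/33-conventry-rd/*",
            "pharmacies/*",
            "tasks/doctors/abc/*",
            "doctors/abc",
            "laboratories/*"
        }
    };
}
```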

This Model is Simple

This model is simple: it provides a list of outgoing and incoming paths for the data that will be replicated. It is also surprisingly powerful. Consider the implications of the configuration above.

The doctor’s machine will have a list of laboratories and pharmacies (public information) locally. It will have the doctor’s own document, as well as records of the patients in the clinic. The doctor is able to create and push patient visit records. Most interestingly, the tasks for the doctor are defined to allow both push and pull. The doctor will receive updates from the office about new tasks (home visits) to make, and can mark them complete and have that show up in the cloud.

The doctor’s machine (and the doctor as well) is not trusted. So we limit the exposure of the data that they can see on a Need To Know basis. On the other hand, they are limited in what they can push back to the cloud. Even with these limitations, there is a lot of freedom in the system, because once you have this defined, you can write your application on the cloud side and on the laptop and just let RavenDB handle the synchronization between them. The doctor doesn’t need access to a network to be able to work, since they have a RavenDB instance running locally and the cloud instance will sync up once there is any connectivity.

We are left with one issue, though. Note that the doctor can get the patients’ files, but is unable to push updates to them. How is that going to work?

The reason that the doctor is unable to write to the patients’ files is that they are not trusted. Instead, they will send a visit record, which contains their findings and recommendations. On the cloud, we’ll validate the data, merge it with the actual patient’s record, apply any business rules, and then update the record. Once that is done, it will show up on the doctor’s machine magically. In other words, this setup is meant for untrusted input.

There are more details that we can get into, but I hope that this outlines the concepts clearly. This is not a RavenDB 5.0 feature, but it will be part of the next RavenDB release, due around September.

time to read 4 min | 753 words

RavenDB is typically deployed as a set of trusted servers. The network is considered to be hostile, which is why we encrypt everything over the wire and use X509 certificates for mutual authentication. But once the connection is established, we trust the other side to follow the same rules as we do.

To clarify, I’m talking here about trust between nodes, not a client connected to RavenDB. Clients are also authenticated using X509 certificates, but they are limited to the access permissions assigned to them. Nodes in a cluster fully trust one another and need to do things like forward commands accepted by one node to another. That requires that the second node trust that the first node properly authenticated the client and won’t pass along operations that the client has no authority for.

Use Case 1: A Database System for Multiple Medical Clinics

I think that a real use case might make things more concrete. Let’s assume that we have a set of clinics, with the following distribution of data.

[image]

We have two clinics, one in Boston and one in Chicago, as well as a cloud system. The rules of the system are as follows:

  • Data from each clinic is replicated to the cloud.
  • Data from the cloud is replicated to the clinics.
  • Data from a clinic may only be at the clinic or in the cloud.
  • A clinic cannot get (or modify) data that didn’t come from the clinic.

In this model, we have three distinct locations, and we presumably trust all of them (otherwise, why would we put patient data on them?). There is a need to ensure that we don’t expose patient data from one clinic to another, but that is about it. Note that in terms of RavenDB topology, we don’t have a single cluster here. That wouldn’t make sense. To start with, we need to be able to operate the clinic when there is no internet connectivity. And we don’t want to pay any avoidable latency, even if everything is working fine. So in this case, we have three separate clusters, one in each location, and they are connected to one another using RavenDB’s multi-master replication.

Use Case 2: A Database System Sharing with Outside Edge Points

Let’s look at another model, however, in this case, we are still dealing with medical data, but instead of a clinic, we have to deal with a doctor making house calls:

[image]

In this case, we are still talking about private data, but we no longer trust the end device. The doctor may lose the laptop, they may have malware running on the machine, or they may be trying to do Bad Things directly. We want to be able to push data to the doctor’s machine and receive updates from the field.

RavenDB has some measures at the moment to handle this scenario. You only need to get some data from the cloud to the doctor’s laptop, and you want to push only certain things back to the cloud. You can use pull replication and ETL to handle this scenario, and it will work, as long as you are willing to trust the end machine. Given the stringent requirements for medical data, that is not actually out of bounds. Full volume encryption, forbidding the use of unknown software, and a few other protections ensure that if the laptop is lost, the only thing you can do with it is repurpose the hardware. If we can go with that assumption, this is great.

However… we need to consider the case that our doctor is actually malicious.

[image]

When the Edge Point isn’t as Healthy as the Doctor Using It

So we need something in the middle, between all our data and what can reside on that doctor’s machine. As it currently stands, in order to create the appropriate barrier between the doctor’s machine and the cloud, you’ll have to write your own sync code and apply any logic / authorization at that level.

Sync code is non-trivial, mostly because of the number of edge cases you have to deal with and the potential for conflicts. This has already been solved by RavenDB, so having to write it again is not ideal as far as we are concerned.

What would you do?
