Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 2 min | 244 words

I don’t usually talk about what we call the Studio Features. This set of features impacts the RavenDB Studio and is meant to make the lives of RavenDB developers and administrators easier. We actually spend a lot of effort on the Studio, because making features accessible and usable is considered a core part of the feature itself. We ran the numbers, and about 30% – 35% of the time spent building RavenDB 4.0 went into the Studio. We are still investing more than 10% of our time and effort in continuously improving it. I tend not to write about the Studio explicitly because we usually use it to showcase the server features.

This feature, on the other hand, is purely a Studio feature. Yesterday I talked about Revisions in RavenDB and was reminded that we also have a major new Revisions feature that lives only in the Studio. You can now diff different revisions directly in the Studio.

Here is how you invoke this feature:

[Screenshot: the compare button on a document’s revisions in the Studio]

When you click on the compare button, you’ll get the following screen:

[Screenshot: the revision compare screen]

You can compare a revision to the current document, or one revision to another.

This can make it very easy to see what changed between two versions of a document.
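The diff itself is a Studio-only convenience, but the raw material is available in code as well: the client API lets you load a document alongside its stored revisions, so you can run your own comparison. A minimal sketch in C#, where the server URL, database name and the Order class are placeholders for illustration:

using System;
using Raven.Client.Documents;

// Hypothetical document class, used only for illustration.
public class Order
{
    public decimal Total { get; set; }
}

public static class RevisionDiffSketch
{
    public static void Main()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "http://localhost:8080" }, // placeholder URL
            Database = "Northwind"                    // placeholder database
        };
        store.Initialize();

        using var session = store.OpenSession();

        // The current version of the document.
        var current = session.Load<Order>("orders/1-A");

        // The stored revisions of the same document, newest first.
        var revisions = session.Advanced.Revisions.GetFor<Order>("orders/1-A");

        // From here you can serialize both versions and diff them with any
        // tooling you like; the Studio simply does this work for you.
        Console.WriteLine($"Current total: {current?.Total}, revisions kept: {revisions.Count}");
    }
}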

time to read 4 min | 754 words

[Screenshot: the Revert Revisions button in Database > Settings > Document Revisions]

The feature outlined in this post is hidden behind a small button in a relatively obscure part of the RavenDB Studio (Database > Settings > Document Revisions). You can see what it looks like above. Despite its unassuming appearance, this is a pretty important feature. Revision Revert is a feature that we wish no one would ever have to use, though, which makes it an interesting one.

Revision Revert allows you to take your entire database back to a particular moment in time. Document changes will be undone, deleted documents will be restored, new documents will be removed, etc.

[Screenshot: the Revision Revert screen, with Point in Time and Window settings]

So far, this isn’t a surprising feature; being able to restore to a point in time is something that many other databases have. How is this feature different? In most systems, a point in time restore requires you to… well, restore. In a large database, that can take a lot of time. Revision Revert is an alternative to that. Instead of restoring from scratch, it utilizes the revisions feature in RavenDB to let you just hit the time machine button and go back to the desired time.

The common use case for this is immediately after the “Oops” moment. You have run a query without specifying a where clause, deployed a bad version of your app that removed important fields, etc.

Revision Revert is an online operation; you don’t need to take the database down. In fact, you can still serve reads while the process is going on. Since you’ll typically need to go back only a relatively short period, this is a very quick process.

In a distributed system, the admin will invoke this process on one of the nodes in the system and the reverts will be applied on that node and then replicated from there to all the other nodes in the system. We have made every attempt to make what is likely to be a pretty stressful event as easy as possible.

You might have noticed the Window configuration in the screen above. What is that about?

To be honest, this is something that we expect most users to never really care about. It is there for correctness’ sake in a distributed environment. Let’s dig a little deeper into this feature.

The first thing we need to talk about is time. The point in time that we’ll restore to is the user’s local time. This is converted into UTC internally and used to compute the cutoff point for the revert. In a distributed system, it is possible (even likely) that different machines on the network will have different clocks. (Note that while RavenDB will work just fine and do the Right Thing if your nodes have different timezones, we have found that really confusing. Better to keep all nodes on the same timezone and clock sync system.)

This means that one problem for this feature is that changes happening on two machines at the same time may have different timestamps (in UTC; the local time is not relevant). You need to take that into account when using Revision Revert, since that is what RavenDB uses to decide what stays and what goes.

The second problem is that just because two updates happened at the same time, it doesn’t mean that we learned about them at the same time. Two changes that happened at the same time on different machines may have reached a particular node at very different times. That is where the Window option comes into play. We scan the revisions log for all changes to the system, and we scan them in the order that we learned about them. By default, we’ll go back 4 days past the cutoff point, until we are sure that there aren’t any revisions that we received out of order and missed.
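To make the cutoff and the Window concrete, here is a conceptual sketch only, not RavenDB’s actual implementation: convert the user’s local point in time to UTC, walk the revisions in the order they were received, mark anything stamped after the cutoff for revert, and stop only once you are a full window past the cutoff so out-of-order revisions are not missed. The Revision type and the scan order are assumptions made purely for illustration.

using System;
using System.Collections.Generic;

public record Revision(string DocumentId, DateTime ChangeTimeUtc);

public static class RevertScanSketch
{
    public static HashSet<string> FindDocumentsToRevert(
        IEnumerable<Revision> revisionsByReceiveOrder, // most recently received first
        DateTime pointInTimeLocal,
        TimeSpan window)                               // RavenDB's default is 4 days
    {
        // The user picks a local time; the cutoff is computed in UTC.
        var cutoffUtc = pointInTimeLocal.ToUniversalTime();
        var stopAtUtc = cutoffUtc - window;

        var toRevert = new HashSet<string>();
        foreach (var revision in revisionsByReceiveOrder)
        {
            // A change stamped after the cutoff has to be undone.
            if (revision.ChangeTimeUtc > cutoffUtc)
                toRevert.Add(revision.DocumentId);

            // Only once we see changes older than (cutoff - window) can we be
            // reasonably sure nothing newer arrived out of order behind them.
            if (revision.ChangeTimeUtc < stopAtUtc)
                break;
        }
        return toRevert;
    }
}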

A few additional things about this feature. Obviously, it requires that you have revisions enabled (and enough revision capacity to go back far enough in time, naturally). It supports live restores and operates nicely in a distributed environment. Note that if you are doing a Revision Revert and not all your documents have revisions enabled, only those that have revisions will be reverted.

Currently we apply this revert globally. We are considering allowing you to select specific collections to revert, but I’m not sure how useful that would be in practice.

time to read 3 min | 454 words

If you are tracking the nightlies of RavenDB, the Pull Replication feature has fully landed. You now have three options to choose from when you define replication in your systems.

[Screenshot: the replication options in the Studio]

External Replication is meant to go from the current database to another database (usually in a different cluster). It is a way to share data with another location. The owner of the replication is the current database, which initiates the connection and sends the data to the other side.

Pull Replication reverses this behavior. The first thing you’ll need to do to get Pull Replication working is to define the Pull Replication Hub.

[Screenshot: defining a Pull Replication Hub]

As you can see, there isn’t much to do here. We give the hub a name and minimal configuration (how far back this should go, basically). In this case, we are allowing sinks to get the data from the database, with a 20 minute delay built into the loop. You can also export the sink configuration from this view. We also define a certificate that provides access to this Pull Replication Hub. That certificate allows access only to this Pull Replication Hub; it grants no additional permissions. In this way, you may have one certificate that provides access to a delayed public stock ticker and another that provides immediate access to the data.
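If you prefer scripting this over clicking through the Studio, a rough sketch of the hub definition via the C# client’s maintenance operations follows. The operation and property names are my recollection of the 4.2 client (Raven.Client.Documents.Operations.Replication) and may differ slightly between versions; the URL, database name and hub name are placeholders.

using System;
using Raven.Client.Documents;
using Raven.Client.Documents.Operations.Replication;

public static class DefineHubSketch
{
    public static void Main()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "https://central.example.com" }, // placeholder
            Database = "StockTicker"                        // placeholder
        };
        store.Initialize();

        // Define the Pull Replication Hub: a name, plus how long to delay
        // the data before sinks are allowed to pull it.
        store.Maintenance.Send(new PutPullReplicationAsHubOperation(
            new PullReplicationDefinition
            {
                Name = "public-ticker",
                DelayReplicationFor = TimeSpan.FromMinutes(20)
            }));
    }
}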

The next step is to go to the other side, the sink. There, we either manually define the details of the hub or, more likely, import the configuration. The sink will then connect to the hub and start pulling the data from it. Here is what this looks like:

[Screenshot: defining a Pull Replication Sink]

The idea is that you are very likely to have a lot more sinks than hubs. That is why we make it easy to define the sink just by importing (although in practical terms we expect that this will just be part of a shared image that is deployed many times).
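The sink side can be scripted in the same hedged spirit: register a connection string that points at the hub’s cluster, then define the sink task that references the hub by name. Again, the class and property names (UpdatePullReplicationAsSinkOperation, HubDefinitionName, and so on) follow my memory of the 4.2 client and may have changed in later versions; in a secured setup you would also attach the dedicated hub certificate to the sink definition.

using Raven.Client.Documents;
using Raven.Client.Documents.Operations.ConnectionStrings;
using Raven.Client.Documents.Operations.ETL;
using Raven.Client.Documents.Operations.Replication;

public static class DefineSinkSketch
{
    public static void Main()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "https://edge.example.com" }, // placeholder
            Database = "StockTicker"                     // placeholder
        };
        store.Initialize();

        // Where the hub lives.
        store.Maintenance.Send(new PutConnectionStringOperation<RavenConnectionString>(
            new RavenConnectionString
            {
                Name = "central",
                Database = "StockTicker",
                TopologyDiscoveryUrls = new[] { "https://central.example.com" }
            }));

        // The sink task: connect to the hub and start pulling.
        store.Maintenance.Send(new UpdatePullReplicationAsSinkOperation(
            new PullReplicationAsSink
            {
                ConnectionStringName = "central",
                HubDefinitionName = "public-ticker"
            }));
    }
}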

Once we have defined the Sink Pull Replication, it will connect to the Hub and start accepting data. You can track how this works from the Studio:

[Screenshot: the sink’s replication status in the Studio]

On the other side, you can track the connected sinks on the Hub:

[Screenshot: connected sinks shown on the Hub]

And this is all you need to set up Pull Replication yourself.

time to read 4 min | 603 words


The flagship feature for RavenDB 4.2 is graph queries, but there are a lot of other features that also deserve attention. One of the more prominent second-string features is pull replication.

A very common deployment pattern for RavenDB is to have it deployed to the edge. A great example is shown in this webinar, which talks about deploying RavenDB to 36,000 locations and over half a million instances. To my knowledge, this is one of the largest single deployments of RavenDB, and this deployment model is frequent among our users.

In the past few months I talked with users who use RavenDB on the edge for the following purposes:

  • Ships at sea, where RavenDB is used to track cargo and ongoing manifest updates. The ships do not have any internet connection while at sea, but connect to headquarters when they dock.
  • Clinics of health care providers, where each clinic has a RavenDB instance and can operate completely independently if the network is down, but communicates with the central data center during normal operations.
  • Industrial robots, where each robot holds its own data and communicates occasionally with a central location.
  • Using RavenDB as the backing store for an application running on tablets used out in the field, which will only have a connection to the central database when back in the office.

We call such deployments the hub & spoke model and distinguish between the types of nodes that we have: edge nodes and the central node.

Now, to be clear, both the edge and the central can be either a single node or a full cluster, it doesn’t matter to our discussion.

Pull replication in RavenDB allows you to define the replication definition on the central node once. On each of the edge nodes, you define the pull replication definition, and that is pretty much it. Each edge node will connect to the central location and start pulling all the data from that database. On the face of it, it seems like a pretty simple process and not much different from external replication, which we already have in RavenDB.

The difference is that external replication is defined on the central node, for each of the nodes on the edges. Pull replication is defined once on the central node and then defined on each of the edges. The idea here is that deploying a new edge node shouldn’t have any impact on the central database. It is pretty common for users to deploy a new location, and you don’t want to have to go and update the central server whenever that happens.

There are a few other aspects of this feature that matter greatly. The most important of them is that it is the edge that initiates the connection to the central node, not the central to the edge. This means that the edge can be behind NAT and you don’t have to worry about tunneling, etc.

The second is about security. Pull replication has its own security measures. When you define a pull replication on the central node, you also set up the certificates that are allowed to utilize it. Those certificates are completely separate from the certificates that are used to access the database in general. So your edge nodes don’t have any access to the database at all; all they can do is set up the channel for the central node to send them the data.

This is going to make edge deployments and topologies a lot easier to manage and work with in the future.
