Ayende @ Rahien

It's a girl

Live playground for RavenDB 3.0

We are getting to the part where we are out of things to do, so we set up a live instance of RavenDB 3.0 and opened it up for the world to play with.

It is available here: http://live-test.ravendb.net

Disclaimer - It may go down at any moment, and the data will routinely be wiped. Anything you put there is public and can be copied and used by other users. This is strictly for playing around with it, nothing more.

Give it a shot, see all the new cool stuff.

Tags:

Published at

Originally posted at

Comments (6)

Is the library open or not?

An interesting challenge came across my desk. Let us assume that we have a set of libraries, which looks like this:

{
    "Name": "The Geek Hangout",
    "OpeningHours": {
        "Sunday": [
            {   "From": "08:00", "To": "13:00"  },
            {   "From": "16:00", "To": "19:00"  }
        ],
        "Monday": [
            {   "From": "09:00", "To": "18:00"  },
            {   "From": "22:00", "To": "23:59"  }
        ],
        "Tuesday": [
            {   "From": "00:00", "To": "04:00"  },
            {   "From": "11:00", "To": "18:00"  }
        ]
    }
}
{
    "Name": "Beer & Books",
    "OpeningHours": {
        "Sunday": [
            {   "From": "16:00", "To": "23:59"  }
        ],
        "Monday": [
            {   "From": "00:00", "To": "02:00"  },
            {   "From": "10:00", "To": "22:00"  }
        ],
        "Tuesday": [
            {   "From": "10:00", "To": "22:00"  }
        ]
    }
}

I only included three days, to make it shorter, but you get the point. You can also see that there are times when the opening hours extend past midnight into the next day.

Now, the question we need to answer is: “find me an open library now”.

How can we answer such a question? If we were using SQL, it would be something like this:

select * from Libraries l
where l.Id in (
         select oh.LibraryId from OpeningHours oh
         where oh.Day = dayofweek(now()) AND oh.From <= now() AND oh.To > now()
)

I’ll leave the performance of such a query to your imagination, but the key point is that we cannot actually express such a computation in RavenDB. We can do range queries, but in this case, it is the current time that we compare to the range of values. So how do we answer such a query?

As usual, by not trying to answer the question as asked. Here is my index:

image

The result of this is an index entry per day, and in each index entry we output the half-hour slots during which the library is open. So if we want to find libraries that are open on Sunday at 4:30 PM, all we have to do is issue the following query:

image

The power of dynamic fields and index time computation means that this is an easy query to make, and even more importantly, this is something that we can answer very efficiently.
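The index in the screenshot isn't reproduced here, but the core trick, expanding each From/To range into discrete half-hour slots at indexing time, can be sketched in plain C#. The class and method names are mine, not the actual index code:

```csharp
using System;
using System.Collections.Generic;

public static class OpeningHoursExpander
{
    // Expands a From/To range (e.g. "08:00" to "13:00") into the discrete
    // half-hour slots it covers, so a query only needs an exact match on
    // the slot containing "now" instead of a range comparison.
    public static IEnumerable<string> ExpandToHalfHours(string from, string to)
    {
        var start = TimeSpan.Parse(from);
        var end = TimeSpan.Parse(to);
        for (var slot = start; slot < end; slot += TimeSpan.FromMinutes(30))
        {
            yield return slot.ToString(@"hh\:mm");
        }
    }
}
```

With slots like these emitted into a dynamic field per day, "open now" becomes an exact-match query on the slot for the current time, e.g. Sunday and "16:30".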


RavenDB 3.0 & Subscription Licenses

We were asked this a few times, so I think it is worth clarifying.

If you have a subscription license to RavenDB, you have automatic access to all versions of RavenDB for as long as your subscription is current. That means that if you purchase a RavenDB 2.x subscription, your license allows you to use RavenDB 3.0 without any issues.

Note that this doesn’t include using RavenFS, which will require an updated license.

If you purchased RavenDB using the one-time code, you’ll need to purchase a new license for RavenDB 3.0.


Fixing a production issue

So we had a problem in our production environment. It showed up like this.

image

The first thing that I did was log into our production server, and look at the logs for errors. This is part of the new operational features that we have, and it was a great time to try it under real world conditions:

image

This gave me a subscription to the log, which gave me the exact error in question:

image

From there, I went to look at the exact line of code causing the problem:

image

Can you see the issue?

We created a dictionary grouping that was case sensitive, and then used it to build a dictionary that was case insensitive. The grouping produced keys that differed only in casing, which then collided in the case-insensitive dictionary. The actual fix was adding the ignore-case comparer to the group by clause, and then pushing to production.
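The shape of the bug and the fix can be reproduced in a few lines. The names here are illustrative, not the actual production code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TableNameIndex
{
    // Buggy version: a case-sensitive GroupBy can yield keys that differ
    // only by casing ("Users" vs "users"); pouring those groups into a
    // case-insensitive dictionary then throws a duplicate-key exception.
    //
    // Fixed version: add the ignore-case comparer to the GroupBy itself,
    // so both casings land in a single group.
    public static Dictionary<string, int> Build(IEnumerable<string> names)
    {
        return names
            .GroupBy(n => n, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(g => g.Key, g => g.Count(), StringComparer.OrdinalIgnoreCase);
    }
}
```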


The RavenDB new website (and the beta discount)!

I’m currently with some of our team in the Oredev conference. So if you are here, seek us out.

In other good news, the new website for RavenDB is now up, and that means that we are no longer selling RavenDB 2.x. We are now selling RavenDB 3.0 only*!

With this, the last hurdle to releasing RavenDB 3.0 is pretty much out the door. We’ll probably wait until we are back from Oredev and recover a bit, but we are on track for a stable release of RavenDB 3.0 next week or the one just after.

In the meantime, go ahead and look at the new website.

* A RavenDB 3.0 license can work for 2.5, though.


Finding the “best” book scenario

This started out as a customer engagement, but it was interesting to see how we solved it.

The problem is searching for books. Let us take the following books as a good example:

image

We have users who want recommendations for books on specific topics, and authors who can pay us to promote their books. You can see how it looks above.

Now, the rules we want to follow for sorting the results are fairly simple. Find all the matching books, and sort them so:

  • If the user searched for a book’s primary tag, and the author paid to promote that tag, show it first.
  • If the user searched for a book’s secondary tag, and the author paid to promote that tag, show it second.
  • If the user searched for a book’s primary tag, and the author didn’t pay to promote that tag, show it third.
  • If the user searched for a book’s secondary tag, and the author didn’t pay to promote that tag, show it fourth.

As it turns out, actually trying to specify a sort order according to these rules is quite hard to do, but we can take advantage of boosting to get what we want.

We define the following index:

from book in docs.Books
select new
{
  PaidPrimaryTag = book.Tags.Where(x=>x.Primary && x.Paid).Select(x=>x.Name),
  PaidSecondaryTag = book.Tags.Where(x=>x.Primary == false && x.Paid).Select(x=>x.Name),
  PrimaryTag = book.Tags.Where(x=>x.Primary).Select(x=>x.Name),
  SecondaryTag = book.Tags.Where(x=>x.Primary == false).Select(x=>x.Name),
}

And now we want to do a few searches: first for NoSQL, and then for RavenDB.

The actual query we issue is:

image

And as you can see, books/3 is shown first, because the author paid for higher ranking. What about when we do that with RavenDB?

image

We have books/3, as before, but books/2 is higher ranked than books/1. Why is that? Because books/2 paid to have a higher ranking on a secondary tag, and it is more important than even a primary tag match according to our query.

This is quite elegant, and it also allows us to take into account relevancy in the search as well.
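The queries are shown only as screenshots; a sketch of their likely shape, using the client’s Boost() support over the index fields above, would be something like this (the index name and boost values are illustrative, what matters is their relative order):

```csharp
// Sketch only: assumes the index from the post is deployed as "Books/Search"
// and an open document session. Higher boost = shown earlier in the results.
var results = session.Advanced.DocumentQuery<Book>("Books/Search")
    .WhereEquals("PaidPrimaryTag", "nosql").Boost(8)
    .OrElse()
    .WhereEquals("PaidSecondaryTag", "nosql").Boost(4)
    .OrElse()
    .WhereEquals("PrimaryTag", "nosql").Boost(2)
    .OrElse()
    .WhereEquals("SecondaryTag", "nosql").Boost(1)
    .ToList();
```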


Bug tracking, when your grandparent isn’t in your family tree

We got a failing test because of some changes we made in RavenDB, and the underlying reason ended up being this code:

image

The problem was that the type that I was expecting did inherit from the right stuff. See this:

image

So something here is very wrong. I tracked this down until I got to:

return RuntimeTypeHandle.CanCastTo(fromType, this);

And there I stopped. I worked around the issue by using IsSubclassOf instead of IsAssignableFrom.

The problem with IsAssignableFrom is that it is a confusing method. The parent type is supposed to be the target, and the type you are checking is the parameter, but it is very easy to forget that and get it backwards. This worked for 99% of cases, because the single assembly we usually use also contained RavenBaseApiController (which obviously can be assigned to itself), so it looked like it worked. IsSubclassOf is much nicer, but you need to understand that it won’t work for interfaces, nor does it check direct equality. In this case, that was exactly what I needed, so it worked.
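A minimal illustration of the argument-order trap and of where the two methods differ. The types are made up for the example, not the actual RavenDB controller hierarchy:

```csharp
using System;

// Hypothetical hierarchy for illustration.
public interface IHandler { }
public class BaseController : IHandler { }
public class UserController : BaseController { }

public static class TypeChecks
{
    public static void Demo()
    {
        // IsAssignableFrom: the *parent* is the target and the child is the
        // argument - the reverse of how most people read it.
        bool a = typeof(BaseController).IsAssignableFrom(typeof(UserController)); // true
        bool b = typeof(UserController).IsAssignableFrom(typeof(BaseController)); // false: easy to get backwards

        // IsSubclassOf reads the "natural" way, but has its own caveats:
        bool c = typeof(UserController).IsSubclassOf(typeof(BaseController));     // true
        bool d = typeof(UserController).IsSubclassOf(typeof(IHandler));           // false: interfaces don't count
        bool e = typeof(UserController).IsSubclassOf(typeof(UserController));     // false: no direct equality
    }
}
```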


Release preps, and my mobile cluster

I just took this picture on my desk. This is a set of machines running a whole set of tests for RavenDB 3.0. On the bottom right, you can see one of our new toys (more below).

image

This new toy is a NUC (i5, 16 GB, 180 GB SSD). We have a couple of those (and will likely purchase more).

They have a very small form factor, and they are pretty cool machines. We got a few of them so we can have an easier time testing distributed systems that are really distributed.

It also has the very nice side effect of letting us carry around a full cluster and “deploy” it in a few minutes.


RavenFS and NServiceBus’ Data Bus

The NServiceBus data bus allows you to send very large messages by putting them on a shared resource and sending the reference to it. An obvious use case for this is using RavenFS. I took a few moments and wrote an implementation for that*.

public class RavenFSDataBus : IDataBus, IDisposable
{
    private readonly FilesStore _filesStore;
    private Timer _timer;
    private readonly object _locker = new object();

    public RavenFSDataBus(string connectionString)
    {
        _filesStore = new FilesStore
        {
            ConnectionStringName = connectionString
        };
    }

    public RavenFSDataBus(FilesStore filesStore)
    {
        _filesStore = filesStore;
    }

    private void RunExpiration(object state)
    {
        bool lockTaken = false;
        try
        {
            Monitor.TryEnter(_locker, ref lockTaken);
            if (lockTaken == false)
                return;

            using (var session = _filesStore.OpenAsyncSession())
            {
                var files = session.Query()
                    .WhereLessThan("Time-To-Be-Received", DateTime.UtcNow.ToString("O"))
                    .OrderBy("Time-To-Be-Received")
                    .ToListAsync();

                files.Wait();

                foreach (var fileHeader in files.Result)
                {
                    session.RegisterFileDeletion(fileHeader);
                }

                session.SaveChangesAsync().Wait();
            }
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(_locker);
        }
    }

    public Stream Get(string key)
    {
        return _filesStore.AsyncFilesCommands.DownloadAsync(key).Result;
    }

    public string Put(Stream stream, TimeSpan timeToBeReceived)
    {
        var key = "/data-bus/" + Guid.NewGuid();
        _filesStore.AsyncFilesCommands.UploadAsync(key, stream, new RavenJObject
        {
            { "Time-To-Be-Received", DateTime.UtcNow.Add(timeToBeReceived).ToString("O") }
        }).Wait();

        return key;
    }

    public void Start()
    {
        _filesStore.Initialize(ensureFileSystemExists: true);
        _timer = new Timer(RunExpiration);
        _timer.Change(TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
    }

    public void Dispose()
    {
        if (_timer != null)
            _timer.Dispose();
        if (_filesStore != null)
            _filesStore.Dispose();
    }
}

* This was written just to check out the idea; it hasn’t been tested very well yet.


Complex nested structures in RavenDB

This started out as a question in the mailing list. Consider the following (highly simplified) model:

   public class Building
   {
       public string Name { get; set; }
       public List<Floor> Floors { get; set; }        
   }
   
   public class Floor
   {
       public int Number { get; set; }
       public List<Apartment> Apartments { get; set; }
   }
 
   public class Apartment
   {
       public string ApartmentNumber { get; set; }
       public int SquareFeet { get; set; }
   }

And here you can see an actual document:

{
    "Name": "Corex's Building - Herzliya",
    "Floors": [
        {
            "Number": 1,
            "Apartments": [
                {
                    "ApartmentNumber": 102,
                    "SquareFeet": 260
                },
                {
                    "ApartmentNumber": 104,
                    "SquareFeet": 260
                },
                {
                    "ApartmentNumber": 107,
                    "SquareFeet": 460
                }
            ]
        },
        {
            "Number": 2,
            "Apartments": [
                {
                    "ApartmentNumber": 201,
                    "SquareFeet": 260
                },
                {
                    "ApartmentNumber": 203,
                    "SquareFeet": 660
                }
            ]
        }
    ]
}

Usually the user works with the Building document. But every now and then, they need to show just a specific apartment.

Normally, I would tell them that they can just load the relevant document and extract the inner information on the client; that is very cheap to do. And that is still the recommendation. But I thought I would use this opportunity to show off some features that don’t get their due exposure.

We then define the following transformer:

image

Note that we can use the Query() method to fetch the query specific parameter from the user. Then we just search the data for the relevant item.
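The transformer itself is shown only as a screenshot; based on the client code below, a plausible reconstruction (the body here is my guess, not the original) would be:

```csharp
// Sketch only: a result transformer that picks a single apartment out of a
// Building, using the Query() method to read the client-supplied parameter.
public class SingleApartment : AbstractTransformerCreationTask<Building>
{
    public SingleApartment()
    {
        TransformResults = buildings =>
            from building in buildings
            from floor in building.Floors
            from apartment in floor.Apartments
            // Query("apartmentNumber") fetches the parameter passed via
            // AddTransformerParameter on the client side.
            where apartment.ApartmentNumber == Query("apartmentNumber")
            select apartment;
    }
}
```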

From the client code, this will look like:

var q = session.Query<Building>()
    .Where(b =>/* some query for building */)
    .TransformWith<SingleApartment, Apartment>()
    .AddTransformerParameter("apartmentNumber", 201)
    .ToList();


var apartment = session.Load<SingleApartment, Apartment>("building/123",
        configuration => configuration.AddTransformerParameter("apartmentNumber", 102));

And that is all there is to it.


RavenDB 3.0–Release Candidate & Go Live

Update: We delayed the RC release by a week or two because we wanted to finish the new website. But I decided that it doesn't make sense not to at least give you the RC bits so you can play with them. You can look at the new website at http://beta.ravendb.net; it should be done in about a week (we are in a holiday period right now, which slows things down).

I’m taking a break from explaining what is new in RavenDB 3.0 because we have more important news. This is still a release candidate, because we want to get more feedback from the field before we can say that this is a final version. The plan is to give the RC a few weeks to mature, and then make a full release. It also comes with a Go Live license, so it is fully supported for production (and much easier to deal with in production).

This release also includes a new website for RavenDB, as well as the updated licensing. Note that we provide a 20% discount for purchases during the RC period. Customers that purchased a RavenDB license since July 1st, 2014 can upgrade (at no cost) to a RavenDB 3.0 license.

You can go to our site to see how things have changed.

image


Working on Voron…

This took a bit less time than I expected, but…

image

And yes, it works. And this is running on Ubuntu.

And no, it isn’t ready.


RavenDB Recognized in DZone’s 2014 Guide to Big Data


I’m excited to tell you that RavenDB is a featured vendor in DZone’s 2014 Guide to Big Data. The guide includes expert opinions and tips, industry knowledge, and data platform and database comparisons. It also gives you good background information about the different NoSQL solutions that are currently available.

Readers can download a free copy of the guide here.


Optimizing event processing

During the RavenDB Days conference, I got a lot of questions from customers. Here is one of them.

There is a migration process that deals with an event sourcing system: we have 10,000,000 commits with 5 – 50 events per commit, and each event results in a property update to an entity.

That gives us roughly 300,000,000 events to process. The trivial way to solve this would be:

foreach (var commit in YieldAllCommits())
{
    using (var session = docStore.OpenSession())
    {
        foreach (var evnt in commit.Events)
        {
            var entity = session.Load<Customer>(evnt.EntityId);
            evnt.Apply(entity);
        }
        session.SaveChanges();
    }
}

That works, but it tends to be slow. The worst case here would result in 310,000,000 requests to the server.

Note that this has the nice property that all the changes in a commit are saved in a single transaction. We’re going to relax this behavior, and use something better here.

We’ll take the implementation of this LRU cache and add an eviction callback, as well as support for iterating over the cached items.

using (var bulk = docStore.BulkInsert(allowUpdates: true))
{
    var cache = new LeastRecentlyUsedCache<string, Customer>(capacity: 10 * 1000);
    cache.OnEvict = c => bulk.Store(c);
    foreach (var commit in YieldAllCommits())
    {
        foreach (var evnt in commit.Events)
        {
            Customer entity;
            if (cache.TryGetValue(evnt.EntityId, out entity) == false)
            {
                using (var session = docStore.OpenSession())
                {
                    entity = session.Load<Customer>(evnt.EntityId);
                    cache.Set(evnt.EntityId, entity);
                }
            }
            evnt.Apply(entity);
        }
    }
    foreach (var kvp in cache)
    {
        bulk.Store(kvp.Value);
    }
}

Here we are using a cache of 10,000 items, on the assumption that events for a given entity tend to cluster together, so most changes to an entity happen at roughly the same time. We take advantage of that to try to load each document only once, and we use bulk insert to flush those changes to the server when needed. This code handles the case where we flushed a document out of the cache and then get events for it again, but the assumption is that this scenario is much rarer.
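The linked LRU implementation isn’t included in the post; a minimal sketch of such a cache with the two additions mentioned, an eviction callback and iteration, might look like this (simplified, not thread-safe, names chosen to match the code above):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public class LeastRecentlyUsedCache<TKey, TValue> : IEnumerable<KeyValuePair<TKey, TValue>>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _entries;
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    // Called with the value being dropped when the cache is over capacity.
    public Action<TValue> OnEvict = delegate { };

    public LeastRecentlyUsedCache(int capacity)
    {
        _capacity = capacity;
        _entries = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>(capacity);
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_entries.TryGetValue(key, out node) == false)
        {
            value = default(TValue);
            return false;
        }
        // Move to the front: most recently used.
        _order.Remove(node);
        _order.AddFirst(node);
        value = node.Value.Value;
        return true;
    }

    public void Set(TKey key, TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_entries.TryGetValue(key, out node))
        {
            _order.Remove(node);
        }
        else if (_entries.Count >= _capacity)
        {
            // Evict the least recently used entry and notify the caller.
            var oldest = _order.Last;
            _order.RemoveLast();
            _entries.Remove(oldest.Value.Key);
            OnEvict(oldest.Value.Value);
        }
        node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
            new KeyValuePair<TKey, TValue>(key, value));
        _entries[key] = node;
        _order.AddFirst(node);
    }

    public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator()
    {
        return _order.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```

The OnEvict hook is what lets the bulk insert pick up documents as they fall out of the cache.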

What is new in RavenDB 3.0: Meta discussion

This is a big release, it is a big deal for us.

It took me 18(!) blog posts to discuss just the items that we wanted highlighted, out of over twelve hundred resolved issues and tens of thousands of commits by a pretty large team.

Even at a rate of two posts a day, this still took two weeks to go through.

We are also working on the new book, multiple events coming up as well as laying down the plans for RavenDB vNext. All of this is very exciting, but for now, I want to ask your opinion. Based on the previous posts in this series, and based on your own initial impressions of RavenDB, what do you think?

This is me signing off, quite tired.


What is new in RavenDB 3.0: Operations–Optimizations

One of the important roles of operations is going to an existing server and checking that everything is fine. This is routine maintenance stuff. It can be things like checking if we have enough disk space for our expected growth, or whether we have too many indexes.

Here is some data from this blog’s production system:

image

Note that we have the squeeze button, for when you need to squeeze every bit of perf out of the system. Let us see what happens when I click it (I used a different production db, because this one was already optimized).

Here is what we get:

image

You can see that RavenDB suggests that we merge indexes, so we can reduce the overall number of indexes we have.

We can also see recommendations for deleting unused indexes in general.

The idea is that we keep track of those stats and allow you to make decisions based on those stats. So you don’t have to go by gut feeling or guesses.


What is new in RavenDB 3.0: Operations–the nitty gritty details

After looking at all the pretty pictures, let us take a look at what we have available behind the covers for ops.

The first such change is abandoning performance counters. In 2.5, we reported a lot of our state through performance counters. However, while they are a standard tool and easy to work with using admin tools, they were also unworkable. We have had multiple occasions where RavenDB would hang because performance counters were corrupted; they require specific permissions, and in general they were a lot of hassle. Instead of relying on performance counters, we are now using the metrics.net package to handle that. This gives us a lot more flexibility. We can now generate a lot more metrics, and we have. All of those are available at the /debug/metrics endpoint, and in the studio as well.

Another major change we did was to consolidate all of the database administration details to a centralized location:

Manage your server gives us all the tools we need to manage the databases on this server.

image

You can manage permissions, backup and restore, watch what is going on and in general do admin style operations.

image

In particular, note that we made it slightly harder to use the system database. The intent now is that the system database is reserved for managing the RavenDB server itself, and all users’ data will reside in their own databases.

You can also start a compaction directly from the studio:

image

 

Compactions are good if you want to ask RavenDB to return some disk space to the OS (by default we reserve it for our own usage).

Restore & backup are possible via the studio, but usually admins want to script those out. We have had Raven.Backup.exe to handle scripted backups for a while now, and you could restore using Raven.Server.exe --restore from the command line.

The problem was that this restored the database to disk, but didn’t wire it to the server, so you had the extra step of doing that yourself. This was useful for restoring system databases, but not so much for named databases.

We now have:

  • Raven.Server.exe  --restore-system-database --restore-source=C:\backups\system\2014-09-17 --restore-destination=C:\Raven\Data\System
  • Raven.Server.exe --restore-database=http://localhost:8080 --restore-source=C:\backups\RealEstateRUs\2014-09-17 --restore-database-name=C:\Raven\Data\Databases\RealEstateRUs

This makes a clear distinction between those operations.

Another related issue is how the Smuggler handles errors. Previously, the full export process had to complete successfully for you to get a valid output. Now we are more robust in the face of errors such as an unreliable network or timeouts. That means that if your network has a tendency to cut connections off at the knees, you will be able to resume (assuming you use incremental export) and still get your data.

We have also made a lot of changes in the Smuggler to make it work more nicely in common deployment scenarios, where request size and time are usually limited. The whole process is more robust for errors now.

Speaking of making things more robust, another area we paid attention to was memory usage over time. Beyond just reducing our memory usage in common scenarios, we have also improved our GC story. We can now invoke explicit GCs when we know that we have created a lot of garbage that needs to be gotten rid of. We’ll also invoke Large Object Heap compaction if needed, utilizing the new features in the .NET framework.
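The LOH compaction mentioned here is the feature added in .NET 4.5.1; triggering it looks roughly like this (the helper name is mine, not RavenDB's actual code):

```csharp
using System;
using System.Runtime;

public static class GcHelper
{
    // Ask the runtime to compact the Large Object Heap on the next blocking
    // collection. The setting is one-shot: it reverts to Default after the
    // compacting collection runs.
    public static void CollectAndCompactLoh()
    {
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}
```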

That is quite enough for a single post, but it still doesn’t cover all the operations changes. I’ll cover the stuff that should make you drool in the next post.
