Ayende @ Rahien

Refunds available at head office

Multi Threading Insanity

Insanity: doing the same thing over and over again and expecting different results.
Albert Einstein

You’ve obviously never done any multi threading work, dude!


Relational searching sucks, don’t try to replicate it

This question on Stack Overflow is a fairly common one. Here is the data:

image

And the question was about how to get RavenDB to create an index that would have the following results:

{
   CarId: "cars/1",
   PersonId: "people/1235",
   UnitId: "units/4321",
   Make: "Toyota",
   Model: "Prius",
   FirstName: "Ayende",
   LastName: "Rahien",
   Address: "Komba 10, Hadera"
}
{
   CarId: "cars/2",
   PersonId: "people/1236",
   UnitId: "units/4321",
   Make: "Toyota",
   Model: "4runner",
   FirstName: "test",
   LastName: "test",
   Address: "blah blah"
}
 
Same unit, but a different person owns a different car.

Now, if you try really hard, you can probably get something like that, but that is the wrong way to go about this in RavenDB.

Instead, we can write the following index:

image

Note that this index is a simple multi map index; it isn’t a multi map/reduce index. There is no need for that.
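
The index itself only appears as an image above, but a minimal sketch of what such a multi map index might look like follows. The Car, Person and Unit classes and their properties are my assumptions, based on the sample documents:

using System.Linq;
using Raven.Client.Indexes;

public class Car { public string Id, PersonId, UnitId, Make, Model; }
public class Person { public string Id, FirstName, LastName; }
public class Unit { public string Id, Address; }

// One map per entity type; every map has to output the same shape.
// This is not a map/reduce index - each entry still points at its own document.
public class Cars_Search : AbstractMultiMapIndexCreationTask
{
    public Cars_Search()
    {
        AddMap<Car>(cars => from car in cars
                            select new { car.Make, car.Model, FirstName = (string)null, LastName = (string)null, Address = (string)null });

        AddMap<Person>(people => from person in people
                                 select new { Make = (string)null, Model = (string)null, person.FirstName, person.LastName, Address = (string)null });

        AddMap<Unit>(units => from unit in units
                              select new { Make = (string)null, Model = (string)null, FirstName = (string)null, LastName = (string)null, unit.Address });
    }
}

A single query against this index can therefore return a car, a person or a unit, and the client decides what follow up query (if any) to issue based on the type of the document it got back.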

This index can return one of three types.

  • Car – just show the car to the user
    image
  • Person – now that we have a person, we have the id, and we can query for that:
    image image
  • Unit – now that we have a unit, we have the id, and we can query for that:
    image  image

This method means that we have to generate an additional query for some cases, but it has a lot of advantages. It is simple. It requires very little work from both client and server and it doesn’t suffer from the usual issues that you run into when you attempt to query over multiple disjointed data sets.

Now, the bad thing about this is that this won’t allow me to query for cross entity values, so it would be hard for me to query for the cars in Hadera owned by Ayende. But in most cases, that isn’t really a requirement. We just want to be able to search by either one of those, not all of them.


As the users put it: Insight into the RavenDB design mindset

I have been blogging for a long time now, and I am quite comfortable in expressing myself, but I was still blown away by this post to the RavenDB mailing list. Mostly because this thread sums up a lot of the core points that led me to design RavenDB the way it is today.

Rasmus Schultz has been able to put a lot of the thought processes behind the RavenDB design into words.

Back when I took my education in systems development, basically, I was taught to build aggregates as large, as complete and as connected as possible. But that was 14 years ago, and I'm starting to think, what they taught me back then was based on the kind of thinking that works for single-user, typically desktop applications, where the entire model was assumed to be in-memory, and therefore had to be traversible, since there was no "engine" you could go back to and ask for another piece of the model.

I can see now why that doesn't make sense for concurrent applications with large models persisted in the background. It just never occurred to me, and looked extremely wrong to me, because that's not how I was taught to think.

Yes. That is the exact problem that I see people run into over and over. They create highly connected object models, without regard to how they are persisted, and then they run into problems using them. And the assumption that everything is equally cheap to access, as if it were all in memory, is hugely expensive.

Furthermore, I'm starting to see why NHibernate doesn't really work well for me. So here's the main thing that's starting to dawn on me, and please confirm or correct me on this:

It seems that the idea behind NH is to configure the expected data-access strategies for the model itself. You write configuration-files that define the expected data-access strategies, but potentially, you're doing this based on assumptions about how you might access the data in this or that scenario.

The problem I'm starting to see, is that you're defining these assumptions statically - and while it is possible to deviate from these defined patterns, it's easy to think that once you've defined your access strategies, you're "done", and the model "just works" and you can focus on writing business logic, which too frequently turns out to be untrue in practice.

To be fair, you can specify those things in place, with full context. And I have been recommending to do just that for years, but yeah, that is a very common issue.

This contrasts with RavenDB, where you formally define the access strategies for specific scenarios - rather than for the model itself. And of course the same access strategy may work in different scenarios, but you're not tempted to assume that a single access strategy is going to work for all scenarios.

You're encouraged to think and make choices about what you're accessing and updating in each scenario, rather than just defining one overriding strategy and charging ahead blindly on the assumption that it'll always just work, or always perform well, or always make updates that are sufficiently small to not cause concurrency problems.

Am I catching on?

Precisely.


Lazy Man’s comprehensive search with RavenDB

RavenDB supports many types of searches, and in this case, I want to show something that belongs in the cool pile, but also in the “you probably don’t really want to do this” pile.

First, let me explain why this is cool, then we will talk about why you probably don’t want to do that (and finally, about scenarios where you actually do want this).

Here is an index that will allow you to search over all of the values of all of the properties in the user entity:

public class Users_AllProperties : AbstractIndexCreationTask<User, Users_AllProperties.Result>
{
    public class Result
    {
        public string Query { get; set; }
    }
    public Users_AllProperties()
    {
        Map = users =>
              from user in users
              select new
              {
                  Query = AsDocument(user).Select(x => x.Value)
              };
        Index(x=>x.Query, FieldIndexing.Analyzed);
    }
}

This can then be easily queried for things like:

s.Query<Users_AllProperties.Result, Users_AllProperties>()
    .Where(x=>x.Query == "Ayende") // search first name
    .As<User>()
    .ToList()


s.Query<Users_AllProperties.Result, Users_AllProperties>()
    .Where(x=>x.Query == "Rahien") // search last name
    .As<User>()
    .ToList()

The fun part is that we are actually indexing all of the property values into the Query field, which then allows us to easily query for any one of those values without any trouble.

The problem with that is that this is also quite wasteful and likely to lead to bad results down the road. Why?

For two major reasons. First, because this is going to index everything, which would result in a larger index, more IO, etc. The second reason is that it is going to lead to bad results, because you are now searching over everything, including the “last login date” and the “password hint”. That means that your search result relevancy is going to be poor.

So why would you ever want to do something like that if it is bad?

Well, there are a few scenarios where this is applicable. You need to do that if you want to be able to search over completely / mostly dynamic entities. And you want to do that if you have entities which are specifically generated for the purpose of being searched.

Both cases are fairly rare (the first case is usually covered by dynamic indexing, anyway), so I wanted to point this out, and also point out that it is usually far better to just specify the fields that actually matter to you.


The RavenDB indexing process: Optimization–Tuning? Why, we have auto tuning

The final aspect of RavenDB’s x7 jump in indexing performance is the fact that we made it freakishly smart.

During standard operation, most indexes only update when new information comes in, and we are usually talking about a small number of documents for every indexing run. The problem is what happens when you have a sudden outpouring of documents into RavenDB? For example, during a nightly ETL batch, or just when you suddenly have a flood of users doing write operations.

The problem here is that we actually have to balance a lot of variables at the same time:

  • The number of documents that we have to index*.
  • The current memory utilization**.
  • How many cores do I have available to do the indexing work?
  • How much time do I have to do this?

Basically, the idea goes like this: if I have a small batch size, I am able to index more quickly, ensuring that we have fresher results. If I have a big batch size, I am able to index more documents, and my overall indexing time goes down.

There is a non trivial cost associated with every indexing run, so reducing the number of indexing runs is good, but the more documents I shove into a single run, the more memory I will use, and the more time it will take before the results are visible to the users.

* It is non trivial because there is no easy way for us to even know how many documents we have left to index (to find out is costly).

** Memory utilization is hard to figure out in a managed world. I don’t actually have a way to know how much memory I am using for indexing and how much for other stuff, and there is no real way to say “free the memory from the last indexing run”, or even estimate how much memory that took.

What we have decided on doing is to start from a very small (low hundreds) indexing batch size, and see what is actually going on live. If we see that we have more documents to index than the current batch size, we will slowly double the size of the batch. Slowly, because bigger batches require more memory, and we also have to take into account current utilization, memory usage, and a bunch of other factors as well. We also go the other way around, reducing the indexing batch size on demand based on how much work we have to do right now.

We also provide an upper limit, because at some point it makes more sense to just do a big batch and make the indexing results visible than to try to do everything all at once.
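
To make the idea concrete, here is a rough sketch of that doubling / shrinking behavior. The real heuristics also look at memory utilization and the number of cores, so treat the names and numbers here as illustrative only:

using System;

public class IndexBatchSizeTuner
{
    private readonly int minBatchSize;
    private readonly int maxBatchSize;

    public int CurrentBatchSize { get; private set; }

    public IndexBatchSizeTuner(int minBatchSize = 128, int maxBatchSize = 16 * 1024)
    {
        this.minBatchSize = minBatchSize;
        this.maxBatchSize = maxBatchSize;
        CurrentBatchSize = minBatchSize;
    }

    public void RecordIndexingRun(int documentsIndexed, bool moreDocumentsPending)
    {
        if (moreDocumentsPending && documentsIndexed >= CurrentBatchSize)
        {
            // We filled the batch and there is still work waiting - grow, up to the cap.
            CurrentBatchSize = Math.Min(CurrentBatchSize * 2, maxBatchSize);
        }
        else if (documentsIndexed < CurrentBatchSize / 2)
        {
            // The flood is over - fall back toward small batches and fresh results.
            CurrentBatchSize = Math.Max(CurrentBatchSize / 2, minBatchSize);
        }
    }
}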

The fun part in all of that is that once we have found the appropriate algorithm for this, it means that RavenDB will automatically adjust itself based on real production load. If you have a low update rate, it will favor small indexing batches and immediately execute indexing on the new documents. However, if you suddenly have a spike in traffic and the update rate goes up, RavenDB will adjust the indexing batch size so it will be able to keep up with your rate.

We have done some (read, a huge amount of) testing with regards to this new optimization, and it turns out that under a slow update frequency, we are seeing an average of 15 – 25 ms between a document update and it showing up in the indexes. That is pretty good, but what happens when we have data just pouring in?

We tested this with 3 million documents and 3 indexes. And it turns out that under this scenario, where we are trying to shove data into RavenDB as fast as it can accept it, we do see an increase in indexing latency. Under those conditions, latency rose all the way to 1.5 seconds.

This is actually something that I am very happy about, because we were able to automatically adjust to the changing conditions, and were still able to index things at a reasonable rate (note that under this scenario, the batch size was usually 8 – 16 thousand documents, vs. the 128 – 256 that it is normally).

Because we were able to adjust the batch size on the fly, we could handle sustained writes at this rate with no interruption in service and no real need to think about this from the user’s perspective. Exactly what the RavenDB philosophy calls for.

The RavenDB indexing process: Optimization–Getting documents from disk

As I noted in my previous post, we have done major optimizations for RavenDB. One of the areas where we improved the performance was reading the documents from the disk for indexing.

In Pseudo Code, it looks like this:

while database_is_running:
  stale = find_stale_indexes()
  lastIndexedEtag = find_last_indexed_etag(stale)
  docs_to_index = get_documents_since(lastIndexedEtag, batch_size)
  

As it turned out, we had a major optimization option here, because of the way the data is actually structured on disk. In simple terms, we have an on disk index that lists the documents in the order in which they were updated, and then we have the actual documents themselves, which may be anywhere on the disk.

Instead of loading the documents in the order in which they were modified, we decided to try something different. We first query the index for the information we need to find each document on disk, then we sort those entries based on the optimal access pattern, to reduce disk movement and ensure reads that are as sequential as possible. Then we take the results in memory and sort them by their last update time again.
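
As a sketch, assuming the primary index can hand us the position of each document on disk (DocumentPointer, Document and the reader delegate are made up names for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

public class DocumentPointer { public long PositionOnDisk; public Guid Etag; }
public class Document { public Guid Etag; public string Data; }

public static class IndexingReads
{
    // Read in disk order to get (mostly) sequential IO, then hand the documents
    // back to the indexes in the order in which they were updated.
    public static IEnumerable<Document> LoadForIndexing(
        IEnumerable<DocumentPointer> pointers,
        Func<long, Document> readDocumentAt)
    {
        var loaded = new List<Document>();
        foreach (var pointer in pointers.OrderBy(p => p.PositionOnDisk))
            loaded.Add(readDocumentAt(pointer.PositionOnDisk));

        return loaded.OrderBy(doc => doc.Etag);
    }
}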

This seems to be a perfectly obvious thing to do, assuming that you are aware of such things, but it is actually something that is very easy not to notice. The end result is quite promising, and it contributed to the 7+ times improvement in perf that we got for indexing costs.

But surprisingly, it wasn’t the major factor; I’ll discuss a huge perf boost in this area tomorrow.

RavenDB 1.2 work has started (and a road map)

Two years after the launch of RavenDB 1.0 (preceded by several years of working on 1.0, of course), we are now starting to actually plan and work on RavenDB 1.2.

You can read the planned roadmap here. RavenDB 1.2 is a big release, for several reasons.

  • We are going to break RavenDB into several distinct editions, from RavenDB Basic, suitable for small apps, to RavenDB Standard, which is the current version, and all the way up to RavenDB Enterprise, which is going to get some awesome features (Windows clustering, index encryption, etc). We are also going to have plans for ISVs, which will allow them royalty free distribution of RavenDB to their customers.
  • We are going to update our pricing structure. You’ll hear more about this when we have finalized pricing.

Because I am well aware of the possible questions, I suggest reading the thread discussing both editions and pricing in the mailing list:

I will repeat again that we haven’t yet made final pricing decisions, so don’t take the numbers thrown around in those threads as gospel, but they are pretty close to what we will have.

This is the boring commercial stuff, but I am much more interested in talking about the new RavenDB roadmap. In fact, you can actually read all of our plans here. The major components for RavenDB 1.2 are:

  • Better integration with C# 5.0 – much better support for async in general, async replication, async sharding, etc.
  • Enterprise level features – Windows Clustering, Full Database Encryption, Indexing Priorities, Compression, etc.
  • Installer and server console - so you can manage your RavenDB installation more easily.
  • Better Admin support – scheduled backups, S3 Backups, live restores, etc.
  • Internalizing commonly used bundles – you shouldn’t have to take additional steps to make use of common functionality.

There is other stuff, of course, but those are the main pillars.

As mentioned, you can read all of that yourself, and we would welcome feedback on our current plans and suggestions for the new version.


The RavenDB indexing process: Optimization–De-parallelizing work

One of the major dangers in doing perf work is that you have a scenario, and you optimize the hell out of that scenario. It is actually pretty easy to do without even noticing it. The problem is that when you do things like that, you are likely to be optimizing a single scenario to perform really well, but you are hurting the overall system performance.

In this example, we have moved heaven and earth to make sure that we are indexing things as fast as possible, and we tested with 3 indexes, on a 4 core machine. As it turned out, we actually had improved things, for that particular scenario.

Using the same test case on a single core machine was suddenly far more heavyweight, because we were pushing a lot of work at the machine at the same time, more than it could process. The end result was that it actually got there, but much more slowly than if we had run things sequentially.

Of course, I am giving you the outliers, but those are good indicators of what we found out. Initially, we thought that we could resolve this by using the TPL’s MaxDegreeOfParallelism, but it turned out to be more complex than that. We have IO bound and CPU bound tasks that we need to execute, and trying to run the IO heavy tasks that way would actually cause issues in this scenario.

We had to manually throttle things ourselves, both to ensure a limited amount of parallel work, and because we have a lot more information about the actual tasks than the TPL has. We can schedule them in a way that is far more efficient, because we can tell what is actually going on.
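
A minimal sketch of that kind of manual throttling, using a semaphore to cap how many tasks run at once (the real scheduler also distinguishes IO bound from CPU bound work, which this doesn’t show):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledRunner
{
    // Run the work items with at most maxParallelism of them in flight at any time.
    public static void Run(IEnumerable<Action> workItems, int maxParallelism)
    {
        using (var throttle = new SemaphoreSlim(maxParallelism))
        {
            var tasks = new List<Task>();
            foreach (var work in workItems)
            {
                throttle.Wait(); // block until a slot frees up
                tasks.Add(Task.Factory.StartNew(work).ContinueWith(t =>
                {
                    throttle.Release();
                    if (t.Exception != null)
                        throw t.Exception; // surface failures to Task.WaitAll
                }));
            }
            Task.WaitAll(tasks.ToArray());
        }
    }
}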

The end result is that we are actually using less parallelism, overall, but in a more efficient manner.

In my next post, I’ll discuss the auto batch tuning support, which allows us to do some really amazing things from the point of view of system performance.

The RavenDB indexing process: Optimization–Parallelizing work

One of the things that we are doing during the index process for RavenDB is applying triggers and deciding what, if and how a document will be indexed. The actual process is a bit more involved, because we have to do additional things (like figure out which indexes have already indexed those particular documents).

At any rate, the interesting thing is that this is a process which is pretty basic:

for doc in docs:
    matchingIndexes = FindIndexesFor(doc)
    if matchingIndexes.Count > 0:
       doc = ExecuteTriggers(doc) 
       if doc != null:
          yield doc

The interesting thing about this is that this is a set of operations that only works on a single document at a time, and the result is the modified documents.

We were able to gain a significant perf boost by simply moving to a Parallel.ForEach call. This seems simple enough, right? Parallelize the work, get better results.
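
In C# terms, the change is more or less the one below; countMatchingIndexes and executeTriggers stand in for the real implementation:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class DocumentFiltering
{
    public static IEnumerable<TDoc> FilterForIndexing<TDoc>(
        IEnumerable<TDoc> docs,
        Func<TDoc, int> countMatchingIndexes,
        Func<TDoc, TDoc> executeTriggers) where TDoc : class
    {
        var results = new ConcurrentBag<TDoc>();

        // Each document is processed independently, so the work parallelizes trivially.
        Parallel.ForEach(docs, doc =>
        {
            if (countMatchingIndexes(doc) == 0)
                return;

            var modified = executeTriggers(doc);
            if (modified != null)
                results.Add(modified);
        });

        return results;
    }
}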

Except that there are issues with this as well, which I’ll touch on in my next post.

The RavenDB indexing process: Optimization

The actual process done by RavenDB to index documents is a fairly complex one. In order to understand what exactly happens, I decided to break it apart into pseudo code.

It looks something like this:

while database_is_running:
  stale = find_stale_indexes()
  lastIndexedEtag = find_last_indexed_etag(stale)
  docs_to_index = get_documents_since(lastIndexedEtag, batch_size)
  
  filtered_docs = execute_read_filters(docs_to_index)
  
  indexing_work = []
  
  for index in stale:
    
    index_docs = select_matching_docs(index, filtered_docs)
    
    if index_docs.empty:
      set_indexed(index, lastIndexedEtag)
    else:
      indexing_work.add(index, index_docs)
      
  for work in indexing_work:
  
     work.index(work.index_docs)

And now let me show you the areas in which we did some perf work, each of which I’ll cover in its own post:

while database_is_running:
  stale = find_stale_indexes()
  lastIndexedEtag = find_last_indexed_etag(stale)
  docs_to_index = get_documents_since(lastIndexedEtag, batch_size)   # reading the documents from disk, with an auto tuned batch_size
  
  filtered_docs = execute_read_filters(docs_to_index)                # per document trigger work, now parallelized
  
  indexing_work = []
  
  for index in stale:
    
    index_docs = select_matching_docs(index, filtered_docs)
    
    if index_docs.empty:
      set_indexed(index, lastIndexedEtag)
    else:
      indexing_work.add(index, index_docs)
      
  for work in indexing_work:
  
     work.index(work.index_docs)                                     # scheduling and throttling of the actual indexing

All of which gives us a major boost in overall system performance. I’ll discuss each part of that work in detail, don’t worry.

RavenDB & FreeDB: An optimization story

So, as I noted in a previous post, we loaded RavenDB with all of the music CDs in existence (or nearly so). A total of 3.1 million disks and 43 million tracks. And we had some performance problems. But we got over them, and I am proud to give you the results:

                                        Old                            New
Importing Data                          Couple of hours                42 minutes
Raven/DocumentsByEntityName             An hour and a half             23.5 minutes
Simple index over disks                 Two hours and twenty minutes   24.1 minutes
Full text index over disks and tracks   More than seven hours          37.5 minutes

Tests were run on the same machine, and the database HD was a single 300 GB 7200 RPM drive.

I then decided to take this one step further, and check what would happen when we already had the indexes. So we created three indexes: one Raven/DocumentsByEntityName, one for doing simple querying over disks, and one for full text searches on top of all disks and tracks.

With 3.1 million documents streaming in, and three indexes (at least one of them decidedly non trivial), the import process took an hour and five minutes. Even more impressive, the indexing process was fast enough to keep up with the incoming data, so we only had about 1.5 seconds latency between inserting a document and having it indexed. (Note that we usually see much lower indexing latencies, usually in the low tens of milliseconds, when we aren’t being bombarded with documents).

Next up, and something that we did not optimize, was figuring out how costly it would be to query this. I decided to go for the big guns, and tested querying the full text search index.

Testing “Query:Adele” returned a result (from a cold booted database) in less than 0.8 seconds. But remember, this is after a cold boot. So let us see what happens when we issue a few other queries.

  • Query:Pearl - 0.65 seconds
  • Query:Abba – 0.67 seconds
  • Query:Queen – 0.56 seconds
  • Query:Smith – 0.55 seconds
  • Query:James – 0.77 seconds

Note that I am querying radically different values, so I force different parts of the index to load.

Querying for “Query:Adele” again? 32 milliseconds.

Let us see a few more:

  • Query:Adams – 0.55 seconds
  • Query:Abrahams – 0.6 seconds
  • Query:Queen – 85 milliseconds
  • Query:James – 0.1 seconds

Now here are a few things that you might want to consider:

  1. We have done no warm up of the database, just started it up from a cold boot and started querying.
  2. I actually think that we can do better than this, and this is likely to be the next place we are going to focus our optimization efforts.
  3. We are doing a query here over 3.1 million documents, using full text search.
  4. There is no caching involved in the speed increases.

More goodies are coming in.


RavenDB & FreeDB: An optimization opportunity

Update: The numbers in this post are not relevant. I include them here solely so you would have a frame of reference. We have done a lot of optimization work, and the numbers are orders of magnitude faster now. See the next post for details.

The purpose of this post is to set up a scenario, see how RavenDB does with it, and then optimize the parts that we don’t like. This post is scheduled to go out about two months after it was written, so anything that you see here is likely already fixed. In future posts, I’ll talk about the optimizations, what we did, and what the results were.

System note: I ran those tests on a year old desktop, with all the database activity happening on a single 7200 RPM 300GB disk, with 8 GB of RAM. Please don’t get too hung up on the actual numbers; I include them for reference, but real hardware on a production system should kick this drastically higher. Another thing to remember is that this was an active system; while all of those operations were running, I was actively working and developing on the machine. The main point is to give us some sort of a metric about where we are, and to see whether we like this or not.

We keep looking at additional things that we can do with RavenDB, and having a large amount of information to test things with is awesome. Having non fake data is even awesomer, because fake data is predictable data, while real data tends to be much more… interesting.

That is why I decided to load the entire freedb database into RavenDB and see what is happening.

What is freedb?

freedb is a database to look up CD information using the internet. This is done by a client (a freedb aware application) which calculates a (nearly) unique disc ID for a CD in your CD-Rom and then queries the database. As a result, the client displays the artist, CD-title, tracklist and some additional info.

The nice thing about freedb is that you can download their data* and make use of it yourself.

* The not so nice thing is that the data is in free form text format. I wrote a parser for it if you really want to use it, which you can find here: https://github.com/ayende/XmcdParser

 

So I decided to push all of this data into RavenDB. The import process took a couple of hours (didn’t actually measure, so I am not sure exactly how much), and we ended up with a RavenDB database with 3,133,903 documents. Memory usage during the import process was ~100 MB – 150 MB (no indexes were present).

The actual size in RavenDB is 3.59 GB with 3.69 GB reserved on the file system.

Starting the database from cold boot takes about 4 seconds.

This is what the document looks like:

image

A full backup of the database took about 3 minutes, with all of the time dedicated to pure I/O.

Doing an export using smuggler (on the local machine, 128 document batches) took about 18 minutes and resulted in an 803MB file (not surprising, smuggler output is a compressed file).

Note that we created this in a completely empty database, so the next step was to actually create an index and see how the database behaves. We created the default Raven/DocumentsByEntityName index, and it took 5,870 seconds, so just over an hour and a half. For what it’s worth, this resulted in an on disk index with a size of 125MB.

I then tried a much more complex index:

image

Just to give you some idea, this index gives you full text search support over just about every music CD that was ever made. To be frank, this index scares me, because it means that we have to have an index entry for every single track in the world.
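
The index itself is only shown as an image above, but based on the description it is roughly along these lines. The Disk class and its properties are my guess at the document shape:

using System.Linq;
using Raven.Abstractions.Indexing;
using Raven.Client.Indexes;

public class Disk
{
    public string Artist { get; set; }
    public string Title { get; set; }
    public string[] Tracks { get; set; }
}

// Every artist name, disk title and track title becomes an analyzed, indexed value,
// which is how 3.1 million documents turn into about 52 million indexed values.
public class Disks_Search : AbstractIndexCreationTask<Disk, Disks_Search.Result>
{
    public class Result
    {
        public object[] Query { get; set; }
    }

    public Disks_Search()
    {
        Map = disks => from disk in disks
                       select new
                       {
                           Query = new object[] { disk.Artist, disk.Title, disk.Tracks }
                       };

        Index(x => x.Query, FieldIndexing.Analyzed);
    }
}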

After indexing was completed, we ended up with a 700 MB on disk presence. Indexing took about 7 hours to complete. That is a lot, but remember what we are dealing with: we indexed 3.1 million documents, but we actually indexed 52,561,894 values (remember, we index each and every track). The interesting bit is that while it took a lot of CPU (full text indexing usually does), memory usage was relatively low; it peaked at about 300 MB and was usually around 180 MB.

Searching over this index is not as fast as I would like, taking about a second to complete. Then again, the results are quite impressive:

image

Well, given that this is the equivalent of 52 million records (in this case, literally records), and we are performing a full text search, that is quite nice.

Let us see what happens when do something a little simpler, shall we?

image

In this case, we are only indexing 3.1 million documents, and we don’t do full text searches. This index took 2.3 hours to run.

Queries on that run at a much more satisfactory rate, starting out at 75 ms and dropping to 5 ms very quickly.


re: Kiip’s MongoDB experience

We got asked several times to respond to this post, about the reason Kiip moved away from MongoDB:

image

On the surface, RavenDB and MongoDB are really similar. Looking at the Good parts of the Kiip post, we have schemalessness, easy replication, a rich query language, and both can be accessed from multiple languages.

But under the hood, RavenDB operates in a completely different way than MongoDB does. The vast majority of the issues that Kiip ran into are actually low level (really low level, in some cases) issues that shouldn’t really be visible to the user.

Non-counting B-Trees

The fact that MongoDB uses non counting B-Trees? The only reason that the user cares about that is that it actually impacts performance, but the Kiip blog mentions a bunch of other issues related to that.

In RavenDB, we use Lucene as the indexing format, and we really don’t care about the actual format of the indexes. We natively support Count() and limit / skip, because we feel that those are actually core parts of what most users need. In fact, our API allows us to get the total count of results of a paged query as a by product of actually making the query. There isn’t any additional cost for doing this.
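
For example, with the RavenDB client the total count of a paged query comes back through the query statistics on the same request. A sketch, assuming a User class:

using System;
using System.Linq;
using Raven.Client;
using Raven.Client.Linq;

public class User
{
    public string Name { get; set; }
}

public static class PagingSample
{
    public static void ShowPage(IDocumentSession session, int page, int pageSize)
    {
        RavenQueryStatistics stats;

        var users = session.Query<User>()
            .Statistics(out stats)      // the total count piggybacks on the query itself
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();

        var totalPages = (stats.TotalResults + pageSize - 1) / pageSize;
        Console.WriteLine("Showing {0} users out of {1} ({2} pages)",
                          users.Count, stats.TotalResults, totalPages);
    }
}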

Poor Memory Management

MongoDB relies on the OS to do the memory management, by letting the OS memory manager do its work. That is actually quite a smart decision, because I can guarantee that more work has gone into optimizing the OS memory manager than could have been invested by the MongoDB project. But that is just part of the work.

In RavenDB, we are actually a managed application, so we don’t have direct control over memory. That doesn’t mean that we don’t actually manage it. We have several layers of caching in place, exactly because we know more than the OS about our own usage scenarios. In many cases, even if you are making a totally new request, it would never hit the disk, because we are keeping track of hot data and making sure that it resides in memory. This applies to both indexes and documents, mind. And during the indexing process we are very careful about memory management.

Sure, the OS memory manager is more optimized, but the database knows what is going on, and can predict its own usage patterns. That is how RavenDB does a lot of magic relating to auto configuration.

Uncompressed field names

In MongoDB, it is considered good practice to shorten field names for space optimization. But MongoDB doesn’t do it for you automatically.

RavenDB doesn’t compress field names, but at the same time, it isn’t a good practice to do so. In fact, I think that this is a horrible little mess. There are a lot of arguments against compressing field names, not the least of which is that it makes it pretty hard to figure out what it is that you are actually trying to do. Looking at the raw data, something that is done fairly frequently when debugging and troubleshooting, becomes harder to work with and manage:

{
  "a2": "nathan ",
  "d3": "",
  "a2": "2012-05-17T00:00:00.0000000",
  "h3": "2012-04-15T00:00:00.0000000",
  "r2": "archanid@sample.com",
  "o2": "8169cd4a-babf-4015-a3c7-4d503642e021",
  "o1": "products/NHProf"
}

Anyone want to figure out what this document is about? And at least in this one, the data itself tells you a lot about the actual content.

There are far better alternatives in place. In RavenDB, we do full response / request compression, and we allow document compression on disk as well. If we were ever to get to the point where this would be a serious problem (and so far, it isn’t, even on large data sets), it would be less than a week of work to implement string interning inside RavenDB, so we would use the same string references for field values.

Global write lock

MongoDB (as of the current version at the time of writing: 2.0), has a process-wide write lock. … At this point, all other operations including reads are blocked because of the write lock.

Now, to be fair, RavenDB also has a write lock, but it isn’t nearly as bad as it is in MongoDB. The RavenDB write lock is actually for… writes, and it doesn’t interfere with either reads or indexing. It is on the list of things to remove, but the crazy part is that so far, and we have really demanding users, no one cares. The reason that no one cares is that this is a really small lock, and it only affects writes; it is not a Stop the World type of thing.

Safe off by default

I am just going to let Kiip’s words stand for themselves (emphasis mine):

This is a crazy default, although useful for benchmarks. As a general analogy: it’s like a car manufacturer shipping a car with air bags off, then shrugging and saying “you could’ve turned it on” when something goes wrong.

RavenDB’s entire philosophy is built around Safe by Default. That is the only thing that really makes sense, because otherwise… Well… here is what happened at Kiip:

We lost a sizable amount of data at Kiip for some time before realizing what was happening and using safe saves where they made sense (user accounts, billing, etc.).

Offline table compaction

Every now and then, you need to take down MongoDB and let it compact its on disk data. This is another Stop the World operation, and the only way to keep up when you do so is to have a hot standby ready.

RavenDB does all maintenance tasks while the server is up and serving requests. You don’t need any downtime just because RavenDB needs to arrange some data on disk; we take care of that live, and with no interruption in service.

Secondaries do not keep hot data in RAM

As Kiip explains it:

The primary doesn’t relay queries to secondary servers, preventing secondaries from maintaining hot data in memory. This severely hinders the “hot-standby” feature of replica sets, since the moment the primary fails and switches to a secondary, all the hot data must be once again faulted into memory.

RavenDB doesn’t do so either, but for a drastically different reason. As I mentioned earlier, the way RavenDB works is quite different. When you are running a hot standby node, it will get the new data from the server and index it. We keep the index open, so for a lot of the data, it is already going to be in memory. For the rest, as I mentioned, we have several layers of caches that would help prevent needing to page gigabytes of data into memory.

Conclusion

As an utterly unbiased observer, I can say that RavenDB rocks.

What we are actually seeing here is that RavenDB puts different emphasis on different things. I really care about making the common application level scenarios easy and nice to work with. And I have spent enough time supporting production level apps that I tried very hard to make sure that RavenDB can take care of itself in most scenarios without any hand holding.


Explain this code: Answers

The reason this code is useful?

image

Because it allows you to write this sort of code:

class Program
{
    private static Collection<int> nums;

    static void Main(string[] args)
    {
        nums.Add(1);
        Console.WriteLine(nums.Count);
    }
}

I was aiming for that, and still this code strikes me as wrong.


Beware of big Task Parallel Library Operations

Take a look at the following code:

class Program
{
    static void Main()
    {
        var list = Enumerable.Range(0, 10 * 1000).ToList();

        var task = ProcessList(list, 0);


        Console.WriteLine(task.Result);

    }

    private static Task<int> ProcessList(List<int> list, int pos, int acc = 0)
    {
        if (pos >= list.Count)
        {
            var tcs = new TaskCompletionSource<int>();
            tcs.TrySetResult(acc);
            return tcs.Task;
        }

        return Task.Factory.StartNew(() => list[pos] + acc)
            .ContinueWith(task => ProcessList(list, pos + 1, task.Result))
            .Unwrap();
    }
}

This is a fairly standard piece of code, which does a “complex” async process and then moves on. It is important in this case to do the operations in the order they were given, and the real code is actually doing something that needs to be async (go and fetch some data from a remote server).

It is probably easier to figure out what is going on when you look at the C# 5.0 code:

class Program
{
    static void Main()
    {
        var list = Enumerable.Range(0, 10 * 1000).ToList();

        var task = ProcessList(list, 0);

        Console.WriteLine(task.Result);

    }

    private async static Task<int> ProcessList(List<int> list, int pos, int acc = 0)
    {
        if (pos >= list.Count)
        {
            return acc;
        }

        var result = await Task.Factory.StartNew(() => list[pos] + acc);

        return await ProcessList(list, pos + 1, result);
    }
}

I played with user mode scheduling in .NET a few times in the past, and one of the things that I was never able to resolve properly was the issue of the stack depth. I hoped that the TPL would resolve it, but it appears that it didn’t. Both code samples here will throw StackOverflowException when run.

It sucks, quite frankly. I understand why this is done this way, but I am quite annoyed by this. I expected this to be solved somehow. Using C# 5.0, I know how to solve this:

class Program
{
    static void Main()
    {
        var list = Enumerable.Range(0, 10 * 1000).ToList();

        var task = ProcessList(list);

        Console.WriteLine(task.Result);

    }

    private async static Task<int> ProcessList(List<int> list)
    {
        var acc = 0;
        foreach (var i in list)
        {
            var currentAcc = acc;
            acc += await Task.Factory.StartNew(() => i + currentAcc);
        }
        return acc;
    }
}

The major problem is that I am not sure how to translate this code to C# 4.0. Any ideas?


Simplicity is prerequisite for reliability

I just learned that the title of this post is a(n apparently) well known quote from Edsger W. Dijkstra.

I like it, and I fully agree. And along with the following, it makes up my design philosophy:


Security decisions: Separate Operations & Queries

The question came up several times in the mailing list with regards to how the RavenDB Authorization Bundle operates, and I think it deserves a broader discussion.

Let us imagine a system where we have contracts, which may be in several states:

  • Mine – Contracts that an employee signed.
  • Done – Standard users can view, Lawyers assigned to the company can sign.
  • Draft – Lawyers can view / edit, Partners can approve.
  • Proposed – Lawyers can create / edit, but only the lawyer that created it can view it, Partners can accept.

So far, fairly simple, right? Except the pure hell that you are going to get into when you are trying to show the users all of the contracts that they can see, sorted by edit date and in the NDA category.

Why am I being so negative here? Well, let us look at what we are going to have to do in the most trivial of cases:

image

In this sort of system, we are going to have to show the user all of the contracts that they are allowed to see, and show them some indication of what operations they can do on each.

The problem is that generating this sort of view is expensive, especially when you have a large amount of data to work through. More interestingly, from a UX perspective it also doesn’t really work that well. Most users would want a better separation of the things that they can do, probably something like this:

image

This allows us to do a first level filtering on the data itself, rather than try to apply security rules to it.

In the first case, we need to get all the contracts that we are allowed to see. The security rules above are really simple, mind. But trying to translate them into an efficient query is going to be pretty hard, both in terms of the code required and the cost to actually perform the query on the server. There are other things that are involved as well, such as paging and sorting in such an environment. I have created several such systems in the past, Rhino Security is probably the most well known of them, and it gets really hard to optimize things and make sure that everything works when you start getting more complex security rules (especially when you have a user editable security system, which is a common request).

The second case is cheaper because we can limit the choices that we see in the query itself. We may still need to apply security concerns, but those go through the query directly, rather than a security sub system. This kind of change usually forces people to be more explicit in what they want, and it results in a system that tends to be simpler. The security rules aren’t just something arbitrary that can be defined; they are actually visible on the screen (My Contracts, Drafts, etc). Changing them isn’t something that is done on an administrator’s whim.
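
In code, the difference is roughly between asking a security subsystem “what can this user see?” and letting each screen query by the state it represents. A sketch, with the Contract class and its properties being my assumptions:

using System;
using System.Collections.Generic;
using System.Linq;
using Raven.Client;

public class Contract
{
    public string Id { get; set; }
    public string Status { get; set; }           // "Draft", "Proposed", "Done", ...
    public string CreatedByLawyerId { get; set; }
    public DateTime LastEditedAt { get; set; }
}

public static class ContractScreens
{
    // The "Drafts" tab: the rule "lawyers can view / edit drafts" is simply the query.
    public static List<Contract> Drafts(IDocumentSession session, int page, int pageSize)
    {
        return session.Query<Contract>()
            .Where(c => c.Status == "Draft")
            .OrderByDescending(c => c.LastEditedAt)
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();
    }

    // The "Proposed" tab: only the lawyer that created the contract gets to see it.
    public static List<Contract> Proposed(IDocumentSession session, string lawyerId)
    {
        return session.Query<Contract>()
            .Where(c => c.Status == "Proposed" && c.CreatedByLawyerId == lawyerId)
            .OrderByDescending(c => c.LastEditedAt)
            .ToList();
    }
}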

Yes, this is a way to manage the client and their expectations, but that is important. But what about the complex security that they want?

That might still be there, certainly, but that would be active mostly for operations (stuff that happens on a single entity), not for things that happen over all entities. It is drastically easier to make single entity security decisions work efficiently than to make them work over the whole set inside the database.

Hiding values, API keys and other fun stuff

This post is mostly about fun ideas. In one scenario, we had the need to show data to the user, but there was some concern with regards to the hackability of the URL.

In general, you should handle such things within your code, checking permissions, etc. But I decided to see if I could do something nice here, and I got this:

private static object HideValues(string entityId, string tenantId, byte[] key, byte[] iv)
{
    using (var rijndael = Rijndael.Create())
    {
        rijndael.Key = key;
        rijndael.IV = iv;
        var memoryStream = new MemoryStream();
        using (var cryptoStream = new CryptoStream(memoryStream, rijndael.CreateEncryptor(), CryptoStreamMode.Write))
        using (var binaryWriter = new BinaryWriter(cryptoStream))
        {
            binaryWriter.Write(entityId);
            binaryWriter.Write(tenantId);
            binaryWriter.Flush();

            cryptoStream.Flush();
        }
        var bytes = memoryStream.ToArray();
        var sb = new StringBuilder();
        for (int index = 0; index < bytes.Length; index++)
        {
            var b = bytes[index];
            sb.Append(b.ToString("X"));
            if (index % (bytes.Length/4) == 0 && index > 0)
                sb.Append('-');
        }
        return sb;
    }
}

This will generate a “guid looking” value that we can send to the user. When they send it back to us, we can decrypt it and figure out what is actually going on in there.

Because it is encrypted, we know that this is a valid key, because otherwise we wouldn’t be able to decrypt it to valid data.
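
The decryption side isn’t shown in the post; it would be roughly the mirror image of the method above (RevealValues is my name for it, and it uses the same System.IO and System.Security.Cryptography namespaces), with one caveat: for the string to be parsed back into bytes, the hex digits need to be written with a fixed width (b.ToString("X2")) rather than the variable width "X" format used above.

private static Tuple<string, string> RevealValues(string token, byte[] key, byte[] iv)
{
    // Strip the dashes and turn the hex back into bytes (assumes fixed width "X2" output).
    var hex = token.Replace("-", "");
    var bytes = new byte[hex.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);

    using (var rijndael = Rijndael.Create())
    {
        rijndael.Key = key;
        rijndael.IV = iv;
        using (var memoryStream = new MemoryStream(bytes))
        using (var cryptoStream = new CryptoStream(memoryStream, rijndael.CreateDecryptor(), CryptoStreamMode.Read))
        using (var binaryReader = new BinaryReader(cryptoStream))
        {
            // BinaryReader.ReadString matches the length prefixed BinaryWriter.Write(string) calls.
            var entityId = binaryReader.ReadString();
            var tenantId = binaryReader.ReadString();
            return Tuple.Create(entityId, tenantId);
        }
    }
}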

Passing 15 and 32 as the first two values, I got the following value back: 2A8AC8888-46B92092-BFD81393-7A6FB1

And it handles larger values just as easily, of course. Quite fun, even if I say so myself. Not sure if this is useful, but I got into writing code because it is a great hobby.


Monika: A lesson in component based design

I was giving a lecture on architecture recently, and the notion of components came up. The most important bit about that lecture was probably at the very end, when I discussed what it is that I consider to be a component. During that discussion, I introduced Monika, the payment processing component.

Monika has the following Service Level Agreement:

  • Payment initiation is done via messages.
  • Notification about payment completion is handled via a callback REST call.
  • The SLA calls for 90% of all successful payments to be processed in 2 business days.

So far, it doesn’t sound really complicated, right? And there isn’t even a hint of how Monika works in the SLA or the contracts.

This is Monika:

Well, not really, but it makes the point, doesn’t it?

Monika is a component in the system that responds to (SMTP) messages, does some work, and responds by clicking on a link in the email (REST call).

Monika has a really sucky SLA, since she has only 22% uptime over the course of the year, and then there are those two weeks when she has her yearly maintenance period (vacation), etc.

The most important thing about this is that we are able to abstract all of that away and treat this scenario as just another component in the system.

All too often, people hear components and they start thinking about things like this:

A component in a system is usually something much larger than a single class or a set of classes. It is an independent agent in the system that has its own behavior, resources, dedicated team and deployment schedule separate from all other components.

Beware the common infrastructure

One of the common problems that I run into when consulting with clients, or just whenever I am talking to developers in general, is the notion of common infrastructure. “We are going to spend some time building a common infrastructure which we can then use on all of our applications.”

I made that mistake myself with Rhino Commons, and again very recently with RaccoonBlog (look at the code and you will see the Loci stuff; that is stuff that is used from another project).

Why is that a problem? Well, for the simplest reason of all. Different projects have different needs. A common infrastructure that tries to accommodate them all is going to be much more complex. Not only that, it is going to be much more brittle. If I am modifying it in the context of project A, can I really say that I didn’t break something for project B?

Let us take a simple example, executing tasks. In RaccoonBlog, we need tasks merely to handle comments and email (long running background tasks). In another application, we need to do retries, and we need to get notifications if, after N retries, the task has failed. In a third project, we need a way to specify dependencies between tasks.

Sure, you can build something that satisfies all three projects, but it would be drastically more complex than having to modify the original task executor for each project’s needs. And yes, I do mean copying the code and modifying it.

And no, it is not a horrible sin against the Little Endianness. Even duplicated N times, the code is going to be simpler to read, perform faster, and be easier to maintain and modify over time.

Nitpicker note: I am not talking about 3rd party libraries here. If you can find something that fits your needs that already exists, wonderful. I am talking about infrastructure that you build, inside your organization.


Got to debug is a bug, fix your error messages

This is part of the RavenDB test suite.

image

It has a bug.

Can you see it?

One of the things that I learned when working on the Castle project is that errors are important. In fact, they are really important, so it is worth the time to check them very carefully.

In this case, this code would fail, but it wouldn’t tell me why it is failing.

Changing it to this:

image

Means that I get the actual error message right there and then, no need to do anything special.
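
The test itself is only shown as images here, so as an illustration of the kind of change this is (not the actual RavenDB code), using xUnit’s Assert.True(condition, message) overload and a hypothetical putResult:

// Before: all the test output tells you is that the assertion failed.
Assert.True(putResult.Success);

// After: the server's actual error message is part of the failure output,
// so there is nothing left to go and debug.
Assert.True(putResult.Success, putResult.Error);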


Performance implications of method signatures

In my previous post, I asked: What are the performance implications of the two options?

image_thumb

Versus:

image_thumb1

And the answer is quite simple. The chance to optimize how it works.

In the first example, we have to return an unknown amount of information. In the second example, we know how much data we need to return. That means that we can optimize ourselves based on that.

What do I mean by that?

Look at the method signatures; those require us to scan a secondary index and get the results back. From there, we need to get back to the actual data. If we knew the size of the data that we need to return, we could fetch just the locations from the index, then optimize our disk access pattern to take advantage of sequential reads.

In the first example, we have to assume that every read is the last read. Callers may request one item, or 25 or 713, so we don’t really have a way to optimize things. The moment that we have the amount that the caller wants, things change.

We can scan the index to get just the actual positions of the documents on disk, and then load the documents from the disk based on the optimal access pattern. It is a very small change, but it allowed us to make a big optimization.
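
The signatures themselves are only shown as images above, but the shape of the difference is roughly the following; the names are illustrative, not the actual RavenDB storage API:

using System;
using System.Collections.Generic;

public class JsonDocument { /* placeholder for the stored document */ }

public interface IDocumentReader
{
    // Option one: the caller pulls an unbounded stream, so every read may be the
    // last one and there is no way to plan the disk access pattern up front.
    IEnumerable<JsonDocument> GetDocumentsAfter(Guid etag);

    // Option two: the caller says how much it wants, so the storage layer can take
    // the positions of the next 'take' documents from the index, sort them by
    // location on disk, and read them sequentially.
    IEnumerable<JsonDocument> GetDocumentsAfter(Guid etag, int take);
}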

RavenDB Course–Israel

I got repeated calls for doing a RavenDB course in Israel, and it really makes little sense not to do one here, since it is the one we would have the easiest time running.

Therefore, I am pleased to announce that our two day RavenDB course is going to open in Israel on the 11 – 12 July. We are going to do the course in our offices in Hadera, and part of the course will include interaction with the actual development team.

You can register for the course using the following link. We provide early bird registration until the 16th May.

The course is going to be in English, and is open for people from outside of Israel as well.
