Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 1 min | 183 words

I got a few bug reports about NH Prof giving an error that looks somewhat like this one:

System.Windows.Markup.XamlParseException: Cannot convert string '0.5,1' in attribute 'EndPoint' to object of type 'System.Windows.Point'. Input string was not in a correct format. 

It took a while to figure out exactly what was going on, but I finally was able to track it down to this hotfix (note that the hotfix only takes care of the list separator, while the problem exists for the decimal separator as well). Since I can’t really install the hotfix for all the users of NH Prof, I was left with having to work around it.

I whimpered a bit when I wrote this, but it works:

private static void EnsureThatTheCultureInfoIsValidForXamlParsing()
{
	var numberFormat = CultureInfo.CurrentCulture.NumberFormat;
	// if the separators are already the ones the XAML parser expects, do nothing
	if (numberFormat.NumberDecimalSeparator == "." && 
		numberFormat.NumberGroupSeparator == ",") 
		return;
	// otherwise force a culture whose separators the XAML parser can handle
	Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("en-US");
	Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("en-US");
}

I wonder when I’ll get the bug report about NH Prof not respecting the user’s UI culture…

time to read 1 min | 139 words

15:51 – I have about ten more minutes before starting a presentation, and I thought that I might as well make use of the time and do some work on NH Prof.

The feature: supporting filtering of sessions by URL. I don’t expect it to be very hard.

16:01 – Manual testing is successful; writing a test for it.

16:02 – Test passed, ready to commit, but I don’t have a network connection to do so.

The new feature is integrated into the application, in the UI, filtering appropriately, the works:

image

Just for fun, I implemented the feature with the projector on, in front of the waiting crowd. I love NH Prof’s architecture.

time to read 4 min | 760 words

Continuing to shadow Davy’s series about building your own DAL, this post is about the Session Level Cache.

The session cache (first level cache, in NHibernate terms) exists to support one main scenario: within a single session, a row in the database is represented by a single object instance. That means that the session needs to track all the instances that it loads and be able to search through them. Davy does a good job covering how it is used, and the implementation is quite similar to the way it is done in NHibernate.

Davy’s implementation uses a nested dictionary to hold instances per entity type. This is done mainly to support RemoveAllInstancesOf<TEntity>(), a method that is unique to Davy’s DAL. The reasoning for that method is interesting:

When a user executes a custom DELETE statement, there is no way for us to know which entities were actually removed from the database. But if any of those deleted entities happen to remain in the SessionLevelCache, this could lead to buggy application code whenever a piece of code tries to retrieve a specific entity which has already been removed from the database, but is still present in the SessionLevelCache. In order to deal with this scenario, the SessionLevelCache has a ClearAll and a RemoveAllInstancesOf method which you can use from your application code to either clear the entire SessionLevelCache, or to remove all instances of a specific entity type from the cache.

Personally, I think this is using an ICBM to crack eggshells. But I am probably being unfair. NHibernate has much the same issue: if you issue a delete via SQL or HQL queries, NHibernate doesn’t have a way to track what was actually deleted and deal with it. With NHibernate, it doesn’t tend to be a problem for the session level cache, mostly because of usage habits more than anything else. A session used in this way rarely has to deal with already loaded entities that were deleted by the query (and if it does, the user needs to handle that by calling Evict() on the affected objects manually). NHibernate doesn’t try to support this scenario explicitly for the session cache, though it does support this very feature for the second level cache.

It makes sense, though. With NHibernate, in the vast majority of cases deletes are going to be done using NHibernate itself, rather than a special query. With Davy’s DAL, the usage of SQL queries for deletes is going to be much higher.
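To make the session level cache concrete, here is a minimal sketch in Python (the posts’ code is C#, but the idea translates directly). The class and method names are modeled loosely on Davy’s description, not his actual API:

```python
# A loose sketch of a nested-dictionary session level cache; the class
# and method names are illustrative, not Davy's actual API.
class SessionLevelCache:
    def __init__(self):
        # one inner dictionary per entity type, keyed by primary key
        self._instances = {}

    def store(self, entity_type, entity_id, instance):
        self._instances.setdefault(entity_type, {})[entity_id] = instance

    def try_to_find(self, entity_type, entity_id):
        return self._instances.get(entity_type, {}).get(entity_id)

    def remove_all_instances_of(self, entity_type):
        # the escape hatch for custom DELETE statements: we cannot know which
        # rows went away, so we drop every cached instance of the type
        self._instances.pop(entity_type, None)

    def clear_all(self):
        self._instances.clear()
```

The nested dictionary is exactly what makes RemoveAllInstancesOf cheap: removing a type is a single dictionary removal, rather than a scan of every cached instance.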

Another interesting point in Davy’s post is the handling of queries:

When a custom query is executed, or when all instances are retrieved, there is no way for us to exclude already cached entity instances from the result of the query. Well, theoretically speaking you could attempt to do this by adding a clause to the WHERE statement of each query that would prevent cached entities from being loaded. But then you might have to add the cached entity instances to the resulting list of entities anyways if they would otherwise satisfy the other query conditions. Obviously, trying to get this right is simply put insane and i don’t think there’s any DAL or ORM that actually does this (even if there was, i can’t really imagine any of them getting this right in every corner case that will pop up).

So a good compromise is to simply check for the existence of a specific instance in the cache before hydrating a new instance. If it is there, we return it from the cache and we skip the hydration for that database record. In this way, we avoid having to modify the original query, and while we could potentially return a few records that we already have in memory, at least we will be sure that our users will always have the same reference for any particular database record.

This is more or less how NHibernate operates, and for much the same reasons. But there is a small twist. In order to ensure query coherency between the database queries and the in memory entities, NHibernate will optionally try to flush all the changed items in the session level cache that may be affected by the query. A more detailed description of this can be found here. Davy’s DAL doesn’t do automatic change tracking, so this is not a feature that can be easily added to it.
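The compromise Davy describes, checking the cache before hydrating each record, can be sketched like this. The function and the plain-dict cache are illustrative, in Python rather than the C# of the actual DAL:

```python
# Sketch: skip hydration for records already in the session cache, so the same
# database row always yields the same reference within a session.
def hydrate_rows(rows, entity_type, cache, hydrate):
    results = []
    for row in rows:
        key = (entity_type, row["id"])
        cached = cache.get(key)
        if cached is not None:
            # already loaded: return the cached reference and ignore the
            # (possibly newer) values in the row
            results.append(cached)
            continue
        instance = hydrate(row)
        cache[key] = instance
        results.append(instance)
    return results
```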

time to read 4 min | 652 words

Continuing to shadow Davy’s series about building your own DAL, this post is about hydrating entities.

Hydrating entities is the process of taking a row from the database and turning it into an entity, while de-hydrating is the reverse process: taking an entity and turning it into a flat set of values to be inserted/updated.
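As a rough sketch (in Python for brevity; the function names are mine, not Davy’s), the two directions look like this:

```python
# Hydrating: a flat row of column values becomes an entity instance.
def hydrate(entity_cls, row):
    entity = entity_cls.__new__(entity_cls)  # bypass the constructor
    entity.__dict__.update(row)
    return entity

# De-hydrating: an entity becomes a flat set of values for INSERT/UPDATE.
def dehydrate(entity):
    return dict(entity.__dict__)
```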

Here, again, Davy has chosen to parallel NHibernate’s way of doing things, which allows us to look at a very simplified version and see the advantages of this approach.

First, we can see how the session level cache is used, with the check being done directly in the entity hydration process. Davy has some discussion about the various options that you can choose at that point: whether to just use the values from the session cache, to update the entity with the new values, or to throw if there is a change conflict.

NHibernate’s decision at this point was to assume that the entity we already have is correct, and ignore any changes made in the meantime to the database. That turns out to be a good approach, because any optimistic concurrency checks that we might want will run when we commit the transaction, so there isn’t much difference from the result’s perspective, but it does simplify the behavior of NHibernate.

Next, there is the treatment of reference properties, what NHibernate calls many-to-one associations. Here is the relevant code (edited slightly so it can fit the blog width):

private void SetReferenceProperties<TEntity>(
	TableInfo tableInfo, 
	TEntity entity, 
	IDictionary<string, object> values)
{
	foreach (var referenceInfo in tableInfo.References)
	{
		if (referenceInfo.PropertyInfo.CanWrite == false)
			continue;
		
		object foreignKeyValue = values[referenceInfo.Name];

		if (foreignKeyValue is DBNull)
		{
			referenceInfo.PropertyInfo.SetValue(entity, null, null);
			continue;
		}

		var referencedEntity = sessionLevelCache.TryToFind(
			referenceInfo.ReferenceType, foreignKeyValue);
			
		if(referencedEntity == null)
			referencedEntity = CreateProxy(tableInfo, referenceInfo, foreignKeyValue);
								   
		referenceInfo.PropertyInfo.SetValue(entity, referencedEntity, null);
	}
}

There are a lot of things going on here, so I’ll take them one at a time.

You can see how the uniquing process works: if we already have the referenced entity loaded, we get it directly from the session cache, instead of creating a separate instance of it.

It also shows something that Davy promised to cover in a separate post: lazy loading. I had an early look at his implementation and it is pretty. So I’ll skip that for now.

This piece of code also demonstrates something very interesting: the lazy loaded inheritance many-to-one association conundrum, which I’ll touch on in a future post.

There are a few other implications of hydrating entities in this fashion. For a start, we are working with detached entities this way; the entity doesn’t have to hold a reference to the session (except to support lazy loading). It also means that our entities are pure POCOs, since we handle everything completely externally to the entity itself.

It also means that if we would like to handle change tracking (which Davy’s DAL currently doesn’t do), we have a much more robust way of doing so, because we can simply dehydrate the entity and compare its current state to its original state. That is exactly how NHibernate does it. This turns out to be a far more robust approach, because it is safe in the face of methods modifying state internally, without going through properties or invoking change tracking logic.
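A sketch of that snapshot-comparison approach (Python, illustrative names; this shows the idea, not NHibernate’s actual implementation):

```python
def dehydrate(entity):
    # flatten the entity to a plain dictionary of its current values
    return dict(entity.__dict__)

class ChangeTracker:
    def __init__(self):
        self._snapshots = {}  # id(entity) -> dehydrated state at load time

    def track(self, entity):
        self._snapshots[id(entity)] = dehydrate(entity)

    def is_dirty(self, entity):
        # compare states only; this stays correct even when a method mutated
        # a field directly without going through a property setter
        return dehydrate(entity) != self._snapshots[id(entity)]
```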

I also wanted to touch on a few things that make the NHibernate implementation of the same thing a bit more complex. NHibernate supports reflection optimization and multiple ways of actually setting the values on the entity. It also supports things like components and multi column properties, which means that there isn’t a neat one-to-one ordering between properties and columns of the kind that makes Davy’s code so orderly.

time to read 2 min | 264 words

A while ago I mentioned the idea of Concepts and Features, and I expounded on it a bit more in the Feature by Feature post. Concepts and Features is the logical result of applying the Open Closed and Single Responsibility Principles. It boils down to a single requirement:

A feature creation may not involve any design activity.

Please read the original post for details about how this is actually handled, and the specific example about filtering with NH Prof.

But while I put constraints on what a feature is, I haven’t talked about what a concept is. Oh, I talked about it being the infrastructure, but not much more.

The point about a concept implementation is that it contains everything that a feature must do. To take the filtering support in NH Prof as an example, the concept is responsible for finding all available filters, creating and managing the UI, showing the filter definition to the user when a filter is active, saving and loading the filter definitions when the application is closed/started, performing the actual filtering, etc. The concept is also responsible for defining the appropriate conventions for the features in this particular concept.

As you can see, a lot of work goes into building a concept. But that work pays off the first time that you can service a user request in a rapid manner. To take NH Prof again, for most things, I generally need about half an hour to put out a new feature within an existing concept.

time to read 3 min | 526 words

A while ago I mentioned the idea of Concepts and Features, and I expounded on it a bit more in the Feature by Feature post. Concepts and Features is the logical result of applying the Open Closed and Single Responsibility Principles. It boils down to a single requirement:

A feature creation may not involve any design activity.

Please read the original post for details about how this is actually handled. In this case, I wanted to show you how this works in practice with NH Prof. Filtering in NH Prof is a new concept, which allows you to limit what you are seeing in the UI based on some criteria.

The groundwork for this concept includes the UI for showing the filters, changing the UI to support the actual idea of filtering, and other related work. I can already see that we would want to be able to serialize the filters, to save them between sessions, but that is part of what the concept is. It is the overall idea.

But once we have the concept outlined, thinking about features related to it is very easy. Let us see how we can build a new filter for NH Prof, one that filters out all the cached statements.

I intentionally chose this filter because it doesn’t really have any options that you would need a UI for, which makes my task easier. Here is the UI, FilterByNotCachedView.xaml:

<UserControl
	xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
	xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
		<TextBlock>Uncached statements</TextBlock>
    </Grid>
</UserControl>

And the filter implementation:

[DisplayName("Not Cached")]
public class FilterByNotCached : StatementFilterBase
{
	public override bool IsValid
	{
		get { return true; }
	}

	public override Func<IFilterableStatementSnapshot, bool> Process
	{
		get
		{
			return snapshot => snapshot.IsCached == false;
		}
	}
}

Not hard to figure out what this is doing, I think.

This is all the code required. NH Prof knows to pick it up from the appropriate places and make use of it, which is why, after adding these two files, I can run NH Prof and get the following:

image

I don’t think that I can emphasize enough how important this is for creating consistent and easy to handle solutions. Moreover, since most requests tend to be for features, rather than concepts, it is extremely easy to put up a new feature.

The end result is that I have a smart infrastructure layer, where I am actually implementing the concepts, and on top of it I am building the actual features. Just to give you an idea, NH Prof currently has just 6 concepts, and everything else is built on top of them.
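The "pick it up from the appropriate places" part is convention based discovery. A toy sketch of the mechanism (Python, hypothetical names; NH Prof’s actual implementation is C# and more involved):

```python
# The concept discovers its features by convention: every concrete subclass of
# the filter base class is a feature, with no registration step required.
class StatementFilterBase:
    display_name = None

    def process(self, snapshot):
        raise NotImplementedError

class FilterByNotCached(StatementFilterBase):
    display_name = "Not Cached"

    def process(self, snapshot):
        # mirrors the C# predicate: snapshot.IsCached == false
        return snapshot["is_cached"] == False

def discover_filters():
    return [cls() for cls in StatementFilterBase.__subclasses__()]

snapshots = [{"sql": "select 1", "is_cached": True},
             {"sql": "select 2", "is_cached": False}]
active = discover_filters()[0]
visible = [s for s in snapshots if active.process(s)]  # only uncached remain
```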

time to read 2 min | 224 words

Continuing the series about Davy’s Build Your Own DAL, this post talks about CRUD functionality.

One of the more annoying things about any DAL is dealing with the repetitive nature of talking to the data source. One of Davy’s stated goals in going this route is to totally eliminate that repetition. CRUD functionality shouldn’t be something that you have to hand write for each entity; it is just something that exists and that you get for free whenever you are using the DAL.

I think he was very successful there, but I also want to talk about his method of doing so. Not surprisingly, Davy’s DAL takes a lot of concepts from NHibernate, simplifies them a bit and then applies them. His approach to handling CRUD operations is reminiscent of how NHibernate itself works.

He divided the actual operations into distinct classes, called DatabaseActions, so he has things like FindAllAction, InsertAction and GetByIdAction.

This architecture gives you two major things: maintaining the code is now far easier, and so is playing around with the action implementations. It is just a short step from Davy’s Database Actions to NHibernate’s events & listeners approach.
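A minimal sketch of the Database Actions idea (Python; the class names echo Davy’s FindAllAction / InsertAction / GetByIdAction, while the SQL building is my own simplification):

```python
# Each CRUD operation is its own class with a single responsibility, so each
# one can be maintained and swapped independently.
class DatabaseAction:
    def __init__(self, connection):
        self.connection = connection

class GetByIdAction(DatabaseAction):
    def execute(self, table, entity_id):
        return f"SELECT * FROM {table} WHERE Id = {entity_id}"

class InsertAction(DatabaseAction):
    def execute(self, table, columns):
        cols = ", ".join(columns)
        params = ", ".join("@" + c for c in columns)
        return f"INSERT INTO {table} ({cols}) VALUES ({params})"
```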

This is also a nice demonstration of the Single Responsibility Principle. It makes maintaining the software much easier than it would be otherwise.

time to read 4 min | 630 words

Continuing my shadowing of Davy’s Build Your Own DAL series, this post shadows Davy’s mapping post.

Looking at the mapping, it is fairly clear that Davy has (wisely) made a lot of design decisions along the way that drastically reduce the scope of the work he had to deal with. The mapping model is attribute based and essentially supports a simple one to one relation between classes and tables. This makes things far simpler to deal with internally.

The choice has been made to fix the following:

  • Only support SQL Server
  • Attributed model
  • Primary key is always:
    • Numeric
    • Identity
    • Single Key

Using those rules, Davy created a really slick implementation. About the only complaint that I can make about it is that he doesn’t support having a limit clause on selects.

Take a look at the code he has; it should give you a good idea about what is involved in mapping between objects and tables.
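To show how little the attributed, one class == one table model needs, here is a sketch using a Python decorator in place of a .NET attribute (all names are hypothetical):

```python
# A class-level 'attribute' recording which table the entity maps to.
def table(name):
    def apply(cls):
        cls.__table_name__ = name
        return cls
    return apply

@table("tblProducts")
class Product:
    # by convention the primary key is numeric, identity, and a single column
    id = None
    name = None

def select_all(entity_cls):
    # with a one to one class/table mapping, SQL generation is trivial
    return f"SELECT * FROM {entity_cls.__table_name__}"
```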

The really fun part about redoing things that we are familiar with is that we get to ignore all the other things that we don’t want to do, which introduce complexity. Davy’s solution works for his scenario, but I want to expand a bit on the additional features that NHibernate has at that layer. It should give you some understanding of the task that NHibernate is solving.

  1. Inheritance. Davy’s DAL supports only Table Per Class inheritance. NHibernate supports 4 different inheritance models (+ mixed models that we won’t get into here).
  2. Eager loading. Something that would add significantly to the complexity of the solution is the ability to load a Product with its Category. That requires being able to change the generated SQL dynamically, and more importantly, being able to read the results correctly. That is far from simple.
  3. Properties that span multiple columns. It seems like a simple thing, but in actuality it affects just about every part of the mapping layer, since it means that every property to column conversion has to take multiple columns into account. It is not so much complex as it is annoying, especially since you have to carry either the column names or the column indexes all over the place.
  4. Collections. They are complex enough on their own (try to think about the effort involved in syncing changes to a collection in an efficient manner), but the part that really kills you is trying to do eager loading with them. Oh, and I haven’t even mentioned one to many vs. many to many. And let us not get into the distinctions between the different types of collections.
  5. Optimistic concurrency. This is actually a feature that would be relatively easy to add, I think. At least, if all you care about is a single type of versioning / optimistic concurrency. NHibernate supports several (detailed here).
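Point 5, the single simple versioning scheme, can be sketched as a version column checked in the UPDATE’s WHERE clause (Python, with illustrative SQL building):

```python
# Optimistic concurrency with a numeric version column: the UPDATE only
# matches if nobody else bumped the version since we loaded the row.
def build_update(table, columns, entity_id, expected_version):
    sets = ", ".join(f"{c} = @{c}" for c in columns)
    return (f"UPDATE {table} SET {sets}, Version = Version + 1 "
            f"WHERE Id = {entity_id} AND Version = {expected_version}")
```

If the statement affects zero rows, someone else updated (or deleted) the row in the meantime, and the DAL should raise a concurrency violation.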

I could probably go on, but I think the point was made. As I said before, the problem with taking on something like this is that it either takes a day to get the basic functionality going, or several months to really get it into shape to handle more than a single scenario.

This series of posts should give you a chance to appreciate what is going on behind the scenes, because you get Davy’s take on the base functionality, and my comments on what is required to take it to the next level.

time to read 1 min | 121 words

I had a discussion about session factory management in NHibernate just now, and I was asked why NHibernate isn’t managing that internally.

Well… it does, actually. As you can guess, SessionFactoryObjectFactory is the one responsible for that, and you can use GetNamedInstance() to get a specific session factory.

It isn’t used much (as a matter of fact, I have never seen it used), and I am not quite sure why. I think that it is just easier to manage it ourselves. There are very rare cases where you would need more than one or two session factories, and when you run into them, you usually want to do other things as well, such as lazy session factory initialization and de-initialization.
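Managing it yourself, with the lazy initialization just mentioned, doesn’t take much. A sketch (Python, hypothetical names; this does not mirror SessionFactoryObjectFactory’s real API):

```python
# Named session factories, built lazily on first use and cached afterwards.
class SessionFactoryRegistry:
    def __init__(self):
        self._builders = {}   # name -> callable that builds the factory
        self._factories = {}  # name -> built factory

    def register(self, name, build):
        self._builders[name] = build  # defer the expensive build

    def get_named_instance(self, name):
        if name not in self._factories:
            self._factories[name] = self._builders[name]()
        return self._factories[name]
```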
