time to read 1 min | 88 words

A bunch of us will be having beers in Vienna on Tuesday (5th of May) at about 18:30, and we would welcome you to join us!

I also don't know if telavivbeach has opened, but I know that "Hermes Strandbar" has opened. If the weather is fine, you can find us there:

http://www.strandbarherrmann.at/, or if it's cold/rainy, at Bar Vulcania, located here: http://tinyurl.com/dcq4t7

Anybody is welcome to join us.

Thanks to Christoph for setting this up.

time to read 3 min | 450 words

One of the more common mistakes that I see people make with NHibernate is related to how they load entities by primary key. There are important differences between the three options: a query, Get and Load.

The most common mistake that I see is using a query to load by id, in particular when using Linq for NHibernate.

var customer = (
	from customer in s.Linq<Customer>()
	where customer.Id == customerId
	select customer
	).FirstOrDefault();

Every time that I see something like that, I wince a little inside. The reason for that is quite simple. This is doing a query by primary key. The key word here is a query.

This means that we have to hit the database in order to get a result for this query. Unless you are using the query cache (which by default you won’t), this forces a query against the database, bypassing both the first level cache (the identity map) and the second level cache.

Get and Load are there for a reason: they provide a way to get an entity by its primary key. That is important for several reasons; most importantly, it means that NHibernate can apply quite a few optimizations to this process.

But there is another side to this: there is a significant (and subtle) difference between Get and Load.

Load will never return null. It will always return an entity or throw an exception. Because that is the contract that we have with it, it is permissible for Load not to hit the database when you call it; it is free to return a proxy instead.

Why is this useful? Well, if you know that the value exists in the database, and you don’t want to pay for an extra select, but you still need a reference to it so you can attach it to another object, you can use Load to do so:

s.Save(
	new Order
	{
		Amount = amount,
		Customer = s.Load<Customer>(1)
	}
);

The code above will not result in a select against the database, but when we commit the transaction, the CustomerID column will be set to 1. This is how NHibernate maintains the OO facade while giving you the same optimization benefits as working directly with the low level API.

Get, however, is different. Get will return null if the object does not exist. Since this is its contract, it must return either the entity or null, so it cannot give you a proxy if the entity is not known to exist. Get will usually result in a select against the database, but it will check the session cache and the 2nd level cache first.
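To make the contrast concrete, here is a minimal sketch, assuming an open session s and a customerId that may or may not exist in the database:

// Get checks the first level cache and the 2nd level cache, and then selects
// from the database if needed; it returns null when the row does not exist.
var maybeCustomer = s.Get<Customer>(customerId);

// Load can hand back a proxy without touching the database; it will only
// throw (ObjectNotFoundException) later, if the row turns out not to exist.
var customerReference = s.Load<Customer>(customerId);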

So, next time that you need to get some entity by its primary key, just remember the differences…

time to read 4 min | 651 words

NHibernate’s listener architecture brings a lot of power to the game, but using some of the listeners properly may require some additional knowledge. In this post, I want to talk specifically about the pre update / pre insert hooks that NHibernate provides.

Those allow us to execute our custom logic before the update / insert is sent to the database. On the face of it, it seems like a trivial task, but there are some subtleties that we need to consider when we use them.

Those hooks run awfully late in the processing pipeline; that is part of what makes them so useful. But because they run so late, when we use them we have to be aware of what we are doing with them and how it impacts the rest of the application.

The two interfaces involved, IPreUpdateEventListener and IPreInsertEventListener, define only one method each:

bool OnPreUpdate(PreUpdateEvent @event) and bool OnPreInsert(PreInsertEvent @event), respectively.

Each of those accept an event parameter, which looks like this:

image

Notice that we have two representations of the entity in the event parameter. One is the entity instance, located in the Entity property, and the second is the dehydrated entity state, which is located in the State property.

In NHibernate, when we talk about the state of an entity we usually mean the values that we loaded from or saved to the database, not the entity instance itself. Indeed, the State property is an array that contains the parameters that we will push into the ADO.Net command that will be executed as soon as the event listener finishes running.

Updating the state array is a little bit annoying, since we have to go through the persister to find the appropriate index in the array, but that is easy enough.

Here comes the subtlety, however. We cannot just update the entity state. The reason is quite simple: the state was extracted from the entity and placed in the state array, so any change that we make to the state array would not be reflected in the entity itself. That may cause the database row and the entity instance to go out of sync, and may cause a whole bunch of really nasty problems that you wouldn’t know where to begin debugging.

You have to update both the entity and the entity state in these two event listeners (this is not necessarily the case in other listeners, by the way). Here is a simple example of using these event listeners:

using System;
using System.Security.Principal;
using NHibernate.Event;
using NHibernate.Persister.Entity;

public class AuditEventListener : IPreUpdateEventListener, IPreInsertEventListener
{
	public bool OnPreUpdate(PreUpdateEvent @event)
	{
		var audit = @event.Entity as IHaveAuditInformation;
		if (audit == null)
			return false;

		var time = DateTime.Now;
		var name = WindowsIdentity.GetCurrent().Name;

		Set(@event.Persister, @event.State, "UpdatedAt", time);
		Set(@event.Persister, @event.State, "UpdatedBy", name);

		audit.UpdatedAt = time;
		audit.UpdatedBy = name;

		return false;
	}

	public bool OnPreInsert(PreInsertEvent @event)
	{
		var audit = @event.Entity as IHaveAuditInformation;
		if (audit == null)
			return false;


		var time = DateTime.Now;
		var name = WindowsIdentity.GetCurrent().Name;

		Set(@event.Persister, @event.State, "CreatedAt", time);
		Set(@event.Persister, @event.State, "UpdatedAt", time);
		Set(@event.Persister, @event.State, "CreatedBy", name);
		Set(@event.Persister, @event.State, "UpdatedBy", name);

		audit.CreatedAt = time;
		audit.CreatedBy = name;
		audit.UpdatedAt = time;
		audit.UpdatedBy = name;

		return false;
	}

	private void Set(IEntityPersister persister, object[] state, string propertyName, object value)
	{
		var index = Array.IndexOf(persister.PropertyNames, propertyName);
		if (index == -1)
			return;
		state[index] = value;
	}
}

And the result is pretty neat, I must say.
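For completeness, here is a minimal sketch of how such listeners might be wired in; it assumes the usual Configuration based bootstrapping, so adapt it to however you actually build your session factory:

var cfg = new Configuration().Configure();

var auditListener = new AuditEventListener();
// Note: assigning the arrays replaces any previously registered listeners for these events.
cfg.EventListeners.PreInsertEventListeners = new IPreInsertEventListener[] { auditListener };
cfg.EventListeners.PreUpdateEventListeners = new IPreUpdateEventListener[] { auditListener };

var sessionFactory = cfg.BuildSessionFactory();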

time to read 7 min | 1343 words

image

This is a post railing against things like Rhino Commons, MyCompany.Util and YourCompany.Shared.

The reason for that, and the reason that I am no longer making direct use of Rhino Commons in my new projects, is quite simple.

Cohesion:

In computer programming, cohesion is a measure of how strongly-related and focused the various responsibilities of a software module are. Cohesion is an ordinal type of measurement and is usually expressed as "high cohesion" or "low cohesion" when being discussed. Modules with high cohesion tend to be preferable because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability whereas low cohesion is associated with undesirable traits such as being difficult to maintain, difficult to test, difficult to reuse, and even difficult to understand.

I am going to rip into Rhino Commons for a while. Let us look at just how many things it can do:

  1. Create SQL CE databases dynamically
  2. Keep track of the performance of ASP.Net requests
  3. Log to a database in an async manner using bulk insert
  4. Log to a collection of strings
  5. Log to an embedded database – with strict size limits
    1. Same thing for SQLite
    2. Same thing for SqlCE
  6. Keep track of desirable properties of the http request for the log
  7. Add more configuration options to Windsor
  8. Provide a static gateway to the Windsor Container
    1. Plus some util methods
  9. Allow executing Boo code as part of an msbuild script
  10. Allow executing a set of SQL scripts as part of an msbuild script
  11. Provide a cancelable thread pool
  12. Provide an implementation of a thread safe queue
  13. Provide an implementation of a countdown latch (threading primitive)
  14. Expose a SqlCommandSet that is internal in the BCL so we can use it
  15. Allow easily recording the log messages emitted while executing a given piece of code
  16. Allow getting the time a piece of code took to execute with a high degree of accuracy
  17. Allow to bulk delete a lot of rows from the database efficiently
  18. Provide an easy way to read XML based on an XPath
  19. Provide a way to update XML based on an XPath
  20. Provide a configuration DSL for Windsor
  21. Provide local data store that works both in standard code and in ASP.Net scenarios
  22. Provide collection util methods
  23. Provide date time util methods
  24. Provide disposable actions semantics
  25. Provide generic event args class
  26. Allow 32 bit process to access 64 bit registry
  27. Give nice syntax for indexed properties
  28. Give a nice syntax for static reflection
  29. Provide guard methods for validating arguments
  30. Provide guard methods for validating arguments – No, that is not a mistake, there are actually two different and similar implementations of that there
  31. Provide a very simple data access layer based on IDbConnection.
  32. Provide a way to query NHibernate for a many to one association using its id, with a nicer syntax
  33. Provide a named expression query for NHibernate (I am not sure what we are doing that for)
  34. Provide unit of work semantics for NHibernate
  35. Provide transaction semantics for auto transaction management using the previously mentioned unit of work
  36. Provide a way to map an interface to an entity and tie it to the Repository implementation
  37. Provide a fairly complex in memory test base class for testing Container and database code
  38. Provide a way to warn you when SELECT N+1 occur in your code via http module
  39. Provide nicer semantics for using MultiCriteria with NHibernate
  40. Provide Future queries support externally to NHibernate
  41. Provide an IsA expression for NHibernate
  42. Provide a way to execute an In statement using XML selection (efficient for _large_ number of queries)
  43. Provide a pretty comprehensive generic Repository implementation, including a static gateway
  44. Provide an easy way to correctly implement caching
  45. Provide a way to easily implement auto transaction management without proxies
  46. Do much the same for Active Record as well as NHibernate

Well, I think that you get the drift by now. Rhino Commons has been the garbage bin for anything that I came up with for a long time.

It is easy to get to that point, simply by not paying attention. In fact, we are pretty much indoctrinated into doing just that, with “reuse, reuse, reuse” banged into our heads so often.

The problem with that?

Well, most of this code is applicable to just one problem, in one context, in one project. The bulk deleter is a good example: I needed it for one project, 3 years ago, and never since. The repository & unit of work stuff has been used across many projects, but what the hell do they have to do with a configuration DSL? Or with static reflection?

As a matter of fact, the very fact that Rhino Commons has two (different) ways to do parameter validation is a pretty good indication of a problem. The mere fact that we tend to have things like Util, Shared or Common is an indication that we are basically throwing unrelated concerns together. It actually gets worse when something in the common project is used by multiple projects. A good example of that would be the in memory database test support that Rhino Commons provides. I have used it in several projects, but you know what? It is freaking complex.

The post about rolling your own in memory test base class with NHibernate shows you how simple it can be. The problem is that as time went by, we wanted more & more functionality out of the Rhino Commons implementation: container integration, support for multiple databases, etc. And as we did, we piled more complexity on top of it, to the point where it is easier to roll your own than to use the ready made one. Too many requirements for one piece of code == complexity. And complexity is usually a bad thing.

The other side of this is that we are going to end up with a lot of much smaller projects, each focused on doing just one thing. For example, a project for extending NHibernate’s querying capabilities, or a project to extend log4net.

Hm, low coupling & high cohesion, I heard that somewhere before…

Post scheduling

time to read 2 min | 277 words

This is a general announcement about a change in the way that I am posting to this blog.

One of the more frequent feedback items about the blog was that people find it hard to catch up with my rate of posting. This is especially true since I tend to spend some days posting a large number of posts, and I feel that the sheer quantity reduces the amount of time people dedicate to each post (hence reducing the value of each post).

I have started making use of future posting to a high degree (almost all of the NHibernate mapping posts were written in a day or two, for example, but spaced over about a month). I don’t really try to keep any sort of organization, except that I am going to try to keep the maximum number of posts per day to no more than two. Each new post just goes to the back of the queue, and will be published when its turn comes.

Currently I have posts scheduled all the way to mid May, and I expect that to stretch even further. This is good news in the sense that you are almost always going to get at least one post per day from me, but it does mean that posts written together are sometimes stretched over a period of time, or that I may refer (usually in comments) to posts that will only be published in the future.

There is no real meaning behind the timing of the posts, unless something special happens on that date, so you can put the conspiracy theories to rest :-).

NH Prof feedback

time to read 3 min | 465 words

Every now and then I run a search to see what people are thinking about NH Prof, and I decided that this time it might be a good idea to share the results found on Twitter. I am still looking for more testimonials, by the way.

image

You are welcome :-)

image

I am not sure if that is that much of a good thing, though…

image

I certainly agree :-)

image

Should I worry about it or use it as a marketing channel? It reminds me of something from a Batman movie.

image

Go for it :-D

image

Thanks.

image

Yeah!

image

Wait until you see some of the bug reports that I am seeing…

image

That is the point.

image

Hm, I am not sure that I am happy to be the center of a life changing event, but as long as it is positive…

image

That was a damn hard feature to write, too.

image

What can I say, I agree.

And assuming that you got this far in the post: I have been doing a lot of work on NH Prof recently, getting it ready for v1.0. As a reminder, when I release v1.0, the reduced beta pricing is going away…

time to read 2 min | 369 words

When using NHibernate we generally want to test only three things: that properties are persisted, that cascades work as expected, and that queries return the correct result. In order to do all of those, we generally have to talk to a real database; trying to fake any of those at this level is futile and going to be very complicated.

We can either use a standard RDBMS or an in memory database such as SQLite (running against an in-memory connection) in order to get very speedy tests.

I have a pretty big implementation of a base class for unit testing NHibernate in Rhino Commons, but it has so many features that I sometimes forget how to use it. Most of those features, by the way, are now null & void because we have NH Prof, and we can easily see what is going on without resorting to the SQL Profiler.

At any rate, here is a very simple implementation of that base class, which gives us the ability to execute NHibernate tests in memory.

using System;
using System.Reflection;
using NHibernate;
using NHibernate.ByteCode.Castle; // assuming the Castle bytecode provider for ProxyFactoryFactory
using NHibernate.Cfg;
using NHibernate.Dialect;
using NHibernate.Driver;
using NHibernate.Tool.hbm2ddl;
using Environment = NHibernate.Cfg.Environment;

public class InMemoryDatabaseTest : IDisposable
{
	private static Configuration Configuration;
	private static ISessionFactory SessionFactory;
	protected ISession session;

	public InMemoryDatabaseTest(Assembly assemblyContainingMapping)
	{
		if (Configuration == null)
		{
			Configuration = new Configuration()
				.SetProperty(Environment.ReleaseConnections,"on_close")
				.SetProperty(Environment.Dialect, typeof (SQLiteDialect).AssemblyQualifiedName)
				.SetProperty(Environment.ConnectionDriver, typeof(SQLite20Driver).AssemblyQualifiedName)
				.SetProperty(Environment.ConnectionString, "data source=:memory:")
				.SetProperty(Environment.ProxyFactoryFactoryClass, typeof (ProxyFactoryFactory).AssemblyQualifiedName)
				.AddAssembly(assemblyContainingMapping);

			SessionFactory = Configuration.BuildSessionFactory();
		}

		session = SessionFactory.OpenSession();

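		// Each new in-memory SQLite connection starts out empty, so the schema
		// has to be exported into this session's connection for every test.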
		new SchemaExport(Configuration).Execute(true, true, false, true, session.Connection, Console.Out);
	}

	public void Dispose()
	{
		session.Dispose();
	}
}

This just sets up the in memory database, the mappings, and creates a session which we can now use. Here is how we use this base class:

public class BlogTestFixture : InMemoryDatabaseTest
{
	public BlogTestFixture() : base(typeof(Blog).Assembly)
	{
	}

	[Fact]
	public void CanSaveAndLoadBlog()
	{
		object id;

		using (var tx = session.BeginTransaction())
		{
			id = session.Save(new Blog
			{
				AllowsComments = true,
				CreatedAt = new DateTime(2000,1,1),
				Subtitle = "Hello",
				Title = "World",
			});

			tx.Commit();
		}

		session.Clear();


		using (var tx = session.BeginTransaction())
		{
			var blog = session.Get<Blog>(id);

			Assert.Equal(new DateTime(2000, 1, 1), blog.CreatedAt);
			Assert.Equal("Hello", blog.Subtitle);
			Assert.Equal("World", blog.Title);
			Assert.True(blog.AllowsComments);

			tx.Commit();
		}
	}
}

Pretty simple, eh?

time to read 2 min | 325 words

To be truthful, I never thought that I would have a follow up to this post 4 years later, but I ran into one today.

The following is a part of an integration test for NH Prof:

Assert.AreEqual(47, alerts[new StatementAlert(new NHProfDispatcher())
{
	Title = "SELECT N+1"
}]);

I am reviewing all our tests now, and I nearly choked on that one. I mean, who was stupid enough to write code like this? Yes, I can understand what it is doing, sort of, but only because I have a dawning sense of horror when looking at it.

I immediately decided that the miscreant who wrote that piece of code should be publicly humiliated and chewed on by a large dog.

SVN Blame is a wonderful thing, isn’t it?

image

Hm… there is a problem here.

Actually, there are a couple of problems here. One is that we have a pretty clear indication of a historic artifact. Just look at the number of versions that are shown in just this small blame window. That is good enough reason to start doing a full fledged ancestry inspection. The test started life as:

[TestFixture]
public class AggregatedAlerts:IntegrationTestBase
{
	[Test]
	public void Can_get_aggregated_alerts_from_model()
	{
		ExecuteScenarioInDifferentAppDomain<Scenarios.ExecutingTooManyQueries>();

		var alerts = observer.Model.Sessions[1].AggregatedAlerts;
		Assert.AreEqual(47, alerts["SELECT N+1"]);
		Assert.AreEqual(21, alerts["Too many database calls per session"]);
	}
}

Which I think is reasonable enough. Unfortunately, it looks like somewhere along the way, someone had taken the big hammer approach to this. The code now looks like this:

Assert.AreEqual(47, alerts.First(x => x.Key.Title == "SELECT N+1").Value);

Now this is readable.

Oh, for the nitpickers: using hash code evaluation as the basis of any sort of logic is wrong. That is the point of this post. It is a non obvious side effect that will byte* you in the ass.

* intentional misspelling
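To make that point concrete, here is a hypothetical, simplified sketch (the real StatementAlert is more involved): the dictionary indexer version only works because the key type overrides Equals and GetHashCode, which is exactly the kind of non obvious dependency that makes the lookup fragile.

public class StatementAlert
{
	public string Title { get; set; }

	// Without these overrides, alerts[new StatementAlert { Title = "SELECT N+1" }]
	// would throw KeyNotFoundException, because the lookup would fall back to
	// reference equality and hash-by-reference.
	public override bool Equals(object obj)
	{
		var other = obj as StatementAlert;
		return other != null && other.Title == Title;
	}

	public override int GetHashCode()
	{
		return Title == null ? 0 : Title.GetHashCode();
	}
}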

time to read 4 min | 788 words

So, after a delicious month at home, mostly spent resting, I am back to traveling again. This time my travel schedule is so complex that I need to write it down.

Next week (04 May – 08 May), I am going to be in Vienna, Austria, doing some work for a client. If someone wants to organize a beer night, or something similar, I am open for that :-)

The week after that (11 May – 13 May), I am going to take part in the Progressive.NET tutorials in London, UK. There are quite a few good speakers there, doing 12 half day workshops. I am going to be talking about NHibernate, doing an Intro to NHibernate workshop and an Advanced NHibernate workshop. This is basically trying to squeeze my 3 day NHibernate course into a single half day workshop (the Intro), and then going directly to the advanced material (which goes beyond the stuff that I cover in both the workshop and the course).

The week after that (18 May – 20 May), I am giving the first NHibernate course (London, UK). This is a three day course that is going to take you from no NHibernate knowledge to a pretty good knowledge of NHibernate, including understanding all the knobs and how NHibernate thinks. This course is already full, but have no fear, because the week after that… I am giving the same course again (again in London, UK).

This time the dates are 26 May – 28 May, and there are bookings available.

You might have noticed that this means that I am going to spend three weeks in London. There is already a beer night planned, thanks to Sebastian. A side effect of this schedule is that I am available in the gaps between the scheduled workshops and courses. This means that if you are in London and would like a short engagement on 14 May – 15 May or 21 May – 25 May, please drop me a line.

Next, 29 May – 7 June, I am spending in New Jersey, working. I might catch the ALT.Net meeting there again; I seem to make a habit of that.

Following that, 8 June – 12 June, it is time for DevTeach, my favorite conference on that side of the Atlantic.

image

I think that early bird registration is still open, so hurry up. I am going to talk about:

  • Advanced IoC
  • OR/M += 2 – my advanced NHibernate workshop, squeezed into a single session
  • Producing production quality software
  • Writing Domain Specific Languages in Boo

And as usual for DevTeach, I am going to need a lot of coffee just to keep up with the content in the sessions and the fun happening in the hallways and in the bars. And as a pleasant bonus, ALT.Net Canada is going to follow up directly after DevTeach, so on 13 June – 14 June I am going to be there.

Wait, it is not over yet!

17 June – 19 June, Norwegian Developers Conference! Just from my interactions with the conference organizer I can feel that it is going to be a good one. I mean, just look at the speakers lineup.

image

For that matter, you might want to do better than that and check the unofficial NDC 2009 video:

image

If a conference has that much energy before it is even started, you can pretty much conclude that it is going to be good.

Here I am talking about building Multi Tenant Applications, Producing Production Quality Software and Writing Domain Specific Languages in Boo.

And as long as I am in Norway, why not give another NHibernate course (Oslo, Norway)? This one is on 22 June – 24 June.

After that, assuming that I survive it all, I am heading home and giving myself a course in hibernating, because I am pretty sure that is what I will have to do to get my strength back.
