
NHibernate Futures

time to read 7 min | 1204 words

One of the nicest new features in NHibernate 2.1 is the Future<T>() and FutureValue<T>() functions. They essentially function as a way to defer query execution to a later point, at which time NHibernate will have more information about what the application is supposed to do, and can optimize for it accordingly. This builds on an existing feature of NHibernate, Multi Queries, but does so in a way that is easy to use and almost seamless.

Let us take a look at the following piece of code:

using (var s = sf.OpenSession())
using (var tx = s.BeginTransaction())
{
	var blogs = s.CreateCriteria<Blog>()
		.SetMaxResults(30)
		.List<Blog>();
	var countOfBlogs = s.CreateCriteria<Blog>()
		.SetProjection(Projections.Count(Projections.Id()))
		.UniqueResult<int>();

	Console.WriteLine("Number of blogs: {0}", countOfBlogs);
	foreach (var blog in blogs)
	{
		Console.WriteLine(blog.Title);
	}

	tx.Commit();
}

This code would generate two queries to the database:

[Profiler screenshots: two separate round trips to the database, 114 ms in total]

Two queries to the database are expensive; we can see that it took us 114 ms to get the data from the database. We can do better than that. Let us tell NHibernate that it is free to do the optimization in any way that it likes; the changes are the calls to Future<Blog>() and FutureValue<int>():

using (var s = sf.OpenSession())
using (var tx = s.BeginTransaction())
{
	var blogs = s.CreateCriteria<Blog>()
		.SetMaxResults(30)
		.Future<Blog>();
	var countOfBlogs = s.CreateCriteria<Blog>()
		.SetProjection(Projections.Count(Projections.Id()))
		.FutureValue<int>();

	Console.WriteLine("Number of blogs: {0}", countOfBlogs.Value);
	foreach (var blog in blogs)
	{
		Console.WriteLine(blog.Title);
	}

	tx.Commit();
}

Now, we see a different result:

[Profiler screenshots: a single round trip containing both queries, 80 ms in total]

Instead of going to the database twice, we go only once, sending both queries together. The speed difference is quite dramatic, 80 ms instead of 114 ms; we saved about 30% of the total data access time, a total of 34 ms.
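Under the covers this uses the Multi Queries feature mentioned earlier, and it is worth noting when the round trip actually happens: nothing is sent until the first future result is touched. A hedged sketch of both points:

var blogs = s.CreateCriteria<Blog>().SetMaxResults(30).Future<Blog>();
var count = s.CreateCriteria<Blog>()
	.SetProjection(Projections.Count(Projections.Id()))
	.FutureValue<int>();
// Nothing has gone to the database yet; the criteria are only registered.

// The first touch of any future result sends the whole batch in one round
// trip, roughly as if we had built it by hand with multi criteria:
//   s.CreateMultiCriteria()
//       .Add(s.CreateCriteria<Blog>().SetMaxResults(30))
//       .Add(s.CreateCriteria<Blog>()
//           .SetProjection(Projections.Count(Projections.Id())))
//       .List();
Console.WriteLine(count.Value);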

To make things even more interesting, it gets better the more queries that you use. Let us take the following scenario. We want to show the front page of a blogging site, which should have:

  • A grid that allows us to page through the blogs
  • The most recent posts
  • All categories
  • All tags
  • The total number of comments
  • The total number of posts

For right now, we will ignore caching and just look at the queries that we need to handle. I think that you can agree that this is not an unreasonable set of data items to want to show on the main page. For that matter, just look at this page, and you can probably see as many data items or more.

Here is the code using the Future options:

using (var s = sf.OpenSession())
using (var tx = s.BeginTransaction())
{
	var blogs = s.CreateCriteria<Blog>()
		.SetMaxResults(30)
		.Future<Blog>();

	var posts = s.CreateCriteria<Post>()
		.AddOrder(Order.Desc("PostedAt"))
		.SetMaxResults(10)
		.Future<Post>();

	var tags = s.CreateCriteria<Tag>()
		.AddOrder(Order.Asc("Name"))
		.Future<Tag>();

	var countOfPosts = s.CreateCriteria<Post>()
		.SetProjection(Projections.Count(Projections.Id()))
		.FutureValue<int>();

	var countOfBlogs = s.CreateCriteria<Blog>()
		.SetProjection(Projections.Count(Projections.Id()))
		.FutureValue<int>();

	var countOfComments = s.CreateCriteria<Comment>()
		.SetProjection(Projections.Count(Projections.Id()))
		.FutureValue<int>();

	Console.WriteLine("Number of blogs: {0}", countOfBlogs.Value);

	Console.WriteLine("Listing of blogs");
	foreach (var blog in blogs)
	{
		Console.WriteLine(blog.Title);
	}

	Console.WriteLine("Number of posts: {0}", countOfPosts.Value);
	Console.WriteLine("Number of comments: {0}", countOfComments.Value);
	Console.WriteLine("Recent posts");
	foreach (var post in posts)
	{
		Console.WriteLine(post.Title);
	}

	Console.WriteLine("All tags");
	foreach (var tag in tags)
	{
		Console.WriteLine(tag.Name);
	}

	tx.Commit();
}

This generates the following:

[Profiler screenshot: all six queries sent in a single round trip, 259 ms in total]

And the actual SQL that is sent to the database is:

SELECT top 30 this_.Id             as Id5_0_,
              this_.Title          as Title5_0_,
              this_.Subtitle       as Subtitle5_0_,
              this_.AllowsComments as AllowsCo4_5_0_,
              this_.CreatedAt      as CreatedAt5_0_
FROM   Blogs this_
SELECT   top 10 this_.Id       as Id7_0_,
                this_.Title    as Title7_0_,
                this_.Text     as Text7_0_,
                this_.PostedAt as PostedAt7_0_,
                this_.BlogId   as BlogId7_0_,
                this_.UserId   as UserId7_0_
FROM     Posts this_
ORDER BY this_.PostedAt desc
SELECT   this_.Id       as Id4_0_,
         this_.Name     as Name4_0_,
         this_.ItemId   as ItemId4_0_,
         this_.ItemType as ItemType4_0_
FROM     Tags this_
ORDER BY this_.Name asc
SELECT count(this_.Id) as y0_
FROM   Posts this_
SELECT count(this_.Id) as y0_
FROM   Blogs this_
SELECT count(this_.Id) as y0_
FROM   Comments this_

That is great, but what would happen if we used List and UniqueResult instead of Future and FutureValue?

I’ll not show the code, since I think it is pretty obvious how it would look, but this is the result:

[Profiler screenshot: six separate round trips, 348 ms in total]

Now it takes 348 ms to execute vs. 259 ms using the Future pattern.

It is still a 25% – 30% speed increase, but take note of the difference in absolute time. Before, we saved 34 ms. Now, we saved 89 ms.

Those are pretty significant numbers, and they are against a very small database that I am running locally; against a database on another machine, the results would have been even more dramatic.

The Repository’s Daughter

time to read 6 min | 1002 words

Keeping up with the undead theme, this post is a response to Greg’s. I’ll just jump into the parts that I disagree with:

The boundary is not arbitrary or artificial. The boundary comes back to the reasons we were actually creating a domain model in the first place. it seems what Oren is actually arguing against is not whether “advances in ORMs” have changed things but that he questions the isolation at all. The whole point of the separation is to remove such details from our thinking when we deal with the domain and to make explicit the boundaries around the domain and the contracts of those boundaries.

As I understand Greg’s interpretation of my points, I agree. For quite a few needs, there is no need to create an explicit boundary between the persistence medium and the code. Transparent lazy loading and persistence by reachability allow us to hand the entire problem to the infrastructure. The two things that we have to worry about are controlling the fetch paths and making sure that we aren’t doing stupid things like calling the database in a loop.

Those things are the responsibilities of the controllers layer (not necessarily an MVC controller, by the way, I am talking about the highest level in the app that isn’t actually about presentation concerns).

If we take Oren’s advice, we can store our data anywhere … so long as it looks and acts like a database. If that is not the case then oops we have to either

  • Make it look and act like a full database
  • Scrap our code that treated it as such and go back to the explicit route.

Just to be clear on this point … He has baked in fetch paths, full criteria, and ordering into his Query objects so any new implementation would also have to support all of those things. Tell me how do you do this when you are getting your data now from an explicit service?

Well, duh! Of course they would need that. We need to be able to do that in order to execute business logic. Going back to the example that I gave in the previous post, “Add Product” and “Charge Order” have drastically different data requirements; how are you going to handle that without support for fetch paths?

The last statement there is quite telling, I think. I thought that my previous post made it clear that I am advocating doing this inside a service boundary. The problem that Greg is trying to state doesn’t exist since you don’t do that across a service boundary.

Its not just about YAGNI its about risk management. We make the decision early (Port/Adapter) that will allow us to delay other decisions. It also costs us next to nothing up front to allow for our change later. YAGNI should never be used without thought of risk management, we should always be able to quantify our initial cost, the probability of change, and the cost of the change in the future.

I call bull on that. Saying that using an adapter costs “next to nothing” is wrong and misleading. Leaving aside the problems of trying to expose the advanced functionality that you need, it also doesn’t work. A good example would be a repository built on NHibernate, which takes part in a Unit of Work and uses persistence by reachability and auto flush on dirty.

Chances are, the developers aren’t even aware that they are taking advantage of all of that. Trying to replace that with a web service based repository introduces quite a lot of friction to the mix. I know, I was there, and I saw what it did.

That is leaving aside things like how you expose concurrency violations or transaction deadlocks across different persistence options. You need to control that, and an adapter is generally either a very leaky abstraction or a huge effort to write, and even then it is still leaky. Worse, using an adapter, you are forced to go with the lowest common denominator for features. Of course you would want to isolate that; you are working at a level so low you might as well be writing things to the disk without the benefit of even a file system. That doesn’t mean that this is the smart choice.

Trying to abstract things away when you have no firm requirement is just about the definition of a YAGNI violation. And handwaving the effort required to build this sort of infrastructure doesn’t really make it go away.

Yes, the approach that I am advocating makes a lot of assumptions. If you remove any of them, the approach that I am advocating is invalid. But when the assumptions are valid (inside a service boundary, using a database, using a smart persistence framework), not making use of that is… stealing from your client.

Arguments against my approach should be made in the context that I am establishing it.

Let me point out a large failure in logic here. You assume an impedance mismatch with a relational database that results in a much higher cost of getting the children with the parents. If I am using other mechanisms like say storing the ShoppingCart as a document in CouchDb that the cost will be nearly identical whether I bring back only the Cart or the Items.

Again, I am talking about something in context. Taking it out of context makes the argument invalid. I am going to stop here, because I don’t think that there is any value in parallel monologues. Arguments against the approach that I am suggesting should be made in the context in which I am outlining my suggestion, not outside it.

The difference between Infrastructure & Application

time to read 3 min | 466 words

Recently I find myself writing more and more infrastructure level code. There are several reasons for that, mostly because the architectural approaches that I advocate don’t have good enough infrastructure in the environment that I usually work in.

Writing infrastructure is both fun & annoying. It is fun because you usually don’t have business rules to deal with; it is annoying because it takes time to get it to do something that will give the business some real value.

That said, there are some significant differences between writing application level code and infrastructure level code. For that matter, I usually think about this as:

[Diagram: app code on top of framework code, on top of infrastructure code]

Infrastructure code is usually the base; it provides basic services such as communication, storage, thread management, etc. It should also provide strong guarantees regarding what it is doing; it should be simple, understandable, and provide the hooks to understand what happens when things go wrong.

Framework code sits on top of the infrastructure and provides easy to use semantics on top of it. It usually takes away some of the options that the infrastructure gives you in order to present a more focused solution for a particular scenario.

App code is even more specific than that, making use of the underlying framework to handle much of the complexity for us.

Writing application code is easy; it is single purpose code. Writing framework and infrastructure code is harder; it has much wider applicability.

So far, I don’t believe that I said anything new.

What is important to understand is that practices that work for application level code do not necessarily work for infrastructure code. A good example would be this nasty bit of work. It doesn’t read very well, and it has some really long methods, and… it handles a lot of important infrastructure concerns that you have to deal with. For example, it is completely async, has good error handling and reporting, and it has absolutely no knowledge about what exactly it is doing. That is left for higher level pieces of the code. Trying to apply application level practices to that will not really work; different constraints and different requirements.

By the same token, testing such code follows a different pattern than testing application level code. Tests are often more complex, requiring more behavior in the test to reproduce real world scenarios. The tests can rarely be isolated bits; they usually have to include significant pieces of the infrastructure. And what they test can be complex enough as well.

Different constraints and different requirements.

NHibernate 2nd Level Cache

time to read 3 min | 442 words

NHibernate has built-in support for caching. It sounds like a trivial feature at first, until you realize how significant it is that the underlying data access infrastructure already implements it. It means that you don’t have to worry about thread safety, propagating changes in a farm, building smart cache invalidation strategies, or dealing with all of the messy details that are usually along for the ride when you need to implement a non trivial infrastructure piece.

And no, it isn’t as simple as just shoving a value to the cache.

I spent quite a bit of time talking about this here, so I won’t go over all the cache internals and how they work, but I’ll mention the highlights. NHibernate internally has the following sub caches:

  • Entity Cache
  • Collection Cache
  • Query Cache
  • Timestamp Cache

NHibernate makes use of all of them in a fairly complex way to ensure that even though we are using the cache, we are still presenting a consistent view of the cache as a mirror of the database. The actual details of how we do it can be found here.

Another thing that NHibernate does for us when we update the cache is try to maintain that consistent view of the world even when using replicated caches in farm scenarios. This requires some support from the caching infrastructure, such as the ability to perform a hard lock on the values. Of the free caching providers for NHibernate, only Velocity supports this, which means that when we evaluate a cache provider for NHibernate, we need to take this into account.

In general, we can pretty much ignore this, but it does have some interesting implications with regards to what isolation guarantees we can make, based on the cache implementation that we use, the number of machines we use, and the cache concurrency strategy that we use.

You can read about this here and here.

One thing that you should be aware of is that NHibernate currently doesn’t have a transactional cache concurrency story, mostly because there is no cache provider that can give us that. As such, be aware that if you require serializable isolation level to work with your entities, you cannot use the 2nd level cache. The 2nd level cache currently guarantees only read committed (and almost guarantees repeatable read, if that is the isolation level you use in the database). Note that this guarantee is made for the read-write cache concurrency mode only.
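To make this concrete, here is a minimal, hedged sketch of turning the 2nd level cache on from code; the SysCache provider and the read-write strategy are example choices for a single-machine deployment, not a recommendation:

var cfg = new NHibernate.Cfg.Configuration();
// code equivalents of the usual hibernate.cfg.xml cache settings
cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
	"NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache");

// each cached class opts in via its mapping: <cache usage="read-write"/>
// and each query that should use the query cache opts in explicitly:
var blogs = s.CreateCriteria<Blog>()
	.SetCacheable(true)
	.List<Blog>();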

Find the bug

time to read 1 min | 120 words

Can you find the bug in here?

public void Receiver(object ignored)
{
	while (keepRunning)
	{
		using (var tx = new TransactionScope())
		{
			Message msg;
			try
			{
				msg = receiver.Receive("uno", null, new TimeSpan(0, 0, 10));
			}
			catch (TimeoutException)
			{
				continue;
			}
			catch(ObjectDisposedException)
			{
				continue;
			}
			lock (msgs)
			{
				msgs.Add(Encoding.ASCII.GetString(msg.Data));
				Console.WriteLine(msgs.Count);
			}
			tx.Complete();
		}
	}
}

And:

[Fact]
public void ShouldOnlyGetTwoItems()
{
	ThreadPool.QueueUserWorkItem(Receiver);

	Sender(4);

	Sender(5);

	while(true)
	{
		lock (msgs)
		{
			if (msgs.Count>1)
				break;
		}
		Thread.Sleep(100);
	}
	Thread.Sleep(2000);//let it try to do something in addition to that
	receiver.Dispose();
	keepRunning = false;

	Assert.Equal(2, msgs.Count);
	Assert.Equal("Message 4", msgs[0]);
	Assert.Equal("Message 5", msgs[1]);
}

I will hint that you cannot use any part of the receiver after it has been disposed.

Esent, identity and the case of the duplicate key

time to read 2 min | 333 words

Following up on a bug report that I got from a user of Rhino Queues, I figured out something very annoying about the way Esent handles auto increment columns.

Let us take the following bit of code:

using (var instance = new Instance("test.esent"))
{
	instance.Init();

	using (var session = new Session(instance))
	{
		JET_DBID dbid;
		Api.JetCreateDatabase(session, "test.esent", "", out dbid, CreateDatabaseGrbit.OverwriteExisting);

		JET_TABLEID tableid;
		Api.JetCreateTable(session, dbid, "outgoing", 16, 100, out tableid);
		JET_COLUMNID columnid;

		Api.JetAddColumn(session, tableid, "msg_id", new JET_COLUMNDEF
		{
			coltyp = JET_coltyp.Long,
			grbit = ColumndefGrbit.ColumnNotNULL |
					ColumndefGrbit.ColumnAutoincrement |
					ColumndefGrbit.ColumnFixed
		}, null, 0, out columnid);

		Api.JetCloseDatabase(session, dbid, CloseDatabaseGrbit.None);
	}
}

for (int i = 0; i < 3; i++)
{
	using (var instance = new Instance("test.esent"))
	{
		instance.Init();

		using (var session = new Session(instance))
		{
			JET_DBID dbid;
			Api.JetAttachDatabase(session, "test.esent", AttachDatabaseGrbit.None);
			Api.JetOpenDatabase(session, "test.esent", "", out dbid, OpenDatabaseGrbit.None);

			using (var table = new Table(session, dbid, "outgoing", OpenTableGrbit.None))
			{
				var cols = Api.GetColumnDictionary(session, table);
				var bytes = new byte[Api.BookmarkMost];
				int size;
				using (var update = new Update(session, table, JET_prep.Insert))
				{
					update.Save(bytes, bytes.Length, out size);
				}
				Api.JetGotoBookmark(session, table, bytes, size);
				var i = Api.RetrieveColumnAsInt32(session, table, cols["msg_id"]);
				Console.WriteLine(i);

				Api.JetDelete(session, table);
			}

			Api.JetCloseDatabase(session, dbid, CloseDatabaseGrbit.None);
		}
	}
}

What do you think is going to be the output of this code?

If you guessed:

1
1
1

I have a cookie for you.

One of the problems of working with low level libraries is that they are… well, low level. As such, they don’t provide all the features that you think they would. Most databases keep track of auto incrementing columns outside of the actual table, but Esent keeps the value in memory, and reads max(id) from the table on init.
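One way around it, sketched below, is to keep your own high-water mark in a record that is never deleted and update it in the same transaction as the insert; ReadCounter and WriteCounter are hypothetical helpers standing in for the actual Esent column access:

// Hypothetical sketch only; ReadCounter/WriteCounter stand in for reading
// and writing a single, never-deleted counter row via the Esent API.
static long AllocateMessageId(Session session, Table counters)
{
	// because the counter row is never deleted, re-reading max(id) on
	// restart can no longer hand out the same value twice
	long next = ReadCounter(session, counters) + 1;
	WriteCounter(session, counters, next);
	return next;
}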

It is an… interesting bug* to track down, I have to say.

* A bug in my code, not in Esent, just to be clear.

Night of the living Repositories

time to read 8 min | 1506 words

This is a response to Greg’s post in reply to mine. I can’t recall the last time I had a blog debate; they are usually fun to have. I am going to comment on Greg’s post as I read it, so this part is written before I did anything but skim the first paragraph or so.

A few things that I want to clarify:

    • His post was originally intended to be an “alternative to the repository pattern” which he believes “is dead”.
    • Of course this is far from a new idea

First, careful re-reading of the actual post doesn’t show me where I said that the repository pattern is dead. What I said was that the pattern doesn’t take into account advances in the persistence frameworks, and that in many cases, applying it on top of an existing persistence framework doesn’t give us much.

The notion of query objects is also far from my invention, just to clear that up; it is a well established pattern that I am particularly fond of.

Now, let us move to the part that I really object to:

What is particularly annoying is the sensationalism associated with this post. It is extremely odd to argue against a pattern by suggesting to use the pattern eh? The suggested way to avoid the Repository pattern is to use the Repository pattern which shortening the definition he provided

Provides the domain collection semantics to a set of aggregate root objects.

So now that we have determined that he has not actually come up with anything new and is actually still using repositories let’s reframe the original argument into what it really is.

I fail to see how I am suggesting the use of the repository pattern in my post. This seems to be a fairly important point in Greg’s argument. And no, I don’t follow how I did that. Using the approach that I outlined in the post, there is no such thing as repository. Persistence concerns are handled by the persistence framework directly (using ISession in NHibernate, for example), queries are handled using Query Objects.

The notion of providing in memory collection semantics is not needed anymore, because that responsibility is no longer in the user code; it is the responsibility of the underlying persistence framework. Also note that I explicitly targeted the general use of the repository, not just the DDD use of it.
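For those who haven’t run into the pattern, here is a minimal sketch of a query object; the names are illustrative:

// A query object encapsulates a single query, executed directly against
// the ISession, with its parameters as plain fields.
public class RecentPostsQuery
{
	public int PageSize = 10;

	public IList<Post> Execute(ISession session)
	{
		return session.CreateCriteria<Post>()
			.AddOrder(Order.Desc("PostedAt"))
			.SetMaxResults(PageSize)
			.List<Post>();
	}
}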

The problem here is that the Repository interface is not necessarily Object Oriented. The Repository represents an architectural boundary, it is intended to be a LAYER/TIER boundary. Generally speaking when we define such interfaces we define them in a procedural manner (and with good cause).

Hm, I can see Greg’s point here, but I am not sure that I agree with him. I would specify it differently. Service boundaries are procedural (be it RPC or message based, doesn’t matter). But a service spans both layers and tiers, and I am not going to try to create artificial boundaries inside my service. And yes, they are artificial. Remember: “A Repository mediates between the domain and data mapping layers…”

A repository is a gateway to the actual persistence store. The persistence store itself may be another service, it is usually on a remote machine, and the interface to it is by necessity pretty procedural. Trying to model a repository on top of that would by necessity lead us to procedural code. And that is a bad thing.

The problem is, again, that we aren’t attempting to take advantage of the capabilities of the persistence frameworks that we have. We can have OO semantics on top of the persistence store, because the responsibility to handle that is in the hands of the persistence framework.

Analyzing the situation given of a CustomerRepository what would happen if we were to want to put the data access behind a remote facade?

I am going to call YAGNI on that. Until and unless I have that requirement, I am not going to think about that. There is a reason we have YAGNI. And there is a reason why I don’t try to give architectural advice without having a lot more context. In this case, we have a future requirement that doesn’t really make sense at all.

What would happen though if we used the “other” Repository interface that is being suggested? Well our remote facade would need to support the passing of any criteria dynamically across its contract, this is generally considered bad contract design as we then will have great trouble figuring out and optimizing what our service actually does.

If I need to do the data access remotely, then I am still within my own service (remember, services can span several machines, and multiple applications can take part in the same service). As such, I know exactly what requirements and queries my remote data access is going to have. More than that, within a service, I want as much flexibility as I can get.

It is at the service boundary that I start pouring concrete and posting the armed guards.

I also think that there is some sort of miscommunication, or perhaps, as usual, I split a thought across several posts, because a few posts after the post Greg is talking about, I suggested just what he is talking about.

If you don’t want a LAYER/TIER boundary don’t have one just use nhibernate directly …

That is what I am advocating. Or Linq to Sql, or whatever (as long as it has enough smarts to support what you need without doing nasty things to your code).

My whole argument is that the persistence framework is smart enough today that we don’t need to roll this stuff by hand anymore!

At this point you probably shouldn’t have a domain either though …

And I call bull on that. Repositories != domain, and how you do data access has nothing to do with how you structure your application.

Something that Greg, Udi, and I have discussed quite often in the past is the notion of Command Query Separation. I’ll let Greg talk about that, and then add my comments:

I have had this smell in the past as well but instead of destroying the layering I am building into my domain (with good reason, see DDD by Evans for why) I went a completely different route. I noticed very quickly that it was by some random chance that my fetch plans were being different. I had a very distinct place where things were different, I needed very different fetching plans between when I was getting domain objects to perform a writing behaviour on them as opposed to when I was reading objects to say build a DTO.

Well, yes & no. There are quite a few scenarios in which I want to have a different fetch plan for writing behavior even when using CQS. But before I show the example, I want to point out that it is just an example. Don’t try to nitpick this example, talk about the generic principle.

A simple example would be a shopping cart, and the following commands:

  • AddProduct { ProductId = 12, Quantity = 2}
    • This requires us to check if the product already exists in the cart, so we need to load the Items collection
  • Purchase
    • We can execute this with properties local to the shopping cart, so there is no need to load the Items collection; this just charges the customer and changes the cart status to Ordered

As I said, this is a simple example, and you could probably poke holes in it, that is not the point. The point is that this is a real example of real world issues. There is a reason why IFetchingStrategy is such an important concept.
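To make the fetch path difference concrete, here is a hedged sketch of what the two commands might do with NHibernate; ShoppingCart, Items, and cartId are the illustrative names from the list above:

// AddProduct: we must inspect the Items collection, so fetch it eagerly
var cart = s.CreateCriteria<ShoppingCart>()
	.Add(Restrictions.IdEq(cartId))
	.SetFetchMode("Items", FetchMode.Eager)
	.UniqueResult<ShoppingCart>();

// Purchase: the cart's own properties suffice, so leave Items lazy
var cartForPurchase = s.Get<ShoppingCart>(cartId);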

That is leaving aside the fact that, like all architectural patterns, CQS is something that you shouldn’t just apply blindly. You should make a decision based on additional complexity vs. required scope before using it. And in many applications, CQS, or separate OLTP vs. reporting models, are often not necessary.

And yes, they are not big applications, with a lot of users and a lot of data, but they are often important applications, with a lot of complexity and behavior.

Help needed: Writing Domain Specific Languages in Boo – Java Edition

time to read 1 min | 140 words

Just came out of a discussion with Manning about my book (which is approaching the last few stages before actually going to print!). Apparently they hit upon the notion that Boo works on both the CLR and the JVM, and are interested in having a Java edition of the book.

Disclaimer: This is early, and anything is subject to change, blah blah blah.

I find this hilarious, since this is Hibernate in Action vs. NHibernate in Action, in reverse. At least, I hope it is. The problem? I am not familiar enough with Java to be able to write a book targeting it, hence this post.

If you are familiar with Java and BooJay (filter 90%), read my book (filter 90%) and think you could help (filter 0%), I would love to talk to you.

NHibernate Tidbit – using <set/> without referencing Iesi.Collections

time to read 1 min | 117 words

Some people don’t like having to reference Iesi.Collections in order to use NHibernate’s <set/> mapping. With NHibernate 2.1, you can avoid it, since we finally have a set type in the actual BCL. We still don’t have an ISet<T> interface, unfortunately, but that is all right; we can get by with ICollection<T>.

In other words, any ISet<T> association that you have can be replaced with an ICollection<T> and instead of initializing it with Iesi.Collections.Generic.HashedSet<T>, you can initialize it with System.Collections.Generic.HashSet<T>.

Note that you still need to deploy Iesi.Collections with your NHibernate application, but that is all; you can remove the reference to Iesi.Collections from your domain model and use only BCL types, with no external references.
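A minimal sketch of what the change looks like in a domain class; Blog and Post are illustrative:

public class Blog
{
	// still mapped with <set name="Posts" .../> in the mapping file,
	// but the domain model only sees BCL types
	private ICollection<Post> posts = new System.Collections.Generic.HashSet<Post>();

	public virtual ICollection<Post> Posts
	{
		get { return posts; }
		set { posts = value; }
	}
}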

NH Prof: An important milestone

time to read 1 min | 111 words

Yeah, this now works:

[NH Prof screenshot]

And if you wonder why I am happy about something that looks very similar to what already worked months ago, you are right.

What you don’t see is that this version of NH Prof uses the revised backend, which I have talked about before. This backend uses a pull rather than a push mechanism, and it is supposed to allow us much higher performance than before.

And no, this branch has not been merged into the public builds yet. Give us a few weeks.
