Ayende @ Rahien

Sleep as currency? I would buy some...

Seeking: Individuals and interactions over processes and tools

Over two years ago, I started working at We!. I wanted to work there mainly because during the interview I was very impressed by the amount of knowledge the interviewer had. Looking back on the last couple of years, it has been a very good decision. I had the chance to work on interesting projects, work with very smart people, and do some crazy things. Mostly, I got the chance to make a lot of mistakes, learn from them, and then make another set of mistakes.

I also had a lot of fun, and I can honestly say that I really enjoyed what I was doing.

Recently, We! has made a shift in the direction the developer division is headed, and now intends to focus primarily on business solutions. This translates to SharePoint and MS CRM solutions. I have no experience in SharePoint, so I can't comment on that, but I believe I have expressed my opinions on MS CRM elsewhere quite clearly. The company is focusing on bigger projects based on those platforms. Those are more profitable, but they are also of the assembly line variety: hook this, bridge that, fill in those gaps, etc.

My interest lies elsewhere. I am interested in building applications in an agile fashion, using TDD, DDD and ADD. From a business perspective, We!'s decision is probably extremely reasonable and smart, but it is simply not what I would like to do.

I have been thinking about this for a while now, and I have finally made up my mind about it in the last couple of days. Unless something drastic changes, I intend to leave We! after we finish our current project (around January 2008, I believe).

What does this mean?

I want to take this chance and see how things are going elsewhere in the world, which means that I am leaning toward relocating. I have a good idea about what I want to do next, but nothing is set in stone. I have already rejected a highly technical position; I am not interested in tech for tech's sake. I believe that I have enough knowledge and experience to handle most technologies. What I would like to do now is work in an agile environment, on an agile team.

I see the following options:

  • Independent consultant / trainer
  • Consultancy
  • Startup
  • Product team

As always, I am interested in your opinions in this matter.

Full disclosure note: I don't have a degree, which it looks like can cause issues when getting a work visa.

A question of scale

I am using a different meaning for the terms "scaling" and "scalable". I am usually not really worried about performance or about scaling out a solution. What I am often thinking about is whether I can take a certain approach and scale it up to the more complex scenarios.

Dragging a table to the form, as an example, doesn't scale. It doesn't scale because the moment you need to handle logic, you need to take on significant complexity just to get it. NHibernate does scale, because so far it has handled everything that I threw at it without increasing the complexity of the solution beyond the complexity of the problem.


I think that this graph should explain it better.

What we see here are the complexities of the solutions vs. the complexity of the problem. The unscalable solution's complexity increases faster and faster as the complexity of the problem grows.

The scalable solution's complexity increases as well, but it increases in direct relationship to the problem at hand. If we have a problem twice as complex, then the solution will be about twice as complex.

It can't be less than twice as complex, because you can't escape the complexity, but the other solution is nine times as complex, and the difference between the two only grows as the problem gets more complex.

And that is how I evaluate most of the things that I use. Do they scale? Will I be able to handle more complex scenarios without tool-induced pain?

The solution is allowed to be complex because the problem that we are trying to solve is complex. It must not be complex because the tool that I am using needs crutches to handle complex scenarios.

Embracing fluent interfaces

Jeff Atwood is talking about languages inside languages, and suggests that you should avoid using fluent interfaces to hide a language's semantics. I completely disagree. Let us talk about the examples. The first one deals with regular expressions:

//Regular expression
Pattern findGamesPattern = new Pattern(@"<div\s*class=""game""\s*id=""(?<gameID>\d+)-game""(?<content>.*?)

// Fluent interface
Pattern findGamesPattern = Pattern.With.Literal(@"<div")
    .NamedGroup("gameId", Pattern.With.Digit.Repeat.OneOrMore)
    .NamedGroup("content", Pattern.With.Anything.Repeat.Lazy.ZeroOrMore)
    .NamedGroup("gameState", Pattern.With.Digit.Repeat.OneOrMore)

I can read the fluent interface, but I need to parse the regex pattern. There are tradeoffs here, because as a regex gets more complex, it becomes unreadable. The fluent interface above keeps it maintainable even for complex regular expressions.

The second example deals with SQL:

// Embedded language
IDataReader rdr = QueryDb("SELECT * FROM Customer WHERE Country = 'USA' ORDER BY CompanyName");

// Fluent interface
IDataReader rdr = new Query("Customer")
	.WHERE(Customer.Columns.Country, "USA")

Here it is much easier to read the SQL statement, right? Except that the statements are not equal to one another. Using "select *" is inefficient; you want to explicitly state which columns you want (even if you want them all).

Nevertheless, the SQL code is much easier to read. It doesn't handle parameters, though, so it will probably encourage SQL injection, but I digress.

So, we have two examples, which mostly go in two different directions; we are inconclusive, right?

No, because one thing that Jeff didn't cover was logic.

Say I have a regular expression that needs to change based on some sort of logic. Imagine validating a document with several options: validating IDs per country, validating phone numbers by area, validating email addresses that belong to a certain company, etc. Even if we have only three choices in each category, this puts us at way too many separate regexes to maintain. We need to construct them dynamically.

Now, does anyone here think that regular expressions constructed using string concatenation are going to be easy to understand? Easy to write? Maintainable?
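To make that concrete, here is a minimal sketch of the idea. The builder below is a hypothetical toy, not the Pattern class from Jeff's example, but it shows how runtime choices can drive the construction while escaping stays in one place:

```csharp
using System;
using System.Text;
using System.Text.RegularExpressions;

// A toy fluent regex builder; real libraries offer far richer vocabularies.
public class RegexBuilder
{
    private readonly StringBuilder pattern = new StringBuilder();

    public RegexBuilder Literal(string text)
    {
        pattern.Append(Regex.Escape(text)); // escaping handled in one place
        return this;
    }

    public RegexBuilder Digits(int count)
    {
        pattern.Append(@"\d{").Append(count).Append("}");
        return this;
    }

    public Regex Build()
    {
        return new Regex("^" + pattern + "$");
    }
}

public static class Program
{
    public static void Main()
    {
        // Imagine the country comes from the document we are validating.
        string country = "US";

        RegexBuilder id = new RegexBuilder();
        if (country == "US")
            id.Digits(3).Literal("-").Digits(2).Literal("-").Digits(4);
        else
            id.Digits(9);

        Console.WriteLine(id.Build().IsMatch("123-45-6789")); // True
    }
}
```

The point is not the builder itself, but that the branching logic reads as logic, instead of being buried inside string concatenation and doubled-up escape characters.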

What about SQL? Do you think that you can build this search form by simply concatenating SQL? Do you think that would be maintainable?

I am sorry, but I don't see this approach working when you move beyond the static, one-off stuff. Sure, I write a lot of SQL to look at my data, but that is done in Management Studio, and it doesn't go into my application. I can't afford the maintainability issues that this would cause.
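The search form case can be sketched the same way. The query object below is a toy, not any particular library, but it shows how criteria accumulate from whatever the user filled in, with parameters kept out of the SQL string entirely:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// A toy query object that builds parameterized SQL from optional criteria.
public class SearchQuery
{
    private readonly string table;
    private readonly List<string> criteria = new List<string>();
    public readonly IDictionary<string, object> Parameters = new Dictionary<string, object>();

    public SearchQuery(string table)
    {
        this.table = table;
    }

    public SearchQuery Where(string column, object value)
    {
        string name = "@p" + Parameters.Count;
        criteria.Add(column + " = " + name); // placeholder only, no injection risk
        Parameters[name] = value;
        return this;
    }

    public string ToSql()
    {
        StringBuilder sql = new StringBuilder("SELECT Id, CompanyName FROM " + table);
        if (criteria.Count > 0)
            sql.Append(" WHERE ").Append(string.Join(" AND ", criteria.ToArray()));
        return sql.ToString();
    }
}

public static class Program
{
    public static void Main()
    {
        // Imagine these came from a search form; either may be empty.
        string country = "USA";
        string city = null;

        SearchQuery query = new SearchQuery("Customer");
        if (!string.IsNullOrEmpty(country)) query.Where("Country", country);
        if (!string.IsNullOrEmpty(city)) query.Where("City", city);

        Console.WriteLine(query.ToSql());
        // SELECT Id, CompanyName FROM Customer WHERE Country = @p0
    }
}
```

Each optional search field becomes one conditional call; the string concatenation version of the same form grows a thicket of "do I need a WHERE or an AND here?" bookkeeping.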

The other side of open source

While most of the people working and using open source are fun to work with, there are some people who are... not.

A discussion on the nature of OSS development turned into an exchange of stories about the "I'm using your stuff, you owe me big time" characters that appear every so often among the otherwise great communities.

Some of those encounters range from the one who thinks that I work for them: "I ran into this issue with your stuff, I need you to fix it by Wednesday."

The blackmail attempts are almost amusing: "If you don't add this feature (three months of development and human sacrifice may be involved) then I am never using your stuff again, and neither will my company."

To those that have Seen the Light and feel the need to spread it: "Why are you wasting your time on such expletive expletive expletive stuff, are you so stupid you don't realize that Xyz is so much better than your expletive expletive expletive."

I removed the expletives because they were fairly unimaginative.

The weirdest so far was a guy who tried to convert me to Islam over the course of exchanging three emails.

So, what is your best horror story?


Not my code.

I am not even going to try to describe it in any way, shape, or form.

No way in, no way to debug, check or fix.

In Reflector, the ViewId setter is 80 lines long and contains enough logic to start evolving on its own.

I have spent the entire day fighting the CRM, trying to get it, not to work right (that I consider beyond hope), but just to reach the barely functioning level that seems to be the normal state of affairs there.

This is just the latest in a long series of errors that have made this day an utterly frustrating, annoying, aggravating mess.

I am going home.








Mocking Boo

Okay, I built this to relax a bit, because I am extremely annoyed at the moment. I apologize in advance for the code quality; it is a POC only, but still, I wouldn't generally release it like this.

What is this? Do you see the highlighted bit at the bottom? This is Boo code that invokes a macro at compile time. It will generate an adapter and an interface, so you don't have to do it manually. The implementation code is below:


class AdapterMacro(AbstractAstMacro):

    def Expand(macro as MacroStatement):
        if macro.Arguments.Count != 1 or not macro.Arguments[0] isa ReferenceExpression:
            raise "adapter must be called with a single argument"
        entity = NameResolutionService.Resolve(macro.Arguments[0].ToString())
        raise "adapter only accepts types" unless entity.EntityType == EntityType.Type
        BuildType(macro, entity)

    def GetModule(node as Node) as Boo.Lang.Compiler.Ast.Module:
        return node if node isa Boo.Lang.Compiler.Ast.Module
        return GetModule(node.ParentNode)

    def BuildType(macro as MacroStatement, type as IType):
        adapter = ClassDefinition(Name: "${type.Name}Adapter")
        # (elided in the original post: the field is added to the adapter's members)
        Field(Name: "theTarget", Type: SimpleTypeReference(type.FullName))

        ctor = Constructor()
        ctor.Parameters.Add(ParameterDeclaration("target", SimpleTypeReference(type.FullName)))
        # (elided: the constructor body assigns the parameter to theTarget)
        ReferenceExpression(Name: "target")

        adapterInterface = InterfaceDefinition(Name: "I${type.Name}")
        adapter.BaseTypes.Add(SimpleTypeReference(adapterInterface.FullName))
        for member in type.GetMembers():
            AddMethod(adapter, adapterInterface, member) if member isa IMethod

    def AddMethod(adapter as ClassDefinition,
            adapterInterface as InterfaceDefinition,
            method as IMethod):
        return unless method.IsPublic

        interfaceMethod = Method(Name: method.Name)
        forwarder = Method(Name: method.Name)

        if method.ReturnType.IsByRef or method.ReturnType.IsArray:
            pass # (elided)

        args = []
        for param in method.GetParameters():
            if param.IsByRef or param.Type.IsArray:
                pass # (elided)
            # (elided: a ParameterDeclaration with
            #  Name: param.Name, Type: SimpleTypeReference(param.Type.FullName)
            #  is added to both the interface method and the forwarder)
            args.Add(ReferenceExpression(param.Name))

        mie = MethodInvocationExpression(
            Target: AstUtil.CreateReferenceExpression("theTarget.${method.Name}"))
        forwarder.ReturnType = SimpleTypeReference(method.ReturnType.FullName)
        interfaceMethod.ReturnType = SimpleTypeReference(method.ReturnType.FullName)
        if method.ReturnType == typeof(void):
            pass # (elided)

Dynamic Methods

I don't hear it talked about much, but the CLR has a very efficient way to generate code at runtime. This is probably because the code generation is accessible through IL generation only, and that is not for the faint of heart. Nevertheless, it has some very useful uses. NHibernate utilizes this approach to avoid the costs of reflection, for instance.

Let us take a look at a simple scenario: we want to translate any delegate type with two parameters into a call to an instance method on our class:

public class Program
{
	private static void Main(string[] args)
	{
		new Program().Execute();
	}

	private void Execute()
	{
		// instance that has events that we want to subscribe the adapter to
		DataGridView dataGridView1 = new DataGridView();
		EventInfo ei = dataGridView1.GetType().GetEvent("RowPrePaint");

		ParameterInfo[] pia = ei.EventHandlerType.GetMethod("Invoke").GetParameters();

		MethodInfo methodInfo = this.GetType().GetMethod("Handler",
			new Type[] { typeof (object), typeof (object) });

		DynamicMethod mtd = new DynamicMethod("", typeof (void),
			new Type[]
				{
					typeof (Program), // this
					pia[0].ParameterType, // sender
					pia[1].ParameterType // e
				}, this.GetType(), true);

		ILGenerator gtr = mtd.GetILGenerator();
		gtr.Emit(OpCodes.Ldarg_0); // this
		gtr.Emit(OpCodes.Ldarg_1); // sender
		gtr.Emit(OpCodes.Ldarg_2); // e
		gtr.Emit(OpCodes.Call, methodInfo);
		gtr.Emit(OpCodes.Ret);

		// generate a delegate bound to this object instance
		Delegate dynamicDelegate = mtd.CreateDelegate(typeof(DataGridViewRowPrePaintEventHandler), this);
		// register the adapter
		ei.AddEventHandler(dataGridView1, dynamicDelegate);

		dataGridView1.GetType().GetMethod("OnRowPrePaint", BindingFlags.NonPublic | BindingFlags.Instance)
			.Invoke(dataGridView1, new object[] { null });
	}

	// method that handles the call
	public void Handler(object x, object y)
	{
		Console.WriteLine("{0}: {1}, {2}", this.GetHashCode(), x, y);
	}
}

Take into account that you are probably going to want to cache the generated method anyway, but this is a cool, if long-winded, way of achieving this. Personally, in this scenario I would probably simply write a reflection based wrapper; the complexity doesn't really have justification in such a case, but this is just an example, of course.
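For completeness, the simpler route can be sketched without any IL at all. Since .NET 2.0, Delegate.CreateDelegate allows binding a method whose parameter types are less specific than the delegate's, so a loosely typed handler can be attached directly. The Publisher type below is a stand-in for the DataGridView in the example above:

```csharp
using System;
using System.Reflection;

// Stand-in event source; imagine DataGridView and RowPrePaint instead.
public class Publisher
{
    public event EventHandler<EventArgs> Changed;

    public void RaiseChanged()
    {
        if (Changed != null)
            Changed(this, EventArgs.Empty);
    }
}

public class Program
{
    public int Calls;

    // Loosely typed handler; its parameter types are base types of the event's.
    public void Handler(object sender, object e)
    {
        Calls++;
        Console.WriteLine("handled: {0}, {1}", sender, e);
    }

    public static void Main()
    {
        Publisher publisher = new Publisher();
        Program target = new Program();

        EventInfo ei = publisher.GetType().GetEvent("Changed");
        MethodInfo handler = typeof(Program).GetMethod("Handler");

        // Contravariant delegate binding: no ILGenerator required.
        Delegate adapter = Delegate.CreateDelegate(ei.EventHandlerType, target, handler);
        ei.AddEventHandler(publisher, adapter);

        publisher.RaiseChanged();
        Console.WriteLine(target.Calls); // 1
    }
}
```

This covers the common case; the emitted-IL version earns its keep when the delegate and handler shapes don't line up as neatly as this, or when the per-call cost of reflection actually matters.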

SQL Gotchas

I think you can imagine the amount of pain involved in having a query behave in an unexpected manner. I have run into both of these recently. This one had me doubting my sanity (imagine this on a table with several hundred thousand records, with a fairly complex query around it):

select 1

where 1 not in (2,3,null)

And then there is this interesting little query:

select 1 where 'e' = 'e   '

I refused to believe the result until I saw it myself. (The first query returns nothing: comparing against NULL yields UNKNOWN, so NOT IN can never evaluate to true when the list contains a NULL. The second returns a row, because ANSI SQL comparison rules pad strings, ignoring trailing spaces.)

Hibernating Rhinos 6 - Code Search Engine

This is a screen cast that was spawned as a result of the discussion on the ALT.Net mailing list about the ideal IDE. Glenn Block mentioned that something that would be cool is:

Ability to instantly search for a specific artifact (kind of like Google / windows live search). As I type it in, I see the filtered results.

I like challenges, and I happened to know some components that can make this very easy, so I set out to build the foundations of a code search engine that can match the above requirements. Now, refining it to the point where it is usable should take about a day or two, I think, but all the basics are there.

  • Length: 28:07:00
  • Download size: 40Mb
  • Code starts at: 1 minute mark

This is basically glue code, so be aware of that. It is meant to show you how, not to show production level code that handles all the required scenarios.

You can download the screen cast here. As usual, the sound quality is probably suspect, and I recorded it at 2AM, so I don't sound my best there.

Using Expect.Call( void method )

It looks like there is some confusion about the way Rhino Mocks 3.3's Expect.Call support for void methods works.

Let us examine this support for a moment, shall we? Here is all the code for this feature:

public delegate void Action();

public static IMethodOptions<Action> Call(Action actionToExecute)
{
	if (actionToExecute == null)
		throw new ArgumentNullException("actionToExecute", "The action to execute cannot be null");
	actionToExecute(); // record the expectation by actually invoking the mocked method
	return LastCall.GetOptions<Action>();
}

As you can see, this is simply a method that accepts a no-args delegate, executes it, and then returns the LastCall options. It is syntactic sugar over the usual "call the void method and then call LastCall". The important concept here is to realize that we are using C#'s* anonymous delegates to get things done.

Let us see how it works:

public void InitCustomerRepository_ShouldChangeToCustomerDatabase()
{
	IDbConnection mockConnection = mocks.CreateMock<IDbConnection>();
	Expect.Call(delegate { mockConnection.ChangeDatabase("myCustomer"); });

	RepositoryFactory repositoryFactory = new RepositoryFactory(mockConnection);
}

This is how we handle void methods that have parameters, but as it turns out, we can do better for void methods with no parameters:

public void StartUnitOfWork_ShouldOpenConnectionAndTransaction()
{
	IDbConnection mockConnection = mocks.CreateMock<IDbConnection>();
	Expect.Call(mockConnection.Open); // void method call
	Expect.Call(mockConnection.BeginTransaction()); // normal Expect call

	RepositoryFactory repositoryFactory = new RepositoryFactory(mockConnection);
}

Notice the second line: we are not calling the mockConnection.Open() method, we are using C#'s ability to infer delegates, which means that the code actually looks like:

Expect.Call(new Action(mockConnection.Open));

Which will of course be automatically executed by the Call(Action) method.

I hope that this will make it easier to understand how to use this new feature.

Happy Mocking,


* Sorry VB guys, this is not a VB feature, but I am not going to rewrite VBC to support this :-) You will be able to take advantage of this in VB9, though, so rejoice.

Silverlight RTL

And on the heels of the Blame Microsoft post, here is a perfect example. Silverlight doesn't support Right to Left languages. This means that if I want to display Hebrew or Arabic content, I am flat out of luck.

This is usually the place where you sharpen a steak knife and go to a meeting with a lot of red faced people, and you need to wear ear muffs.

Until an angel appears...

Justin Angel went ahead and built RTL support for Silverlight. Check the links for details, but there is a CodePlex project, a webcast and an online demo.

Rendering comparison:

  • Normal Silverlight: RTL (Right-to-Left) not supported; align-to-right not supported.
  • Silverlight Hebrew & Arabic support: RTL supported; align-to-right supported.

No one was fired because they bought Microsoft

And just to be clear, I don't agree with this sentiment, but it is a very real one.

Times have changed
Our applications are getting worse
They run so slow and won't behave
The code is ugly and perverse!
I tell you, that application is depraved!

Should we blame the PM?
Or blame the developers?
Or should we blame the process?
No! blame Microsoft
Everyone: Blame Microsoft

Blamability is an important concept; the facts don't really matter, but the ability to blame someone else, preferably something as ambiguous as Microsoft, is a good way to have an out.

Hibernating Rhinos 5: Hot Code Swapping

After reading about Erlang, I got very excited about hot code swapping and always-on applications. I decided that this is something that would be cool to do on the CLR. So I did; it was very easy.

The screen cast is a short one, less than 25 minutes, but it covers all the concepts, and we have dynamically updated code in the end. :-)

As usual, the sound quality is suspect, and I am probably speaking too fast.

  • Total length: 00:24:12
  • Download Size: 34.2 Mb
  • Code starts at 2:05

Memorable code:

I think it is telling that I am using the Command pattern to print hello world.

You can get the code here

The ideal IDE

We are discussing the ideal IDE on the Alt.Net mailing list, I thought that it would be interesting to gather all the requirements in one place.

I made some minimal attempt to reduce duplication and to pull out the interesting parts, but check the discussion, it is still going on. This is not a list of "what is wrong with VS"; it is a list of things that we want to see in an IDE (or not see in an IDE :-) ). IDE is defined as the place where I write code / debug / test.

Good quotes:

  • I want to be a train speeding full steam and have this IDE laying track just in front of me until such time as I tell it I want to go a different direction.
  • I just want a lightening fast intelligent editor that will let me choose to add on the features I care about. I don't want to have to deal with lag time.
  • Give me something like TextMate for .NET. Build it on managed code. Make the AST model/API sensible and self-evident. Provide a user-centricity driven plugin model - no abstracterbation for programmers with technology fetishes. Plain, simple, self-evident. Extensibility and usability for the merest of mere mortals.
  • So, do you really need Snippet Compiler with ReSharper editing features and debugger?

And now to the real stuff that was mentioned:

  • Coding:
    • Syntax highlighting
    • Intellisense (I may think I am a jedi but true zen is to talk to the machine and have it put forth the words before you)
    • Code Browsing
    • Refactoring
    • Treat Warnings as Errors as not negotiable (always enabled).
    • i want it to be geared towards the keyboard developer.
  • Text manipulation:
    • Regex manipulation of the text.
  • Debugger, good one.
    • Edit and continue
  • Testing integration
  • Setting up all the required stuff, what is compiled, embedded, etc.
    • Preferably it should read my build script and work on top of that.
    • i want the build system to be pluggable
    • i want a light project structure. If anything is going to be stored in a file it needs to be in a readable form and would preferable have a lightweight API that ships with it.
  • Source control integration - nice, but not really a must have, I use tortoise for that.
    • Some opinions said that SCM should not be in the IDE.
  • Extensible
    • Should be able to add a new language easily.
    • Should offer me the AST of the code when I build a plugin.
    • Make it have a light version, where you can unplug stuff you find unnecessary. ??
  • So as far as an IDE is concerned I want to do everything from inside it like deployments
  • Smarts:
    • Error highlighting (errors, warnings, test failures, bad practices, ...)
    • A Code Smell detection system (users could extend it with CQL)
    • i want it to give me visual cues to interesting code constructs
    • All existing R# features
    • IDE support for effect analysis (Chapter 11 of Working Effectively with Legacy Code).
    • automatically correct these issues: flase > false, reslut = result, retrun = return
  • User interface:
    • Full screen ability
  • Background compilation as you type (I hate waiting for VS to compile my solutions all the time).
  • Code searching:
    • A very fast way to get from the current function you are in to any caller or to any function the function is calling (and class definitions, ...)
    • Ability to instantly search for a specific artifact (kind of like google / windows live search). As I type it in, I see the filtered results. This is NOT the find feature.
  • Misc:
    • The ability to have 2 people working in separate window panes on the same file (one person typing on the local keyboard/mouse, the other in a remote desktop); both visible in the same environment.
    • IDE installation < 100Mb
    • Shell integration would be nice. and available from the keyboard (think launchy)
    • For windows PowerShell integration would be nice too.
    • How about an IDE that doesn’t lock up when the source control is unavailable ;)
    • better multiple monitor support.
    • Configuration screens that don’t give me a headache
  • Responsive:
    • I am willing to pay for a quad core computer with 16 GB RAM, and use that for only programming. But it _must_ be responsive. (I used VS.Net on such a system, it crawled!)
  • Designers:
    • i don't want any of the overhead of a designer.
    • Designers that don’t break or no designers at all
  • Built in (no separate window) reflector
    • I should be able to CTRL+B from the method to the code or to the disassembled code.
  • Release cycle:
    • I want an IDE which is released early and often. And I want an IDE built by a team which is responsive to community feedback. The rest is just details.
    • Multiple Releases a year, constant improvement.

High performance domain models

Udi has an interesting presentation that I recommend that you go through. He is going to present it at Tech Ed (Thu Nov 8 13:30 - 14:45 Room 117).

Most of the ideas are familiar to me because I have spoken to him about them before, but they represent new concepts to most people. I would preface his suggestions with the usual warning about designing for performance. Udi's points are about big systems, so consider whether they are appropriate to your scenario first.

A pal of mine once told me that he designs systems for an order of magnitude increase in the requirements. So a system that works for a hundred users should also scale to a thousand users (additional hardware, maybe, but the architecture should hold), but going beyond that will require more than just throwing hardware at the issue.

This sounds like a good rule of thumb.

Model View Action

Adam has an interesting discussion here about handling common actions in MonoRail. It has sparked some discussion on the MonoRail mailing list. I wanted to take the chance to discuss the idea in more detail here.

Basically, he is talking about doing this:

public class IndexAction : SmartDispatcherAction
{
   private ISearchableRepository repos;
   private string indexView;

   public IndexAction(
      ISearchableRepository repos,
      string indexView)
   {
      this.repos = repos;
      this.indexView = indexView;
   }

   public void Execute(string name)
   {
      ISearchable item = repos.FindByName(name);
      if (item == null)
         PropertyBag["UnknownSearchTerm"] = name;

      PropertyBag["Item"] = item;
   }
}

And then registering it with the routing engine like this:

new Route("/products/<name>",
    new IndexAction(new ProductRepository(), "display_product"));
new Route("/categories/<name>",
    new IndexAction(new CategoryRepository(), "display_category"));

Now, accessing "/categories/cars" will give you all the items in the cars category.

On the face of it, it seems like a degenerate controller, no? Why do we need it? We can certainly map more than a single URL to a controller, so can't we solve that problem that way?

Let us stop for a moment and think about the MVC model. Where did it come from? From Smalltalk, when GUI was something brand new & sparkling. It is a design pattern for a connected system. In that case, the concept of a controller makes a lot of sense.

But when we are talking about the web? The web is a disconnected world. What is the sense in having a controller there? An Action, or a Command, pattern seems much more logical here, no?

But then we have things that just don't fit this model. Consider the example of CRUD on orders. We can have a controller, which will handle all of the logic for this use case in a single location, or we can have four separate classes, each taking care of a single aspect of the use case.

Personally, I would rather have the controller do the work in this scenario, because this way I have all the information in a single place, and I don't need to hunt through multiple classes in order to find it.

But there are a lot of cases where we do want just this single action to happen, or maybe we want to add some common operations to the controller, without having to get into crazy inheritance schemes.

For this, MonoRail supports the idea of Dynamic Actions, which allow seamless attachment of actions to a controller.

Hammett describes them best:


DynamicActions offers a way to have an action that is not a method in the controller. This way you can “reuse an action” in several controllers, even among projects, without the need to create a complex controller class hierarchy.

The really interesting part in this is that we have both IDynamicAction and IDynamicActionProvider. This means that we get mixin-like capabilities.

Dynamic Actions didn't get all the love they probably deserve; we don't have a SmartDispatcherAction equivalent (yet), so if we want to use them, we will need to work with the raw request data, rather than with the usual niceties that MonoRail provides.

Nevertheless, on a solid base it is easy enough to add.
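To illustrate what attaching such an action looks like, here is a compilable sketch. The Controller and IDynamicAction below are minimal stand-ins for the Castle MonoRail types, kept just rich enough to show the mixin-like registration by name:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the MonoRail contracts discussed above.
public interface IDynamicAction
{
    void Execute(Controller controller);
}

public class Controller
{
    // actions attached by name at runtime; no inheritance involved
    public readonly IDictionary<string, IDynamicAction> DynamicActions =
        new Dictionary<string, IDynamicAction>();

    public readonly IDictionary<string, object> PropertyBag =
        new Dictionary<string, object>();

    public void Send(string action)
    {
        DynamicActions[action].Execute(this);
    }
}

// A reusable action that can be attached to any controller, in any project.
public class LatestNewsAction : IDynamicAction
{
    public void Execute(Controller controller)
    {
        controller.PropertyBag["latestNews"] = new string[] { "headline 1", "headline 2" };
    }
}

public static class Program
{
    public static void Main()
    {
        Controller products = new Controller();
        products.DynamicActions["latestnews"] = new LatestNewsAction();

        products.Send("latestnews");
        Console.WriteLine(((string[]) products.PropertyBag["latestNews"]).Length); // 2
    }
}
```

The same LatestNewsAction instance type can be registered on any number of controllers, which is exactly the "reuse an action without a class hierarchy" point from Hammett's quote.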

Now all we need to solve is routing the requests to the correct action, right? This is notepad code, so it is ugly and not something that I would really use, but it does the job:

public class ActionRoutingController : Controller
{
	public delegate IDynamicAction CreateDynamicAction();

	public static IDictionary<string, CreateDynamicAction> Routing 
		= new Dictionary<string, CreateDynamicAction>();

	protected override void InternalSend(string action, IDictionary actionArgs)
	{
		if (Routing.ContainsKey(action) == false)
			throw new NoActionFoundException(action);
		// create the routed action and execute it (the original snippet
		// ended at the guard clause)
		Routing[action]().Execute(this);
	}
}

What this means is that you can now do this:

public void AddRoutedActions()
{
	AddRoutedAction("categories", "/categories/<name:string>", delegate {
		return new IndexAction(new CategoryRepository(), "display_category");
	});
	AddRoutedAction("products", "/products/<name:string>", delegate {
		return new IndexAction(new ProductRepository(), "display_product");
	});
}

public void AddRoutedAction(string action, string url, CreateDynamicAction actionFactory)
{
	// register the URL pattern with the routing engine (the registration
	// call was cut off in the original; presumably RoutingModuleEx)
	RoutingModuleEx.Engine.Add(
		PatternRule.Build(action, url, typeof(ActionRoutingController), action));
	ActionRoutingController.Routing.Add(action, actionFactory);
}

And get basically the same result.

Again, all of this is notepad code, just doodling away, but it is nice to see that all the building blocks are there.

Microsoft, SubSonic and Open Source

Rob Conery has just announced that he is going to work for Microsoft. That is interesting, but not really surprising or shocking. Microsoft does seem to hire a lot of the bloggers in the .Net space.

What is surprising is the role that he is expected to fill in Microsoft. He is going to work full time on SubSonic, an Open Source project.

Why is this surprising? Because to date, I haven't heard of any other case where Microsoft has paid developers to work full time on OSS that didn't come directly from Microsoft. This is a fairly common model in the OSS world, but this is the first time that Microsoft has made such a move.

A very welcome move from Microsoft, and congratulations to Rob.

Template Controllers

Sharing common functionality across controllers is something that I have run into several times in the past. It is basically the need to offer the same functionality across different elements of the application.

Let us take a search page as an example. In my current application, a search page has to offer rich search functionality to the user: the ability to do pattern matching, so that given a certain entity, it matches all the relevant related entities that can fit that entity. Match all openings for a candidate. Match all candidates for an opening.

That is mostly unique, but then we have a lot of boilerplate functionality, which ranges from printing, paging, security, saving the query and saving the results, changing the results, exporting to XML, loading saved queries and saved results, etc. Those requirements are shared among several search screens.

On the right you can see one solution to this problem, the Template Controller pattern. Basically, we concentrate all the common functionality into the Base Specification Controller.

What you can't see is that the declaration of the controller also has the following generic constraints:

public class BaseSpecificationController<TSpecification, TEntity> : BaseController
	where TSpecification : BaseSpecification<TEntity>, new()
	where TEntity : IIdentifable

This means that the base controller can perform most of its actions on the base classes, without needing to specialize just because of the different types.
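A compilable sketch of the shape (every type name here is a simplified stand-in for the real application's types, just enough to show how the new() constraint lets the base controller do the work):

```csharp
using System;
using System.Collections.Generic;

// Stand-ins sketching the Template Controller pattern from the post.
public interface IIdentifable { int Id { get; } }

public abstract class BaseSpecification<TEntity>
{
    public abstract IList<TEntity> Execute();
}

public abstract class BaseSpecificationController<TSpecification, TEntity>
    where TSpecification : BaseSpecification<TEntity>, new()
    where TEntity : IIdentifable
{
    // shared behavior lives here: paging, printing, export, saved queries, ...
    public IList<TEntity> Search()
    {
        TSpecification specification = new TSpecification(); // new() constraint
        return specification.Execute();
    }
}

// A concrete search only supplies its own types and extra actions.
public class Candidate : IIdentifable { public int Id { get; set; } }

public class CandidateSpecification : BaseSpecification<Candidate>
{
    public override IList<Candidate> Execute()
    {
        return new List<Candidate> { new Candidate { Id = 1 } };
    }
}

public class SearchCandidatesController
    : BaseSpecificationController<CandidateSpecification, Candidate> { }
```

The concrete controller body can stay nearly empty; everything generic runs against BaseSpecification<TEntity> and IIdentifable, so new searches are mostly a matter of declaring the two type parameters.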

Yes, a dynamic language would make things much easier, I know.

Note that while I am talking here about sharing the controller logic between several controllers, we can also do the same for the views, using shared views. Or not, which is useful if we want a different UI for each search.

In fact, given that we need to show a search screen, it is not surprising that we would need a different UI, and some different behavior for each search controller, to get the data required to specify a search.

Now that we have the background all set up, let us see what we can do with the concrete search controllers, shall we?

You can see their structure on the right. The search candidates controller is doing much more than the search orders controller, but a lot of the functionality between the two is shared. And more importantly, easily shared.

Well, if you consider the generics syntax above easy, at least.

The main advantage of this approach is that I can literally develop a feature on the candidates controller, and then generalize it to support all the other searches in the application.

In this scenario, we started with searching for candidates, and after getting the basic structure done, I moved on to working on the search for orders.

At the same time, another developer was implementing all the extra functionality (Excel export, sending SMS and emails, etc.).

After I finished the search orders page, we merged and refactored up most of the functionality that we needed in both places.

This is a good approach if you can utilize inheritance in your favor. But there is a kink: if you want to aggregate functionality from several sources, then you are going to fall back to delegation or duplication.

Adam has an interesting discussion about this issue, and an interesting proposition. But that is for another post.

Deal Breakers

Occasionally I get to try something that sounds really good, but falls down in the execution because of the small details. Let us take Firefox Prism as an example. The idea is good: I get a separate Firefox window just for Gmail, or other always-on apps. Note that I am using it as an example only, because I just ran into it. Prism is still in beta, so it is just an example.

This means that if the browser crashes I don't lose my unsaved email, and more importantly, I can close Firefox (and thus release the memory) without closing my mail.

Here is why I won't be using it for now:

[Two screenshots: Prism on the left, Firefox on the right]

Can you spot the difference between the two images?

The Prism one is on the left, Firefox on the right. Prism doesn't display a caret in Gmail's rich edit box. That is a deal breaker for me, because I can't really type without it.

I. Can't. Type. Without. The. Caret. Period.

The point that I am trying to make is that you need to consider the deal breakers for the user. Even something as small as the caret can be a deal breaker.

In my last project, the first release was completely useless to some of the users, because a select list didn't contain all the old values, and we couldn't bring those values over, because their presence depended on too many other things. Those users had to wait for the second release to use the application.

Deal breakers are the other side of the killer features, and are just as important when you analyze a solution. Examples of deal breakers in typical software include:

  • Not supporting the old application's keyboard shortcuts
  • A web application replacing a local app - latency
  • Different fonts (true story! Priority 0 bug!)
  • Compatibility issues

Real World NHibernate: Reducing startup times for large amount of entities

The scenario that Christiaan Baes needed to solve is reducing the startup time of a WinForms application. The main issue here is that the initial load of the application should be fast, but in this case we are feeding NHibernate about a hundred entities, so it takes a few seconds to process them.

I asked Christiaan to send me profiler results of the code, and it all looked right on his end, so it was time to look at NHibernate and see what she had to say about that.

The test scenario was startup time for a thousand entities. I think that this is a nice shiny number that probably represents way too many entities per application, but let us go with it.

Initial testing showed that for this amount, we get about 14 seconds of startup time, just to get the session factory started. Now that wasn't right, I felt. NHibernate does a lot of reflection on startup, but even so, 14 seconds was quite a bit. A deeper analysis showed that I was wrong: it wasn't reflection that was costing so much. It was actually building the Configuration that took the time, not building the session factory.

Why was it taking so long? Well, NHibernate uses XML for configuration, and that XML is validated against a schema. That validation took 11 seconds for a thousand documents.


I played a bit with the code, but it remained steady at 11 seconds just for reading the configuration in. Eventually I tried merging the 1,000 XML documents into a single big document, and that had a significant effect: it dropped the time to just over 3 seconds, for an overall startup time of 5.5 seconds for a 1,000-entity session factory.

If you are going this route, I would strongly suggest that you do this merging as a pre-build step, rather than try to work with a single unwieldy artifact.
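
As a sketch of what such a pre-build step might look like: the mapping namespace below is the one I believe NHibernate-era hbm.xml files use, and the handling of per-file root attributes is deliberately simplified; adjust both to your own setup.

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

public static class MappingMerger
{
    // Collapses many small mapping documents into one, so the schema
    // validation cost is paid once instead of once per file.
    public static XDocument Merge(IEnumerable<string> mappingFiles)
    {
        XNamespace nh = "urn:nhibernate-mapping-2.2"; // assumed mapping schema namespace
        XElement root = new XElement(nh + "hibernate-mapping");
        foreach (string file in mappingFiles)
        {
            XDocument doc = XDocument.Load(file);
            // Note: attributes on each file's root (assembly, namespace)
            // are dropped here; they must be identical across files, or
            // pushed down onto the individual <class> elements first.
            foreach (XElement element in doc.Root.Elements())
                root.Add(element);
        }
        return new XDocument(root);
    }
}
```

Running this at build time keeps the individual mapping files as the editable source of truth, while the application only ever loads the single merged artifact.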

Still not good enough, I hear you think, right?

Here are a few other suggestions that are worth keeping in mind:

  • Why are you initializing it on startup? And does it have to be on the main thread? If you can push it to a background thread, then this is usually all that you would need to do.
  • Do you really need all the entities from the get-go? If not, you can create two session factories: one to serve as a fast-initializing factory, able to respond to the initial requests. At the same time, initialize the global session factory in the background, and then swap them.
  • Do you really need all those entities in a single session factory? Having a lot of entities is usually a sign that several mixed domains are involved. It would probably be better to split them up into different session factories.
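
The first suggestion above (building the session factory off the main thread) can be sketched like this; the Func<object> stands in for the expensive Configuration-plus-BuildSessionFactory call, which I have not reproduced here:

```csharp
using System;
using System.Threading;

public class BackgroundInitializedFactory
{
    private object factory; // stand-in for NHibernate's ISessionFactory
    private readonly ManualResetEvent ready = new ManualResetEvent(false);

    public BackgroundInitializedFactory(Func<object> buildFactory)
    {
        Thread builder = new Thread(delegate()
        {
            factory = buildFactory(); // the slow part runs off the main thread
            ready.Set();
        });
        builder.IsBackground = true;
        builder.Start();
    }

    public object Factory
    {
        get
        {
            ready.WaitOne(); // only the earliest callers may have to wait
            return factory;
        }
    }
}
```

The application starts immediately; only a caller that needs a session before the background build finishes will block on WaitOne.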

If all of the above still isn't enough for you, the next step is persisting the Configuration itself (probably using serialization). That is not supported by NHibernate at the moment, although I am more than willing to accept a patch that adds this functionality.

Hope this helps...

Rhino Mocks 3.3

Well, it has been a couple of months, but we have a new release of Rhino Mocks. I would like to thank Aaron Jensen, Ivan Krivyakov and Mike Winburn for contributing.

Probably the two big new features are the ability to call void methods using Expect.Call and the ability to use remoting proxies to mock classes that inherit from MarshalByRefObject. Rhino Mocks will now choose the appropriate mocking strategy based on the type you want to mock.

Note: you cannot pass constructor arguments or create partial mocks with remoting proxies.
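
As a hedged usage sketch of the void-method support: this assumes the Rhino.Mocks assembly is referenced, and the IMessageSender interface is invented for illustration.

```csharp
using System;
using Rhino.Mocks;

public interface IMessageSender
{
    void Send(string message); // a void method we want to set an expectation on
}

public static class VoidExpectationDemo
{
    public static void Run()
    {
        MockRepository mocks = new MockRepository();
        IMessageSender sender = mocks.CreateMock<IMessageSender>();

        // Before 3.3, void methods could not go through Expect.Call;
        // wrapping the call in an anonymous delegate makes it possible:
        Expect.Call(delegate { sender.Send("hello"); })
            .Throw(new InvalidOperationException("sender is down"));

        mocks.ReplayAll();
        try
        {
            sender.Send("hello");
        }
        catch (InvalidOperationException)
        {
            // expected: the mocked void method threw as configured
        }
        mocks.VerifyAll();
    }
}
```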

The change log is:

Bug Fixes: 

  • Fixing an inconsistency in how Is.Equals handled ICollection instances: now it will compare them by their values, not by their Equals().
  • Fixing a NASTY bug where setup results that were defined in a using(mocks.Ordered()) block were not registered properly.
  • Changing the error message to "are you calling a virtual (C#) / Overridable (VB) method?" - making it friendlier to the VB guys.
  • An exception during Record will no longer trigger a ReplayAll(), which could mask the exception.
  • Adding a check for running on Mono to avoid calling Marshal.GetExceptionCode in that case.

New features:

  • Adding support for calling void methods using Expect.Call
  • Adding remoting proxies.
  • Made IMethodOptions generic, allowing compile-time type safety on Return.
  • Adding a PublicFieldConstraint

As usual, you can get it here