Ayende @ Rahien

It's a girl

Another Entity Framework opinion

I ran across this post, which I found interesting:

If someone give me a set of classes that doesn't bring a A to Z solution to a problem, sorry but I don't call it a Framework ... I call it a Base Class Library. I've been very supportive to the Entity Framework Team (I gave design feedback through multiple channels) but now I think I'm done.

This resonates very well with what Udi Dahan said in a comment on Frans' post.

Microsoft is a platform company.

They build technologies that partners can build stuff on top of.

It's what they've always done.

It's what they're continuing to do.

Unfortunately, many developers in the Microsoft community don't know/understand this, thinking that these technologies are supposed to be used to build applications directly. This often causes overly complex codebases.

The EF team's decision is consistent with providing a platform for partners to build their own ORMs on.

That being said, I don't care very much for a platform - just as I wouldn't drive the chassis of a car.

How to get good people?

If I knew the answer, I would bottle it and be rich. (insert mad laughter)

Lacking a bulletproof answer, the following are my observations on this issue. A while ago I was the chief interviewer in my company, and I had the chance to interview literally hundreds of people. My own conclusions match Joel's from 2005. The good people are already employed elsewhere, and are likely to be happy there. If they aren't happy, they tend to have the connections to find a job based on their known skills and experience.

In other words, unless something like the bubble bursting has happened, you aren't going to find the good people in your interviews. This means that you have to look elsewhere for them.

I know that this isn't widely applicable, but I am using this blog as a good way of finding good people. The audience that reads this blog is already highly self-selecting, and likely to share many of my conceptions about many things. I have had several client engagements that resulted from this blog, and the caliber of the environments I worked in was significantly higher than the industry average.

From the other side, the one thing that I am absolutely against is the "get a new job, close the blog" mentality. I wanted to point out a couple of friends who did that, but I just checked, and both of them are blogging, which I somehow missed :-(

Anyway, what I am trying to say is that online presence matters. I have several friends who never had a blog, and wouldn't consider opening one. The direct implication is that some of the smartest people I have worked with are only known through word of mouth. Real story: I was at a client site once, half-listening to a hallway conversation while waiting for a conference call, and I heard the following: "Did you hear how Muhammad* saved [big cellular provider] when their [not important now]..."

Some people have taken my suggestion to heart and opened a blog. It doesn't really matter what they are talking about; what is important is that they do it. Creating interactions in the community, creating a name for themselves. Getting recognized. Reputation matters, and having access to someone's thought stream over time is incredibly valuable in trying to estimate their abilities.

In the end, however, there is really no substitute for working with someone to actually evaluate their skills (which is a transient thing) and their abilities (which tend to change far more slowly, and what I really care about). Any attempt to short circuit the process is going to end in tears.

* That is his real name, and I had the pleasure of having him as my boss for over two years. Sadly, he doesn't have a blog. And yes, this is my own gentle way of trying to get him one.

Can you succeed without good people?

The answer is a resounding yes, but for a given value of success.

A bad team can still get a product out, and that product may even be successful. But the cost this requires is far greater than it would otherwise have been, often to the point where you have no choice but to scrap the project. The really sad part is that some people accept this cost as the true cost of building software.

Oh, and good people don't necessarily equate to experts.

You need a few good people, but the rest of the team can follow a fairly simple checklist approach to development. There are a lot of stories about teams like that, and most of them end with a big project failure. Usually, this is because there was a hard separation between the people the business considered the high end and the regular developers. This usually causes the people who actually dictate the architecture and shape of the system to become totally oblivious to any design issues that they may inflict on the rest of the team.

Having a mixed team, with the architecture and the implementation done by the same group, is a much better proposition in the long term. Note, however, that I am explicitly not saying that the whole team should take part in the design. This is an ideal circumstance, and I haven't found it to be either common or even desirable in many cases. Design by committee is rarely a good idea, and I much prefer either design by addition or design by feature / feature batch. Again, those depend on the type of team that I am working with.

Another thing to note is the difference between technological experts and business experts. In both cases we are talking about developers, but technological experts are good for one thing: clearing the way so the developers who actually understand the business can get things done, fast.

I will reference again my posts about JFHCI, as a good example of how I think those things should be done.

Why is Remoting so painful?

Yes, I know, 2003 called and asked to get its distribution technology back. Nevertheless, remoting is an extremely useful tool, if you can make several assumptions about the way that you are going to use it.

In my case, I am assuming inter-process, local-machine configuration, with high expectations of reliability from both ends. Considering that I also need low latency, it seems like an appropriate solution indeed. I was pretty happy about this, until all my integration tests started to break.

After a while, I managed to figure out that the root cause was this error: Because of security restrictions, the type XYZ cannot be accessed.

Now, it worked, and it worked for a long time. What the hell is going on?

After thinking about this for a while, I realized that the major thing that changed was that I am now signing my assemblies. And that caused all hell to break loose. I managed to find this post with the solution, but I am still not happy. I really dislike things that can just go and break on me.
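For reference, the commonly cited fix for this error is to raise the remoting formatter's type filter level back to Full when registering the channel. Here is a minimal sketch of what the server-side registration might look like (the port number is made up for illustration):

```csharp
using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Serialization.Formatters;

public static class RemotingSetup
{
    public static void RegisterChannel()
    {
        // The binary formatter defaults to TypeFilterLevel.Low, which refuses
        // to deserialize many types; Full restores the permissive behavior.
        var serverProvider = new BinaryServerFormatterSinkProvider
        {
            TypeFilterLevel = TypeFilterLevel.Full
        };

        var properties = new Hashtable { { "port", 9090 } }; // illustrative port

        ChannelServices.RegisterChannel(
            new TcpChannel(properties, null, serverProvider), false);
    }
}
```

The same setting can also be applied through the app.config remoting section instead of code.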

The worst checkout experience EVER

I swear, I was just about to back away from the entire thing.

It started with me trying to create a new account. I had to go through about seven screens to do so. Then there was the big warning about gmail accounts not being acceptable, then there was the mandatory formatting of address & phone numbers to match what their system wanted from me.

I kind of like their licensing agreement.


Argh, that is so stupid. I am trying to give you money, why are you trying so hard to make this hard on me?

NH Prof: Configuration Story

I think that I mentioned that NHibernate Profiler works mostly by doing some smarts on top of the log output from NHibernate. That is not exactly the case, but it is close enough. The problem with working through the logs is that there are roughly 30 lines of XML that you need to get right in order to manage this properly.

The first time I sent this to anyone else, he ran into problems with the configuration because of very subtle issues. For a while now, I have had a ticket saying that I need to document what the failure scenarios are, and how to deal with them.

Today, as I sat down to deal with this ticket, I decided that it is wrong to even try. That would be shoving my own problems onto my users. I shouldn't do that; if the configuration is hard to do, that is my own issue, not theirs. I should bear the burden of complexity.

As such, I spent some additional time getting this to work in the smoothest way possible. The end result is that in order to use NHibernate Profiler in your application, all you need to do is add the following line at the application startup (Main, Application_Start, etc).
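The actual snippet isn't reproduced here; assuming the profiler appender API as it was eventually published, the startup call looks something like this:

```csharp
public static class Program
{
    public static void Main(string[] args)
    {
        // This single call hooks the NH Prof appender into NHibernate's
        // logging pipeline, replacing the 30-odd lines of log4net XML.
        HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();

        // ... normal application startup continues here
    }
}
```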


That is it :-)

Setting expectations straight

I am currently working on getting a beta version of NH Prof out, but I ran into a problem. There are several features that I intend to put into the release version that I didn't have the time to actually put in. Those are usually features that are good to have, but not necessarily important for the actual function of the tool. One of them is saving & loading the captured data. Currently, I am working on more important things, so I didn't implement it.

However, I do want to make it clear that it will be supported, and more than that, I want to make it clear that it is not there yet.

Using my amazing WPF skills, I managed to whip up the following:


This is done via a style set on MenuItem. It means that the only thing I need to do to mark a menu item visually as unusable is to set its Tag attribute to NotImpl. I love this ability.
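The style itself didn't make it into this post; a hypothetical sketch of what a MenuItem style keyed on Tag="NotImpl" might look like:

```xml
<Style TargetType="MenuItem">
    <Style.Triggers>
        <!-- Any menu item tagged NotImpl is shown grayed out and disabled -->
        <Trigger Property="Tag" Value="NotImpl">
            <Setter Property="Foreground" Value="Gray" />
            <Setter Property="IsEnabled" Value="False" />
            <Setter Property="ToolTip" Value="Not implemented yet" />
        </Trigger>
    </Style.Triggers>
</Style>
```

The property names are standard WPF; the exact visual treatment (gray text, tooltip) is my guess at what the screenshot showed.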

This is how it looks:


NH Prof: A testing story

Remember that I mentioned the difference between working software and production-quality software?

One of the things that separates the two is that with production-quality software, you don't need to know which buttons not to push. Here is a simple example. For a while now, if you tried to bring up two instances of NH Prof, the second one would crash. That isn't something you really want to show the users. Today I got back to doing NH Prof stuff, getting it ready for the public beta, and I decided that the first thing to do was to tackle this easy fix.

Doing this is actually not an issue; testing it, however, was a problem. I have two application instances and four layers to go through. Here is the first test:

public void When_creating_two_instances_of_application_will_tell_the_other_to_pop_up()
{
    var bus = MockRepository.GenerateStub<IBus>();
    var listener = new NHibernateAppenderRemotingLoggingEventListener(bus);

    try
    {
        var anotherBus = MockRepository.GenerateStub<IBus>();
        var anotherListener = new NHibernateAppenderRemotingLoggingEventListener(anotherBus);
        Assert.Fail("Exception was expected");
    }
    catch (AnotherApplicationIsAlreadyRunningAndControlWasMovedToItException)
    {
    }

    bus.AssertWasCalled(x => x.Publish(null),
        options => options.Constraints(Is.TypeOf<BringApplicationToFront>()));
}
The logging event listener is the reason we can't bring up two instances of the application. We use it to listen to running applications, and having several profilers running at one time isn't going to be helpful. And just to deal with the nitpickers, the four layers I mentioned are: communication layer, backend, front end, and the actual user interface.

Here is an example that tests communication between the back end and the front end:

public void When_bring_up_to_front_message_is_published_observer_should_raise_event_for_UI()
{
    var facility = new BusFacility();
    var observer = new ModelBuilderObserver(action => action());
    bool wasBroughtToFront = false;
    observer.ShouldBringApplicationToFront += () => wasBroughtToFront = true;

    facility.Bus.Publish(new BringApplicationToFront());

    Assert.IsTrue(wasBroughtToFront);
}

As for the actual UI code, I am not sure how to test it. I did a manual test, but I am not sure that I am happy about that. Then again, we are talking about testing this line of code:

observer.ShouldBringApplicationToFront += () => Activate();

I can just test that the event was subscribed to, but I don't really see this as valuable.

Ayende's Observation on the state of software

Slightly revised with help from Matt Campbell:

The quality of a software system is inversely proportional to the amount of money it handles.

I intentionally call it an observation rather than a law, because I don't think there is any inherent reason for it to be this way; that is just the way it is.

On NHibernate Quality

A while ago Patrick Smacchia posted an analysis of NHibernate 2.0 in NDepend. Go read it; it is interesting, not only because it gives insight into how to utilize NDepend, but also as a good indication of problematic areas inside NHibernate.

The NHibernate code base is aimed at solving a very complex problem, and I will be the first to admit that there are a lot of improvements that I would be delighted to make if I had infinite time to do so. In fact, some places in the code annoy me to the point where I make a point of putting them up on this blog as an example of what not to do.

The main problem is that, as I said, NHibernate is aimed at solving a very complex problem. And we already know that you can't escape complexity. Where I think NHibernate does a great job is in dealing with this complexity for you, and making sure that your code is clean and clear.

As such, I can live with imperfections in the code, as long as I don't hit them. When I do, I take out the hammer and deal with them. But the #1 criterion for the NHibernate code base is that we will make a whole lot of effort to ensure that the client code is clean.

As long as we manage to do that, we are doing a good job.

Presentation Styles

It is sometimes hard to believe, but I have been giving talks all over the world for quite a while now. It is said that practice makes perfect, and I think that practice has certainly made me a much better presenter.

The story about how I learned to handle public speaking is entertaining, and I get to tell it quite a lot.  But this post is not about that old story, it is about a new story. I have been doing a lot of presentations for a long while now, and I can track how I improved as a speaker since the beginning.

Detailed bullet points is probably by far the most common style, and it is usually considered flawed. This type of presentation is also referred to as PowerPoint poetry. You can safely assume that I am not overly fond of this style.

Recently, I have leaned much more toward the presentation zen style of slides. For that matter, I consider the slides a distant third in importance when giving a session. The first is the ability to actually talk, and the second is good familiarity with the material.

Before the first & second, however, there is something that is far more important, at least to me. And yes, I am aware of the impossibility of priority zero. Language. I am not a native English speaker, and speaking in English, especially public speaking, is not something that came naturally to me. I still remember doing a talk and switching to Hebrew during the introduction without noticing...

But I have been practicing English quite a lot lately (apparently speaking only English for long periods of time does help), and it shows. It got to the point where I sometimes automatically speak in English, which I find highly disturbing at times. Oh, and if someone can explain to me the process by which I select which language to speak, I would be grateful; I am extremely interested in that, but I have no clue how it works.

Anyway, given that you are actually able to speak coherently in the language that you are going to use, there is one important thing that you must do before you start prepping for a presentation. Decide what your goal is, given your constraints.

Constraints are things like:

  • What is the level of the people in the audience?
  • What is their familiarity with the subject at hand?
  • How complex is the topic?
  • How much time do you have?

The goal is affected by your constraints, not the other way around. And the sad thing is that often enough, it is so easy to get it wrong.

The basic problem is that most of the time, it is so bloody hard to get it right. It is easy to spend hours on a topic, because it deserves to have hours dedicated to it. But most often you have an hour or so, and that is it. And in that timeframe, you have to decide what you want to do.

Broadly, there are several options:

  • High level vision - what you can do, why you want to do it, how does this help you make your life better. In technical discussion, you might also want to be able to do a demo or two, but you keep it high level, and don't get mired in the code. The main goal is to give the audience the concepts and the understanding.
  • Introduction - show what is going on, demos, skim the surface, don't get too deep, with some minimum level of introduction to the concepts that are being exposed. The main goal is to give the audience an actionable start. They can go home and do something with this.
  • Detailed overview - this is a focused discussion on a specific topic. Usually this assumes some level of familiarity with the subject from the audience, and we can dive into the details and discuss a topic or two at length. Note that a topic is a small matter, but we can cover it quite well. The goal is to shine a spotlight into a particular area, and give the audience a lot more knowledge about it.
  • Blow their mind - despite the name, and the fact that I did a presentation just like that very recently, I am not sure that I like this style at all. Somehow, I feel that this is cheating. Basically, in this style of presentation, you identify some problem that your presentation can help solve, and then you do the best demo you can to get an impressive result. The reason that I feel like I'm cheating is that it is usually a non-actionable presentation. You usually can't do something with it right away; you would need to do a lot more to get the real benefits out of it. This also takes a whole lot of prep time, and you had better be able to judge the audience and see if you get the appropriate reaction. If you can't keep up the pace of the presentation, you are going to flop badly.

At Oredev, I did three talks and a workshop, which fall naturally into those categories. Peter Hultgren had this to say about my Active Record presentation:

The best presentation was probably “Using Active Record to write less code” by Ayende Rahien, which was cocky and super motivating. Even though I have some experience using ActiveRecord, and pretty much knew about the features he brought up, I had the same “wow” reaction as everyone else did. If you can deliver an ad-hoc presentation which preaches to a converted and makes him want to re-convert, then you’re on to something.

Leaving aside being immensely flattered by him, that was a "blow their mind" approach, in which I purposefully tried to stir the pot. I assumed that calling everyone in the room a criminal would make sure that it wouldn't be a popular presentation, but apparently it was quite popular. The theme of that talk was Persistence is a Solved Problem.

As I said, this is a dangerous technique. Another talk that I did in this style was the ReSharper talk at DevTeach almost a year ago. That one is a great example of a flopped presentation. I wasn't able to keep up the wow effect, and I basically lost the audience midway through.

At Oredev, I also gave "Producing Production Quality Software", which was a high-level talk. That one went well too, and it mostly involved telling a story. The key part of this presentation is involving the audience. Since I was talking to a crowd of IT guys, it wasn't hard to get them to commiserate with my war stories about production failures.

I also gave a Rhino Mocks presentation, which unfortunately was the last talk on the last day. Here I made a major mistake: I failed to read the mood of the audience and wasn't able to adjust to a crowd that was simply a bit too tired to fully participate. This is something that I should have handled better, a lesson I hope to apply in the future.

The interesting part about the Oredev presentations is that we had only 50 minutes for each one. Most other conferences give you an hour to an hour and a half. That gives the speaker a lot more time, and it made prepping for Oredev much more... interesting. On the one hand, it is hard to squeeze almost fifteen minutes out of a session. On the other, it does mean that you have a very succinct talk if you manage to do so.

That is another important aspect of the constraints I mentioned before. Which style of presentation you use is greatly influenced by the amount of time that you have. In particular, for most people, showing code is the most time-consuming part of a presentation, except for writing code. And there are few things worse than leaving the audience hanging without any input from you while you are busy typing code.

If you can cut the writing-code part down to less than 20 seconds without input to the audience, it will work; otherwise, you need to prepare ahead of time. Something that I like to do in my presentations is a lot of ad hoc coding. Sometimes it works, sometimes it doesn't.

When it does work, it tends to impress people. When it doesn't, I crash & burn :-) I try to have a backup plan for such scenarios, but I need to actually notice in time that I need to move to it. Another tip: you have ten seconds to debug a problem in your presentation; if you try to do anything more than that, you had better back up, blame Murphy, and move on.

For that matter, watch the audience. You need to match the presentation to the people actually listening to it. Always start a presentation by syncing up with the audience. Who are they? What do they know about the subject? Tell them what you are going to talk about, how this is going to help make their life better.

In the timeframe of most presentations, you can't really give a whole lot of information. You just don't have enough time, and worse, you don't have a consistent audience, which would allow you to start from a level ground. What you can do, however, is to raise their interest. If after a talk I give, people in the audience go home (or on next Monday), and start looking up the things that I talked about, then I know that I have been successful.

Not a Production Quality Software

A while ago I worked at a bank, doing stuff there, and I was exposed to their internal IT structure. As a result of that experience, I decided that I would never put any money in that bank. I am in no way naive enough to think that the situation is different in other banks, but at least I didn't know how bad it was elsewhere. In fact, that experience led me to the following observation:

There is an inverse relationship between the amount of money a piece of code handles and its quality.

The biggest bank in Israel just had about 60 hours of downtime. Oh, and it also provides computing services for a couple of other banks, so we had three major banks down for over two days. The major bank, Hapoalim, happens to be my bank as well, and downtime in this scenario means that all of the systems in the bank were down. From credit card processing to the internal systems, and from trading systems to their online presence and their customer service.

From what I was able to find out, they managed to mess up an upgrade and went down hard. I was personally affected by this: when I arrived in Israel on Sunday morning, I wasn't able to withdraw any money, and my credit cards weren't worth the plastic they are made of (a bit of a problem when I need a cab to go home). I am scared to think what would have happened if I were still abroad while my bank was in system meltdown and inaccessible.

I was at the bank yesterday, one of the few times that I actually had to physically go there, and I was told that this is the first time that they have ever had such a problem, and the people I was speaking with have more than 30 years of working for the bank.

I am dying to know what exactly happened; not that I expect I ever will, but professional curiosity is eating me up. My personal estimate of the damage to the bank is upward of 250 million, in addition to reputation & trust damage. That doesn't take into account the lawsuits that are going to be filed against the bank, nor the additional costs they are going to incur just from what the auditors are going to do to them.

Oh, conspiracy theories are flourishing, but the most damning piece as far as I am concerned is how little attention the media has paid to this issue overall.

Leaving aside the actual cause, I am now much more concerned about the disaster recovery procedures there...

[Politics] Unforgivable, Unacceptable and Unforgettable

I don't generally comment on politics in this blog, but this is something that I cannot ignore.

I was just watching the evening news, and I saw Ehud Barak, Israel's Minister of Defense, state (for the record!) that the goal of the current government is to reduce the number of missiles launched at Israel to 10 - 7 a month.

Allow me to repeat that.

The Minister of Defense for Israel, as part of answering a parliamentary inquiry, has stated the official position of the government. And that government goal, the thing it strives for, is to have missiles launched at Israel.

I almost puked when I heard that, and I am still furious.

Aspects of Domain Design

One of the things that I like most about conferences is that they are the place to meet and exchange a whole boatload of ideas with a large number of people. Most of the time, this happens at a normal level, just far more frequently than usual. That alone is reason enough to attend conferences.

Sometimes, however, you get into a conversation that really opens your mind. That is what Aslam Khan managed to do to me in about five minutes of hallway conversation. That short talk literally opened up for me a whole new way of thinking about the utilization of aspects and other behavioral patterns. I am going to try to explain it in this post.

Aspects are a way to add behavior to an object. Usually we are talking about being able to add pre and post actions to method calls on the fly. Aspects are commonly used to handle cross cutting infrastructure concerns. The typical examples are: logging, transactions, security and caching.

In the past, I have put domain logic in aspects, and came to regret it very much, to the point where I believed that the only reason to use aspects was to handle cross-cutting infrastructure concerns. Aslam changed my view on that.

Let us take the typical bank account example, with the Withdraw method. Ideally, I want the Withdraw method to contain just the core logic relating to its operation, so it would look something like this:

public virtual Status Withdraw(Money money)
{
    amountInAccount -= money;
    return Status.Success;
}

However, what is going to happen when we have the following business rule?

You may not withdraw from the account in the first week after it was opened; you may only deposit.

As usual, a stupid business rule, but the example doesn't matter. The question is: where does this behavior belong? Something that you should take into account is that there are likely to be many such rules, and any answer that you give should handle this scenario gracefully.

According to Aslam, the answer for this is in an aspect. Something like this:

// This aspect is invoked around the Account.Withdraw method
public class MayNotWithdrawFromAccountOnFirstWeek : AnAspectOf<Account>
{
    public override void BeforeExecution()
    {
        // Cancel the withdrawal while the account is less than a week old
        if ((DateTime.Now - Entity.DateCreated).Days < 7)
            ReturnValue = Status.ActionCancelled;
    }
}

This allows us to maintain the single responsibility principle and creates a clean separation between the core entity behavior and extended behavior, which is usually implemented in a service layer somewhere.
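The AnAspectOf<T> base class is Aslam's, and its implementation isn't shown in this post; a hypothetical minimal sketch of the contract the example above implies (an Entity property, and a ReturnValue that short-circuits the intercepted call) might look like this:

```csharp
// Hypothetical base class, for illustration only. In a real implementation,
// the interception plumbing (a dynamic proxy, a container facility, etc.)
// would instantiate the aspect, set Entity, and honor ReturnValue.
public abstract class AnAspectOf<TEntity>
{
    // The entity instance whose method is being intercepted
    public TEntity Entity { get; set; }

    // When the aspect sets this, the intercepted method is skipped
    // and this value is returned to the caller instead
    public object ReturnValue { get; set; }

    // Called by the interceptor before the target method executes
    public abstract void BeforeExecution();
}
```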

My first thought when I heard this (I have a deep-rooted suspicion of aspects for domain logic, remember) was that I wanted to make this explicit, perhaps by having the entity publish some sort of event that the container can route to all subscribers. Thinking about this further, I realized that the code to actually raise the event is exactly the kind of cross-cutting technological concern that I want to use an aspect for anyway.

I am not sure about the implications of this technique, but it resonates very well with my JFHCI notion.

The NHibernate Community

In response to Davy's post about the rise of interest in NHibernate, I decided to run the numbers.

Nine months ago we created the NHibernate Users group, which has taken off quite nicely, with over a thousand members and a very active community.


For that matter, the stats for NHibernate downloads are quite nice as well.


The dip toward the end is just before we released NHibernate 2.0.


Parallel Monologues

I was sitting in Claudio Perrone's talk about agile tales of creative customer collaboration. It was a good talk, and the first one I have seen that actually had a teaser video.

One statement that he said really caught my attention: Parallel Monologues.

I really like the term :-)



Implementing Methodologies

I am writing this in a bar in Malmö, sitting with a bunch of guys and discussing software methodologies. I had an observation that I really feel I should share.

Most software methodologies make a lot of implicit assumptions. The most common implicit assumption is that the people working on the software are Good People. That is, people who care, who know how, and who are willing to do the work.

This works quite well in the early-adopter phase, since most early adopters trend toward this set of qualities anyway. The problem is when the early adoption phase is over and the methodology hits the mainstream. At this point, the implicit assumption about Good People no longer holds true.

Just to annoy Scott Bellware, here is a big problem that I can find with Lean, just from the things I have heard about Lean from him. Toyota hires the top 2% of the engineers they interview. A lot of the methodology assumes a competent team, or at the very least, a competent chief engineer.

I don't think that I need to state that competency is not universal.

If your methodology assumes competence, and most of the methodologies that we (as in I and like-minded people) would find favorable do assume competence, it is going to fail in the trenches.


Building Frameworks: It is one day or three months

Quite often, people remark to me that I set unreasonable time estimates for work because I am somehow much better than anyone else. I strongly disagree with this statement, both with the qualitative assessment of my skills and with the claim that these are unreasonable timeframes.

There are several things that I am doing that I think people are ignoring. One of them is not starting from zero, but starting already at high speed. But that isn't the major thing here. The major thing is that people fail to notice that I am giving those estimates for very well-defined things. You want to build an IoC container? It takes less than ten minutes to build a useful container. And I built an application on top of a container that wasn't much more complex than that.

That isn't a good IoC tool by any means, but it took very little time to build, and it works. If you click the link to see the container code, the container whose code you are reading is involved in serving that very page. It works and it is fit for purpose.

That is a major thing to remember. A lot of the really useful concepts are trivially simple to build if what you need is very simple. You can do that in a day.
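To make the "useful in a day" claim concrete, here is a sketch of a deliberately naive constructor-injection container; this is an illustration in the same spirit, not the actual code from the linked post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A trivial IoC container: map service types to implementations, and let
// Resolve satisfy constructor dependencies recursively.
public class TrivialContainer
{
    private readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();

    public void Register<TService, TImpl>() where TImpl : TService
    {
        registrations[typeof(TService)] = typeof(TImpl);
    }

    public TService Resolve<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    private object Resolve(Type service)
    {
        Type impl;
        if (registrations.TryGetValue(service, out impl) == false)
            impl = service; // allow resolving concrete types directly

        // Pick the greediest constructor and resolve each of its parameters
        var ctor = impl.GetConstructors()
            .OrderByDescending(c => c.GetParameters().Length)
            .First();
        var args = ctor.GetParameters()
            .Select(p => Resolve(p.ParameterType))
            .ToArray();
        return ctor.Invoke(args);
    }
}
```

Calling Register<IBus, DefaultBus>() and then Resolve<SomeService>() would build the whole object graph. No lifecycles, no configuration, no error handling; that gap is exactly the difference between the one-day version and the three-month version.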

And I include in this concepts such as OR/M, Aspect Orientation, IoC, Logging, Hot Code Swapping, Message Buses, Search Engines, Event Brokers and many other basic infrastructures.

The catch is that in a day, you get something that works, for the scenario that you need it for. If you want to take it to the next level, however, this requires a much higher degree of investment.

Coming back to the title of this post, you either do it in a day, or you dedicate three months for it.

Stealing from your client

I gave a (great) talk yesterday, introducing Active Record. During this talk, I reused Jeremy's statement about data access: if you write data access code, you are stealing from your client.

Frans Bouma puts it beautifully:

No customer should accept that the team hired for writing the line-of-business application has spend time on writing a report-control or for example a grid control. So why should a customer accept to pay for time spend on other plumbing code? Just because the customer doesn't know any better? If so, isn't that erm... taking advantage of the inability of the customer to judge what the 'software guys' did was correct?

The context for Frans' post was a design decision from the Entity Framework team. I read Frans' post first, and I didn't quite know how to react to it, until I saw what the proposed design is. You can read the entire EF post here, but I'll try to summarize it so you can actually understand the point without going and reading the whole thing.

The problem is supporting change tracking in disconnected scenarios. In particular: you take an object from the database and send it to the presentation layer; some time later, you get the object back from the presentation layer and persist it to the database again. There is a whole host of bad practices and really bad design decisions implicit in that problem statement, but we will leave those alone for now.

This is still a pretty common scenario, and it is more than reasonable that you would want your data access approach to support this.

Here is how you do this using NHibernate:

session.Merge( entityFromPresentationLayer );

Frans' LLBLGen supports a very similar feature. In other words, this is the business of the data access framework to handle; it is none of the developer's business to do any such thing.
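To illustrate what that single call spares the developer, here is a hedged sketch of the reconciliation a merge-style API performs behind the scenes. The `Entity` and `Session` types below are stand-ins for illustration (in Java), not NHibernate's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in entity; in a real ORM this would be a mapped domain class.
class Entity {
    final long id;
    String name;
    Entity(long id, String name) { this.id = id; this.name = name; }
}

// Stand-in session with a first-level cache playing the persistence context.
class Session {
    private final Map<Long, Entity> cache = new HashMap<>();

    Entity load(long id) {
        // pretend this row came from the database
        return cache.computeIfAbsent(id, i -> new Entity(i, "original"));
    }

    // merge: the framework copies state from the detached instance onto
    // the managed one, so change tracking picks up the update automatically
    Entity merge(Entity detached) {
        Entity managed = load(detached.id);
        managed.name = detached.name;
        return managed;
    }
}
```

The point of the post is exactly that this field-by-field reconciliation lives inside the framework, so the application developer never writes it.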

The problem with the currently proposed EF design can be summarized in a single line.



This is code that you, as the user of EF, are supposed to write.

If you write this type of code, you are stealing from your client.

This design violates the Infrastructure Responsibility Principle: developers don't write infrastructure code for supported scenarios.

In other words, it is okay not to support a feature, but it is not okay to say that this is how you are supporting a feature.

This Is Broken, By Design

Why Øredev rocks

I spent a great day at the Øredev conference, and an even greater evening.

It started with a full day in the ALT.Net track, in which we agreed that ALT.Net shouldn't actually exist, and the main purpose of ALT.Net is to change its nature from alternative practices to merely a label for the set of principles and practices that we believe in.

I think that the name ALT.Net will change its meaning in the near to medium term, going from being an alternative to the mainstream .Net culture to subsuming that culture, or at least the high end of it. At that point, ALT.Net will no longer be alternative, but the name will still be used as a label for the set of practices that ALT.Net currently represents.

Moving on from there, we (Glenn, Scott, and I) had a great dinner, followed by several hours at a bar, just talking with speakers and attendees from Øredev.

I don't know if I mentioned it already, but just about everyone I met so far is very friendly.

After spending several hours in pleasant conversation over beer in the bar, we got kicked out (something that seems to happen to me regularly now), so a bunch of guys (and one gal) went to a night club. I will just hint that Scott Bellware and bicycles don't add up to a coherent discussion.

The night club itself was great. Except for Creepy Guy.

Let me put it this way, Creepy Guy made me think about date rape drugs and rectum reconstruction surgery. He made me extremely uncomfortable. I don't think I would have liked the retired-cremationist anyway, but I haven't had the chance to meet people as obnoxious and spooky as this guy in a long time.

Luckily, he was just one guy, and we were able to foist him off on one designated victim at a time. I blame the night club for my current state of non-drunkenness.

If I can still type in English, then I am not drunk.

Anyway, so far, I am really enjoying Øredev. I have two talks to give tomorrow, and it is now 3:30 AM local time, so I better lay off the laptop and get some sleep. If you are in Øredev, you might want to check my sessions for the amusement value alone.

Oh, and since I am going to be in DevTeach in a week or two, I dare the DevTeach attendees to get me drunker than that...

A Subtext feature request

Okay, Subtext is great software, tested in the only real test that matters: production. I have been using it for a long time now, and it has given me very few problems, which is the hallmark of truly great software. It does what it needs to do, and it doesn't bother the owner with the details.

Phil Haack, and the rest of the Subtext gang, have done a great job in creating a zero friction tool for blogging.

It is missing one thing, however: I would really like to have a breathalyzer test as an optional security measure for the blog. I don't want to be able to post while I am drunk, a state that seems to coincide with the end of a conference, or the end of a really good night at a conference.

I am sure that if you map the amount of controversy my posts generate and correlate it with the ends of the conferences in which I took part, you would find the work pretty straightforward.

As you can imagine, I deny being drunk at the moment. I am... slightly impaired in my thinking, perhaps, but I am not drunk.

I know that I am not drunk by the simple fact that I can type in English without making too many errors.

Nevertheless, I think that such a feature would be a great addition to Subtext. I am pretty sure that tomorrow I am not going to be certain that I want this post out.

A good conference

One indication of a good conference is that it completely swallows all the time that you have.

Øredev excels by this criterion. I am busy all the time, and having fun while doing so.

Why NH Prof isn't functional

One of the things that I wanted to do with NH Prof is to build it in a way that would be very close to the Erlang way. That is, functional, immutable message passing. After spending some time trying to do this, I backed off, and used mutable OO with message passing.

The reason for that is quite simple. State.

Erlang can get away with being a functional language with immutable state because it has a framework that manages that state for you, and allows you to replace your state all the time. With C#, while I can create immutable data structures, if I want to build a large scale application in this manner, I have to write the state management framework myself, which is something that I didn't feel like doing.
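The Erlang pattern being described can be sketched as a "process" that owns an inbox and replaces an immutable state value on each message. The sketch below is in Java and the names are illustrative, not NH Prof's actual code:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Erlang-style "process" sketch: an inbox plus an immutable state value
// that is replaced, never mutated, on each message.
class CounterProcess {
    record State(int count) {}                 // immutable state

    final Queue<Integer> inbox = new ConcurrentLinkedQueue<>();
    volatile State state = new State(0);

    // one turn of the receive loop: take a message, compute the next state
    void receiveOne() {
        Integer delta = inbox.poll();
        if (delta != null)
            state = new State(state.count() + delta);
    }
}
```

In Erlang, the runtime hands the new state back into the receive loop for you; in C# (or Java), you end up writing that plumbing yourself for every stateful component, which is exactly the state management framework the post decided not to build.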

Instead, I am using a more natural model for C#, and using the bus model to manage thread safety and multi threading scenarios.

By popular demand...

There have been a couple of items that people kept pinging me about. Here they are:

  • The Hibernating Rhinos webcasts are now offered as direct downloads, instead of torrent links. You can access them here.
  • I uploaded the Advanced NHibernate Workshop and it can be found here.  The code for them can be found here.
  • The blog now has an integrated search


The Cuyahoga Project is a .Net CMS based on NHibernate. It used to be the best application for beginners to look at for NHibernate patterns. Now I suggest that people look at it as a great sample app in general, not just for NHibernate patterns.

Among other things, it has been the engine behind ayende.com for the last four years or so. Along the way, it gave me very little trouble, and allowed me to manage my site with ease. A CMS, by nature, is an extensible system. But it is a tribute to Cuyahoga (and to what I am doing in the main site) that up until now, I didn't actually have to go into the code to make it do what I want.

I did my usual read-the-code-before-use, of course, but that was about it.

Today, I spent several hours implementing a very simple module in Cuyahoga. This module simply displays and redirects links. You can access the source for this module here, and general instructions on how to build Cuyahoga modules can be found here.

The reason for this post is that I had a change of mind regarding Cuyahoga. Before, I thought about it just as a good application. After, I think about it as a great framework. There is very little need for me to actually do anything in the module except what I actually want, and the overall design is clean, easy to understand and easy to follow. The hardest part was doing the UI, and that is as it should be.

But I do wonder about the name, why Cuyahoga?