Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by email or phone: +972 52-548-6969


A New Methodology: Blog Driven Development 2.0

time to read 3 min | 538 words

I would like to introduce a new Methodology for development. I call it Blog Driven Development and it is a modified version of XP.

Here is the entire Methodology in its simplest form:


Which projects is this Methodology fit for?

Just about any project you like, but I strongly recommend that you use BDD for Web 2.0 projects, since it meshes well with empowering the developers and results in a better product that can be readily consumed by millions of eager users.

As you can see, I'm practicing my own craft: I have been using BDD daily for the last two years or so, and I've been very successful so far.

Please note that this Methodology means that you Blog a Lot, Test a Little and Code a Little. The implicit assumption here is that you slowly shift the effort from coding and testing toward blogging. As blogging is a field that is far more deterministic than software development, I am sure you can imagine the productivity increases.

You can set clear goals and schedules with confidence. Imagine how much your standing will improve when you repeatedly meet deadlines.

Other benefits of Blog Driven Development:

  • Sending a blogger to a journalists' conference costs much less than sending a developer to a developer conference.
  • You are on top of the hype wave; using BDD to develop Web 2.0 projects is the Next Big Thing, I assure you.
  • You can reasonably expect to catch most "bugs" using Word's spelling & grammar checking.
  • Less code means fewer bugs.


I am currently taking a sabbatical in order to write the Tome Of Blog Driven Development; you can read a sample chapter here.

Castle And Bicycle Riding

time to read 2 min | 359 words

A while ago I was trying to explain the benefits of Castle to developers, and I just couldn't get through. It is pretty easy to explain the benefits of using Active Record, but I had a hard time explaining MonoRail and Inversion Of Control.

Here is my elevator speech for MonoRail:

MonoRail is a RAD MVC framework built on ASP.Net that supports the Front Controller pattern and forces strict separation between UI and Business Logic.

I ran into a lot of problems with this, since it doesn't convey the benefits to the people I am talking to. The responses that I got were:

  • RAD without wizards? There is no such thing.
  • MVC? Front Controller?
  • I already have separation between the UI (aspx) and the Business Logic (aspx.cs)

I can show how quickly I can create a page, but it will seem slow compared to doing the same thing in VS.Net, since I don't have a designer to "help" me. How do you explain the benefits of design patterns to those who don't feel the pain*? How can I explain the benefit of strict separation to those who think that aspx / aspx.cs is good enough separation?

I ran into the same problem when explaining Inversion Of Control; they couldn't see why they shouldn't just use new. Do you think that those things, like bicycle riding, can only be understood by actually trying them?
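The smallest version of the "why not just use new" argument I know of fits in a few lines. The posts here are about .Net, so a C# example would be the natural fit; this is a language-neutral sketch in Python instead, with entirely hypothetical class names:

```python
# Hypothetical example: a service that needs to send email.

class SmtpSender:
    def send(self, to, body):
        raise RuntimeError("would talk to a real SMTP server")

# Without Inversion of Control: the dependency is hard-wired with "new",
# so you cannot exercise confirm() without a live mail server.
class OrderServiceHardWired:
    def __init__(self):
        self.sender = SmtpSender()

    def confirm(self, order):
        self.sender.send(order["email"], "confirmed")

# With Inversion of Control: the dependency is handed in from outside.
class OrderService:
    def __init__(self, sender):
        self.sender = sender

    def confirm(self, order):
        self.sender.send(order["email"], "confirmed")

# A test can now substitute a fake that just records the calls.
class FakeSender:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

fake = FakeSender()
service = OrderService(fake)
service.confirm({"email": "a@example.com"})
print(fake.sent)  # [('a@example.com', 'confirmed')]
```

The hard-wired version cannot be tested without a real SMTP server; the injected one can. That gap is the pain that people who have never tried it don't feel.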

I know the motivation behind those patterns, but I don't think that I managed to make the people I was talking to see the advantages of this new way of doing things.

* Those Who Don't Feel The Pain - I like this phrase.

Test Driven Development Is About Failing

time to read 2 min | 349 words

This post has nothing to do with Rocky's comments on DNR

I was testing NHibernate in ways that I never did before*, and the test kept failing. Each time I would fix one thing, and another part would break. It wasn't very obvious how to make it work. I had a guy watching over my shoulder while I worked through the kinks of the problem. After the test failed for the 10th time or so, he muttered something about "never making it work right" and walked away to do something else.

To me, TDD is all about failures. I know that I need to do something only when I get a failing test. The failing test kept pointing me to this error and that error, and I kept researching and fixing the errors. Eventually I had enough understanding of what I was trying to do, and of how NHibernate was trying to accomplish it, that I could get it to work (perfectly). All in all, it took about 30-40 minutes of trying.

I am not upset to see failing tests; they are the best sign I have that something is wrong.

This is a Failing Test.
There are many other tests, but this one is a Failing Test.
My Failing Test directs me where I need to be going.
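That loop - run, fail, learn something from the failure, fix, run again - is easy to show in miniature. A hypothetical example (not the NHibernate scenario above, and in Python rather than .Net for brevity), where each failure message points straight at the assumption that was wrong:

```python
import re

# First attempt: assume quantities always look like "<number> <unit>".
def parse_quantity(text):
    number, unit = text.split()
    return int(number), unit

# The failing test tells us exactly which assumption broke:
# "10kg" has no space, so split() cannot unpack into two parts.
try:
    assert parse_quantity("10kg") == (10, "kg")
except ValueError as err:
    print("failing test says:", err)

# Fix driven by that failure; the same test now passes.
def parse_quantity(text):
    match = re.fullmatch(r"(\d+)\s*([A-Za-z]+)", text)
    return int(match.group(1)), match.group(2)

assert parse_quantity("10kg") == (10, "kg")
assert parse_quantity("10 kg") == (10, "kg")
print("all green")
```

Nothing in this toy is clever; the point is that the failure itself is the information, exactly as in the NHibernate session above.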

* Highly complex schema with some indirection thrown in to add to the mix.

Avoid Mixing O/RM and SQL

time to read 3 min | 478 words

It is common for people who are just starting to use an O/RM to get into a situation that they don't know how to solve using the O/RM (or they are not even aware that such a thing is possible). The easiest example is loading just the Id and Description for a certain class, without loading all the (possibly expensive) properties.

There are several ways to do this using most O/RMs, but they are usually not obvious to the beginner. This is usually a pain point, since beginners can easily see how they would do it using plain SQL, but they either don't know how to use the tool appropriately, or they just don't think it is possible using the O/RM. The immediate solution is to just use plain SQL (or a stored procedure) to solve this one problem.
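In NHibernate this kind of partial load is typically done with a projection query, selecting only the properties you need. No O/RM ships with a standard library, so here is a minimal Python sketch (sqlite3 standing in for the database, with hypothetical table and class names) of the property this post argues for: both the full load and the cheap Id/Description projection live behind one mapper, so SQL never leaks into the rest of the system:

```python
import sqlite3

# Hypothetical schema: products have an expensive blob column we
# usually don't want to pull just to show a picklist.

class Product:
    def __init__(self, id, description, spec_sheet):
        self.id, self.description, self.spec_sheet = id, description, spec_sheet

class ProductMapper:
    """The single data access layer: all SQL lives here."""
    def __init__(self, conn):
        self.conn = conn

    def get(self, id):
        # Full load: the whole object, expensive columns included.
        row = self.conn.execute(
            "SELECT id, description, spec_sheet FROM products WHERE id = ?",
            (id,)).fetchone()
        return Product(*row)

    def summaries(self):
        # Projection: only the cheap columns, skipping the blob.
        return self.conn.execute(
            "SELECT id, description FROM products").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, description TEXT, spec_sheet BLOB)")
conn.execute("INSERT INTO products VALUES (1, 'Widget', x'00')")
print(ProductMapper(conn).summaries())  # [(1, 'Widget')]
```

The callers never see SQL; whether `summaries()` is implemented as an HQL projection or hand-written SQL is a detail hidden inside one layer, which is exactly what goes missing when quick SQL fixes are scattered around the code.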

This is wrong because it pokes big holes in the model you show the rest of the system. It means that the model one part of the system is using no longer matches the model the rest of the system is using.

This is bad because you lose one of the key benefits of using an O/RM: abstracting the database. A good O/RM will always let you work against the object structure, not the data structure. This is important because it frees you from dealing with a potentially ugly schema and lets you work with a good model all the time.

In nearly all cases, you can use the options that the O/RM gives you and get what you need without dropping to SQL. That is not to say that SQL isn't important. It is very important, because O/RMs tend to be leaky abstractions. It is just that you shouldn't reach for the familiar tool the moment you hit a problem without an obvious solution.

What I'm more afraid of is that using SQL to solve the immediate problem will start to spread through the code, since "it is already the way we did it in XYZ". And that is a whole new can of worms. Letting this spread means that you effectively have two data access layers in a single application, for the same model. I trust that I don't need to explain why this is bad. Just the synchronization between the two will be a pain, and considering the way the SQL access layer is likely to appear (growing from quick solutions to problems), I am willing to bet that it is not going to exhibit good design (at any level).

If you really need to use both, make sure that you investigate the options for solving the problem using the O/RM first, and then make sure that the solution goes through the front door with all the bells and whistles (Design, Tests, Code).

Rocky Lhotka on TDD, Take #2

time to read 7 min | 1226 words

Rocky took the time to clarify his comments on TDD in dotNetRocks #169. There seem to be a couple of main points there:

As a testing methodology TDD (as I understand it) is totally inadequate... ...the idea that you’d have developers write your QA unit tests is unrealistic

TDD is not a QA methodology. TDD is about designing the system so it is easy to test. This makes you think about what the code does, rather than how it does it. The end result is usually interfaces that are easier for the client to work with, coherent naming for classes/interfaces/methods, a lower level of coupling, and higher cohesion across the system. Then you run the test.

The idea in TDD is not to catch bugs (although it certainly helps to reduce them). The idea is to make sure that developers think about the way they structure their code before they start writing it. It also means that developers have far more confidence to make changes, since they have a test harness to catch them if they break something.

And it was pointed out on Palermo ’s blog that TDD isn’t about writing comprehensive tests anyway, but rather is about writing a single test for the method – which is exceedingly limited from a testing perspective.

You start with a single test, to check the simple case; then you write another test, for a more complex case; and so on. The purpose is to assert what you are going to do, do it, and then verify that it does what you asserted it should. QA isn't going away in TDD, for the reasons that Rocky mentions in his post (developers rarely try to break their app the way a tester does). QA and developers take two different approaches. When I get a bug from QA, I write a failing test for this bug, and then I fix it. Now I have a regression test for this bug.
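That progression can be sketched in a few lines. A hypothetical example (Python's unittest standing in for NUnit, and a made-up bug number): simple case first, then a more complex case, then a regression test written from a QA report before the fix went in:

```python
import unittest

def word_count(text):
    # Current implementation: str.split() with no argument collapses
    # runs of whitespace, which is what fixed the QA bug below.
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_single_word(self):
        # The first test: the simplest case.
        self.assertEqual(word_count("hello"), 1)

    def test_multiple_words(self):
        # The next test: a more complex case.
        self.assertEqual(word_count("hello brave world"), 3)

    def test_bug_extra_whitespace(self):
        # QA reported that strings with extra whitespace were counted
        # wrong (the old code split on a single space). This test was
        # written first and failed, the fix went in, and it now stays
        # behind as a regression test.
        self.assertEqual(word_count("  a  b "), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))  # failures: 0
```

The tests document the simple-to-complex path and pin every bug that QA found; none of this replaces QA, it just keeps their findings fixed.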

When I touch this part in six months' time, I can do it with confidence. I know that:

  • I will not break any of the assumptions of the original developer - I have tests that verify the original functionality works.
  • I will not regress any bugs that were found before - I have tests that verify each bug stays fixed.
  • Any future maintainer of the code will not break the stuff I'm doing now - I write tests that verify the new functionality is working.

Those, and a clear design, are the main benefits of TDD, in my opinion. Again, I agree that this doesn't remove the need to have a QA team that hates the product and really wants to break it :-).

I don't buy into TDD as a “design methodology" either. You can't "design" a system when you are focused at the method level. There’s that whole forest-and-trees thing... ...But frankly I don’t care a whole lot, because I use a modified CRC (class, responsibility and collaboration) approach.

CRC is cool, I agree, but the issue is how detailed you go. I can probably think of the major classes that I need for a project up front, and maybe even come up with their responsibilities and some of their collaborators. But until I sit down with the code, I can't really say what methods and parameters each class will have, what other stuff I need to do the work, etc. TDD is about interface/class/method design, not directly about full system design (although it will certainly influence the system design).

And as long as we all realize that these are developer tests, and that some real quality assurance tests are also required (and which should be written by test-minded people) then that is good.
None of which [developer tests and QA tests], of course, is part of the design process, because that’s done using a modified CRC approach.

This is something I really do not agree with. If you don't use TDD to influence your design, you are not doing TDD, period. You may have tests, but it's not TDD. Writing tests after the fact means that you don't get to think about what the client of the code will see, and you may use things that make the class much harder to test. You lose much of the value of TDD.

What you end up with is a bunch of one-off tests (even if they are NUnit tests, mind) that don't cover the whole system and are only good for testing very specific things. They don't affect the design of the code, which can lead to very cumbersome interfaces, and they don't cover the range from simple to complex scenarios. They may represent several hours (or days!) of programming effort that culminates in one long test that tests a very specific scenario. Those are integration tests, and while they have their place, they are not unit tests. NUnit != Unit Tests.

And here is a reply to some of the comments in the post:

Sahil Malik commented:

TDD does not reduce your debugging time to zero. You do end up with an army of tests to maintain. Since the tests are written and maintained by developers, you cannot go with the assumption that the tests themselves are bug free. So you write tests to check your tests? Infinite loop.

You do not stop debugging, that is correct. But you do have a very clear arrow that points to where it failed, including the details, so you have a much easier time fixing it. And having an army of tests to maintain would be bad only if they didn't have an army of code that they test. Yes, the tests have bugs, and the code has bugs, but the chance that you would get the exact same bug in both the test and the code is not very high. When such a test fails, you'll investigate and discover that the code is correct and the bug is in the test. No need for a TestTheTestThatTestsTheTest... scenario.

Code Annotations

time to read 1 min | 92 words

I want something like this thing:



It has to be easy to use, and it should be integrated into VS.Net, so when I open a file, I can see the annotations for the code. I know I saw something like this, but it was using a pen (and presumably a tablet), and I can't find it right now.


time to read 2 min | 214 words

Do you ever print any of your code? I can't recall the last time I did, and I know it was never something I did often. I'm not a representative example, since I print about 2 pages a month on average (and even that I do while muttering bitterly about dinosaurs).


I can see one good use for printouts, and that is paper code review. I can see the value there, especially since a red pen is one of the easiest ways to communicate. Although I guess I would get stuff like this one all too often:

The only things that I print right now are object / table diagrams, and even then I don't refer to them; I'm using them as dart targets. (You wouldn't believe the number of holes the Page class had during the time I did ASP.Net work.)

Outlook Spam

time to read 1 min | 54 words

I just noticed that emails that go to the Junk E-Mail folder in Outlook are displayed as plain text, and that the links there do not work. That is, they are clickable, but clicking them has no effect (it was annoying to find this tidbit, because it looked like it should work).

Obfuscation Question

time to read 4 min | 752 words

The question is actually very short: Why?

I just saw an obfuscation package for .Net that goes for over $600, and I'm not sure if that is per developer or not. I know that there is this belief that obfuscation will protect your IP, etc., but I fail to see the point of this endeavor.

I can see several scenarios for wanting to obfuscate the code:

  • You are a component vendor and you don't want people ripping your work off and using it elsewhere.
  • You ship a product out and fear that people will decompile your code, remove the registration routine and re-compile it.
  • Your code contains classified information or secret business logic.
  • You think that it is your code, and I have no business looking in there.

For the component developer:

What is valuable is not how you do that trick with triple buffering or custom event handling, or the Ajax capabilities, or whatever. What is important is that the whole component meshes well with the rest of the environment and is easy to work with. If I have a problem with your component, I will go into Reflector and try to find what is wrong. I will be seriously pissed off if I can't figure out that I should've prefixed a \ because the code is obfuscated. If someone is going to rip you off, obfuscation isn't going to help; if they need to take the whole thing as a black box, they will. And then where are you?

I'm going to use maybe 10% of the capabilities of what you give me; if I'm going to have to fight it, I'd rather just roll my own. It's simpler to do, and at least I can control what is going on.

For those who think obfuscation will make their product harder to crack:

No, it won't. I can run your product under a profiler, find the path of calls that blocked me, and just return "true" from the IsLicensed() method. It's a lot of work, and in general I have better things to do with my time than break products when I can just pay for them. That said, there is a certain class of people who enjoy this challenge, and yet another section of humanity that uses their labor to use your labor without paying.

Obfuscation only makes the game more interesting, and considering that obfuscation doesn't change the meaning of the code, it all turns into a mental challenge similar to this. At worst, they will just check the x86 assembly that is being JITed and see what you are doing at that level.

For the classified information / secret stuff in my code:

What are you doing giving away your confidential data to the users anyway? If it is really secret, then you should put it somewhere you control, and only let people access it if they are authorized. Putting the classified information in the client just means that A) they can access it if they need it, and B) they are likely to do so without you knowing. If it is secret, you should keep it under lock. The moment it is in the hands of the users, it is only secret as long as no one tries to find it. If it is not really secret, then it's either #1 or #4.

For the guys who think that I shouldn't look at their code:

Your code is not likely to be anything special, sorry. I regularly go over the libraries I use with Reflector; it's a mini code review that I do, mostly to familiarize myself with the library and the options that I can use. Several times I have rejected libraries because of the way they structured their code, under the assumption that if they did such a bad job at this, the API that exists is not going to be easy or fun to work with, and the general quality of the code isn't going to be stellar (to say the least).

