Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Architecting for Performance


In the comments to my OR/M Smackdown post Adam Tybor noted:  

Don't we all know that performance is the thing you tweak last?

To which Ted Neward has replied:

Well, if you don't think about perf until the very end of the project, you usually find yourself having to either just shrug your shoulders and say, "Well, faster hardware will make it run fast", or backtrack and refactor significant chunks of the application in order to cut out round trips from the system as a whole.

Which reminds me of a conversation that I had with Udi Dahan recently, which we concluded with this great quote from him:

In order to design performance domain models, you need to know the kinds of data volumes you’re dealing with. It affects both internals and the API of the model – when can you assume cascade, and when not. It’s important to make these kinds of things explicit in the Domain Model’s API.

All of which brings me to the following conclusion: performance tuning at the microsecond level is a waste of time until you have profiling data in place, but that doesn't mean that you shouldn't think about performance from the get go. You can code your way out of a lot of things, but an architecture that is inherently slow (for instance, chatty on the network) is going to be very hard to modify later on.

Udi had an example of a Customer that has millions of orders; in this case, the performance consideration has a direct effect on the domain model (yes, I know about filters). From a design perspective, it basically means that the entity contains too much data and needs to be broken up. From a performance perspective, it means making it explicit that a potentially very costly call is being made (and, obviously, filtered for the need).
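To illustrate the idea, here is a minimal sketch of a domain model API that makes the costly call explicit instead of exposing a collection that would cascade-load millions of rows. All the names here (`Customer`, `recent_orders`, the repository) are invented for the example, not taken from any real codebase:

```python
class InMemoryOrderRepository:
    """Stand-in repository; a real one would issue a filtered SQL query."""

    def __init__(self, orders):
        self._orders = orders  # list of (customer_id, order) tuples

    def find_for_customer(self, customer_id, limit):
        matches = [o for cid, o in self._orders if cid == customer_id]
        return matches[:limit]


class Customer:
    """Domain entity whose API makes a potentially costly call explicit."""

    def __init__(self, customer_id, order_repository):
        self.customer_id = customer_id
        self._orders = order_repository

    # Deliberately NOT a plain `orders` property that loads everything:
    # the signature forces the caller to say how much data they want.
    def recent_orders(self, limit=50):
        return self._orders.find_for_customer(self.customer_id, limit=limit)


repo = InMemoryOrderRepository([(1, "A"), (1, "B"), (2, "C"), (1, "D")])
customer = Customer(1, repo)
print(customer.recent_orders(limit=2))  # ['A', 'B']
```

The point is in the method signature: a caller can no longer accidentally pull a million orders over the wire without saying so.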

A good rule of thumb for performance is that you should consider an order of magnitude increase in the number of users/transactions before you need to consider a significant change in the architecture of the application.

That is absolutely not to say that you should consider everything in advance. I had my greatest performance success by simply batching a few thousand remote calls into a single one. But architecture matters, and it should be considered in advance and built accordingly. (And no, that doesn't necessitate a Big Architecture Up Front either, although where I would need to scale very high I would spend a while thinking about the way I am going to build the app in advance, probably with some IT/DBA/network guys as well, to get a good overview.)
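The "batch the round trips" trick can be sketched in a few lines. This is a toy illustration under assumed names (`BatchingClient`, `send_batch` standing in for the real remote endpoint), not the actual code from that project:

```python
class BatchingClient:
    """Buffers requests and flushes them in one call instead of N round trips."""

    def __init__(self, send_batch, batch_size=1000):
        self._send_batch = send_batch   # callable taking a list of requests
        self._batch_size = batch_size
        self._pending = []

    def enqueue(self, request):
        self._pending.append(request)
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self):
        if self._pending:
            self._send_batch(self._pending)
            self._pending = []


calls = []  # record how many round trips actually happen
client = BatchingClient(send_batch=calls.append, batch_size=1000)
for i in range(2500):
    client.enqueue(i)
client.flush()
print(len(calls))  # 3 round trips instead of 2500
```

With per-call network latency dominating, collapsing thousands of calls into a handful is exactly the kind of win no amount of microsecond tuning inside a single call can match.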

Oh, nevertheless, premature optimization...



I think Adam either didn't word his statement well, or misquoted the accepted maxim ...

You don't OPTIMIZE till last.

You ALWAYS design and code with performance in mind.

The more experienced you are, the more likely that the second will become second nature and you will do it without needing to think too much about it, but no matter how experienced you are, you cannot optimize until last.

Rik Hemsley

There's nothing wrong with designing and coding with performance in mind - as long as you understand performance issues. Too often I've watched people start writing some code in an obfuscated way, saying that they're doing it like that because it's important for 'performance' and not caring that this piece of code will run once per day, would normally take about 10ms of CPU and isn't going to be noticed by the user. It's not really impressive to get a 1000x speedup in that situation.

The other problem with their code tends to be that they don't even write comments to say why their design is so strange. I think this is because they realise that if they explained it, they'd have to admit they were wasting their time doing something they enjoy and that they thought development was about (hacking) rather than doing the less exciting but more important stuff which development is really about (building in an easy migration path for the new feature, getting the tab order right in the user interface, testing the code actually works).

Sorry, rant over!

Adam Tybor

@Casey, I think I did write a little out of context.

My point was simply that "optimizing" for performance in advance is, in my opinion, a huge waste of time. Designing for scalability and performance is always important; after all, we are developers and this is a big part of the job. I use the word "optimize" to mean squeezing out that extra bit of performance, and the word "design" to mean understanding your domain, data volume, and data usage, as Ayende pointed out in his post.

I still feel some of Ted's points were about tweaking for microseconds. Yes, NHibernate has some overhead compared to a straight sproc and datareader, but it has considerable advantages. How much slower is presenting a 100 row datagrid from a reader, a dataset, or a domain object? I would guess the datareader would be pretty fast, but the comparison between NH and DataSets is probably not that different, as DataSets have a lot of overhead. Now jump that up to a 1,000,000 row table with paging. How quickly can you get it to page efficiently with the previous three methods? I have also seen Criteria queries in NH generate far better performing SQL than some complicated sprocs. Think about advanced search cases...
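For the paging point, a small sketch of what database-side paging looks like, using an in-memory SQLite table invented for the demo. This is roughly the SQL that an ORM's paging API (for example, NHibernate's `SetFirstResult`/`SetMaxResults` on a Criteria query) would generate for you; the schema and numbers are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany(
    "INSERT INTO orders (id, total) VALUES (?, ?)",
    [(i, i * 1.5) for i in range(1, 101)],
)

def fetch_page(conn, page, page_size=10):
    # LIMIT/OFFSET keeps the result set to one page; on a million-row
    # table this is the difference between paging and not scaling.
    cur = conn.execute(
        "SELECT id, total FROM orders ORDER BY id LIMIT ? OFFSET ?",
        (page_size, (page - 1) * page_size),
    )
    return cur.fetchall()

page3 = fetch_page(conn, page=3)
print(page3[0], page3[-1])  # (21, 31.5) (30, 45.0)
```

Only one page of rows ever crosses the wire, which is why the reader/DataSet/domain-object comparison changes completely once the table gets large.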

I have to agree with Ayende that it's about making decisions that cause the "least amount of pain". This is a very subjective statement; in some shops it may mean doing everything in the DB, and in other shops it will mean building a domain model.

I would bet my hat and a bowl of soup that a consultant walking into a project with logic split between sprocs, triggers, and code versus a DDD project with an ORM and unit tests would take the DDD one with a smile, because they would imagine the pain and struggle of tracing all the logic through code -> sprocs -> triggers.

Another big point lost in this whole argument is that NH brings with it the DDD approach which is based around a ubiquitous language. NH & DDD projects tend to be very self documenting and much easier to dive into and find out what’s going on. Besides, NH is just pure magic so why waste energy, time, and money doing something it does automatically.

In the end it’s about delivering to your customer, who doesn’t understand any of this anyway, and giving them the best solution for their money. Do you really think your customer gets more bang for their buck if you have to write and maintain sql code, data access code, and application code?

I would love to see cost, time, and code metrics on projects done with ADO.Net vs ORM.



Sam Gentile

Going to try to find time to link to this later but just wanted to say that this post was very well said and I agree with pretty much all of it.

Colin Jack

@Ayende: I guess I should go test, but does NHibernate really load the entire contents of a bag when you add an item?
