# Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969

## A question of scale


I use the terms "scaling" and "scalable" in a different sense here. I am usually not worried about performance or about scaling out a solution. I am thinking about whether a given approach can scale up to more complex scenarios.

Dragging a table onto a form, for example, doesn't scale. It doesn't scale because the moment you need to handle real logic, you have to take on significant complexity just to get it. NHibernate does scale, because so far it has handled everything I have thrown at it without increasing the complexity of the solution beyond the complexity of the problem.

I think that this graph should explain it better.

What we see here is the complexity of each solution plotted against the complexity of the problem. The unscalable solution's complexity increases faster and faster as the complexity of the problem grows.

The scalable solution's complexity increases as well, but it increases in direct proportion to the problem at hand. If the problem is twice as complex, the solution will be about twice as complex.

It can't be less than twice as complex, because you can't escape the problem's inherent complexity. The unscalable solution, though, might by that point be nine times as complex, and the gap between the two only widens as the problem grows.
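As a rough sketch of the two curves in the graph (the growth rates here are purely illustrative, not measurements — I've picked a linear and a quadratic function just to make the shape of the argument concrete):

```python
# Illustrative model of the post's graph: solution complexity as a
# function of problem complexity, in arbitrary units.

def scalable_solution(problem):
    # Grows in direct proportion: twice the problem, twice the solution.
    # The constant factor 2 is arbitrary.
    return 2 * problem

def unscalable_solution(problem):
    # Grows superlinearly (quadratic, chosen only for illustration):
    # small problems look cheap, complex ones explode.
    return problem ** 2

# Doubling the problem doubles the scalable solution's complexity,
# but quadruples the unscalable one's.
for p in (1, 2, 4, 8):
    print(p, scalable_solution(p), unscalable_solution(p))
```

Note that the unscalable curve starts out *below* the scalable one for small problems, which is exactly the crossover point discussed below: the drag-and-drop approach genuinely is cheaper until the problem grows past it.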

And that is how I evaluate most of the things that I use. Do they scale? Will I be able to handle more complex scenarios without tool-induced pain?

The solution is allowed to be complex because the problem we are trying to solve is complex. It must not be complex because the tool I am using needs crutches to handle complex scenarios.

It becomes interesting when the scalable solution's initial complexity is greater than the unscalable solution's. In that case there is a crossover point at which the scalable solution starts to pay dividends.

I figure this is why it can sometimes be hard to convince people that a particular approach (that scales nicely) is worth some initial effort.

Mind you, sometimes it's not.

Couldn't agree more. In the past I've worked with a popular DAL generator that employed a "half-hearted" Active Record pattern. All worked well until the generated entities required more and more functionality. The app boundaries blurred into one big mess over time!

Likewise, the same project ran on ASP.NET, but binding entity objects to some less-than-object-friendly ASP.NET controls turned into an even bigger mess of "here-be-dragons" patchwork.

At work we use a heavily customized EntLib with a dozen or so extensions, plus CAB. Yet again, working with these is great until you reach that lucid point where implementation complexity rises exponentially compared to problem complexity, and EntLib in all its greatness collapses like a giant behemoth of base classes, configuration, and who knows what else.

Out of necessity, the first project was rebuilt from scratch, save the physical data model, using NHibernate, Windsor, NServiceBus and a lot of Ayende's Rhino.Commons magic... in a sixteenth of the time, with a much reduced code base and no signs of clutter or scalability issues yet. Word!

So I Googled for NServiceBus, not having heard of it, and came across the project home page on Udi Dahan's site.

That page made my anti-virus software go nuts; apparently the site is trying to push down something called the "trufelsite" Trojan.

Probably not intentionally, but still, LOOK OUT if you want to research that piece.

I've been investigating NServiceBus myself. I've had no problems with the site (FF 2.05).

If I were to guess, I'd say it was a spurious thing due to a compromised third-party rotating ad server.

Yup. I think your graph is a very effective way to present this topic. However, as @dansquid said, there is often a crossover point. And that crossover point tends to occur much sooner than many people think.

I'm always somewhat skeptical of "code-free", "configuration-only", "by convention" and "no programmer required" "frameworks". They usually don't scale.

This is especially true when I'm evaluating a platform for my own use.

My needs are almost never routine or simple. If they were, I would probably be doing something else.

This is an excellent way to explain the general pushback against drag and drop features coming from the ALT.NET community.

The only change that I would make is to represent the unscalable solution as starting off with less complexity than the scalable solution. This may not be the case in all scenarios, but I think that generally non-scalable tools make that complexity scalability trade-off in order to optimize the experience for simple scenarios.

This fits the same pattern as performance scalability. When I first started using COM+ over COM way back in the day, I remember it was very difficult to explain to clients that they were making a fair trade-off in order to scale for performance. I asked whether they wanted it fast for just a few users and then slow as soon as the load peaked, or at a reasonable speed at all times, including when the load peaked.

If they knew that they would never have many users, then it didn't make sense to add the overhead of COM+, just as it probably doesn't make sense to spend the extra effort to try to optimize for complexity if you are just doing a throw-away app that needs to be done quickly but not maintained.

