Ayende @ Rahien

Oren Eini aka Ayende Rahien CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.

time to read 2 min | 358 words

In a previous post, I made the statement that I believe that using an IoC container should be transparent to the application code, and that a good IoC can and should be invisible to anything but the Main() method.

I would like to expand a bit on this topic, and cover the specific problem that Tobias brought up:

I never could achieve this and I'm wondering what you mean by this statement. There are always a lot of references to the DI/IoC container throughout the application. E.g. each IView that needs to be created to show a WinForm requires asking the container to give me the instance:

                ObjectFactory.GetInstance<IMyView>()

It might be an IController as well and IView is automatically resolved by the container, but as long as I need to instantiate new objects dynamically, I need to reference the IoC container at several places outside Main(). Maybe in very simple cases, the whole object tree required during the lifecycle of the application can be set up in Main().

Tobias has indeed hit the nail on the head; lifecycle issues are the one thing that can cause a lot of trouble here. I would preface this by saying that I do not always follow this advice fully, and I have no compunctions about using the IoC as a service locator where appropriate, but I try to limit that to the infrastructure part of the application only.

In this scenario, what I would generally do is define something like this:

// infrastructure service
public interface IRedirectionService
{
	IRedirectionOperation To<TController>();
}

And then use it like this:

Redirection
  .To<OrderController>()
  .ActivateAndWait();

It depends on the given scenario, but in this case, I am handing control from one controller to the other, and waiting until the second controller is done doing its job (think of it like a modal dialog).

The Redirection Service is responsible for creating the controller / view (or finding them if they are already active), and activating them.
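To make the idea concrete, here is a minimal sketch of what such a redirection service might look like on top of a container. The `IWindsorContainer` usage and the `RedirectionOperation` class are my assumptions for illustration, not the actual project code:

```csharp
// Sketch only: a redirection service backed by a Windsor-style container.
// RedirectionOperation and the activation details are hypothetical.
public class RedirectionService : IRedirectionService
{
	private readonly IWindsorContainer container;

	public RedirectionService(IWindsorContainer container)
	{
		this.container = container;
	}

	public IRedirectionOperation To<TController>()
	{
		// The container resolves the controller (and, transitively, its view);
		// the returned operation wraps activation and optional waiting.
		TController controller = container.Resolve<TController>();
		return new RedirectionOperation(controller);
	}
}
```

The application code only ever sees IRedirectionService; the container reference stays inside this one infrastructure class.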

IoC & DI are under the covers, but I don't have them in my face, and I have a nice API against which I can work.

time to read 9 min | 1721 words

Jacob Proffitt continues the DI discussion, and he opens with this:

Ayende Rahien responded individually to each of my posts himself, which is more than a little bit intimidating all on its own

Despite rumors to the contrary, I want to mention that I abhor violence in all its forms. It always leads to having to fill out forms in triplicate, and I got sick of that when they pulled form 34-C12 from the archives, just for me. Someone should remind me (on a non-durable medium only) to demonstrate how you can get whatever you want, explicitly without violence in any form. It is both an interesting story, and it works.

Now, to Jacob's post. I will usually respond to things that I find interesting and rational, even (or especially) if I do not agree.

All the heavyweight advocates of DI rely on powerful, relatively invasive tools to realize the benefits of Dependency Injection. Until you get those tools, the benefits of DI seem mainly theoretical.

Actually.... *drums roll* no.

Using DI with a container will certainly give you quite a few benefits, but you can get quite a bit from using DI directly. I met a guy at DevTeach who had done just that, with great success, using anonymous delegates and a functional style. Nothing beyond a coding convention and what the language offered.
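As a sketch of that container-less style (the types and names here are illustrative, not the code from DevTeach): the dependency is just a delegate handed in at construction time.

```csharp
// Container-less DI: the dependency is a delegate supplied by the caller.
public class OrderProcessor
{
	private readonly Func<IDbConnection> openConnection;

	public OrderProcessor(Func<IDbConnection> openConnection)
	{
		this.openConnection = openConnection;
	}

	public void Process(Order order)
	{
		using (IDbConnection connection = openConnection())
		{
			// ... persist the order using the supplied connection
		}
	}
}

// Wired up by hand in Main(), or with a fake connection in a test:
// var processor = new OrderProcessor(() => new SqlConnection(connectionString));
```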

As an aside, I would accept the powerful part, but I reject the invasive tools part. A good IoC can and should be invisible to anything but the Main() method. (There are shortcuts that are easier to take if you are explicitly aware of the container, but it is most often used on naked CLR objects.)

The problem with DI, which is what the container wants to solve, is that the moment you start injecting dependencies, you want some way to manage them. You can do that with a factory or a provider, but you get more benefit from using a container, since that allows you to centrally leverage some interesting concepts across the entire application.

If my client or company needs web pages that validate users against Active Directory and then allows them access to reports based on their group memberships, then telling me that I need some robust loose coupling framework in order to test my user maintenance library is flat out wrong.

Hm, let us see...

public interface IAuthorizationService
{
	bool Authenticate(string username, string password);
	bool AllowedTo(string action);
}

Simple, isn't it? That is what we used on our last project, and that is what enabled us to switch the implementation from hard coded (used on the staging site) to database backed (staging site, when doing it hard coded started to take too long) and to Active Directory (production).
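A sketch of how that switch looks in practice, assuming the interface above (spelled IAuthorizationService here); the hard coded implementation and the registration call are illustrative, not the project's code:

```csharp
// The staging-only implementation; production swaps in an Active Directory
// backed class behind the same interface.
public class HardCodedAuthorizationService : IAuthorizationService
{
	public bool Authenticate(string username, string password)
	{
		return username == "admin" && password == "secret"; // staging only
	}

	public bool AllowedTo(string action)
	{
		return true; // everyone is allowed everything on staging
	}
}

// In Main(), pick the implementation per environment (Windsor-style
// registration, shown for illustration):
// container.Register(Component.For<IAuthorizationService>()
//     .ImplementedBy<ActiveDirectoryAuthorizationService>());
```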

If it takes you more than a minute to write that, there is an issue here. If your code couples directly to the Active Directory classes, there is an issue. If you tell me that you can get the same thing from:

public static class AuthorizationProvider
{
	public static bool Authenticate(string username, string password)
	{
		//...
	}

	public static bool AllowedTo(string action)
	{
		//...
	}

}

Then I argue that there really isn't any difference between the two, except that the interface based approach makes your collaborators easier to understand and work with. The second approach is a static Service Locator, which hides the dependency; it can do much the same as an IoC container, but is more limited in handling things such as life cycle.

Outside of major corporate infrastructure projects, stringent loose coupling just isn’t as much benefit as more naturally encapsulated OO techniques.

I strongly disagree. I find loose coupling to be an extremely useful idiom, important even in very small projects. The ability to look at a piece of code in isolation is a key factor for maintainability.

So here we have a lot of architecture big wigs selling DI as best practices and not bothering to contextualize those statements at all. Again, Ayende is a good example of this. In Dependency Injection: More than a testing seam, he wraps up with "Dependency Injection isn’t (just) for testing, it is to Enable Change." Nice. So if I don’t use DI, my projects can’t change at all?

 And here I was hoping to avoid a wig for the next two years at least.

I am very bad at debate tactics (see the first paragraph for why), but I do believe that reversing a sentence and then attacking the reversed conclusion is not a fair tactic.

The ability to change the code is strictly a matter of the quality of the code: the amount of tight coupling, the degree of separation of concerns that you have there. DI or lack of DI doesn't really enter into the equation on its own. You can certainly build DI-based projects that are impossible to change.

My argument is that DI is one of the tools that I use to make change possible. It is not the only one, naturally, and I am not saying it is the sole factor, but it is an important tool.

In addition to that, I don't think that I have ever said that DI is good for everything and you should use it the next time you want to build a hello world program. Most of the advice that I tend to give (or that given by the people I respect) usually has the "when applicable" part.

It all comes down to cost vs. benefit and that’s what I’m not seeing enough discussion of, here. What are the costs of using DI?

I think that the issues here are twofold. First, you seem to significantly exaggerate the cost of using DI, out of proportion I would say. The second is that you are trying to evaluate a part of the whole while refusing to consider the whole. DI is useful on its own, yes. It is much more useful when you have a container. There are different patterns for DI when you use a container and when you don't, and there are different usage scenarios.

Loose coupling is so far down my project priorities that standard OO practices are more than sufficient

Ookaaay.... so, according to you, this is a good approach?

public int GetCustomerCount()
{
	using(SqlConnection connection = new SqlConnection(Configuration.ConnectionString))
	{
		// ...
	}
}

To me such code suffers from numerous major issues, ranging from flexibility of implementation to performance concerns to transaction semantics.
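For contrast, a sketch of the loosely coupled version; IConnectionProvider is a hypothetical abstraction standing in for whatever infrastructure hands out connections:

```csharp
// The connection now comes from an injected provider, so transaction
// handling, connection reuse, or a test double can be supplied externally.
public class CustomerRepository
{
	private readonly IConnectionProvider connectionProvider;

	public CustomerRepository(IConnectionProvider connectionProvider)
	{
		this.connectionProvider = connectionProvider;
	}

	public int GetCustomerCount()
	{
		using (IDbConnection connection = connectionProvider.OpenConnection())
		using (IDbCommand command = connection.CreateCommand())
		{
			command.CommandText = "SELECT COUNT(*) FROM Customers";
			return (int)command.ExecuteScalar();
		}
	}
}
```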

Only, everywhere I go for mock objects, I’m being told that all the best projects are loosely coupled so what kind of incompetent fool am I for not implementing DI even before I ever thought of needing a mock object framework?

Jacob, in this case, consider this an official invitation to the Rhino Mocks mailing list, and a promise to ban anyone calling anyone else a fool or an incompetent.

Yes, mock objects go hand in hand with DI, because you need to get the mock behavior in there. Yes, you can use TypeMock, but as far as I am concerned, the moment I do that, I am in the same boat as when I am trying to test private methods. There is no need to do that.

If I ever get into a project where loose coupling is a big enough concern that I have to consider seriously intrusive techniques in order to reduce the risks of dependency change,

Just to clarify, I find it hardly intrusive, and I am saddened to hear that you don't consider loose coupling (which also implies separation of concerns) important to the code base.

Again, Ayende is an example of this attitude when he says, "The main weakness of Type Mock is its power, it allow me to take shortcuts that I don’t want to take, I want to get a system with low coupling and high cohesion." As a personal statement, I guess this is fine. If Ayende doesn’t trust himself to write code appropriate to his requirements and resources, then by all means, he’s free to handcuff himself with less powerful tools.

I appreciate your permission, but I think that you missed the point. My requirements are high cohesion and low coupling. In the end, I want code that I can read, understand and work with.

That’s not at all what he means, though. Too often this kind of statement isn’t intended to actually target the speaker. The speaker is perfectly confident in their own abilities - in this case to create low coupling and high cohesion. It’s the abilities of other developers they want to constrain. It’s another manifestation of the "programmers not doing things my way are idiots" meme used against Visual Basic for so long.

I take offence at this statement. Rhino Mocks is a project that I developed because I wanted to have a mocking framework that I could deal with. It is an OSS project because I feel that other people can also use this functionality. It has never been my intention to use Rhino Mocks to preach My Way to the masses.

Nevertheless, Rhino Mocks reflects the way that I use a mocking framework, and the best practices that I and the community that has grown around it have discovered. I am using the same Rhino Mocks build as anyone else, for the same reason that I quoted above. This is not a case where I am handing down architecture to the stupid programmers down the hall: "Thou Shall Have Loose Coupling, And That Will Be Enforced With Rhino Mocks".

Frankly, loose coupling isn’t an absolute value anyway, so arguments that every decrease in coupling is automatically worth the cost are absurd on their face

Loose coupling comes with the cost of needing to understand the interactions within the system as a whole from the individual parts. I find the ability to look at a piece of code in isolation, and then look at its interactions, much easier than trying to grok a system that has tight coupling, since there I usually can't understand a piece of code in isolation. And, as always, it is entirely possible to have too much loose coupling.

To summarize this decidedly long post,

time to read 4 min | 683 words

Jacob Proffitt has another post, called Tilting at Windmill, that expounds on this subject, and Nate Kohari has a good response to Jacob's first post here.

In this post, Jacob starts with:

I’ve been giving poor Nate Kohari a hard time over at Discord & Rhyme. He has been very patient in Defending Dependency Injection. His attitude stands in sharp contrast to Ayende Rahien's post today about testing Linq for SQL. Ayende’s snide "(Best Practices, anyone?)" is exactly the attitude I lamented in my original post on Dependency Injection

Jacob, yes, I fully believe that it is a best practice to be able to test code without using a hammer (TypeMock) to force the dependencies into order. And as I have already mentioned, I am doing quite a lot of stuff with dependency injection & inversion of control that explicitly requires separating the dependencies. Seams are just that, seams where I can put my own stuff, be it a mock object or a decorator or a proxy. They are incredibly useful in reducing the number of concerns that I need to deal with at any given point.

Jacob then continues to:

My hypothesis (entirely untestable, so technically a mere speculation) is that what happened is that in order to achieve 100% code coverage in their unit tests with methods that read databases or call web services or have other potentially nasty dependencies, someone came up with the neat idea to invert control of those dependencies so that unit tests could feed in an abstracted object that fakes those calls and the real logic of the methods could be tested without bothering the poor database (or whatever). That’s a noticeable, pervasive, broad architectural change, though, especially if it is only used for unit testing, so in order to defend their idea, developers began looking for additional justification for this pattern.

I don't think that you will find anyone accomplished in unit testing who would say that achieving 100% coverage is a goal you should strive for. In fact, you are usually explicitly warned against chasing that number just because it is a nice round shiny one. Hell, Rhino Mocks itself has only 95% coverage, and I am not worried about it in the least.

Dependency Injection & Mocking often go hand in hand, but they don't need to, and I value DI because I use it in conjunction with IoC, which means that I really don't need to think about dependencies at all. That is a much better solution than any other alternative:

Seriously, with providers that allow you to contextualize defined sub-systems and something that allowed you to Mock objects in testing without having to feed those objects to the tested class, can you think of a good reason to use Dependency Injection?

Providers are just an abstract factory with static access, not really anything new. In fact, a provider is a service locator that can locate a single service. That means that I don't have easy visibility into my dependency map. If I look at a class, I can see at a glance what it depends on when using DI; not so when I am using the static accessors.
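The visibility difference is easiest to see in a constructor signature. A sketch, with illustrative collaborator types:

```csharp
// With constructor injection, the full dependency map is declared up front;
// with static providers, the same dependencies hide inside method bodies.
public class SalaryCalculationEngine
{
	public SalaryCalculationEngine(
		IEmployeeRepository employees,
		ITaxPolicy taxPolicy,
		IAuditLog auditLog)
	{
		// one glance at this signature tells you what this class needs
	}
}
```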

It also means that I have no ability to pick & choose the dependencies that I will pass to my objects. Jacob's answer to that is:

Any dependency that needs to be altered more frequently than a contextual provider framework would allow needs to be exposed for access by calling objects anyway, I’m thinking.

And who shoulders the burden of handling that? Who decides what should use what?

I prefer a consistent approach, aided by smart tools, which, in the end, means that I have to spend little to no time thinking about and managing dependencies. In fact, in most scenarios, you quickly forget that you even use DI, because it is so natural and the framework simply supports what you do.

Trying to look at DI alone, without seeing how it is used in practice means that you miss the overall picture.

time to read 3 min | 553 words

Jacob Proffitt has a post about Dependency Injection, which he doesn't seem to like very much.

The real reason that DI has become so popular lately, however, has nothing to do with orthogonality, encapsulation, or other "purely" architectural concerns. The real reason that so many developers are using DI is to facilitate Unit Testing using mock objects. Talk around it all you want to, but this is the factor that actually convinces bright developers to prefer DI over simpler implementations.

I do wish that people would admit that DI doesn’t have compelling applicability outside of Unit Testing, however. I’m reading articles and discussions lately that seem to take the superiority of DI for granted

While the ability to easily mock the code is a nice property, it is almost a side benefit to the main benefit, which is getting low coupling between our objects. I get to modify my data access code without having to touch the salary calculation engine; that is the major benefit I get.

I have had more than one occasion to experience the result of not doing so, and I have seen the benefits of breaking the application into highly independent components that collaborate to create a working whole.

Take that and join it with a powerful container (such as Windsor), and suddenly you gain a whole new insight into the way you are constructing your application, because the moment you are tunneling a lot of the dependencies through a single source, you find that there are a lot of smart things that you can do. Decorators, interceptors and proxies come immediately to mind, but there are quite a few other options as well.
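As a sketch of the decorator idea, reusing the authorization interface from the earlier post (the auditing concern and IAuditLog are illustrative):

```csharp
// A decorator the container can slip in front of any IAuthorizationService
// without any caller knowing or changing.
public class AuditingAuthorizationService : IAuthorizationService
{
	private readonly IAuthorizationService inner;
	private readonly IAuditLog auditLog;

	public AuditingAuthorizationService(IAuthorizationService inner, IAuditLog auditLog)
	{
		this.inner = inner;
		this.auditLog = auditLog;
	}

	public bool Authenticate(string username, string password)
	{
		bool result = inner.Authenticate(username, password);
		auditLog.Record("Authenticate", username, result);
		return result;
	}

	public bool AllowedTo(string action)
	{
		return inner.AllowedTo(action);
	}
}
```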

All of this becomes particularly exasperating in the dead silence regarding TypeMock.

Type Mock is a great library, and you can check this blog to see my discussion with Eli Lupian about it. The main weakness of Type Mock is its power: it allows me to take shortcuts that I don't want to take, and I want to get a system with low coupling and high cohesion.

And from the comments to that post, again by Jacob:

How can you say that dependency injection (I'm not taking on the whole inversion of control pattern, but I might. Jury's still out on that one.) creates loosely coupled units that can be reused easily when the whole point of DI is to require the caller to provide the callee's needs?

Because there isn't a coupling from the unit to its dependencies, and there shouldn't be a coupling between the caller and the callee's dependencies. But even without the use of an IoC container, you can get a lot of benefit from DI. Just the ability to pass a CachingDbConnection instead of an IDbConnection is very valuable, and no, this is not something that you can just put in some sort of a factory or provider, because you want that kind of decision to be made on a case by case basis.
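A sketch of what that case by case choice looks like at the call site; CachingDbConnection stands in for any IDbConnection decorator, and MonthlySalesReport is an illustrative consumer:

```csharp
// The caller decides, per case, whether to hand the consumer a plain
// connection or a caching decorator; the consumer only sees IDbConnection.
IDbConnection connection = preferCached
	? new CachingDbConnection(new SqlConnection(connectionString), cache)
	: (IDbConnection)new SqlConnection(connectionString);

var report = new MonthlySalesReport(connection);
report.Generate();
```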

Ugh. I wish Ayende would admit that the only advantage to DI in his examples is to allow the mock objects to work, is all. There is no other compelling reason.

Sorry, but I don't think that this is the case at all. Dependency Injection isn't (just) for testing, it is to Enable Change.
