Ayende @ Rahien

Refunds available at head office

Dependency Injection: IAmDonQuixote?

Jacob Proffitt has another post, called Tilting at Windmill, that expounds on this subject, and Nate Kohari has a good response to Jacob's first post here.

In this post, Jacob starts with:

I’ve been giving poor Nate Kohari a hard time over at Discord & Rhyme. He has been very patient in Defending Dependency Injection. His attitude stands in sharp contrast to Ayende Rahien's post today about testing Linq for SQL. Ayende’s snide "(Best Practices, anyone?)" is exactly the attitude I lamented in my original post on Dependency Injection

Jacob, yes, I fully believe that it is a best practice to be able to test code without using a hammer (TypeMock) to force the dependencies into order. And as I have already mentioned, I am doing quite a lot of stuff with dependency injection & inversion of control that explicitly requires separating the dependencies. Seams are just that: seams where I can put my own stuff, be it a mock object, a decorator, or a proxy. They are incredibly useful in reducing the number of concerns that I need to deal with at any given point.
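A minimal sketch of the seam idea, in Java rather than C# for brevity; all the names (PaymentGateway, OrderProcessor, and so on) are hypothetical, not from anything Jacob or I wrote. The seam is the interface: the processor only sees the abstraction, so a decorator, a proxy, or a hand-rolled fake can be slotted in without touching the processor itself.

```java
// Hypothetical names throughout; a sketch of the "seam" idea, not production code.
// The seam is the PaymentGateway interface: OrderProcessor only depends on the
// abstraction, so tests (or decorators, or proxies) can slot in their own object.
interface PaymentGateway {
    boolean charge(String orderId, int amountInCents);
}

class OrderProcessor {
    private final PaymentGateway gateway;   // dependency injected via constructor

    OrderProcessor(PaymentGateway gateway) { this.gateway = gateway; }

    String process(String orderId, int amountInCents) {
        return gateway.charge(orderId, amountInCents) ? "charged" : "declined";
    }
}

// A decorator slotted into the same seam: retries a failed charge once,
// without OrderProcessor ever knowing about retries.
class RetryingGateway implements PaymentGateway {
    private final PaymentGateway inner;
    RetryingGateway(PaymentGateway inner) { this.inner = inner; }
    public boolean charge(String orderId, int amountInCents) {
        return inner.charge(orderId, amountInCents)
            || inner.charge(orderId, amountInCents);
    }
}

// A hand-rolled fake for tests: no database, no web service.
class FlakyFake implements PaymentGateway {
    private int calls;
    public boolean charge(String orderId, int amountInCents) {
        return ++calls > 1;   // fails on the first call, succeeds afterwards
    }
}
```

Composing `new OrderProcessor(new RetryingGateway(new FlakyFake()))` exercises the retry behavior end to end, with no hammer required to force the dependencies into place.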

Jacob then continues to:

My hypothesis (entirely untestable, so technically a mere speculation) is that what happened is that in order to achieve 100% code coverage in their unit tests with methods that read databases or call web services or have other potentially nasty dependencies, someone came up with the neat idea to invert control of those dependencies so that unit tests could feed in an abstracted object that fakes those calls and the real logic of the methods could be tested without bothering the poor database (or whatever). That’s a noticeable, pervasive, broad architectural change, though, especially if it is only used for unit testing, so in order to defend their idea, developers began looking for additional justification for this pattern.

I don't think that you will find anyone accomplished in unit testing who would say that achieving 100% coverage is a goal you should strive for. In fact, you are usually explicitly warned against chasing that number just because it is a nice, round, shiny one. Hell, Rhino Mocks itself has only 95% coverage, and I am not worried about it in the least.

Dependency injection & mocking often go hand in hand, but they don't have to, and I value DI because I use it in conjunction with IoC, which means that I really don't need to think about dependencies at all. That is a much better solution than the alternative Jacob proposes:

Seriously, with providers that allow you to contextualize defined sub-systems and something that allowed you to Mock objects in testing without having to feed those objects to the tested class, can you think of a good reason to use Dependency Injection?

Providers are just an abstract factory with static access, nothing really new. In fact, a provider is a service locator that can locate a single service. That means that I don't have easy visibility into my dependency map. When using DI, I can look at a class and see at a glance what it depends on; not so when I am using static accessors.
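The visibility difference can be sketched in a few lines of Java (the names here are made up for illustration). In the locator style, nothing in the class's signature reveals the dependency; in the DI style, the dependency map is right there in the constructor, and each instance can be handed a different implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch contrasting static access with constructor injection.

// Provider / service-locator style: a static registry, one entry per service.
class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<>();
    static <T> void register(Class<T> type, T impl) { services.put(type, impl); }
    @SuppressWarnings("unchecked")
    static <T> T resolve(Class<T> type) { return (T) services.get(type); }
}

interface Clock { long now(); }

// Locator style: the Clock dependency is invisible from the outside --
// you only discover it by reading the method body.
class HiddenDependency {
    long age(long bornAt) { return ServiceLocator.resolve(Clock.class).now() - bornAt; }
}

// DI style: the dependency is visible at a glance in the constructor.
class VisibleDependency {
    private final Clock clock;
    VisibleDependency(Clock clock) { this.clock = clock; }
    long age(long bornAt) { return clock.now() - bornAt; }
}
```

With DI, a test simply passes `() -> 1000L`; with the locator, the test must first mutate global static state, and two tests that want different clocks now interfere with each other.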

It also means that I have no ability to pick & choose which dependencies I will pass to my objects. Jacob's answer to that is:

Any dependency that needs to be altered more frequently than a contextual provider framework would allow needs to be exposed for access by calling objects anyway, I’m thinking.

And who shoulders the burden of handling that? Who decides what should use what?

I prefer a consistent approach, aided by smart tools, which, in the end, means that I have to spend little to no time thinking about and managing dependencies. In fact, in most scenarios, you quickly forget that you even use DI, because it is so natural and the framework simply supports what you do.
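To make "you forget that you even use DI" concrete, here is a toy auto-wiring container in Java; it is a deliberately simplified sketch of what a real IoC container such as Windsor does, and every name in it is hypothetical. You register interface-to-implementation mappings once, the container walks constructor parameters recursively, and application code never assembles its own dependency graph.

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// A toy auto-wiring IoC container (all names hypothetical). Real containers add
// lifecycles, configuration, and error handling; the core resolution idea is this.
class TinyContainer {
    private final Map<Class<?>, Class<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> iface, Class<? extends T> impl) { bindings.put(iface, impl); }

    <T> T resolve(Class<T> type) {
        Class<?> impl = bindings.getOrDefault(type, type);
        Constructor<?> ctor = impl.getDeclaredConstructors()[0]; // assume one ctor
        ctor.setAccessible(true);
        Class<?>[] params = ctor.getParameterTypes();
        Object[] args = new Object[params.length];
        for (int i = 0; i < params.length; i++)
            args[i] = resolve(params[i]);                        // recurse per dependency
        try {
            return type.cast(ctor.newInstance(args));
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

interface Mailer { String send(String msg); }

class ConsoleMailer implements Mailer {
    public String send(String msg) { return "sent: " + msg; }
}

class Notifier {
    private final Mailer mailer;
    Notifier(Mailer mailer) { this.mailer = mailer; }   // wired by the container
    String notifyUser(String msg) { return mailer.send(msg); }
}
```

After a single `bind(Mailer.class, ConsoleMailer.class)`, asking the container for a `Notifier` hands back a fully wired object; the class itself stays a plain constructor-injected class with no knowledge of the container.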

Trying to look at DI alone, without seeing how it is used in practice, means that you miss the overall picture.
