Wednesday, February 1, 2012

Interfaces, DI, and IoC are the Devil

I want to write unit tests. To do that, I need to be able to swap out the "dependencies" of the object under test with mocks, or stubs, or fakes. In static languages like C# or Java this is accomplished by two means:
  1. Using interfaces instead of the concrete class
  2. Injecting instances of those interfaces through the constructor (Dependency Injection)
There are other ways, but this seems to be the widely regarded "best" way.  And I don't like it.

The core of my dislike stems from one simple thing: interfaces don't describe constructors.  So once I'm forced to depend on interfaces, I can no longer use the programming language to create objects.  I can't use the new keyword.  The simplest, most fundamental feature of all objects, creating them, is barred from me.
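To make that concrete, here's a minimal C# sketch of the standard pattern (all names here are hypothetical, just for illustration): the consumer depends on an interface and receives it through its constructor, and the interface itself tells you nothing about how to construct an implementation.

```csharp
// Minimal sketch of interface + constructor injection (names hypothetical).
public interface INotifier
{
    void Notify(string message);
}

public class EmailNotifier : INotifier
{
    public void Notify(string message) { /* send an email */ }
}

public class OrderService
{
    private readonly INotifier _notifier;

    // Dependency Injection: the concrete INotifier comes from outside.
    public OrderService(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void PlaceOrder(string orderId)
    {
        _notifier.Notify("placed " + orderId);
    }
}

// var n = new INotifier();  // won't compile: an interface has no constructor
```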

To be clear, this isn't Dependency Injection's fault, nor is it IoC's fault.  It's the interface's fault.  Here's my favorite feature of Ruby:
MyClass.stub(:new) { ... }
That simple line of code stubs out the new 'message' to the MyClass type, allowing me to return a stub instead of an actual instance of MyClass.  This demonstrates why Ruby is so much easier to test than today's static languages:
  1. You can intercept any message to the object w/o having to depend on an interface
  2. You can intercept the constructor the same way you'd intercept any other message
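In case the one-liner is too terse, here is a plain-Ruby sketch of the same idea with no mocking framework at all (class names made up for illustration; RSpec's stub does this for you, with automatic cleanup):

```ruby
class MyClass
  def greet
    "real"
  end
end

class FakeMyClass
  def greet
    "fake"
  end
end

# Capture the original :new, then intercept it like any other message.
original_new = MyClass.method(:new)
MyClass.define_singleton_method(:new) { |*args| FakeMyClass.new }

stubbed = MyClass.new          # returns the fake, not a MyClass
puts stubbed.greet             # => "fake"

# Restore the real constructor when the test is done.
MyClass.define_singleton_method(:new, original_new)
puts MyClass.new.greet         # => "real"
```

Because classes are objects and `new` is just a message sent to them, no interface, factory, or container is needed to get a test double in place.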
But back to the topic: why is it a problem that I can't use the new keyword?  In my experience this causes some nasty coding practices to emerge.  I'll take them in turn:

Everything is a singleton
As soon as I couldn't create instances, I wrote all of my objects in a stateless style (no instance variables) and registered them as singletons in the IoC container.  This leads to all sorts of other code smells within that object.  Since you can't promote variables to instance variables you end up passing all your state into methods, leading to methods with lots of parameters, which is a Clean Code smell.  Methods passing parameters to other methods to other methods to other methods...

Custom "init" methods are introduced
To combat the last problem, you might add an initialize method to take the place of the constructor.  This will work, though it's non-standard and confusing.  You also have to decide how to register the class in the IoC container.  I've seen this done with a singleton, where the Init method clears the old state and sets up the new state WHICH IS EVIL.  Or you can have it create a new instance each time, but you'll never know how it's configured when you're using it, more on that later.
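A sketch of that anti-pattern, with hypothetical names: the singleton's Init wipes and rebuilds state, so two callers sharing the instance can silently corrupt each other.

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the singleton-with-Init anti-pattern.
public class ReportBuilder
{
    private List<string> _lines;

    // Stands in for the constructor, because the IoC container
    // registered this class as a singleton.
    public void Init(string title)
    {
        _lines = new List<string> { title };  // clears ALL previous state
    }

    public void AddLine(string line) { _lines.Add(line); }

    public string Build() { return string.Join("\n", _lines); }
}

// Caller A and caller B share the same instance; if B calls Init
// while A is mid-report, A's state silently vanishes.
```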

Classes that could benefit from refactoring to have instance variables, don't get refactored
Both of the above contribute to this problem.  When you add that next "context" parameter to your stateless class, it may be the straw to break the camel's back, causing you to refactor those parameters to instance variables.  But time and time again I've seen the uncertainty around the lifecycle of the class in IoC lead people to delay this refactoring.  Again, more on the lifecycles later.

Factory interfaces are created to wrap the constructors
Finally, when I'm fed up with the above, I introduce a factory class whose sole purpose is to wrap the constructor of the class I actually want.  This is just bloat, plain and simple.  And it's yet another class that gets registered with the IoC container.
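The shape of the bloat, roughly (names hypothetical): an interface, an implementation, and a container registration, all to call a single constructor.

```csharp
// All of this exists just to call `new Invoice(...)` (names hypothetical).
public interface IInvoiceFactory
{
    Invoice Create(string customer, decimal amount);
}

public class InvoiceFactory : IInvoiceFactory
{
    public Invoice Create(string customer, decimal amount)
    {
        return new Invoice(customer, amount);  // the only line that matters
    }
}

public class Invoice
{
    public Invoice(string customer, decimal amount) { /* ... */ }
}
```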

I also have some complaints about DI and IoC when they are leveraged simply to enable unit testing.  I'd like to call into question some of the typical assumptions around DI/IoC, and we'll see where it takes me.

Interfaces are good because they make your code loosely coupled
This is the most common assumption I see that I disagree with.  I've seen this in Fowler's writing on DI and in the GOOS book, which makes me seriously doubt myself here.  But no amount of time or practice has changed my opinion.  The argument is that it's good to use interfaces over ALL OF YOUR CLASSES because it allows you to make them ALL PLUGGABLE.

I find this completely ridiculous!  There are certainly classes that it is useful to have be "pluggable" but they are few and far between.  The majority of the classes that I create are Single Responsibility, extracted from other classes, and small.  They serve a very pointed, very well understood, and very well defined purpose in the larger web of objects.  The chances I would want to PLUG IN a different version of such a class are virtually zero (aside from testing).  I might change it as the design of my domain changes over time, and that will be easy because it's small and focused.  But I'm not going to suddenly wish it was pluggable!  I might even argue that wrapping these things in interfaces hurts cohesion.

I should clarify: I do want to swap these classes out so I can test, and that's why I'm forced to use interfaces.  But the argument that using interfaces on all classes for their own sake, even absent testing, is a good thing is one I just find outrageous.  It's an unnecessary level of indirection, adding more complexity and giving me nothing useful in return: YAGNI.

IoC is good because it makes the lifecycle of my objects someone else's problem
This is another argument I've frequently seen in favor of IoC.  That moving the responsibility for the "lifecycle" of your objects all the way up to the top of your application is a good thing.  This is another argument I find mind-boggling.  Again, there are a minority of cases where this is useful.  An NHibernate session that needs to be created for each web request comes to mind.

But again, 90% of the time I've got an instance of an object that wants to delegate some task pertaining to its current state to some other object.  I want that object to come into existence, perform some task for me, and go away.  Basically, if I wasn't encumbered by interfaces and DI and IoC, I'd new up the object passing in the necessary state and tell it to do what I need.  The garbage collector would take care of the rest.  But when I'm burdened by IoC, suddenly there is a lot of uncertainty around what the lifecycle of each object should be.  And you can't tell what lifecycle a given injected object has.  This is a significant issue that affects how you will use that object, and how that object is designed.

Dependency Injection is good because it makes my object's dependencies obvious
I've encountered this argument in many places too, notably this blog post.  The argument roughly goes: "To unit test an object you need to know what its dependencies are so you can mock them.  But if those dependencies aren't clearly specified in the constructor, how will you know what they are?!  So DI is awesome because it clearly specifies the dependencies!"

This is such nonsense it's really quite amusing.  Knowing what the dependencies are doesn't even BEGIN to help me mock them out.  I have to know what methods are called, with what parameters, expecting what return value, and how many times!  In other words, I have to know everything about how the object uses that dependency to successfully and cleanly mock it out.  A list of what dependencies are used barely scratches the surface and is definitely NOT a compelling reason to use Dependency Injection!

Another downside of IoC/DI that I've often run into is that it spreads like a virus through your code.  Once you start using DI, every class ABOVE that class in the call stack also has to use DI.  Unless you're OK with using the container as a service locator, which few people are.  So essentially, if you're going to use DI you're going to have it in the majority of your classes.
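A rough sketch of that spread, with hypothetical names: once the bottom class takes an injected dependency, every class above it must either take its collaborators injected too or know how to build the whole chain.

```csharp
using System;

// Names hypothetical. C needs an injected IClock, so B must be handed
// a C, so A must be handed a B, and so on up the call stack.
public interface IClock { DateTime Now { get; } }

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.UtcNow; } }
}

public class C
{
    public C(IClock clock) { }
}

public class B
{
    public B(C c) { }
}

public class A
{
    public A(B b) { }   // A never uses IClock, but still pays for it
}

// The composition root at the very top wires the whole chain:
// var a = new A(new B(new C(new SystemClock())));
```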

I'm painfully aware that this post is just a lot of complaining and contains no useful suggestions toward correcting any of these issues.  What I want is what Ruby has, but C# just doesn't support that.  I may experiment with some forms of service locator, leveraging extension methods.  Or I might try to create some sort of internally mockable reusable factory concept.  I'll write it up if it leads anywhere, but I doubt it will.

So I guess the only point of this post is to vent some of my frustrations with DI, IoC, and interfaces and get feedback from all of you.  Perhaps you have felt some of this pain too and can validate that I'm not totally crazy?  Or maybe, and I sincerely hope this is true, I'm missing some fundamental concept or perspective on this and you can fill me in?  Or might it even be possible that this could be a first step toward cleaning up some of these pain points, even if just in a small way?

Or you might read all of this and shrug it off.  You're used to it.  It's always been this way, it can't be any other way, go about your business.  If that's your thinking as you're reading this sentence, I'd just encourage you to re-examine your assumptions about what's possible.  Take a look at Bernhardt's testing in DAS.  THEN come back and give me a piece of your mind.


  1. Kevin,
    I agree with probably 90% of this. One-off interfaces just for testability are crazy, IoC moves important information away from the call site and creates a whole new class of bugs that can be hard to track down because of the excessive magic involved. And seriously, who needs all the power that IoC provides? Have you ever hot-swapped a dependency at runtime? The Gang of Four guidance is "encapsulate the concept that varies". Encapsulating everything is the road to madness.

    But I think there is another way! I am still a huge fan of constructor injection. Mostly I value the testability, but I also like the explicit dependencies and the way it forces you to think about the needs and responsibilities of your code.

However, I rarely refactor to interfaces (YAGNI, and all). With modern mocking frameworks there is no need to have an interface just for testability. As long as your class is inheritable you're fine. And then you don't have the constructor problem. To be concrete, here is an example of how I write my MVC controllers:

    public class SearchController : Controller
    {
        private readonly IWebSession _sess;
        private readonly NHibernate.ISession _dbSess;
        private readonly SearchQueries _search;

        public SearchController()
            : this(new AspNetSession(), MvcApplication.CurrentSession,
                   new SearchQueries(MvcApplication.CurrentSession))
        {
        }

        public SearchController(IWebSession webSess,
                                NHibernate.ISession dbSess,
                                SearchQueries search)
        {
            _sess = webSess;
            _dbSess = dbSess;
            _search = search;
        }
    }
    Do you see what is going on? I provide a default constructor (called by the framework) which creates dependencies and passes them on to a second constructor for "injection". Unit-tests, of course, will call this second constructor with mock dependencies. And notice that even though some of the dependencies are interfaces, I still new-up a concrete instance. In general I try to inject dependencies manually like this at the top level of the code, whatever that happens to be (the Controller in this case).

    Constructor injection has its own problems, of course. It can get unwieldy with a large number of dependencies, and IoC advocates like to point out that their method looks cleaner and has less code (if you don't count the large, complicated dependency you've taken on an IoC dll :-). But the code, if ugly, is at least straightforward and unlikely to contain bugs. And you could always write a factory method for it.

    There have been some cases where this approach became so unwieldy that I felt the extra complexity of an IoC container was justified, but you don't have to cross that bridge until you come to it and you don't have to use it pervasively.

  2. Hey Gabe, thanks for the comment! I like your perspective on this.

    When I first started doing DI I was using the method you demonstrated and it did work very well.

    There may still be one downside. You can call new now, but only from within your default constructor. Which means if there's some data you want to pass to the constructor of one of those dependencies, you may not have that data available yet. It also means you can only have one instance of each dependency (usually not a problem though).

    But it's a step in the right direction, eliminating the need for IoC containers.

  3. My advice to you would be to forget about IoC for now. You can come back to it later, if you want. Or not, if you don't see a need. Forget even about lifetime management and scoping. For now, just focus on making your classes very small. Keep their responsibilities highly focused. When you have more than a handful of dependencies (whether injected or not), try really hard to identify two or more different things that class is doing, and pull out all but one into a new layer of dependencies.

    Does this cause an "explosion" of classes? Yes. It means your class file list for a project gets really long, and people start to scream "complexity!" But with the right namespace organization, you can achieve a high degree of order and discoverability this way. And the unstated, and often disrespected, benefit derived from this is that for any one thing you need to do in your code base, you'll likely have to touch only a handful of those many objects/files. And if you've been careful about organization, you'll have a very high level of certainty which those are, and which they are not.

    One thing you'll find once you start doing this is that a large number of your objects can actually very easily be written statelessly. I'd say 90% or better of the objects I write in my C# programs are stateless/immutable other than their dependency references, which *very* rarely change after construction. This means you also don't need to care whether your consuming object has its very own instance, and how it was initialized, unless there is some immediately relevant context that needs to be accounted for. In which case, a little factory which handles all the non-contextual initialization is often a perfect solution.

    What is the point of all this object down-sizing? Reducing interaction complexity. Simplifying the patterns of interaction between functional chunks of your code. And the benefit of that is that it makes maintenance easier. That includes bug fixes, tweaks, and even large-scale changes. This is not YAGNI type stuff. This is the response to the reality that your code WILL have to change and you probably have no real idea how it will need to change. So make things small and focused to reduce the impact cross-section of *all* changes.

    This is easier than you think, but it *does* take some adjustment and getting used to. It's not going to start raining unicorns and cupcakes the first day, the first week, or even probably the first month. It's a learning process. But once you've stabilized and have started to see the benefits of a near-universal improvement in change impact, you'll be able to focus on the higher-order effects and problems that solutions like IoC begin to address.

    With small objects, the benefits of DI will be clear. Interfaces are a red herring. They've got nothing to do with anything except insofar as they facilitate safe test doubles. But all of these things are higher-order issues. Focus on solid modelling of simple/single responsibilities first. Everything else builds on top of that.

  4. Sigh, in all my editing and trimming of that comment, I missed out on delivering the real moneyshot...

    Given what you've described of your code both before and after you attempt to apply DI and IoC, I would say that IoC is not an effective solution to any problem you likely have in evidence right now. The problems may be there under the covers (or maybe not), but you likely have too much else to worry about that is far more immediate.

    The problems you likely have right now are problems that will likely exist in the future of this project, and pretty much every other coding project you'll ever encounter. They are the fundamental challenges of programming. And the first and most impactful thing you can do about them is to make the functional units of your solution as small as possible. This is *step one* of good design.

    Once those ubiquitous problems have been mitigated, you'll start to see the problems that design and architecture patterns address, and after that you'll see the problems that IoC addresses, and after that you'll see the problems that strict layering and modularity address.

    Worry about all the patterns and architecture guidance once you've got that first step down pat.

  5. Hey Chris, I appreciate your comments, thanks for taking the time to give me your thoughts on this.

    In fact, what you are describing is exactly how we build software at my company. The objects are small and there are lots of them.

    A few years ago, I was not writing code like that. Back then I didn't feel a lot of pain w/ interfaces, DI, and IoC. Interestingly, it was as the design got better and more SOLID that I became increasingly frustrated with the constraints of interfaces and DI. Partly because all that complexity and loose coupling became more apparent as my objects became simpler and more cohesive.

  6. I realize this is very easy to say, hard to back up without having your code in my hands. And I appreciate that you have made strides. But in my experience, all of what you say indicates that your objects should be smaller and more focused yet.

    Possibly lifetime and context management are exactly the right responsibilities to start extracting next. A good IoC container (like Autofac) can handle common cases of both very automatically, if they're properly extracted.