The “single implementation” paradox

We got into a bit of a debate at work recently. It went something like this:

“Gah! Why do we have this interface when there is only a single implementation?”

(The stock answer to this goes:) “Because we need the interface in order to mock this class in our tests.”

“Oh no you don’t, you can use the FingleWidget [insert appropriate technology of your mocking framework of choice here – e.g. JMock ClassImposteriser]! I’m smarter than you!”

“Well, yes, you can. But if you’ve correctly followed Design for Extension principles, you’ve made the class final, right? And you definitely can’t mock that! Hah! I’m smarter than you!”

“Ah ha! But you could always use the JDave Unfinaliser Agent! I’m so smart it hurts!”
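To ground the first point in the exchange: the reason the interface gets cited is that it lets a test supply a double without any mocking framework, unfinaliser agent, or loss of `final`. A minimal sketch, with all names invented for illustration:

```java
// Hypothetical example: a single-implementation interface plus a final
// production class, and a hand-rolled test double -- no mocking
// framework or unfinaliser agent required.
interface PriceFeed {
    int latestPrice(String symbol);
}

// The sole production implementation; final, per Design for Extension.
final class MarketPriceFeed implements PriceFeed {
    public int latestPrice(String symbol) {
        return 42; // imagine a real market lookup here
    }
}

public class Demo {
    // Code under test depends on the interface, not the final class.
    static String describe(PriceFeed feed, String symbol) {
        return symbol + "=" + feed.latestPrice(symbol);
    }

    public static void main(String[] args) {
        // In a test, substitute a stub that implements the interface.
        PriceFeed stub = symbol -> 7;
        System.out.println(describe(stub, "ACME")); // prints ACME=7
    }
}
```

The cost, of course, is exactly the thing being complained about: `PriceFeed` exists purely so tests can swap the implementation.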

Hmmm… So this is where we are. Is this good or bad? My view on this is:

  • You’re no longer testing the code as it will go into production. I mean, you’re not really going to run the production system with this agent enabled, right? If nothing else, it’s going to remove some useful hints to the HotSpot optimiser.
  • It’s hard to run the tests in your IDE. Well, to be fair, it’s hard in Eclipse because it doesn’t have a default run configuration. You can kind of hack it by adding the appropriate switches to the JDK definition, but that’s definitely in the category of smelly things. IntelliJ is better in this respect – you can at least set defaults for auto-created run configurations.
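For context on the “appropriate switches”: a JVM agent like the unfinaliser is enabled with the standard `-javaagent` option on the test JVM’s command line. The jar name, classpath, and test class below are placeholders, not the exact artifacts:

```sh
# Illustrative only: the agent jar path and test class are placeholders.
java -javaagent:path/to/jdave-unfinalizer.jar \
     -cp build/classes:build/test-classes \
     org.junit.runner.JUnitCore com.example.WidgetTest
```

This is exactly the switch that’s awkward to thread through Eclipse’s per-test run configurations.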

So, what’s the answer? I really don’t know. I’m on the fence about this one. I hate having interfaces with a single implementation. But I also hate having to remove final from classes that have no business being extended just so I can mock the class in my tests.



This entry was posted in Design, Unit testing.

2 Responses to The “single implementation” paradox

  1. Pingback: Symphonious » The Single Implementation Fallacy

  2. Stephen says:

    I have two thoughts about the single-implementation problem:

    1) In my experience, code bases that have lots of single-implementation interfaces have generally been really bad. Over-engineered, pointless layers, etc. I’m sure this isn’t the case for all projects that use them (especially yours, of course :-), it’s just my experience.

    2) Stepping back, in some ways I think the distinction between classes and interfaces is no longer useful and instead becoming a hindrance. AFAIK the perf for method dispatch is the same, so why not have all classes have interfaces? Except you don’t declare them, it just happens.

    E.g. “class Foo” gets a “Foo” interface and “Foo$” implementation. Whenever you do “new Foo”, the compiler knows to do “new Foo$” but the variable type stays “Foo”. Now you can mock/whatever Foo to your heart’s content.

    If, for some reason, Foo really shouldn’t get an interface, perhaps you could add a java.lang.NoImplicitInterface annotation, which would tell the compiler to behave in the current/standard way (not that such a drastic change in compiler behavior would ever make it into Java itself).
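Stephen’s implicit-interface scheme can be written out by hand to see what the compiler would be doing. All names here are illustrative; `Foo$` stands in for the generated implementation class:

```java
// A hand-written version of the implicit-interface idea: "Foo" is the
// interface every class would implicitly get, "Foo$" the sole
// implementation the compiler would instantiate behind the scenes.
interface Foo {
    String greet();
}

final class Foo$ implements Foo {
    public String greet() {
        return "hello";
    }
}

public class ImplicitInterfaceDemo {
    public static void main(String[] args) {
        // What the compiler would rewrite "new Foo()" into;
        // the variable's type stays the interface "Foo".
        Foo foo = new Foo$();
        System.out.println(foo.greet()); // prints hello

        // A test can now supply any other Foo, e.g. an inline fake:
        Foo fake = () -> "stubbed";
        System.out.println(fake.greet()); // prints stubbed
    }
}
```

Since every call site already goes through the interface type, mocking needs no agent and no removal of `final`.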

    A little while ago I was feeling out an architecture that used single-impl interfaces, and so built an annotation processor that does basically this: you write Foo, add @GenInterface, and it automatically creates/updates a matching IFoo as you save.

    I’m not saying it’s perfect (and annotation processors in general can be finicky–especially this one, IIRC), but if you really want single-impl interfaces, it seems like a good compromise compared to maintaining both by hand.
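The input/output relationship of a processor like the one Stephen describes can be sketched by hand. This is not his actual tool; the annotation is shown commented out and the “generated” interface is written manually, with all names hypothetical:

```java
// What the author would write (annotation shown as a comment, since
// the processor itself is hypothetical here):
// @GenInterface
final class Calculator implements ICalculator {
    public int add(int a, int b) {
        return a + b;
    }
}

// What the processor would generate from the class's public methods
// (written by hand for this sketch):
interface ICalculator {
    int add(int a, int b);
}

public class GenInterfaceDemo {
    public static void main(String[] args) {
        // Callers and tests depend on the generated interface.
        ICalculator calc = new Calculator();
        System.out.println(calc.add(2, 3)); // prints 5
    }
}
```

The appeal is that only the class is maintained by hand; the single-impl interface stops being a second thing to keep in sync.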
