This is a short reply to Ward Bell’s post, Do Not Make Every Method Virtual. More specifically, I have an issue with this statement:
My argument is with opening up the class blindly and totally by making everything virtual. Suddenly nothing in the class is truly “closed for modification.” The “virtual” keyword announces to the world “here is my extension point.” When every method is virtual, the world is invited to change every method.
I think that there is some confusion here. The original class is closed for modification. I can’t change it. I can create a new class extending the behavior. That is what the Open/Closed Principle (OCP) is all about.
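To make that concrete, here is a minimal sketch (the class names are invented for illustration): the shipped class stays untouched, and the extension lives entirely in a new derived class.

```csharp
using System.Collections.Generic;

// Hypothetical shipped class: closed for modification.
public class Elevator
{
    public int Floor { get; private set; }

    public virtual void Up() => Floor++;
}

// The extension lives in a *new* class; Elevator itself is never edited.
// That is exactly "open for extension, closed for modification".
public class AnnouncingElevator : Elevator
{
    public List<string> Announcements { get; } = new List<string>();

    public override void Up()
    {
        base.Up(); // keep the original behavior
        Announcements.Add($"Now on floor {Floor}");
    }
}
```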
Ward is giving the example of an Elevator, and overriding its Up() method:
I knew that Up() sent the elevator ascending. But I can’t stop someone from re-implementing Up() so that it descends instead. Maybe base.Up() triggers the doors to close. The developer might call base.Up() too late, sending the elevator in motion before the doors have closed. The developer could replace my base.Up with something that juggled the sequence of door closing and elevator motion methods, interleaving custom behaviors, yielding upward motion that failed to satisfy some other elevator guarantees.
And the problem here is… ?
I am sorry, but I see some descriptions of potential bugs (or desired behaviors) listed here. I fail to see a problem.
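A hedged sketch of the scenario Ward describes (the Elevator API here is imagined from his description, not his actual code). The "doors close before motion" invariant is held by the base class; a careless subclass can break it, but that is a bug in the subclass, not in the base:

```csharp
using System.Collections.Generic;

public class Elevator
{
    public List<string> Events { get; } = new List<string>();

    protected void CloseDoors() => Events.Add("doors-closed");
    protected void StartMotor() => Events.Add("moving");

    public virtual void Up()
    {
        CloseDoors();   // invariant: doors close before the cab moves
        StartMotor();
    }
}

// A careless subclass: it juggles the sequence and starts the motor
// before the doors close. The bug lives here, not in Elevator.
public class RecklessElevator : Elevator
{
    public override void Up()
    {
        StartMotor();   // invariant violated by the *subclass*
        CloseDoors();
    }
}
```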
But I write frameworks for you to use. You’ve licensed my product and you’re paying me for support. When the elevator goes down instead of up; when the doors close suddenly and injure a rider; when the elevator simply stops … you don’t say “I wonder what I did?” You say “that elevator we bought is a piece of crap.”
Well, guess what, if you switch the wires in the elevator, it will behave strangely. Nothing out of the ordinary here. And by switching the wires you accept responsibility for the actions you did.
I have several frameworks available that are nothing but huge extension points that the user can plug into at any point and at any time. More than that, I have those extension points at several layers, so they can choose at what level to interact. And no, those aren’t designed extension points, they fall out of the way I build software. And so far, based on actual evidence in the field, it is working.
I can descend from any non-sealed class, define a constructor, and throw a NotImplementedException from it.
Shit on a dinner plate; get shit for dinner :-)
I guess I'm going to have to start declaring all my classes sealed, now. Thanks for warning me about this potential behavior. ;)
The problem with virtual methods is that they become a source of maintenance. Once a method is virtual, you have to support overriding it permanently. As your public footprint grows, so does your maintenance burden; making changes and creating new versions becomes increasingly complex, and potentially impossible without breaking backwards compatibility.
Constraint increases scalability. Pick your extension points wisely.
Could you elaborate on the relationship between OCP and LSP that Ward discusses? I tend to agree with you in that if you inherit from Elevator and completely change the implementation of Up so that it has a different meaning, that is on you, and the original class still satisfies OCP. But I think Ward has a point that you are now violating LSP. If you override Up to make your new elevator calculate taxes, is it still an elevator?
I think it comes back to the theme of not protecting the developer. If the guy is dumb enough to make Up calculate taxes, that is his problem. Don't close off any meaningful extension of Up because you are afraid of idiots.
Well, it really depends... When you write an open source framework, you can make everything virtual (although I believe it is wrong). If some developer uses your framework and is stupid enough to override everything with a senseless implementation, nobody will hold you responsible for that, and even if they do, you can always say 'screw you, I am not paid to do this. Do it properly or send a patch'. If you write a commercial framework (and I believe this is Ward's background) you can't afford this attitude. If your customer's software breaks, they will call you first, and even if you can prove that it's their fault, it is still not something you want to experience. That's why you want to minimize the possibility of doing stupid things.
It never made sense to me why, in .NET, classes are unsealed by default but methods are non-virtual by default. Virtual-by-default methods would encourage better development practices, such as the OCP mentioned by Oren. There is probably a small performance decrease due to the v-table, but if performance were more important than good practices and ease of development, we would be using C/C++.
A client would have to use derived classes to introduce a breaking change and they can still do so unless the class is marked as sealed.
What would worry me more is a company making a framework/API deciding to seal a method in the second revision and claiming it is not a breaking change because you were not supposed to override the method in the first place.
Oren, I assume you've read Anders' point of view on why he made the decision to go with non-virtual by default.
I think if you're building a framework, you just have to be explicit about the extensibility points you provide rather than just leaving everything open.
I can see the counterargument, but it just isn't strong enough for me. There haven't been THAT many times when I've said "I wish so-and-so was virtual"... perhaps you have been bitten a lot more than me on that.
I'm not sure I agree with what you are saying about Commercial Frameworks. .NET is a Commercial Framework after all, and if it's my job to extend something, and that something breaks, everyone in my department will know that I broke it, not Microsoft.
Using the Elevator example, if suddenly Up starts calculating taxes, the company won't immediately run to the Acme Elevator Code Company and complain that Up is broken, they'll run to Developer X and say "Hey, weren't you working on modifying the Up code?".
Now, if Developer X lies and says that he didn't touch it and they do contact our imaginary software company, it wouldn't take more than a few minutes to discover whether they've overridden the virtual Up method once the source code is examined.
You are also ignoring the fact that your framework's extensibility might be one of its selling points as well.
I think one of the challenges we face as developers is dealing with complexity. If you make all your methods virtual you are opening up a can of worms that you really didn't need to open. Of course in some cases you want to expose extension points, but then you should (IMHO) be explicit about it, as in every other aspect of software development.
One problem with inheritance is that you never know what context the parent method is running in. This makes inheritance something that should be treated with great care :) Isn't that where the guideline "composition over inheritance" comes from?
yeah ayende.... don't you know people should just be able to arbitrarily call stuff with no understanding of the api and without unit tests!
and if they do need to change its behavior, they should have to hack the crap out of the class that extends it. without unit tests.
When you make things extensible, you are declaring, "OK, you can extend that"; and as an honest API developer you have to also say "You can extend that as long as you do not violate the so and so invariants and maintain the so and so protocols".
The more things you make extensible, the more complex the correctness conditions become, and the easier it is for the API developer to underspecify them (in which case it is the developer's fault!), let alone for the client to break them (in which case, OK, it's the client's fault).
For example, if you are making a Number class extensible, you should specify that if the client is overriding the Add method, he should maintain commutativity, associativity, units, distributivity over multiplication etc., because all the rest of the code in Number class depends on that, and because everyone expects a Number to behave in this way.
This way you might be able to extend a RealNumber to a ComplexNumber, arbitrarily extending the .compareTo method to compare norms, because the API developer has just specified that comparison should impose a transitive, reflexive relation.
However, he has forgotten to tell about antisymmetry; and now you've extended the class in a way that satisfies all requirements written in the documentation about extensibility, but that in some cases breaks some equation-solving library code that expects its arguments to come from a completely linearly ordered set.
So, the API developer has made Number so extensible that he himself had forgotten to document a not-so-obvious requirement to extenders, and the client extended the class in a fashion that generally makes sense but nevertheless broke many things.
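The comparison example above can be sketched concretely (all names here are invented for illustration, not anyone's real Number API). Comparing complex numbers by norm yields a relation that is transitive and reflexive, yet not antisymmetric: two distinct values can compare as equal, which quietly breaks any client that assumes a total order.

```csharp
using System;

// Extending comparison to complex numbers by comparing norms.
// The relation is transitive and reflexive, but NOT antisymmetric:
// distinct values such as 3+4i and 5 both have norm 5.
public struct Complex : IComparable<Complex>
{
    public readonly double Re, Im;

    public Complex(double re, double im) { Re = re; Im = im; }

    public double Norm => Math.Sqrt(Re * Re + Im * Im);

    // Satisfies every *documented* requirement (transitive, reflexive)
    // while still breaking code that expects a linear order.
    public int CompareTo(Complex other) => Norm.CompareTo(other.Norm);
}
```

Code that treats `CompareTo(...) == 0` as equality, for example a sorted-set lookup, will silently conflate 3+4i with 5.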
Excuse me for the somewhat perverse example, I'm reading a book on abstract algebra ;) I hope it illustrates my point.
No, it's not an Elevator anymore.
And that's YOUR problem for overriding it and making that change. I think that's Oren's point - if you override it and change the behavior, that's your responsibility.
I think Eyston hit the key driver for this kind of behaviour - we are so concerned with protecting the (stupid) developer that we cripple anyone else. Building frameworks for the lowest common denominator only results in something that can do one thing, one way which severely restricts its usefulness.
Is this attitude something that is only prevalent in the MS / .Net world or is it something that is associated with static languages in general? From my (limited) understanding of dynamic languages like ruby, the problem simply doesn't exist and therefore developers take more care to ensure what they are doing is correct.
Too early... perhaps in .NET 5 we will have "virtual" by default... though I rather think 'sealed' has been there for a long time ;)
That the person who made the change assumes responsibility for its effects doesn't change the fact that when they mess up, they are going to turn around and blame you for their mistake.
That's just how people are.
Knowing this, it's your job to make working with your code/elevator/product as easy as possible. That means it needs to be extensible however they want to extend it, without allowing them to seriously shoot themselves in the foot. That's a hard line to walk.
I don't think there is any question that blindly making everything virtual is placing the proverbial gun in the user's proverbial hands. Sometimes that'll be the right move, sometimes not.
Typically, no one will write Elevator::Up() to calculate taxes. An overridden implementation will be different just enough to violate some constraint (e.g. some postcondition, or calling another method when it should not be called). Honestly, how many frameworks do you know where the preconditions and postconditions for every virtual method are documented? If we live in the same world, not a single one.
In a sense, overriding a method is most of the time betting that the framework is written 'properly' and that you will get the constraints right, even though they may be implicit. This is even worse for commercial frameworks where you don't have the source code, so you can't look up the implementation of the base class (leaving aside Reflector - this is not a topic about dev tools).
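One way a framework can make such constraints explicit, and even enforce them, is the template-method style (sketched here with invented names): keep the public entry point non-virtual so it can check the contract, and expose a protected virtual core as the extension point.

```csharp
using System;

public class Elevator
{
    public int Floor { get; protected set; }

    // The public entry point is non-virtual: it owns the contract.
    public void Up()
    {
        int before = Floor;
        UpCore();
        // Postcondition: the cab must end up on a higher floor.
        if (Floor <= before)
            throw new InvalidOperationException("UpCore() must increase Floor");
    }

    // The extension point is the protected virtual core.
    protected virtual void UpCore() => Floor++;
}

public class ExpressElevator : Elevator
{
    protected override void UpCore() => Floor += 10; // respects the contract
}

public class BrokenElevator : Elevator
{
    protected override void UpCore() => Floor--; // violates the postcondition
}
```

With this shape, a BrokenElevator fails loudly at the call site instead of silently sending the cab the wrong way.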
Regarding extensibility as a selling point, you can have an extensible tool without making all methods virtual. NHibernate, Castle and many others are good examples, and the majority of methods there are non-virtual.
Actually, here is the great example of what is the root of the problem: ayende.com/.../Count-the-interfaces.aspx
a) 'logical behavior' is different for different people.
b) even the best of us don't read the documentation before coding a call to a method - most of the time we expect 'logical behavior', and read the documentation when we run into problems
This is not to say that you don't have to know what you are doing or that you should drop unit tests - that would be dumb. However, minimizing the possibility of shooting yourself in the foot is a good thing.
Good devs like Ayende will test and analyze the code and see where the problem is. However, when you sell a tool to an average overworked developer on a tight schedule with a lacking test suite, and they run into something like this, they will ask you to see what's wrong, and even if you show them, they will remember you as 'guys who write braindead frameworks', which is never good publicity.
Yes, and while you're at it - please put warning labels on everything, because I might not know my McDonald's coffee is going to be hot.
I am familiar with the argument, and disagree with it.
There have been plenty of times where that bit me, hard.
If they don't do all of that, they have a bug. It is quite simple.
I think one of the problems with virtual in framework code is backward compatibility. Once you mark a method as virtual, you have to make sure that any changes/refactoring you make in the next release do not break all kinds of weird classes people made by inheriting from yours.
Basically, once it's marked as virtual, you have to kinda officially support it for the indefinite future.
I guess that's a question of attitude, then: you're saying that making it possible (by underdocumenting) to create hard-to-spot bugs with your framework is "just" a bug, and I'm saying that prohibiting extending it in a certain useful way (by making certain methods non-virtual) is "just" the lack of a feature.
I argue that hard-to-spot bugs are worse in some sense, because they can shoot you in the foot at the most unexpected moment and cost you money, whereas the lack of a feature can usually be worked around, because you are, at least, aware of it.
LOL, now you've done it Ayende, you provoked me into blogging again! ;-)
I wrote a reply but it became way too long, so here's the link to my new blog and the reply to your post:
Oh, and since this may not be clear from the (joking) tone of my comment above - I agree with you, and I want to debunk the idea that making methods (in a non-sealed class) non-virtual protects anyone from anything.
Coming originally from Java-land I have to say: default-virtual is not the solution. It opens up a wide range of problems ("Fragile Base Class Problem", ...). This is not about protecting dumb or young developers. It is about having control over extensibility. Explicitly opening extension points feels better than the default-virtual in Java. Of course, the pain within C# comes when proxying such classes. So with default-virtual, the ability to explicitly seal methods would also have to come. Is that really any better for dynamic proxying and other such (nice) stuff?
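A small sketch of the Fragile Base Class Problem mentioned above (all names are hypothetical). The subclass makes a reasonable but undocumented assumption about the base class's self-calls; if a later release of the base class rewrites `AddAll` to append directly for performance, `AddCalls` silently stays at zero with no compile error at all.

```csharp
using System.Collections.Generic;

// "Version N" of a hypothetical framework class.
public class InstrumentedList<T>
{
    private readonly List<T> items = new List<T>();

    public int Count => items.Count;

    public virtual void Add(T item) => items.Add(item);

    // Documented only as "adds every element" -- in this version it
    // happens to route each element through the virtual Add().
    public virtual void AddAll(IEnumerable<T> range)
    {
        foreach (var item in range) Add(item);
    }
}

// A subclass that (reasonably, but fragilely) assumes AddAll
// always funnels through Add.
public class CountingList<T> : InstrumentedList<T>
{
    public int AddCalls { get; private set; }

    public override void Add(T item)
    {
        AddCalls++;         // breaks silently if version N+1
        base.Add(item);     // stops calling Add() from AddAll()
    }
}
```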
@Oren Fair enough, I guess this is one of those reasons why you prefer open source frameworks eh!? ;-)
I'm glad Neil Mosafi pointed out Anders Hejlsberg's comments on this topic, as they're very well thought out. Unfortunately, Anders' comments about "outgoing contracts" don't seem to have been addressed very explicitly by the comments above.
The critical thing Anders was getting at was about pre/post conditions of exposed virtual methods. These conditions are often implicit and subjective for any non-trivial class, and so inheritors are prone to make incorrect assumptions when they override. If these assumptions work in version N of a framework and are subtly broken in version N+1, then whose fault is that?
It's not always reasonable to blame the inheritor, as false assumptions can sometimes stand up to very intense scrutiny. Since even the best of developers don't recognise all their assumptions, you can't rely on unit tests to mitigate this risk fully.
So we're left with blaming the framework provider for not making their pre/post conditions more explicit. Put in the position of having to preempt everybody's assumptions, a pragmatic framework provider would recognise the effort:reward ratio is ridiculous and therefore default to non-virtual methods. It's very convenient then, that C# defaults methods to non-virtual.
Thanks, Ayende, for providing another battlefield upon which this argument can be played out.
There is even more commentary on my post on my blog for those of you who can't get enough of the subject.
I think we've aired our differences. It doesn't look like anyone is moving. I appreciate the arguments: "it's your funeral" and "don't patronize the developer".
I hope you can appreciate the implications of the extra burden I carry as a commercial product provider who faces often unfair demands from paying clients. Your community has a different risk profile than mine ... and views the consequences of failure ... whatever the cause of failure ... differently than my community does.
There is no one "right way" for developers to feel, regardless of their skill. They pick a framework in part because of what they demand of it and the people who support it. This is a matter of their expectations, not of their developer acumen.
Finally, the consequences of my preference are not as dire as portrayed. We release every six weeks. If we have failed to open an extension point that has demonstrable value to our community, we can address that quickly.
The remedy may not be virtualizing the method!
Instead the actual use cases may lead to rethinking the class design and finding a more judicious and helpful way to satisfy the need. Responding to actual requirements ... rather than a shotgun approach to imagined ones ... is an agile practice that ought to resonate with you and your readers.
Thanks again, Ward.
I think in religious arguments such as this the default behaviour should be the one which requires the least understanding and maintenance to create solid, working code. To that end, I think C#'s choice of non-virtual by default is a good one.
I think both approaches have their merits. Though I personally think that changing course in the middle is a bad idea. Framework developers still have the option to make everything virtual if they choose to anyway.