Taking conventions to their obvious conclusion: The mandatory test language
I am considering having a language that mandates tests. If you don't have a matching test for the code in question, it will refuse to run. If the tests fail, it will refuse to run. If the tests take too long, they are considered failed and the code will refuse to run.
This would certainly ensure that there are tests. It wouldn't ensure that they are meaningful, however. That is fine by me. I am not interested in policy through enforcement, just gentle encouragement in the right direction.
The technical challenges of implementing such a system are nil. The implications for the workflow and ease of use of such a system are unknown. On the surface, checked exceptions are great. In practice, they are very cumbersome. This is why I am warning that I have only toyed with the idea, not implemented it.
I would consider it if there were a system in place to auto-generate the test stubs for me, so that you never default to a non-runnable state.
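To make the idea concrete, here is a minimal sketch of such a gate in Python. Everything here is an assumption for illustration: the `test_<name>` file-naming convention, the timeout value, and the `run_guarded` entry point are made up, not part of any existing tool.

```python
import subprocess
import sys
from pathlib import Path

TEST_TIMEOUT = 30  # seconds; a test that runs too long counts as failed

def run_guarded(script: str) -> None:
    """Refuse to run a script unless a matching test exists and passes."""
    test_file = Path(script).with_name("test_" + Path(script).name)
    if not test_file.exists():
        sys.exit(f"refusing to run {script}: no matching test {test_file}")
    try:
        result = subprocess.run(
            [sys.executable, str(test_file)], timeout=TEST_TIMEOUT
        )
    except subprocess.TimeoutExpired:
        sys.exit(f"refusing to run {script}: tests exceeded {TEST_TIMEOUT}s")
    if result.returncode != 0:
        sys.exit(f"refusing to run {script}: tests failed")
    # Only now does the actual script get to execute.
    subprocess.run([sys.executable, script])
```

Note that the gate says nothing about what the test checks, only that it exists, passes, and finishes in time, which matches the "encouragement, not enforcement" goal above.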
It would be nice to leverage more from the tests at the language level.
Do you mean making testing a declarative modeling language?
Does that mean you must have 100% code coverage? If so, that seems a bit dogmatic to me.
When is the big move?
Some time ago I had a similar idea, but more at the OS level. I thought it would be great if the OS required each DLL to have tests, and used them as a version compatibility checker.
For example, if your program uses foo.dll v 1.0 and a new version is installed in the system, the OS just needs to run your tests against the new DLL to verify whether it's compatible with your software. If it is, we can throw away the old DLL, or at least not use it, and conserve memory.
Dunno if it's worth the effort... people who care about tests write them already anyway... people who don't care, well... they don't care, so why would they use this new language?
I don't really see the benefit
Maybe an intermediate solution would be better: a mainstream language that really integrates tests, without the need to use external frameworks or tools except for (maybe) GUI runners and without the requirement of specific IDE suites.
Also, the problem of a test "taking too much time" to complete must be thought out very well. How would you define a test? What is the threshold time that makes a test fail? Execution time should always be measured under the same conditions. A build server that also performs other activities cannot be used for that purpose, because the tests will surely take a different amount of time every run.
I mean taking the test DSL that I already talked about, and using that as a core part of the primary language.
You don't have a runnable script if you don't have a passing test.
Having it generate automatic stubs takes away the need to create the tests in the first place.
No, it is not 100% code coverage, it is just ensuring you have some test(s).
It sounds very cumbersome to me. However, I disagree about checked exceptions; I think they actually are a good idea, because they ensure that all exceptions get caught.
This would not be TDD, anyway. Mind you, I don't disagree in principle, and I can see its advantages.
Have you heard about Guantanamo (http://docs.codehaus.org/display/ASH/Guantanamo)? Same goal, different implementation: It simply deletes all code that is not covered by tests.
Would this language ensure that the developer can't get away with creating bogus tests, by just making each test pass with Assert.That(true, Is.EqualTo(true))?
Retract that...missed the 2nd paragraph ;-)
I don't believe in that tough a love
Checked exceptions are fine in theory, but cumbersome in real use
I think that it would devalue the tests in the long run. I worked on a project once where we had the build fail if any public methods or properties had no XML comments. The same guy who pushed this through also introduced a neat tool (forget what it was called - GhostDoc?) which would generate XML comments for you. It used the method and parameter names to generate the wording of the comments, so by default it added no value at all.
I note that one commenter on your post has already suggested a tool that does the same thing for your mandatory tests.
I like your idea of enforcing design constraints through unit tests, but think that this would be a step too far.
Checked exceptions being difficult or cumbersome in practice is exactly what I disagree on. Having programmed in Java for a few years, I never really saw the problem. On the other hand, I very much liked the fact that a method declares all its exceptions and the compiler ensures that they are handled correctly.
We can call it Xtrem TDD. I can see it coming.
I think efforts would be better spent creating either a DSL or frameworks of some sort. The higher the level of abstraction you can assume to be correct, the less you have to "test the language".
Not only that, but it sounds as if this is an effort to get other employees on board, and they aren't going to use it unless forced to by corporate policy.
Only masochists volunteer to have their hands tied without having a key.
@ the other Jeremy - Having used it on a few projects now, I can tell you that GhostDoc adds plenty of value.
At the very least, it gives my developers an easy way to get the general boilerplate written, which encourages that they at least provide some documentation.
That documentation then gets easily spotted during peer review, at which point it becomes obvious if they left the boilerplate unmodified and/or if it is in other need of improvement. With that, the documentation becomes good.
Finally, as they get used to using it, they find that GhostDoc makes better boilerplate (and reduces their manual documentation effort) when they use good naming of members, arguments, types, etc. They receive plenty of other encouragement (and enforcement) in that area, but the additional reinforcement and trickle-down benefits that GhostDoc provides are certainly helpful.
If you haven't found it of value because the above isn't happening, you don't have a tool problem, you have a people and/or process problem.
Ayende, as last time, the CQL language can help you.
The following convention warns if there is code not covered:
WARN IF Count > 0 IN SELECT TYPES
WHERE PercentageCoverage < 100
and, more useful, the following convention warns if code added or refactored since the last release is not 100% covered:
WARN IF Count > 0 IN SELECT METHODS
WHERE PercentageCoverage < 100 AND
(CodeWasChanged OR WasAdded)
What I have apparently not made clear is that I am not talking about strict code in this post.
I am talking about writing tests for DSL scripts
I had the same issue on a project in the early '90s, where the code was reviewed by civil servants with no coding knowledge and I couldn't get paid until every function and if statement had comments.
As the code was largely produced by a code generator, I modified the core template to put "Government-mandated comment" in front of all the if statements.
It passed the review :-)
I really like this idea, especially if you're going to leverage Boo to automate some of the tests, as in using some attributes to codegen some properties. I could see this working very well alongside a DbC tool.
I'd only suggest that it work as a compiler option, like -Wall from the C days, so you could optionally compile with this flag set to strict test verification. Then you could choose to adopt the approach, or not. Though arguably, if you're going to use your language at all, you'd be doing exactly that.
Take a look at T#, a new testing framework based on the syntax of C#, but with a focus on testing by intention for .NET developers.
Paul Hatcher, that's what happens when somebody blindly follows standard operating procedure without understanding why it existed in the first place. I've worked for and with the government sector, so I understand the level of idiocy that you have to deal with.
"A gentle encouragement in the right direction" would be having a default set rules like you mentioned with a easy way to turn it off as well. I personally think that test should be consider at the design level and enforce at the project level. Like Davy said the people who want to do the right thing will do the right thing and vice versa.
Mandating testing at the language level is such a broad stroke that it only adds bloat to the language. It would be better as a set of options that you can turn on or off when testing.
If you think this is useful, you will certainly be interested in Design by Contract. It is perhaps an inverse of unit testing, where handling state-oriented testing is difficult and requires great work, but interaction testing is easy.
One nice thing is that it makes clear what code is at fault when a bug occurs. You can't just fake your contracts, because building useful contracts is often the easiest way to debug.
It can't remove the need for smoke tests, but it can allow the smoke tests to better represent the Behaviour-Driven Development idea of agile documentation from tests.
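Design by Contract can be sketched in a few lines of Python using assertions as contracts. The `contract` decorator and `square_root` example below are my own illustration, not a real DbC tool; the point is the blame assignment the comment describes: a failed precondition points at the caller, a failed postcondition at the function itself.

```python
from functools import wraps

def contract(pre=None, post=None):
    """Attach an optional precondition and postcondition to a function.

    A violated precondition blames the caller; a violated postcondition
    blames the function itself. This is what makes it clear which code
    is at fault when a bug occurs."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args):
            if pre is not None:
                assert pre(*args), f"precondition of {fn.__name__} violated by caller"
            result = fn(*args)
            if post is not None:
                assert post(result), f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def square_root(x):
    return x ** 0.5
```

Calling `square_root(-1.0)` fails at the precondition, immediately identifying the caller as the faulty party rather than letting a bad value propagate.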
If you think checked exceptions are cumbersome in real use, imagine how other programmers will find working with the sort of constraints your programming environment will place on them. If checked exceptions are considered annoying, mandatory test code is, if anything, far worse.
As an aside, I completely disagree with you on checked exceptions -- coming from a heavily (originally) C++- and later C#-oriented background I superficially considered checked exceptions to be annoying and pointless, but having worked almost exclusively with Java in the last few months I now find the lack of checked exceptions in C# actually disturbing. This is off-topic though, so if you want to open up a separate discussion on the subject I'll be happy to add my two cents.