Ayende @ Rahien

Refunds available at head office

A bad test

(image: screenshot of the failing test)

This is a bad test, because what it does is ensure that something does not work. I just finished implementing the session.Advanced.Defer support, and this test got my attention by failing the build.

Bad test: you should be telling me when I broke something, not when I added new functionality.
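For illustration only (this is not Ayende's actual test, and the names are hypothetical stand-ins in Python rather than the post's .NET), a minimal sketch of the kind of negative test being criticized: it asserts that an operation is *not* supported, so it passes only while the feature is missing and fails the build the moment someone implements it.

```python
import pytest

class Session:
    """Hypothetical stand-in for the session API; defer is not yet supported."""
    def defer(self, *commands):
        raise NotImplementedError("defer is not supported yet")

def test_defer_is_not_supported():
    # This test passes only while the feature is missing. Implementing
    # defer() makes it fail -- exactly the behavior the post objects to.
    with pytest.raises(NotImplementedError):
        Session().defer("some-command")
```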

Posted By: Ayende Rahien

Comments

05/18/2012 09:14 AM by Daan

Not entirely true:

Your requirements have changed; thus, your test no longer meets the requirements.

05/18/2012 09:16 AM by Indranil

Ah but it gives 100% code coverage. :-P

05/18/2012 09:23 AM by Dalibor Carapic

I agree with Daan. If the test ensures that any client that uses your API will get a valid error when he tries something that is not supported, then the test is just fine.

05/18/2012 09:31 AM by Rémi BOURGAREL

If you consider your doc as part of the API, then your Test is testing your doc.

05/18/2012 09:49 AM by maciejk

Also, it can give great information to other programmers - they might be browsing tests to see how they're supposed to use this functionality and when they see this test they'll immediately know that it's not yet implemented.

05/18/2012 10:14 AM by Daniel Marbach

How about this: you are changing the behavior of your API. If you followed a test-first approach, you'd need to write a new test for your changes. Let it fail with the not-supported exception, remove the assert-throws from the other test, and then implement the new feature. This is not a bad test. It is you not following the principles.

Daniel

05/18/2012 11:28 AM by Sergio

It's a bad test because you make more than one assertion in it, not for the reason you mentioned.

One test, one assertion. That way you know all the failed assertions. In your test, all of them may be failing, or just the first one; you won't know.
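Sergio's point can be sketched like this (hypothetical names, and Python/pytest standing in for the post's .NET): when several assertions share one test, the first failure masks the rest; splitting them, for example via parametrization, reports each one independently.

```python
import pytest

def is_supported(operation):
    # Hypothetical feature table standing in for the real API.
    supported = {"load": True, "defer": False, "last_etag": False}
    return supported[operation]

# One test, many assertions: if the "defer" assertion fails,
# the "last_etag" assertion is never even evaluated.
def test_unsupported_all_in_one():
    assert not is_supported("defer")
    assert not is_supported("last_etag")

# One assertion per test: every failure is reported individually.
@pytest.mark.parametrize("operation", ["defer", "last_etag"])
def test_unsupported(operation):
    assert not is_supported(operation)
```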

05/18/2012 11:40 AM by Onofrio Panzarino

I think that in the real world no one would find that kind of test useful, not even very fanatical agilists. Making tests succeed for real features is hard enough work.

05/18/2012 01:57 PM by Paul Stovell

I actually wrote a test like that against RavenDB.

A few versions ago there was a bug where Uri properties would cause Raven to think the data had always changed. So you could do this from memory:

var customer = session.Load<Customer>(id);
Assert.IsTrue(session.Advanced.HasChanges(customer));

I didn't want a broken test failing my build for a week while I waited for a new RavenDB build so I wrote it as a negative test. When I upgraded RavenDB and the test failed it was a nice reminder about the issue.

In some ways I think the negative test had /some/ value - if I ever wondered if a problem still existed I had some proof for it. But having a test 'fail' because something 'worked' did leave a strange feeling. I'm not sure what the solution for this should be. Maybe Assert.Inconclusive()?

Paul
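Paul's "reminder" pattern has a direct analogue in some test frameworks: an expected-failure marker. A sketch using pytest's strict xfail, with a hypothetical has_changes function standing in for the RavenDB API: the test documents the known bug, and because of strict=True, the run reports an error the moment an upgrade fixes the bug and the test unexpectedly passes.

```python
import pytest

def has_changes(entity):
    # Hypothetical stand-in for session.Advanced.HasChanges: the "bug"
    # is that freshly loaded entities are always reported as changed.
    return True

@pytest.mark.xfail(strict=True,
                   reason="Uri properties make Raven think the data changed")
def test_fresh_entity_has_no_changes():
    # Expected to fail while the bug exists; with strict=True, an
    # unexpected pass (bug fixed) fails the build as a reminder.
    assert not has_changes({"id": "customers/1"})
```

Unlike Assert.Inconclusive(), which always reports the same neutral result, a strict xfail distinguishes "still broken" (expected) from "now working" (act on it).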

05/18/2012 02:08 PM by Federico Lois

As mentioned before, the problem isn't the test itself. It may be that it is hidden in an improper location without giving a semantic clue.

I do have a test file dedicated to "unsupported" features or "defects" in third-party components. When I update any of them (including RavenDB) and they are fixed, I know right away and fix the workarounds in the code.

So I don't think it is a test; I believe it is a "note" to your team, or to your future self, for when you are not actively thinking about it.

05/18/2012 03:11 PM by configurator

Parts of that test are actually quite important. shardedDocumentStore.Url can't possibly return a result that makes sense - if it returned (for example) the first shard's URL that would have been a bug. Same goes for DatabaseCommands and AsyncDatabaseCommands.

I think it's even important for GetLastWrittenEtag and Defer - because as long as you've not implemented it, an incorrect result can be far, far worse than a NotSupportedException.

05/18/2012 03:20 PM by Vadim K

This is why I love using machine.specifications for my testing framework. You can define a test without implementing it. This lets you know which features still need to be implemented.
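The "declared but not implemented" idea Vadim describes exists in other frameworks too. As a sketch (pytest rather than machine.specifications, with a hypothetical test name), you can register a placeholder test and skip it until the feature lands, so the report advertises the missing functionality without failing the build:

```python
import pytest

# A declared-but-unimplemented spec: it shows up in the test report as
# skipped, advertising the missing feature without breaking the build.
@pytest.mark.skip(reason="session.Advanced.Defer is not implemented yet")
def test_defer_batches_commands():
    raise NotImplementedError("write this once Defer lands")
```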

05/18/2012 04:31 PM by Mike McG

Tests are designed to enforce the expectations made of the code. Those expectations can be set by interfaces, but can be supplemented by traditional documentation or other out-of-code communications.

In this case, if such communications have established that these methods are not supported, then the tests correctly enforce that (and are "good"). By failing when one of these methods is implemented, the developer is reminded that the expectations of the code have changed, and out-of-code communication artifacts must be updated to reflect that.

05/18/2012 04:58 PM by João P. Bragança

Crazy Idea: What about exposing

public static IEnumerable<Action<IDocumentStore>> OperationsNotSupported() {}

Then you could refactor without refuctoring.
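João's idea, sketched in Python under assumed names (not RavenDB's actual API): the store itself publishes the list of operations it cannot support, and a single parametrized test is driven from that list, so implementing an operation only requires removing it from the list, with no test-hunting.

```python
import pytest

class DocumentStore:
    """Hypothetical sharded store with a registry of unsupported operations."""

    @staticmethod
    def operations_not_supported():
        # Each entry exercises one operation that must raise.
        return [
            lambda store: store.url,
            lambda store: store.defer("cmd"),
        ]

    @property
    def url(self):
        raise NotImplementedError("a sharded store has no single URL")

    def defer(self, *commands):
        raise NotImplementedError("defer is not supported")

# One parametrized test covers every registered unsupported operation.
@pytest.mark.parametrize("operation", DocumentStore.operations_not_supported())
def test_operation_is_not_supported(operation):
    with pytest.raises(NotImplementedError):
        operation(DocumentStore())
```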

05/18/2012 11:37 PM by Chris Wright

There should have been separate tests for each of those functions, positioned where you would have naturally written the tests for the real, working functionality. That way, it wouldn't have shown up at the build server; it would have shown up as soon as you considered working on the feature that wasn't supported.

Glomming everything together means you have a submarine test, which, no matter what it's testing, is bad.

Had these tests been closer to where you were expecting, you wouldn't have thought to write this post, most likely.

05/20/2012 08:28 PM by [ICR]

Doesn't this suggest it should have been a NotImplementedException originally?
