Scenario Driven Tests
I originally titled this blog post "Separate the scenario under test from the asserts." I intentionally use the terminology scenario under test, instead of calling it the class or method under test.
One of the main problems with unit testing is that we are torn between competing forces. One is the usual drive for abstraction and the eradication of duplication; the second is the clarity of the test itself. Karl Seguin does a good job covering that conflict.
I am dealing with the issue by the simple expedient of forbidding anything but asserts in the test method. And no, I don’t mean something like BDD, where the code under test is set up in the constructor or the context initialization method.
I tend to divide my test code into four distinct parts:
- Scenario under test
- Scenario executor
- Test model, which represents the state of the application
- The test code itself, which asserts the results of a specific scenario against the test model
The problem is that a single scenario in the application may very well have multiple things that we want to actually test. Take the example of authenticating a user: several things happen during the process of authentication, such as the actual authentication, updating the last login date, resetting bad login attempts, updating usage statistics, etc.
I am going to write the code to test all of those scenarios first, and then discuss the roles of each item in the list. I think it will be clearer to discuss it when you have the code in front of you.
We will start with the scenarios:
public class LoginSuccessfully : IScenario
{
    public void Execute(ScenarioContext context)
    {
        context.Login("my-user", "swordfish is a bad password");
    }
}

public class TryLoginWithBadPasswordTwice : IScenario
{
    public void Execute(ScenarioContext context)
    {
        context.Login("my-user", "bad pass");
        context.Login("my-user", "bad pass");
    }
}

public class TryLoginWithBadPasswordTwiceThenTryWithRealPassword : IScenario
{
    public void Execute(ScenarioContext context)
    {
        context.Login("my-user", "bad pass");
        context.Login("my-user", "bad pass");
        context.Login("my-user", "swordfish is a bad password");
    }
}
And a few tests that would show the common usage:
public class AuthenticationTests : ScenarioTests
{
    [Fact]
    public void WillUpdateLoginDateOnSuccessfulLogin()
    {
        ExecuteScenario<LoginSuccessfully>();
        Assert.Equal(CurrentTime, model.CurrentUser.LastLogin);
    }

    [Fact]
    public void WillNotUpdateLoginDateOnFailedLogin()
    {
        ExecuteScenario<TryLoginWithBadPasswordTwice>();
        Assert.NotEqual(CurrentTime, model.CurrentUser.LastLogin);
    }

    [Fact]
    public void WillUpdateBadLoginCountOnFailedLogin()
    {
        ExecuteScenario<TryLoginWithBadPasswordTwice>();
        Assert.Equal(2, model.CurrentUser.BadLoginCount);
    }

    [Fact]
    public void CanSuccessfullyLoginAfterTwoFailedAttempts()
    {
        ExecuteScenario<TryLoginWithBadPasswordTwiceThenTryWithRealPassword>();
        Assert.True(model.CurrentUser.IsAuthenticated);
    }
}
As you can see, each of the tests is short and to the point, and there is a clear distinction between the assertions and the scenario being tested.
Each scenario represents some action in the system whose behavior we want to verify. Scenarios are usually written with the help of a scenario context (or something similar), which gives the scenario access to the application services required to perform its work. An alternative to the scenario context is to use a container in the tests and supply the application service implementations from there.
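The post doesn't show the IScenario interface or the ScenarioContext class themselves. A minimal sketch of what they might look like, based on how the scenarios above use them; the IAuthenticationService name and its wiring are my assumptions, not the post's code:

```csharp
// Sketch only: the real ScenarioContext is not shown in the post.
// IAuthenticationService is a hypothetical application service.
public interface IScenario
{
    void Execute(ScenarioContext context);
}

public interface IAuthenticationService
{
    void Authenticate(string user, string password);
}

public class ScenarioContext
{
    private readonly IAuthenticationService authentication;

    public ScenarioContext(IAuthenticationService authentication)
    {
        this.authentication = authentication;
    }

    // A facade method that scenarios call; it delegates to the
    // application service that does the real work.
    public void Login(string user, string password)
    {
        authentication.Authenticate(user, password);
    }
}
```

The point of the facade is that scenarios stay readable: they call Login rather than wiring up authentication services themselves.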
The executor (the ExecuteScenario<TScenario>() method) is responsible for setting up the environment for the scenario, executing the scenario, and cleaning up afterward. It is also responsible for any updates necessary to bring the test model up to date.
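The executor itself isn't shown in the post either. A rough sketch of what the base class might look like, assuming xUnit and the member names used in the tests above; everything beyond the ExecuteScenario<TScenario>() signature is an assumption:

```csharp
// Sketch only: the setup/teardown details are assumptions, not the post's code.
public abstract class ScenarioTests : IDisposable
{
    protected TestModel model;       // the test model the asserts run against
    protected DateTime CurrentTime;  // a frozen clock, so dates are assertable

    protected void ExecuteScenario<TScenario>()
        where TScenario : IScenario, new()
    {
        var context = SetUpEnvironment();        // build the application services
        try
        {
            new TScenario().Execute(context);    // run the scenario
            model = BuildTestModel(context);     // bring the test model up to date
        }
        finally
        {
            TearDownEnvironment(context);        // clean up afterward
        }
    }

    protected abstract ScenarioContext SetUpEnvironment();
    protected abstract TestModel BuildTestModel(ScenarioContext context);
    protected abstract void TearDownEnvironment(ScenarioContext context);

    public virtual void Dispose() { }
}
```

The try/finally guarantees teardown even when a scenario throws, which matters once the executor is spinning up the whole application per test.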
The test model represents the state of the application after the scenario has executed. It exists so the tests have something to assert against. In many cases you can use the actual model from the application, but there are cases where you will want to augment it with test-specific items to allow easier testing.
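The tests above assert against model.CurrentUser, so a minimal test model consistent with them might look like this; any names beyond the ones used in the asserts are assumptions:

```csharp
using System;

// Sketch of a test model shaped by the asserts in the tests above.
public class TestModel
{
    public UserModel CurrentUser { get; set; }
}

public class UserModel
{
    public DateTime LastLogin { get; set; }
    public int BadLoginCount { get; set; }
    public bool IsAuthenticated { get; set; }
}
```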
And the tests? Well, the tests simply execute a scenario and assert on the results.
By abstracting the execution of a scenario into the executor (which rarely changes) and providing an easy way of building scenarios, you get very rapid feedback in the test cycle while keeping the testing at a high level.
Also, relating to my previous post, note that what we are testing here isn’t a single class. We are testing the system’s behavior in a given scenario. Note also that we usually want to assert on various aspects of a single scenario (such as in the WillNotUpdateLoginDateOnFailedLogin and WillUpdateBadLoginCountOnFailedLogin tests).
Comments
Oren,
I do like what I see here, very clean and to the point.
I have a question regarding the IScenario/ScenarioContext however.
public class LoginSuccessfully : IScenario
{
public void Execute(ScenarioContext context)
{
}
}
Within your ScenarioContext, you have a Login method. I am wondering if you have one scenario context class for your entire test suite, and if so, how do you go about managing all the scenarios from one class? Or am I missing something obvious here?
Dave the Ninja
Dave,
I tend to use a single context for all tests.
The context represents the application, in a way that is nice for the scenarios to work with.
Hi Oren,
That was a fast response :-)
I understand what you're doing now; however, I can't wrap my head around one ScenarioContext for the entire application if it's, say, an MVC web application.
Logins and most other functionality/scenarios are handled by controller actions, etc.
Or is what you're proposing for testing the actual domain model, rather than, say, a "create user" action scenario?
David
Why not make each test fixture run a scenario only once; wouldn't that cut test execution time down drastically? It certainly smells a lot like BDD to me; why state it's not?
Hi Oren,
Could you show the ScenarioContext class, and the model field/property of the ScenarioTests class?
Soe Moe
Dave,
Controllers are about orchestrating things, not about doing the actual operations.
Adam,
The scenarios may differ between tests in the same class, and the same scenario is used in more than a single test.
Minor performance concerns are not something that I care about at design level.
Soe,
The context class is just a facade into the application itself.
The model is however you want to represent the state of the application for the tests.
So would you do all the mocking in the ExecuteScenario method?
E.g., so you don't have to go to the db, etc.
Hi Oren,
If I understand you correctly, you are only using this type of testing for applications and not for libraries, because when I look at the tests of Rhino-tools, I don't find this kind of test.
I must say that I am still searching for the 'correct' way of testing, and this is coming close ;) (it is also close to BDD)
I do not actually see how it is better than
[Fact]
public void WillUpdateLoginDateOnSuccessfulLogin()
{
    Context.Login("my-user", "swordfish is a bad password");
    Assert.Equal(CurrentTime, model.CurrentUser.LastLogin);
}
which does not require an additional class and reads just as clearly (mostly because of the test name).
If a scenario is used by more than one test, then you can extract it as a method, for example.
Interesting. Would you classify these as integration or acceptance tests? Do these scenario tests for NHProf involve the UI ?
Would be nice with some more code. It's always fun to dig more into the code than just the few lines you posted. A small sample project, maybe?
This approach is OK, but requires an additional step: how are you certain your individual methods used in a scenario work? I.o.w.: if you don't have tests for them (I'm not saying you should), using them combined in a scenario could lead to unwanted bugs in released code if you don't cover every theoretic scenario with a scenario test.
+1 for small sample project. :)
I hope you can show how to implement Scenario Driven Tests (for me, it is very close to BDD in a C# way) in your application, Macto (I believe it is still alive) :).
I would skip the extra context, and rather have that code in my tests. I think the context contains too much magic, and that hurts readability. It's not an issue for Ayende or his team, but not all of us have that level of experience on board.
Regarding the method itself, I would call it TDD. It's only natural that a method evolves and takes slightly different paths. And as long as we agree that TDD is not about testing, why is it important whether our tests talk to the database or not?
I'm also curious about seeing the Scenario Context and execution and where/how mocking fits in. Seeing the context code for the login scenarios you provided in this demo would be great.
I can see how this is different from BDD because of the difference in scope. BDD tests the behavior at an object scope, not necessarily at an entire scenario scope. Although I suppose there is nothing preventing a person from doing so.
Thanks,
corey
Thomas,
It is not TDD, because TDD is an approach.
I am using tests, but I don't use TDD.
Adam,
I set up the env. in ExecuteScenario, yes. If mocking is done, it will be done there, but I don't tend to use mocking much there.
David,
Yes, it doesn't really make sense to try to do this in a library; a library does only a single thing. An application is much more complex and requires a different strategy.
Andrey,
Not all scenarios are as simple as that.
And I want to have a clear separation between the test code and the tested scenario.
They both can change independently.
Torkel,
In NH Prof, those tests will stop just short of the UI. I am usually asserting on the View Model.
I don't bother with classification.
Frans,
I don't try to test every single method. I test a _scenario_.
The scenarios represent the things that I actually care about in my system.
I don't care about methods, just about overall system behavior.
Ayende, you know best what you don't do :-)
I agree with your approach, which is similar to what I try to achieve. I usually say my tests are BDD-inspired, but not actually BDD. I don't think we need yet another name, so I prefer to think of my approach as a variation of TDD, but still TDD.
What is it with your approach that breaks with TDD?
Thomas,
TDD is the approach/work method where you follow the Red-Green-Refactor cycle. Ayende tends not to work this way, but rather does small spikes, tries out different designs, and then adds the scenario tests. Thus he's not using TDD when writing these tests.
"I don't try to test every single method. I test a _scenario_.
The scenarios represent the things that I actually care about in my system.
I don't care about methods, just about overall system behavior. "
Yes, but what kind of valuable information does it provide? It only shows that the code for that scenario, executed in that order, works as expected. It says nothing about that scenario executed in a slightly different order, with slightly different data, or even a slightly different scenario. I.o.w.: it's good for testing that scenario, but you can't use it to determine whether your code will work for scenarios other than the ones you've tested.
If you don't use other means to cover that, it's actually simply testing 'a few' scenarios out of an infinite range of scenarios and declaring, based on those, that the code works.
That the code works in other scenarios (otherwise you'd be swamped with support calls ;)) is thus not the result of this kind of testing (as the tests have no value for those non-tested scenarios) but the result of something else, e.g. years of experience, re-use of proven solid code, implementations of well-designed algorithms, etc.
"And no, I don’t mean something like BDD, where the code under test is being setup in the constructor or the context initialization method."
I'm not sure that is a necessary condition for something being BDD.
In any event, I'm testing scenarios the same way you are, in particular:
"I don't care about methods, just about overall system behavior."
I think that's spot on.
I like how you can have one class with different test scenarios. Sometimes, BDD-style tests can feel like overkill (e.g. setting up the scenario in the pre-test phase for only one assertion).
However, one thing I like about testing at a lower level is mitigating cyclomatic complexity. Testing a method allows me to focus on only the code paths for that method. Testing scenarios can grow significantly if you try to get full coverage. The ability to draw a line between components is invaluable in managing the number of tests necessary to get meaningful coverage.
In general, I use "scenario" tests as acceptance tests. Our acceptance tests don't try to cover every scenario, only enough to demonstrate that the system does what the customer wants.
I get that you don't do much mocking, as this is about testing as much of the actual application as possible. I like it a lot.
But when you do need to mock something, are there ever situations where you need to decide what to mock based on the scenario being executed? If so, how do you handle this? Will the ExecuteScenario method not become very complex?
I can see that the scenarios can be extended. You can use them with an event-driven design to code your application.
ExecuteScenario<LoginFromRemote>();
So you could build an application-level rules engine. By rules I mean Action/Response.
Why don't you also encode your asserts within scenarios?
ExecuteScenario<LoginSuccessfully>();
Assert<UpdateLoginDateOnSuccessfulLogin>();
Because he wants "to have a clear separation between the test code and the tested scenario. They both can change independently."
This was said by Ayende in this comment:
ayende.com/.../scenario-driven-tests.aspx#34839
Frans,
Are you familiar with the halting problem?
It is not possible to get to 100%; I don't even try. My scenarios represent how the application works.
They aren't executing code in specific ways; they exercise the entire application.
When I get a bug, I add a test that exposes it, using an existing scenario or a new one. That is enough to show that the system works right, and to serve as a regression test.
Olav,
I haven't run into that scenario yet, so I can't tell you.
ExecuteScenario can be very complicated, even without this. It needs to set up the entire application and tear it down properly.
Joe,
Repeating asserts is far less common than repeating scenarios.
Do you just mock out slow stuff like file system access, db access, that sort of thing, so that the tests run faster? Do you find it difficult to design a full system that can be spun up very quickly for each new test?
I am wondering because I am currently writing a small app in my spare time and am trying to adhere to current best practices. I've got unit tests for most of the functionality, but they are at a method level, not a scenario level.
As I already have a facade for my system, so that my UI and scripting engine use the same code, would I be better off creating tests that set the system up through the facade and test it that way? Or should I perhaps even use both methods of testing?
Integration/scenario tests are always a good thing to have around.
They are completely orthogonal to TDD/unit testing, and far from being a reason not to do it.
My comments on this and the previous post, on why this doesn't work, at www.clariusconsulting.net/.../...dowithoutTDD.aspx
Jamie,
For something like this, I find it easier not to use mocking; I can write the fakes for the slow services manually, and they will integrate into the test system and have better perf characteristics.
But I will often just use the real system, with all of its services.
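Hand-written fakes of the kind described here are ordinary classes that implement an application service interface in memory. A sketch of what one might look like; the IUserRepository and User names are illustrative, not from the post:

```csharp
using System.Collections.Generic;

// Illustrative only: a hand-written fake for a slow service, kept in
// memory so the scenario tests never touch a database.
public class User
{
    public string Name { get; set; }
}

public interface IUserRepository
{
    void Save(User user);
    User GetByName(string name);
}

public class InMemoryUserRepository : IUserRepository
{
    private readonly Dictionary<string, User> users = new Dictionary<string, User>();

    public void Save(User user)
    {
        users[user.Name] = user;   // last write wins, like an upsert
    }

    public User GetByName(string name)
    {
        User user;
        return users.TryGetValue(name, out user) ? user : null;
    }
}
```

Unlike a mock configured per test, a fake like this is wired into the test system once (e.g. inside ExecuteScenario) and behaves consistently across every scenario that uses it.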
Daniel,
I knew why I was reluctant to write these two posts: because I expected this sort of comment.
"Are you familiar with the halting problem?"
I'm stunned you even have to ask. ;) Of course I'm familiar with it, though I don't see what it has to do with software correctness, as math shows software can be proven correct or not.
"It is not possible to get to 100%, I don't even try. My scenarios represent how the application works. They aren't executing code in specific ways, they exercise the entire application."
Each scenario test takes one or more steps to complete; this is thus an ordering. Unless you're sure (thus have proven) that this is the only order in which the application can execute the steps described in the scenario test, you can't be sure there isn't another scenario, with for example a subset of the steps, which fails.
Each scenario test also uses data as input to drive the test. That's OK, but it's also the starting state, which evolves with each step. This too creates the possibility that there are other possible scenarios (which you didn't test) which make your app fail.
This is important, read below.
"When I get a bug, I have a test that expose it using an existing scenario or a new one. It is enough to show that the system works right, and to serve as regression tests."
This is a fallacy: "the system works right".
Your scenario tests prove one thing, and one thing only: that there ARE scenarios under which the application works as expected.
They don't prove that other, untested scenarios also work, simply because there's no evidence for that. If you think there IS evidence (so a test for scenario X also proves the code works for an untested scenario Y), please show me, as it seems to me unlikely there is.
This isn't bad at all though. Having scenario tests which prove that the code at least works in scenario X (which is for example the likely scenario 99% of the users will follow) is great, as it proves the code is functional for the user, though with the restriction that the user has to follow the scenarios tested.
That's the information that's overlooked by many TDD/BDD pundits who firmly believe, without any theoretical proof/argument, that a limited set of tests proves their code is correct for situations not defined in tests.
As long as a developer knows that the tests written aren't there to prove the code is correct but only that it works for the specific scenarios defined in tests (unless other ways are used as well to prove further quality), it's OK. As I said before: the tests only show that what you designed works for at least one scenario, and that the code works for other scenarios as well is not based on the presence of any test whatsoever: it's based on either 1) luck, 2) craftsmanship, or 3) both.
To clarify the halting problem remark: I'm talking about algorithms, not statements in a code file. Tests test both the algorithm AND the implementation; I only talk about algorithms. The implementation has to be verified, as it's a projection; algorithms have other ways to be tested, e.g. proving.
Frans,
Yes, it was pretty obvious that you would know it, :-) it is what is called a leading question.
Since I am talking about working implementations, I am thinking about testing in the same manner. I can never ensure that my application is 100% correct.
I think I did the code samples an injustice; a scenario isn't just using the system API.
A scenario is running the application: it is giving input, and then executing the app.
No, they don't, but I don't care for that. I care that all the scenarios that I know about are covered.
"No, they don't, but I don't care for that. I care that all the scenarios that I know about are covered."
I think that's essential info not determinable from the article. Essential to those who use scenario tests (or unit tests in general, for that matter) and believe they will give them solid code for scenarios they haven't thought of.
Ayende, does nhprof have robust logging? Seems to me that if it did it would be easy to reproduce a scenario that blew up in a customer's face, then scenario test it and fix it on the next commit.
I like the idea, especially if one can make them fast (perhaps by using an embedded database). But how would you plug into the Http Runtime if you were building a web app this way?
I would invoke the controllers directly
Joao,
NH Prof has something better. You can tell the application to save the output, and we can load it into the app as if it were a running app.
That makes solving issues VERY easy.