Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 1 min | 106 words

Dave is making the case for leaving a failing test for yourself when you finish for the day. This way you come back tomorrow, run the tests, get the failure, and have a starting point for the day.

I have used this technique several times before, with varying levels of success. It only works if you write the test, see it fail, and leave immediately afterward.
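A minimal sketch of the technique (all names here are invented for illustration): at the end of the day, write the next test against code that doesn't support it yet, watch it go red, and go home.

```csharp
using NUnit.Framework;

// Illustrative only: OrderProcessor is a made-up class with no
// bulk discount implemented yet.
public class OrderProcessor
{
    public decimal Total(int quantity, decimal unitPrice)
    {
        return quantity * unitPrice; // no discount logic yet
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void BulkDiscountIsAppliedToLargeOrders()
    {
        // Deliberately left failing overnight: tomorrow morning the
        // red bar points straight at where work should resume.
        OrderProcessor processor = new OrderProcessor();
        Assert.AreEqual(900m, processor.Total(100, 10m)); // expect 10% off
    }
}
```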

He also mentions that your subconscious will work on it during the night, which I'm not sure I like. I think about computers too much as it is... :-)

NUnit 2.4 Alpha

time to read 2 min | 292 words

Hallelujah! Check out the release notes for NUnit 2.4. Here are the things that I'm excited about.

  • Can be installed by non-administrators
  • CollectionAssert - At long last, we have it. I can't tell you how often I wanted those asserts. And check out the methods that this has.
    • AllItemsAreNotNull
    • AllItemsAreUnique
    • Contains
    • IsEmpty
    • IsSubsetOf / IsNotSubsetOf
  • FileAssert, with support for streams!
  • App.config per assembly
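Going by the release notes, using the new collection asserts should look roughly like this (a sketch against the 2.4 API):

```csharp
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class CollectionAssertExamples
{
    [Test]
    public void TheNewCollectionAsserts()
    {
        ArrayList users = new ArrayList();
        users.Add("oren");
        users.Add("ayende");

        // Each of these used to be a hand-rolled foreach:
        CollectionAssert.AllItemsAreNotNull(users);
        CollectionAssert.AllItemsAreUnique(users);
        CollectionAssert.Contains(users, "oren");
        CollectionAssert.IsSubsetOf(users, new string[] { "oren", "ayende", "admin" });
        CollectionAssert.IsEmpty(new ArrayList());
    }
}
```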

The most important thing is that they (finally) have setup / teardown at the assembly level! This is a huge deal for me, since I write a lot of applications that require initial state, and I currently need to remember to initialize it in the setup of each test.
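As I understand the release notes, the assembly-level setup / teardown is done with a SetUpFixture placed outside any namespace (the code below is a sketch of that):

```csharp
using NUnit.Framework;

// Outside any namespace, this runs once for the whole assembly;
// inside a namespace, it would scope to that namespace instead.
[SetUpFixture]
public class AssemblyLevelSetup
{
    [SetUp]
    public void RunBeforeAnyTests()
    {
        // Create the initial application state once, instead of
        // remembering to initialize it in every fixture's SetUp.
    }

    [TearDown]
    public void RunAfterAllTests()
    {
        // Tear down the shared state.
    }
}
```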

They also have support for setup / teardown per namespace, which is really strange.

This is looking really cool.

time to read 3 min | 538 words

I just finished a really nice project that involved heavy use of multi threaded code. The project is enterprisey (which has turned into a really bad word recently, thanks to Josling and The Daily WTF), so I needed to make sure that I was handling several failure cases appropriately.

Now, testing multi threaded code is hard. It is hard because you run into problems where the code under test and the test itself are running on separate threads, and you need to add a way to tell the test that a unit of work was completed, so it can verify it. Worse, an unhandled exception in a thread can get lost, so you may get false positives on your tests.

In this case, I used events to signal to the tests that something happened, even though the code itself didn't need them at the beginning (and didn't need many of them at all in the end). Then it was fun making sure that none of the tests would complete before the code under test (there has got to be a technical term for this, CUT just doesn't do it for me) had completed, and that there were no false positives.
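The signaling pattern looks something like this (Worker is a made-up stand-in for the real class; the real code is obviously more involved):

```csharp
using System;
using System.Threading;
using NUnit.Framework;

// Made-up stand-in for the real multi-threaded class.
public class Worker
{
    public event EventHandler WorkCompleted;

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            // ... the actual unit of work happens here ...
            if (WorkCompleted != null)
                WorkCompleted(this, EventArgs.Empty);
        });
    }
}

[TestFixture]
public class WorkerTests
{
    [Test]
    public void TestWaitsForTheWorkToComplete()
    {
        ManualResetEvent done = new ManualResetEvent(false);
        Worker worker = new Worker();
        worker.WorkCompleted += delegate { done.Set(); };

        worker.Start();

        // Block until the worker signals, with a timeout so a hung
        // worker fails the test instead of hanging the whole run.
        Assert.IsTrue(done.WaitOne(5000, false), "worker did not complete in time");
        // ...now it is safe to assert on the results of the work...
    }
}
```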

As I was developing the application, I ran into issues that broke the tests. Sometimes they were plainly bugs in the thread safety of the tests, and I could see that the code was doing what it should, but the test was not handling it correctly. I was really getting annoyed with making sure that the tests ran correctly in all scenarios.

In the end, though, when I was just about done, I ran the tests one last time, and they failed. The error pointed to a threading problem in a test. It was accessing a disposed resource, and I couldn't quite figure out why. Some time later, I found out that an event that was supposed to be fired once was firing twice. This one was a real bug in the code, which was caught by the test (sometimes it passed; the timing had to be just wrong for it to fail).

Actually, this was a sort of accident, since I never thought that this event could fire twice, and I didn't write a test to verify that it fired just once. (In this case, it was the WorkCompleted event, which was my signal to end the test, so I'm not sure how I could test it, but never mind that.)
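In hindsight, a counting handler could have pinned the fired-twice bug down directly (again a sketch with invented names; the sleep-based wait for a second raise is inherently imperfect):

```csharp
using System;
using System.Threading;
using NUnit.Framework;

// Made-up stand-in that (correctly) raises WorkCompleted once.
public class SingleShotWorker
{
    public event EventHandler WorkCompleted;

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            if (WorkCompleted != null)
                WorkCompleted(this, EventArgs.Empty);
        });
    }
}

[TestFixture]
public class WorkCompletedTests
{
    [Test]
    public void WorkCompletedFiresExactlyOnce()
    {
        int timesFired = 0;
        ManualResetEvent first = new ManualResetEvent(false);
        SingleShotWorker worker = new SingleShotWorker();
        worker.WorkCompleted += delegate
        {
            Interlocked.Increment(ref timesFired);
            first.Set();
        };

        worker.Start();

        Assert.IsTrue(first.WaitOne(5000, false), "never completed");
        // Give a second (buggy) raise a chance to arrive before asserting.
        Thread.Sleep(500);
        Assert.AreEqual(1, timesFired, "WorkCompleted fired more than once");
    }
}
```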

I spent some non-trivial time writing the tests, but they allowed me to build a rough cut of the functionality and then refine it, knowing that it was working correctly. That was how I managed to move safely from deleting each row by itself to the BulkDeleter that I blogged about earlier, or to tune the consumers, etc. That final bug was just a bonus, a way to show me that the approach I took was correct.

Now, the tests didn't show me that if I shove 30,000 times more data into the program than it expected, it is going to fail; that was something that load testing discovered. But the tests will allow me to fix this without fear of breaking something else.

Rant

time to read 7 min | 1246 words

Note: This post started as a comment on the post just below, but it developed into a full blown rant along the way. I'm detailing many of my frustrations with the tools.

Scott Bellware has posted Mort or Elvis? A Question for a Bygone Era, which is a great post (as usual), but it also hits many of the pain points that I have felt working with VS 2005 recently. In fact, Scott does a far better job of expressing those pain points than I could.

Microsoft's tools are getting progressively worse rather than better.  Microsoft development tools are showing the unmistakable signs of entropy and neglect that comes as a result of the parasitic infestation of product development groups by the cowardly side of the software business.... 

In the comments he talks about Microsoft's tools and the subtle product placement that I hadn't consciously recognized but which drives me constantly crazy. I use VS.Net 2005 for the simple reason that it is the only option out there for .Net 2.0 development. ReSharper makes the job bearable, but it says quite a bit that I need an alpha quality tool just to feel good again.

Beyond VS.Net, my most common tools are not Microsoft ones (NAnt, NUnit, TestDriven.Net, etc). Scott comments on this:

And indeed, these folks [thinking developers] will bring an arsenal of better tools to the table.  Few of them will have been created by Microsoft, and most of them will be free, and open-source. 

Finally, there is this sentiment that I agree with wholeheartedly:

I would pay right out of my own pocket for a .NET IDE built by JetBrains. They could charge me $1000 and I'd be delighted to fork over the cash if they could give me an IDE that was file-system and project-system compatible with Visual Studio with none of the bloat and none of the endless, insulting product placement clutter that has gained influence over the design of Visual Studio to the detriment of a good, clean, usable product.

I have wanted this for a long time. I hope that once they release ReSharper, they will rev up the work on a .Net IDE. I know that I would buy Intelli.Net IDEA in a heartbeat, and push it out as far as I can.

And now to my own ranting:

And my own note about the personas: they don't work anymore. I get questions from VB.Net developers who want to know how to use Rhino Mocks, there have been questions about VB.Net development with Castle, etc. (Not to put down VB.Net programmers, but this is the language that is portrayed as Mort's favorite.) I had to do some UI work recently, and I tried hard to do it with VS.Net's Automatic UI & Coffee System(TM), but I couldn't make it do what I wanted, and I ended up doing most of it manually.

Simple UI should be the part where the tooling is excellent, but in order to use it, you need to know quite a bit about the tools, and that is not something that I see as highly productive. I know the WinForms (and Win32) API well enough to produce a working UI for most anything, but I don't see the point of investing time in learning the exact incantations that I need for this or that non-standard scenario (binding business objects to a form, for instance; nothing highly complicated, by the way).

By concentrating so hard on the tools, only the supposed core scenarios are taken care of, leaving everyone who does something different high & dry. And the problem seems to be that there isn't any improvement in the tools along the way.

We have the Web Application Project, to get back the stuff they took away from VS 2003, but that is about it. Where is the promised Service Pack for Visual Studio 2005? I haven't even heard a whisper about it since it was announced, and it is not like there aren't bugs that need fixing in there.

Posts like this one, talking about how features are planned for VS, just scare me. It looks very much like the merit of a feature is its demo-ability. This produces software that demos great, but that just doesn't stand on its own when you need to develop a real scenario. Combine the non-trivial time that you need to invest to learn how to work with the tool with the even more significant time to learn how to bypass the tool, and you get a net loss. Non-demoable features end up being more important in the long run.

How many demos have shown off the File.ReadAllText() method? Yet it ends up saving me five lines of code every time I need to read a file. Or the GC improvements? Or ASP.Net Build Providers? Linq will be cool if I get just compiler support and intellisense. No need to build any UI on top of it; it is cool because of what it is, not because of what kind of UI it can show in a keynote.
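The five lines it saves are the familiar 1.1-era reader dance:

```csharp
using System;
using System.IO;

class ReadAllTextDemo
{
    // The .Net 1.1 way: open a reader, read, and remember to dispose.
    static string ReadAllTextTheOldWay(string path)
    {
        using (StreamReader reader = new StreamReader(path))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        File.WriteAllText("demo.txt", "hello");

        // The 2.0 way: one line, no ceremony.
        string text = File.ReadAllText("demo.txt");

        Console.WriteLine(text == ReadAllTextTheOldWay("demo.txt")); // True
    }
}
```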

I really like the framework, from ASP.Net to WinForms to Threads to ADO.Net. I just dislike the tool that I have to use to interact with it. It is nice and shiny from the outside, but don't look too closely at the support beams. The IDE shouldn't try to help me; I know what I'm doing, thank you very much. I want to be able to work with code in my IDE. Intelligence on the part of the IDE is required, otherwise I could just work in Notepad, but I don't want to work the way Microsoft thinks I should (TDD or DataBinding, for instance). I want the functionality in the class library, and the IDE should support it.

I get far better value from TestDriven.Net than I do from VS' Test Tools, even though TestDriven.Net has a far more primitive interface, with probably far less functionality. But it is a zero friction tool; I don't need to adhere to Jamie's design philosophy in order to use it.

Okay, that is enough ranting for now, I think.

time to read 2 min | 388 words

After making such a point about commenting being mostly noise, and that code should be able to stand on its own, I found myself adding the following comment (related to this issue) to Rhino Mocks.

//This can happen only if a virtual method call originated from
//the constructor, before Rhino Mocks knows about the existence
//of this proxy. Those types of calls will be ignored and not count
//as expectations, since there is no way to relate them to the
//proper state.
if (proxies.ContainsKey(proxy) == false) return null;

There is just no way that I'll remember in two weeks (let alone months) why this is important. Then again, the test for this piece of code is called "MockClassWithVirtualMethodCallFromConstructor", so that should probably be good enough.

Damn, but I love working on code that is covered with tests. It was a ten-minute affair from getting the test to fail to finding exactly where the problem was, and then it was a simple matter of walking up the call stack until I had a good enough view of what was happening. Two lines of code (and five lines of comments) later, I had a passing test.

Three minutes afterward, I was uploading a new release to the website. This is the best thing about doing TDD, I know that I can't break the system without knowing about it. And if I do break something that I don't have a test for, well, if I don't have a test for it, then I'm probably not going to need it after all, so it wouldn't bother me. Perfect :-)

time to read 2 min | 349 words

This post has nothing to do with Rocky's comments on DNR

I was testing using NHibernate in ways that I never had before*, and the test kept failing. Each time I would fix one thing, and another part would break. It wasn't very obvious how to make it work. I had a guy watching over my shoulder while I worked through the kinks of the problem. After the test failed repeatedly, for the 10th time or so, he just muttered something about "never making it work right" and walked away to do something else.

To me, TDD is all about failures. I know that I need to do something only when I get a failing test. And the failing test kept pointing me to this error and that error, and I kept researching and fixing the errors. Eventually I had enough understanding of what I was trying to do, and of how NHibernate was trying to accomplish it, that I could get it to work (perfectly). All in all, it took about 30-40 minutes of trying.

I am not upset to see failing tests, they are the best sign that I have that something is wrong.

This is a Failing Test.
There are many other tests, but this one is a Failing Test.
My Failing Test directs me where I need to be going.

* Highly complex schema with some indirection thrown in to add to the mix.

time to read 7 min | 1226 words

Rocky took the time to clarify his comments on TDD in dotNetRocks #169. There seems to be a couple of main points there:

As a testing methodology TDD (as I understand it) is totally inadequate... ...the idea that you’d have developers write your QA unit tests is unrealistic

TDD is not a QA methodology. TDD is about designing the system so it is easy to test. This makes you think about what the code does, rather than how it does it. The end result is usually interfaces that are easier for the client to work with, coherent naming for classes/interfaces/methods, lower coupling, and higher cohesion in the system. Then you run the test.

The idea in TDD is not to catch bugs (although it certainly helps to reduce them). The idea is to make sure that developers think about the way they structure their code before they start writing it. It also means that developers have far more confidence to make changes, since they have a test harness to catch them if they break something.

And it was pointed out on Palermo ’s blog that TDD isn’t about writing comprehensive tests anyway, but rather is about writing a single test for the method – which is exceedingly limited from a testing perspective.

You start with a single test, to check the simple case, then you write another test, for a more complex case, etc. The purpose is to assert what you are going to do, do it, and then verify that it does what you asserted it should. QA isn't going away in TDD, for the reasons that Rocky mentions in his post (developers rarely try to break their app the way a tester does). QA and developers take two different approaches. When I get a bug from QA, I write a failing test for this bug, and then I fix it. Now I have a regression test for this bug.

When I touch this part in 6 months time, I can do it with confidence. I know that I will not:

  • Break any of the assumptions of the original developer - I have tests that verify the original functionality works.
  • Regress any bugs that were found before - I have tests that verify each bug is fixed.
  • Have the stuff I'm doing now broken by a future maintainer of the code - I write tests that will verify that the new functionality is working.
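The bug-from-QA loop above might look like this in practice (a hypothetical bug, with invented names):

```csharp
using NUnit.Framework;

// Hypothetical: QA reports that the discount goes negative for
// zero-quantity orders. Step 1: capture the bug in a test that
// fails against the unfixed code.
[TestFixture]
public class Bug1234RegressionTests
{
    [Test]
    public void DiscountIsNeverNegative()
    {
        PriceCalculator calc = new PriceCalculator();
        Assert.IsTrue(calc.Discount(0) >= 0m, "discount went negative");
    }
}

// Step 2: fix the code until the test passes. The test then stays
// in the suite forever, so the bug cannot silently regress.
public class PriceCalculator
{
    public decimal Discount(int quantity)
    {
        if (quantity <= 0)
            return 0m; // the fix: clamp instead of computing a negative value
        return quantity * 0.01m;
    }
}
```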

Those, and clear design, are the main benefits of TDD, in my opinion. Again, I agree that this doesn't remove the need for a QA team that hates the product and really wants to break it :-).

I don't buy into TDD as a “design methodology" either. You can't "design" a system when you are focused at the method level. There’s that whole forest-and-trees thing... ...But frankly I don’t care a whole lot, because I use a modified CRC (class, responsibility and collaboration) approach.

CRC is cool, I agree, but the issue is how detailed you go. I can probably think of the major classes that I need for a project up front, and maybe even come up with their responsibilities and some of their collaborators. But until I sit down with the code, I can't really say what the methods and parameters of each class will be, what other stuff I need to do the work, etc. TDD is about interface/class/method design, not directly about full system design (although it will certainly influence the system design).

And as long as we all realize that these are developer tests, and that some real quality assurance tests are also required (and which should be written by test-minded people) then that is good.
None of which [developer tests and QA tests], of course, is part of the design process, because that’s done using a modified CRC approach.

This is something I really do not agree with. If you don't use TDD to influence your design, you are not doing TDD, period. You may have tests, but it's not TDD. Writing tests after the fact means that you don't get to think about what the client of the code will see, and you may use things that make the class much harder to test. You lose much of the value of TDD.

What you end up with is a bunch of one-off tests (even if they are NUnit tests, mind) that don't cover the whole system and are only good for testing very specific things. They don't affect the design of the code, which can lead to very cumbersome interfaces, and they don't cover the simple-to-complex scenarios. They may represent several hours (or days!) of programming effort that culminates in one long test for a very specific scenario. Those are integration tests, and while they have their place, they are not unit tests. NUnit != Unit Tests.

And here is a reply to some of the comments in the post:

Sahil Malik commented:

TDD does not reduce your debugging time to zero. You do end up with an army of tests to maintain. Since the tests are written and maintained by developers, you cannot go with the assumption that the tests themselves are bug free. So you write tests to check your tests? Infinite loop.

You do not stop debugging, that is correct. But it does mean that you have a very clear arrow that points to where it failed, including the details, so you have a much easier time fixing it. And having an army of tests to maintain would only be bad if they didn't have an army of code that they test. Yes, tests have bugs, and the code has bugs, but the chance that you would get the exact same bug in both the test and the code is not very high. When you see the test fail, you'll investigate and discover that the code is correct and the bug is in the test. No need to write the TestTheTestThatTestsTheTest... scenario.

time to read 2 min | 254 words

Here is the story:

When a node in the graph is hovered over with the mouse, a connection point (a node can have several) is made visible, and the user can drag the connection point to another node (when it enters the second node, that node's connection points become visible) and create a connection between the two connection points. A visible line should exist while dragging the connection, and after it is attached it should be permanently visible.

I got this to work without tests, since I just can't think of how to test this stuff. Right now I'm heavily using the functionality that Netron provides (which does 96% of the above), and the implementation uses Windows events and state tracking quite heavily to get this functionality.

I suppose I could try to refactor the logic out so it would be testable, but that would mean duplicating the event structure of WinForms. In addition, I can see no way to test that the correct stuff is drawn to the screen, and I am not fond of taking a screenshot and then comparing the known good version to the result on a bit-by-bit basis, if for no other reason than that I can think of several dozen tests for the functionality that I need.
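For what it's worth, the "refactor the logic out" option could mean pulling the hover / connection state into a plain class with no WinForms dependency, leaving only thin event handlers untested (all names below are invented; this is not Netron's API):

```csharp
// Sketch: the state tracking lives in a plain class that the
// WinForms event handlers merely delegate to, so it can be unit
// tested without raising real Windows events.
public class ConnectionTracker
{
    private object hoveredNode;

    public void MouseEnteredNode(object node)
    {
        hoveredNode = node;
    }

    public void MouseLeftNode(object node)
    {
        if (hoveredNode == node)
            hoveredNode = null;
    }

    // The form asks this when deciding what to paint; the actual
    // drawing stays in the thin, untested WinForms layer.
    public bool ShouldShowConnectionPoints(object node)
    {
        return hoveredNode == node;
    }
}
```

The drawing itself still goes unverified, but at least the state machine behind it gets covered.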

Any suggestions on how to test this?

time to read 1 min | 192 words


I set out today to write something that I never had before, but which I thought was too small to deserve a true spike. I'm talking about my persistence-to-code implementation, of course.
I started by writing the following test:

[Test]
public void SaveEmptyGraph()
{
    Graph g = new Graph();
    StringWriter sw = new StringWriter();

    g.Save(sw);

    string actual = sw.ToString();
    Assert.AreEqual("??", actual);
}

I had no idea what I was going to get, but I went ahead and played with the code until it compiled, and then I looked at the result, verified that it was correct* and used that as the expected value.
I could then move on to the next test, graph with nodes, using the same method.

* Just to note, it turned out that I missed putting the method name in the Code Dom, which meant that my expected strings were wrong. I found out about that only when I tried to load the graphs again. It wasn't hard to fix, but I wanted to point out that this method has its weak points.
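The weak point in the footnote suggests a complementary test: a save/load round trip would have caught the missing method name even when the expected string looked fine. A sketch (Graph.Load and NodesCount are my assumptions, not the actual API):

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class GraphRoundTripTests
{
    [Test]
    public void SaveThenLoadRoundTripsEmptyGraph()
    {
        Graph original = new Graph();
        StringWriter sw = new StringWriter();
        original.Save(sw);

        // Hypothetical Load; comparing the reloaded graph catches
        // what eyeballing the saved text did not.
        Graph loaded = Graph.Load(new StringReader(sw.ToString()));
        Assert.AreEqual(0, loaded.NodesCount);
    }
}
```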

time to read 2 min | 248 words


There has been a lot said about tests and their effect on the code. I just had the tests affect me as I wrote the code. You are probably aware that I started a new project recently, and I've (as always) taken my time with a lot of spiking on the design.
Even when I am spiking, I feel very nervous without the tests, and because I'm making such profound changes to the application, there are long stretches of time when I can't even get the application to compile.
This means that thinking about something and seeing it fail (and it always fails the first time) takes a very long time. It also means that I waste quite a bit of time going in the wrong direction.

Today was the day when I finally settled on the design that I wanted, and I started sweeping through the code, looking for stuff that needs to be tested. I think about this as un-legacifying the code that I wrote this past week (which had only two tests, one of them 40 lines long, yuck!).
I added a lot more tests, renamed a couple of methods, applied the Law of Demeter and voila, I got a code base that is a pleasure to work with.
Then I could start writing features using TDD, and get the rapid feedback that I crave. I actually added value today, which is a great feeling to have at the end of the day.
