I have an interesting problem. I am currently working on adding sync support to SvnBridge. This mostly involves tracking down what SVN does and duplicating it myself. There isn't that much new code (I had to add a class and a method). I have nothing that I can _unit_ test. Oh, I could probably craft something, but I am reluctant to mock the world when what I want is to test the actual integration between the SVN client and the TFS server.
The problem is that I have no real way of testing this. What do I mean by that? Syncing is an operation that works over the entire repository, from start to finish. I can't perform an integration test, because that would take too long in most scenarios. I would need to set up a whole new TFS server per test. Not really a good solution, no matter how you turn the dice. And the other problem is that I need a rich set of situations to actually test this.
Right now I am driving this by going against the production CodePlex servers, and trying to see if I can get all the information from there. This is incredibly valuable, because it gives me access to a lot of the source control practices that are out there, and exposes a lot of false assumptions. But I can't write tests for those, because they are too big a scenario.
I could probably set up an integration test that would execute against the production servers (they are publicly exposed, after all, so no issue there), but I am... less than thrilled about having a single test that can potentially run for hours or days.
Right now I think that I am leaning toward a partial fake of both the client and the server. That is, create a set of input values which, while not being real-world values, would still exercise that logic.
Considering that the actual interaction we are talking about takes place over many requests, this is tricky, but possible.
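To make the partial-fake idea concrete, here is a minimal sketch in Python. All the names here (FakeTfsServer, SyncSession, the "report"/"next" requests) are hypothetical, invented purely for illustration; SvnBridge is a .NET project and its actual classes and protocol look nothing like this. The point is only the shape of the technique: a fake server replays a scripted sequence of responses, so the multi-request sync conversation can be driven end to end with made-up data.

```python
class FakeTfsServer:
    """Replays a scripted sequence of responses, one per request.

    The scripted values are not real-world data, but they still let
    the multi-request sync logic run through a full conversation.
    """
    def __init__(self, scripted_responses):
        self.scripted_responses = list(scripted_responses)
        self.requests_seen = []  # record the conversation for assertions

    def handle(self, request):
        self.requests_seen.append(request)
        # Pop responses in order, mimicking a multi-request exchange.
        return self.scripted_responses.pop(0)


class SyncSession:
    """Stands in for the client-side sync logic under test."""
    def __init__(self, server):
        self.server = server

    def run(self):
        revisions = []
        response = self.server.handle("report")
        while response is not None:  # None marks the end of the sync
            revisions.append(response)
            response = self.server.handle("next")
        return revisions


# Drive a three-revision sync against the fake server.
server = FakeTfsServer(["r1", "r2", "r3", None])
synced = SyncSession(server).run()
print(synced)                # ['r1', 'r2', 'r3']
print(server.requests_seen)  # ['report', 'next', 'next', 'next']
```

A test can then assert both on the result and on the sequence of requests the client made, which is where most of the interesting sync bugs would hide.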