Ayende @ Rahien

Refunds available at head office

Simple State Machine

Nathan has posted Simple State Machine to CodePlex, it is the first project that I am aware of that uses Rhino DSL and the techniques that I am talking about in the book.

What is impressive about this is the level of professionalism that is involved in the project. It is a full scale DSL, with all the supporting infrastructure. I spent half an hour or so going through the entire thing, and I am impressed.

Put simply, this is how I think state-based workflows should be defined. I could easily see myself extending this a bit to add persistence support & integration with NServiceBus, and be done with it.

Like most state machines, it has the notions of states, events that can cause the state to change, and legal transitions from state to state. You can define tasks that will be executed upon changing state, or upon entering or leaving a certain state.

Enough talking, let us look at a reasonably complex workflow:

workflow "Order Lifecycle"

#Event & State Identifier Targets.
#This section controls which Types will be used
#to resolve Event or State names into strongly typed CLR objects.
#--------------------------------------------------------
state_identifier_target @OrderStatus
event_identifier_target @OrderEvents

#Global Actions
#--------------------------------------------------------
on_change_state      @WriteToHistory, "on_change_state"
on_workflow_start    @WriteToHistory, "on_workflow_start"
on_workflow_complete @WriteToHistory, "on_workflow_complete"

#Event Definitions
#--------------------------------------------------------
define_event  @OrderPlaced
define_event  @CreditCardApproved
define_event  @CreditCardDenied
define_event  @OrderCancelledByCustomer
define_event  @OutOfStock
define_event  @OrderStocked
define_event  @OrderShipped
define_event  @OrderReceived
define_event  @OrderLost

#State & Transition Definitions
#--------------------------------------------------------
state @AwaitingOrder:
       when @OrderPlaced              >> @AwaitingPayment

state @AwaitingPayment:
       when @CreditCardApproved       >> @AwaitingShipment
       when @CreditCardDenied         >> @OrderCancelled
       when @OrderCancelledByCustomer >> @OrderCancelled

state @AwaitingShipment:
       when @OrderCancelledByCustomer >> @OrderCancelled
       when @OutOfStock               >> @OnBackorder
       when @OrderShipped             >> @InTransit

       #Individual states can define transition events as well
       on_enter_state @WriteToHistory, "on_enter_state(AwaitingShipment)"

state @OnBackorder:
       when @OrderCancelledByCustomer >> @OrderCancelled
       when @OrderStocked             >> @AwaitingShipment

state @InTransit:
       when @OrderReceived            >> @OrderComplete
       when @OrderLost                >> @AwaitingShipment

#NOTE: State definitions without any transitions will cause
#the state machine to Complete when they are reached.
#------------------------------------------------------------
state @OrderComplete
state @OrderCancelled
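To make the semantics concrete, here is a minimal sketch of the state/event/transition model the DSL above describes. This is illustrative Python with invented names, not Simple State Machine's actual API:

```python
class StateMachine:
    """Minimal model: states, events, legal transitions, and completion.

    All names here are illustrative; Simple State Machine's real API differs.
    """

    def __init__(self, initial_state):
        self.state = initial_state
        self.completed = False
        # (current state, event) -> next state
        self.transitions = {}

    def when(self, state, event, next_state):
        self.transitions[(state, event)] = next_state

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"illegal transition: {event} in {self.state}")
        self.state = self.transitions[key]
        # a state with no outgoing transitions completes the workflow,
        # mirroring the NOTE about @OrderComplete / @OrderCancelled above
        self.completed = not any(s == self.state for s, _ in self.transitions)

order = StateMachine("AwaitingOrder")
order.when("AwaitingOrder", "OrderPlaced", "AwaitingPayment")
order.when("AwaitingPayment", "CreditCardApproved", "AwaitingShipment")
order.when("AwaitingShipment", "OrderShipped", "InTransit")
order.when("InTransit", "OrderReceived", "OrderComplete")

for event in ("OrderPlaced", "CreditCardApproved", "OrderShipped", "OrderReceived"):
    order.fire(event)
# order.state is now "OrderComplete", and the workflow is complete
```

The real project adds the supporting infrastructure this sketch omits: global actions, per-state enter/leave tasks, and strongly typed state and event identifiers.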

Here is the demo application UI, for the order processing life cycle:

image

As I said, impressive.

BooLangStudio: Boo in Visual Studio

A few days ago, the BooLangStudio was announced in the Boo mailing list, bringing Boo support into Visual Studio.

Below you can see several screen shots. And you can find out more about it here.

This is a very promising move, especially since I soon have to write my tooling chapter :-)

Of course, this is still very early in the game, but it is good to see progress in this area again.

image

image

Observations on writing

  • It took me two months to write 5 pages; then I started from scratch and wrote the whole chapter (~30 pages) in two days.
  • Reading is so much harder than writing. I went over what I wrote so far, and it is painful.
  • Taking time to just write things might be a mistake, as this proves: image

Taking conventions to their obvious conclusion: The mandatory test language

I am considering having a language that mandates tests. If you don't have a matching test for the code in question, it will refuse to run. If the tests fail, it will refuse to run. If the tests take too long, they are considered failed and the code will refuse to run.

This would certainly ensure that there are tests. It wouldn't ensure that they are meaningful, however. That is fine by me. I am not interested in policy through enforcement, just in gentle encouragement in the right direction.

The technical challenges of implementing such a system are nil. The implications for the workflow and ease of use of such a system are unknown. Consider checked exceptions: on the surface they are great; in practice, they are very cumbersome. This is why I am stressing that I have only toyed with the idea, not implemented it.
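To show how little machinery the enforcement itself needs, here is a rough sketch of the gatekeeper in Python. Everything here is hypothetical: the real system would compile code and discover tests, rather than take plain callables:

```python
import time

def run_if_tested(code, tests, timeout_seconds=1.0):
    """Refuse to run `code` unless tests exist, pass, and finish in time.

    Purely illustrative: `code` and `tests` are plain callables standing in
    for compiled scripts and their discovered test suite.
    """
    if not tests:
        raise RuntimeError("no matching tests -- refusing to run")
    start = time.monotonic()
    for test in tests:
        try:
            test()
        except AssertionError:
            raise RuntimeError("tests failed -- refusing to run")
        # tests that take too long are considered failed
        if time.monotonic() - start > timeout_seconds:
            raise RuntimeError("tests too slow -- refusing to run")
    return code()

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

# runs, because a matching test exists and passes quickly
result = run_if_tested(lambda: add(1, 2), [test_add])
```

Note that nothing here enforces that test_add is meaningful, which is exactly the limitation discussed above.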

Thoughts?

Audio Book Review: Starship

SciFi Inflation is the best term that I can use for this book series. It was engaging enough for me to go through all three books, but it bothered me enough to write a negative post about it.

Just about everything in those books is over-inflated. Interstellar travel times are measured in minutes, thousands of sentient races exist, sensors can read the details of a ship from thirty light years away, an interstellar power has 300 million ships, etc.

This is like nails on a chalkboard, and it really disrupts the flow of the story. And the story is good; it is just that those details go beyond "hand-waving the physics away". I expect hand-waving, but I expect it to be done in a believable way.

Case in point, at one time the ship just blew up a few other ships, and it was hit with a bit of debris. The command that the Captain gives? "Pilot, take us half a light year out. I want to have a little time to respond if something like that happens again."

Does the author have any idea about how big a light year is?

Urgh!

Challenge: Striving for better syntax

Or, as I like to call it, yet another stupid language limitation. I did some work today on Rhino Mocks, and after being immersed for so long in DSL land, I had a rude awakening when I remembered just how inflexible the C# language is.

Case in point, I have the following interface:

public interface IDuckPond
{
	Duck GetDuckById(int id);
	Duck GetDuckByNameAndQuack(string name, Quack q);
}

I want to get to a situation where the following will compile successfully:

IDuckPond pond = null;
pond.Stub( x => x.GetDuckById );
pond.Stub( x => x.GetDuckByNameAndQuack );

Any ideas?

Note that unlike my other challenges, I have no idea if this is possible. I am posting this after I got fed up with the limitations of the language.

The magic of boo - Flexible syntax

When I am writing a DSL, I keep hitting one pain point. The CLR naming conventions, which are more or less imprinted on my eyelids, are not really conducive to clear reading in a DSL.

Let us take these entities, and see what we get when we try to build a DSL from them:

image

The DSL is for defining business rules, and it looks like this:

when User.IsPreferred and Order.TotalCost > 1000:
	AddDiscountPrecentage  5
	ApplyFreeShipping
when not User.IsPreferred and Order.TotalCost > 1000:
	SuggestUpgradeToPreferred 
	ApplyFreeShipping
when User.IsNotPreferred and Order.TotalCost > 500:
	ApplyFreeShipping

The main problem with this style of writing is that it is visually condensed. I can read it pretty much as easily as I read natural English, but anyone who is not a developer really has to make an effort; even for me, reading Ruby-styled code is easier. Here is how this would look when using the Ruby style conventions:

when User.is_preferred and Order.total_cost > 1000:
	add_discount_precentage 5
	apply_free_shipping
when not User.is_preferred and Order.total_cost > 1000:
	suggest_upgrade_to_preferred
	apply_free_shipping
when User.is_not_preferred and Order.total_cost > 500:
	apply_free_shipping

This is much easier to read, in my opinion. The problem is that I consider this extremely ugly.

image

Obviously a different solution is needed...

Wait a minute! Boo has an open compiler. Why not just change the way it handles references? And that is what I did:

///<summary>
/// Allows the use of underscore-separated names, which will be translated to Pascal case names:
/// pascal_case -> PascalCase.
/// All names that contain underscores will go through this treatment.
///</summary>
/// <example>
/// You can enable this behavior using the following statement
/// <code>
/// compiler.Parameters.Pipeline
///		.Replace(typeof (ProcessMethodBodiesWithDuckTyping),
/// 				 new ProcessMethodBodiesWithDslNamesAndDuckTyping());
/// </code>
/// </example>
public class ProcessMethodBodiesWithDslNamesAndDuckTyping : ProcessMethodBodiesWithDuckTyping
{
	/// <summary>
	/// Called when we encounter a reference expression
	/// </summary>
	/// <param name="node">The node.</param>
	public override void OnReferenceExpression(ReferenceExpression node)
	{
		if(node.Name.Contains("_"))
			SetNodeNameToPascalCase(node);
		base.OnReferenceExpression(node);
	}

	/// <summary>
	/// Called when we encounter a member reference expression
	/// </summary>
	/// <param name="node">The node.</param>
	public override void OnMemberReferenceExpression(MemberReferenceExpression node)
	{
		if (node.Name.Contains("_"))
			SetNodeNameToPascalCase(node);
		base.OnMemberReferenceExpression(node);
	}

	/// <summary>
	/// Sets the node name to pascal case.
	/// </summary>
	/// <param name="node">The node.</param>
	private static void SetNodeNameToPascalCase(ReferenceExpression node)
	{
		string[] parts = node.Name.Split(new char[] { '_' }, StringSplitOptions.RemoveEmptyEntries);
		StringBuilder name = new StringBuilder();
		foreach (var part in parts)
		{
			name.Append(char.ToUpperInvariant(part[0]))
				.Append(part.Substring(1));
		}
		node.Name = name.ToString();
	}
}
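The name transformation at the heart of SetNodeNameToPascalCase is plain string manipulation; here is the same logic sketched in Python, for clarity:

```python
def to_pascal_case(name: str) -> str:
    """Translate an underscore-separated DSL name to a Pascal case CLR name.

    Mirrors the compiler step above: split on underscores, drop empty parts,
    uppercase the first letter of each part, and concatenate.
    """
    parts = [p for p in name.split("_") if p]
    return "".join(p[0].upper() + p[1:] for p in parts)

# underscore-separated DSL names map back onto the CLR naming convention:
# to_pascal_case("total_cost") == "TotalCost"
# to_pascal_case("add_discount_precentage") == "AddDiscountPrecentage"
```

Note that, like the compiler step, this only uppercases the first letter of each part; it never lowercases anything, so names that are already Pascal case pass through untouched.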

I love Boo, with cause.

Testing Domain Specific Languages

Roughly speaking, a DSL is composed of the following parts:

image

It should come as no surprise that when we test it, we test each of those components individually. When the time comes to test a DSL, I have the following tests:

  • CanCompile - This is the most trivial test, it assert that I can take a known script and compile it.
  • Syntax tests - Didn’t we just test that when we wrote the CanCompile() test? When I talk about testing the syntax, I am not talking about just verifying that it compiles successfully. I am talking about whether the syntax that we have created has been compiled into the correct output. The CanCompile() test is only the first step in that direction. Here is an example of such a test.
  • DSL API tests - What exactly is the DSL API? In general, I think of the DSL API as any API that is directly exposed to the DSL. The methods and properties of the anonymous base class are obvious candidates, of course. Anything else that was purposefully built to be used by the DSL also falls into this category. Those I test using standard unit tests, without involving the DSL at all. Testing in isolation again.
  • Engine tests - A DSL engine is responsible for managing the interactions between the application and the DSL scripts. It is the gateway to the DSL in our application, allowing us to shell out policy decisions and oft-changed rules to an external entity. Since the engine is usually just a consumer of the DSL instances, we have several choices when the time comes to create test cases for the engine. We can perform a cross-cutting test, which would involve the actual DSL, or test just the interaction of the engine with the provided instances. Since we generally want to test the engine's behavior in invalid scenarios (a DSL script that cannot be compiled, for example), I tend to choose the first approach.
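To make the first two kinds of tests concrete, here is a toy sketch in Python. The compile_dsl function is invented for illustration; it stands in for a real DSL factory compiling a script:

```python
# A toy "DSL": scripts are key = value lines that "compile" into a dict.
# compile_dsl is a stand-in for a real DSL factory, purely for illustration.

def compile_dsl(script: str) -> dict:
    result = {}
    for line in script.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            result[key.strip()] = value.strip()
    return result

script = "name = vacations\nrequires = scheduling_work"

# CanCompile: the most trivial test -- a known script compiles at all
compiled = compile_dsl(script)
assert compiled is not None

# Syntax test: the syntax compiled into the *correct* output,
# not merely compiled without errors
assert compiled["name"] == "vacations"
assert compiled["requires"] == "scheduling_work"
```

The distinction carries over directly: CanCompile only proves the script is well-formed, while the syntax test pins down what the compiled artifact actually contains.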

Testing the scripts

We have talked about how we can create tests for our DSL implementation, but we still haven’t talked about how we can actually test the DSL scripts themselves. Considering the typical scenarios for using a DSL (providing a policy, defining rules, making decisions, driving the application, etc), I don’t think anyone can argue against the need to have tests in place to verify that we actually do what we think we do.

In fact, because we usually use DSL as a way to define high level application behavior, there is an absolute need to be aware of what it is doing, and protect ourselves from accidental changes.

One of the more important things to remember when dealing with Boo-based DSLs is that their output is just IL. This means that the output is subject to all the standard advantages and disadvantages of other IL-based languages. In this specific case, it means that we can just reference the resulting assembly and write test cases directly against it.

In most cases, however, we can safely utilize the anonymous base class as a way to test the behavior of the scripts that we build. This allows us to have a nearly no-cost approach to building our tests. Let us see how we can test this piece of code:

specification @vacations:
	requires @scheduling_work
	requires @external_connections

specification @scheduling_work:
	return # doesn't require anything

And we can test this with this code:

[Test]
public void WhenUsingVacations_SchedulingWork_And_ExternalConnections_AreRequired()
{
	QuoteGeneratorRule rule = dslFactory.Create<QuoteGeneratorRule>(
		@"Quotes/simple.boo",
		new RequirementsInformation(200, "vacations"));
	rule.Evaluate();

	SystemModule module = rule.Modules[0];
	Assert.AreEqual("vacations", module.Name);
	Assert.AreEqual(2, module.Requirements.Count);
	Assert.AreEqual("scheduling_work", module.Requirements[0]);
	Assert.AreEqual("external_connections", module.Requirements[1]);
}

Or we can utilize a test DSL to do the same:

script "quotes/simple.boo"

with @vacations:
	should_require @scheduling_work
	should_require @external_connections	

with @scheduling_work:
	should_have_no_requirements

Note that creating a test DSL is only worth it if you expect to have a large number of DSL scripts of the tested language that you want to test.

Testing DSL Syntax with Interaction Based Testing

How do I test the syntax in this DSL? HandleWith should translate to a method call with typeof(RoutingTestHandler) and a delegate.

import BDSLiB.Tests

HandleWith RoutingTestHandler:
	lines = []
	return NewOrderMessage( 15,  "NewOrder", lines.ToArray(OrderLine) ) 

Well, I use interaction-based testing, obviously. I find this test utterly fascinating, because it is fairly advanced, in a roundabout sort of way, and yet it is so simple.

[Test]
public void WillCallHandlesWithWithRouteTestHanlderWhenRouteCalled()
{
	const IQuackFu msg = null;

	var mocks = new MockRepository();

	var routing = dslFactory.Create<RoutingBase>(@"Routing\simple.boo");

	var mockedRouting = (RoutingBase)mocks.PartialMock(routing.GetType());
	Expect.Call(() => mockedRouting.HandleWith(null, null))
		.Constraints(Is.Equal(typeof(RoutingTestHandler)), Is.Anything());

	mocks.ReplayAll();

	mockedRouting.Initialize(msg);

	mockedRouting.Route();

	mocks.VerifyAll();
}

ReSharper 4.0 is now in beta

Just got the email about it. ReSharper 4.0 has moved to beta status. You can find the details here.

Personally, I have been using a fairly old build, with relatively few problems.

Easy extensibility: xUnit integration for DSL

I saw several solutions for extending NUnit and MbUnit to add new functionality, and all of them were far too complex for me. I didn't want that complexity. Here is the entire code that I had to write in order to make xUnit integrate with my DSL:

public class DslFactAttribute : FactAttribute
{
	private readonly string path;

	public DslFactAttribute(string path)
	{
		this.path = path;
	}

	protected override IEnumerable<ITestCommand> EnumerateTestCommands(MethodInfo method)
	{
		DslFactory dslFactory = new DslFactory();
		dslFactory.Register<TestQuoteGeneratorBase>(
				new TestQuoteGenerationDslEngine());
		TestQuoteGeneratorBase[] tests = dslFactory.CreateAll<TestQuoteGeneratorBase>(path);
		foreach (var test in tests)
		{
			Type dslType = test.GetType();
			BindingFlags flags = BindingFlags.DeclaredOnly |
				BindingFlags.Public |
				BindingFlags.Instance;
			foreach (MethodInfo info in dslType
				.GetMethods(flags))
			{
				if (info.Name.StartsWith("with"))
					yield return new DslRunnerTestCommand(dslType, info);
			}
		}
		
	}
}

And the DslTestRunnerCommand:

public class DslRunnerTestCommand : ITestCommand
{
	private readonly MethodInfo testToRun;
	private readonly Type dslType;

	public DslRunnerTestCommand(Type dslType, MethodInfo testToRun)
	{
		this.dslType = dslType;
		this.testToRun = testToRun;
	}

	public MethodResult Execute(object ignored)
	{
		object instance = Activator.CreateInstance(dslType);
		return new TestCommand(testToRun).Execute(instance);
	}

	public string Name
	{
		get { return testToRun.Name; }
	}
}

That is what I am talking about when I am talking about easy extensibility.

Unit testing frameworks extensibility

I just ran into a roadblock, and I am posting before lunch; hopefully I'll get some ideas about how to solve this.

I have a directory full of source files that contain tests. What I want to do is write something like this:

public class MyTest
{
	[ExecuteAllTestsIn("tests/validators")]
	public void Validators()
	{
	}
}

The typical scenario for each source file is something like:

public void ExecuteSingleTest(string path)
{
	var test = CompileAndGetInstance(path);
	ExecuteTestInUnitTestingFramework(test);
}

A source file can contain more than a single test, and I want all those individual tests to be reported as part of the overall test run.

The problem is that I took a look at the extensibility mechanisms for both NUnit and MbUnit, and they are huge. It is not that I can't see how I would solve the issue; it is that it would require a lot of work to do so.

I know how I can do this with xUnit, where it would be a truly trivial affair.

Am I missing something? I never really had to deal with the extensibility mechanisms in NUnit or MbUnit before.

Create a test DSL to test the DSL

Yesterday I asked how we can efficiently test this piece of code:

specification @vacations:
	requires @scheduling_work
	requires @external_connections

Trying to test that with C# code resulted in a 1500% disparity in the number of lines of code. Obviously a different approach was needed. Since I am in a DSL state of mind, I wrote a test DSL for this:

script "quotes/simple.boo"

with @vacations:
	should_require @scheduling_work
	should_require @external_connections	

with @scheduling_work:
	should_have_no_requirements

I like this.

You can take a look at the code here.

Unit testing a DSL

There is something that really bothers me when I want to test this code:

specification @vacations:
	requires @scheduling_work
	requires @external_connections

And I come up with this test:

[TestFixture]
public class QuoteGenerationTest
{
	private DslFactory dslFactory;

	[SetUp]
	public void SetUp()
	{
		dslFactory = new DslFactory();
		dslFactory.Register<QuoteGeneratorRule>(new QuoteGenerationDslEngine());
	}

	[Test]
	public void CanCompile()
	{
		QuoteGeneratorRule rule = dslFactory.Create<QuoteGeneratorRule>(
			@"Quotes/simple.boo",
			new RequirementsInformation(200, "vacations"));
		Assert.IsNotNull(rule);
	}

	[Test]
	public void WhenUsingVacations_SchedulingWork_And_ExternalConnections_AreRequired()
	{
		QuoteGeneratorRule rule = dslFactory.Create<QuoteGeneratorRule>(
			@"Quotes/simple.boo",
			new RequirementsInformation(200, "vacations"));
		rule.Evaluate();

		SystemModule module = rule.Modules[0];
		Assert.AreEqual("vacations", module.Name);
		Assert.AreEqual(2, module.Requirements.Count);
		Assert.AreEqual("scheduling_work", module.Requirements[0]);
		Assert.AreEqual("external_connections", module.Requirements[1]);
	}

	[Test]
	public void WhenUsingSchedulingWork_HasNoRequirements()
	{
		QuoteGeneratorRule rule = dslFactory.Create<QuoteGeneratorRule>(
			@"Quotes/simple.boo",
			new RequirementsInformation(200, "scheduling_work"));
		rule.Evaluate();

		Assert.AreEqual(0, rule.Modules.Count);
	}
}

I mean, I have heard about disparity in line counts, but I think that this is beyond ridiculous.

It is not the parser I fear

Martin Fowler talks about the almost instinctive rejection of external DSLs because writing parsers is hard. I agree with Fowler that writing a parser to deal with a fairly simple grammar is not a big task; certainly there isn't anything to recommend XML for the task over a textual parser.

The problem that I have with external DSLs is actually different. It is not the actual parsing that I object to; what I dislike is the processing that needs to be done on the parse tree (or the XML DOM) in order to get to an interesting result.

My own preference is to utilize an existing language to build an internal DSL. This allows me to build on top of all the existing facilities, without having to deal with all the work that is required to get from the parse tree to a usable output.

In the case of the example that Fowler uses for his book (the state machine outlined here), the use of an internal DSL allows me to go from the DSL script to a fully populated semantic model without any intermediate steps. I give up some syntactic flexibility in exchange for not worrying about all the details in the middle.

The benefit of that is huge, which is why I would almost always recommend going with an internal DSL over building an external one. Here is a simple example, a business rule DSL:

suggest_preferred_customer:
    when not customer.IsPreferred and order.Total > 1000

apply_discount_of 5.precent:
    when customer.IsPreferred and order.Total > 1000
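The reason there are no intermediate steps is that each statement of an internal DSL can populate the semantic model directly as it executes. Here is a rough sketch of that idea, in Python with invented names; the real DSL above is Boo, and its wiring differs:

```python
rules = []  # the semantic model: (name, predicate) pairs

def when(name, predicate):
    # each DSL statement registers directly into the semantic model;
    # there is no parse tree to walk afterwards
    rules.append((name, predicate))

# the "script": plain host-language code that still reads declaratively
when("suggest_preferred_customer",
     lambda customer, order: not customer["preferred"] and order["total"] > 1000)
when("apply_discount_of_5_percent",
     lambda customer, order: customer["preferred"] and order["total"] > 1000)

def matching_rules(customer, order):
    return [name for name, pred in rules if pred(customer, order)]

# matching_rules({"preferred": False}, {"total": 1500})
# -> ["suggest_preferred_customer"]
```

The host language supplies parsing, scoping, and evaluation for free; the DSL author only decides what each statement records in the model.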

I wrote the code to use this DSL as a bonus DSL for my DSL talk at DevTeach. It took thirty to forty-five minutes, and that was at 4 AM. I extended this DSL during the talk to support new concepts and to demonstrate how easy it is.

I got to that point by leaning heavily on the host language to provide as many facilities as I could.

In short, it is not the parsing that I fear, it is all the other work associated with it.

Compare and contrast: Rhino Mocks vs. Hand Rolled Stubs

For various reasons, which will be made clear soon, I needed to write the same test twice: once using Rhino Mocks, and a second time using hand-rolled stubs. I thought it was an interesting exercise, since this is not demo-level code.

[Test]
public void WillCallHandlesWithWithRouteTestHanlderWhenRouteCalled_UsingRhinoMocks()
{
	const IQuackFu msg = null;

	var mocks = new MockRepository();

	var routing = dslFactory.Create<RoutingBase>(@"Routing\simple.boo");

	var mockedRouting = (RoutingBase)mocks.PartialMock(routing.GetType());
	Expect.Call(() => mockedRouting.HandleWith(null, null))
		.Constraints(Is.Equal(typeof(RoutingTestHandler)), Is.Anything());

	mocks.ReplayAll();

	mockedRouting.Initialize(msg);

	mockedRouting.Route();

	mocks.VerifyAll();
}

[Test]
public void WillCallHandlesWithWithRouteTestHanlderWhenRouteCalled()
{
	const IQuackFu msg = null;

	dslFactory.Register<StubbedRoutingBase>(new StubbedRoutingDslEngine());

	var routing = dslFactory.Create<StubbedRoutingBase>(@"Routing\simple.boo");

	routing.Initialize(msg);

	routing.Route();

	Assert.AreEqual(typeof (RoutingTestHandler), routing.HandlerType);

	Assert.IsInstanceOfType(
		typeof(NewOrderMessage), 
		routing.Transformer()
		);
}

Would I add static mocking to Rhino Mocks?

Eli Lopian joined the discussion about Mocks, Statics and the automatic rejection some people have of Type Mock and its capabilities.

Here is the deal.

  • I don't like the idea of mocking statics.
  • Given a patch that would add this capability to Rhino Mocks *, I would apply it in a heartbeat.

Hm... that seems inconsistent of me...

Leaving aside the fact that I feel no urgent need to be internally consistent, I don't have an issue with the feature itself. Mocking statics is not inherently bad. What I consider bad is relying on such a feature to write highly coupled code.

It is more an issue of guidance and form than a matter of actual features.

* Note that I said, "given a patch", the feature is not actually interesting enough for me to spend time implementing it.

Teaching and Speaking

D'Arcy has a post about teachers vs. speakers, which is a topic that came up a few times during DevTeach. My thoughts about this are fairly complex, but let me see if I can express them in a coherent fashion.

There is a definite difference between teaching and speaking. I like to think about speaking as a show that is targeted at increasing the knowledge of the audience on the subject at hand. Teaching is imparting that knowledge in an actionable form.

To take a concrete example, after my MonoRail talk, I don't expect anyone to be able to build a site using MonoRail. Certainly not without additional resources to go through. After my MonoRail course, however, I would consider it a personal failure if any of the participants wasn't able to build a site using MonoRail.

A success criterion for the MonoRail talk is that the audience groks the gestalt of MonoRail: they understand the tradeoffs in choosing it, what it would bring them, and the overall model that is used.

Frankly, in the space of 60 to 75 minutes, I do not believe that you could do more.

Given those constraints, I do not think that you could do more than introduce a subject and open the mind of the people in the audience to why they should learn more about it.

Imparting knowledge takes time, a far lower level of granularity when talking about how to do things, and a more gradual build-up of the subject material. It takes much longer, since it also requires a much higher-bandwidth channel of communication.

Unless the topic that I am talking about can be covered in a short span of time, when faced with the timing constraints of a typical speaking engagement, there is no option other than reverting to a lower-bandwidth, hit-the-high-notes, give-an-impression-not-the-complete-picture approach.

I like teaching, and I enjoy speaking, and I don't think that there should be a value judgement between them.

Use the right tool for the job

Roy is talking about certain dogmatic approaches in the ALT.Net community with regards to Type Mock. Go and read the comments, the thread is far more interesting than just the post.

Now that you have done so, let me try to explain my own thinking about this.

Type Mock has the reputation of supporting unmaintainable practices. You can argue whether this reputation is accurate, but it is there. In fact, from Type Mock's own homepage:

Saves time by eliminating code refactoring and rewriting just to make it testable

Here is the deal, I don't think that you should eliminate code refactoring. I think that avoiding dealing with technical debt is a losing proposition in the long term.

Certainly Type Mock enables me to ignore highly coupled design and still be able to create tests for it.

Is this of value? Absolutely!

Is this desirable? I don't believe so.

Can I use Type Mock to create software with decoupled, testable design? Of course I can.

But then there is the question: what do I get from Type Mock?

Quite a lot of other goodies, actually:

  • Integration with the IDE
  • Understanding debugger func-eval (an evil feature which can cause no end of pain when doing mocking)
  • Visual Mock Tracer
  • Other things that I am not aware of because I am not a Type Mock user

The question of whether you should use it or not depends on your situation. As the author of Rhino Mocks, I don't think that I can give an unbiased opinion.

Reviewing the Entity Framework Source Control Support

Frankly, I am quite amazed that I even need to write this post. Source Control is such a basic part of the development process that I didn't want to believe there could be anything to say about it.

In the previous release of Entity Framework, there were deal breaker issues in the source control story. In short, it didn't work. I installed Visual Studio 2008 SP1 Beta to check how the new bits behave.

Let us start from the simplest scenario, defining a model with two classes, and commit to source control:

image

Now, let us say that Jane wants to add a Total property to the Order class:

image


At the same time, Joe is adding a Name property to the Customer class:

image

Jane commits first, and then Joe attempts to commit. Here is the result:

image

This is bad.

In fact, I would say that calling this unacceptable is not going far enough.

I made two unrelated modifications in the model, something that happens... all the time, and it puked on me.

This is not an acceptable Source Control story.

Now, to be fair, I dug a little deeper and tried to find what caused the conflict. Here is what I found in the Model1.edmx file:

image

The conflict is in the visualization section of the file. This is a huge improvement over the previous version, but it is still broken. Leaving aside the fact that I have no idea why the visualization information is in the same file as the actual model, I shouldn't get merge conflicts on a change this trivial.

I didn't check the generated code, but there were conflicts there as well, which might cause additional issues. They could be regenerated, I assume.

In the end, I was able to get to the model that I wanted, after manually fixing the merge conflicts:

image

In summary, this is a huge improvement over the previous version, but it still falls far below the minimum expected bar.

Please note that I have tested the most trivial scenario that I could; I have no idea what the behavior would be when dealing with more advanced scenarios.

Visual Studio 2008 SP1 Beta: AVOID

After the pain of VS 2005 SP1 (which killed my machine, as a matter of fact), I decided to install the SP1 beta for VS2008 on a clean VM. That VM is a simple install of the OS + VS 2008, that is all.

Here is the result of installing VS 2008 SP1 Beta. I have no idea what happened; at one point it was installing, now it is rolling back.

image

I suppose I could try to figure out what is going on, by hunting in the logs and trying the cargo cult approaches.

But I have a simpler solution: just avoid the whole thing. Is it too much to ask that the installer work?

Zero Friction & Maintainability

You have probably heard me talk about zero friction and maintainability often enough in the past. But they were always separate subjects. When I prepared for my Zero Friction talk, I finally figured out what the relation between the two is.

I talk about zero friction as a way to reduce pain points in development. And I talk about maintainability as a way to ensure that we build sustainable solutions.

Let us go back a step and try to understand why we even have the issue of maintainability. Bits do not rot, so why am I so worried about proactively ensuring that we keep the code in good shape?

As it turns out, while code may not rot, the design of the application does. But why?

If you have an environment with friction in it, there is an incentive for developers to subvert the design in order to produce a quick fix or hack together a solution to a problem. Creating a zero friction environment produces a system where there is no incentive to corrupt the design; the easiest thing to do is the right thing to do.

By reducing the friction in the environment, you increase the system's maintainability.

Rhino Mocks 3.5 Beta Released

Three months ago I released Rhino Mocks 3.4. We have made some great improvements to Rhino Mocks in the meantime, and it is time for a new release. I generally don't believe in beta releases for Rhino Mocks, but this time we are introducing a new syntax, and I want to get additional external input before we make a final release of it.

The biggest new feature is the new AAA syntax, which you can read about in the relevant post, but we had a few other new things as well.

  • CreateMock() is deprecated and marked with the [Obsolete] attribute. Use StrictMock() instead.
  • Better handling of exceptions when raising events from mock objects
  • Fixing an issue with mock objects that expose methods with output parameter of type System.IntPtr.
  • Allowing a return to record mode without losing expectations. Thanks to Jordan Terrel for submitting this patch.

I intend to write a lot more documentation about the new AAA syntax, but for now, you can visit the tests for the feature, to see how it works.

As usual, you can find the bits here.

Note that this release is for .NET 3.5 only. Rhino Mocks 3.5 RTM will be for .NET 2.0 and 3.5, but I am focusing on the capabilities that I can get from the 3.5 platform at the moment.