Ayende @ Rahien

It's a girl

Introducing MonoRail.HotSwap

I hate aimless bitching, and this post annoyed me enough to make me decide to do something about it. The basic problem is that making any change at all to an ASP.Net application requires an AppDomain unload / load, which takes a lot of time. This means that a quick change and browser refresh are not possible; you end up trying to minimize those waits as much as possible, and that hurts the feedback cycle.

MonoRail makes this much easier, because changes to the views don't force a reload, but changing a controller can still be a PITA because of the wait times.

Problem.

Let me walk you through what I was thinking:

  • I need to have a way to get the new version of the controller into the already running application.
  • I am using Windsor.
  • I had a discussion two weeks ago about Java's hot deployment capabilities.
  • Controllers are independent units, for the most part.

So, what does this mean? It means that I can probably take any single controller and recompile it independently of the rest of the application. As a direct result of that, I can register the new controller version in Windsor, so the next time MonoRail needs this controller, it will get the new version.

The rest was just a matter of runtime compilation...

The code is listed below; I was astonished to discover that this is so freaking easy!

The proof of concept code is 70 lines, and it lets you make any change to a controller and have it immediately reflected in the application.

Now, why is it proof of concept?

  • It doesn't really handle errors correctly; it just removes the controller if it was changed, and that is it. You probably want an error page with the compilation errors.
  • It recompiles the minute you save; you probably want to recompile on the first request instead.
  • You may want to recompile the entire application if the first compile did not succeed, which would let you handle more than a single change. That is a fairly complex thing to do, though.

The first two issues can be solved fairly easily with a SubDependencyResolver on Windsor, which would intercept the incoming requests for controller instances. It would then try to compile the controller if it had changed, and throw an exception if it cannot compile.
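That resolve-time flow can be sketched without any Windsor specifics. Everything below, names included, is a hypothetical illustration of the idea, not the actual SubDependencyResolver API: the file watcher only marks a controller as dirty, the compile happens on the next resolve, and a compile failure surfaces as an exception at request time instead of a silently missing component.

```csharp
using System;
using System.Collections.Generic;

public class LazyRecompilingResolver
{
    private readonly Dictionary<string, object> instances = new Dictionary<string, object>();
    private readonly HashSet<string> dirty = new HashSet<string>();
    private readonly Func<string, object> compile; // stand-in for the CSharpCodeProvider step

    public LazyRecompilingResolver(Func<string, object> compile)
    {
        this.compile = compile;
    }

    // Called from the FileSystemWatcher handler: cheap, no compilation here.
    public void MarkDirty(string typeName)
    {
        dirty.Add(typeName);
    }

    // Called when MonoRail asks the container for a controller instance.
    public object Resolve(string typeName)
    {
        if (dirty.Contains(typeName) || !instances.ContainsKey(typeName))
        {
            // compile throws on errors, so a broken controller fails loudly
            // at request time instead of quietly disappearing
            object fresh = compile(typeName);
            instances[typeName] = fresh;
            dirty.Remove(typeName);
        }
        return instances[typeName];
    }
}
```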

Important: I am going to leave for JAOO in a few hours, so I won't have time to follow up on this as it deserves. I have every confidence that William will :-)

Here is the code:

namespace Castle.MonoRail.Framework
{
	using System;
	using System.CodeDom.Compiler;
	using System.IO;
	using System.Reflection;
	using Microsoft.CSharp;
	using Windsor;

	public class HotSwap
	{
		private readonly string directoryToWatch;
		private readonly IWindsorContainer container;
		private readonly Assembly assembly;
		private readonly string controllersNamespace;

		public HotSwap(string directoryToWatch, IWindsorContainer container, 
			Assembly assembly,
			string controllersNamespace)
		{
			this.directoryToWatch = directoryToWatch;
			this.container = container;
			this.assembly = assembly;
			this.controllersNamespace = controllersNamespace;
		}

		public void Start()
		{
			FileSystemWatcher watcher = new FileSystemWatcher(directoryToWatch, "*.cs");
			watcher.Created += CodeChanged;
			watcher.Changed += CodeChanged;
			watcher.Renamed += CodeChanged;
			watcher.EnableRaisingEvents = true;
		}

		void CodeChanged(object sender, FileSystemEventArgs e)
		{
			string fileName = Path.GetFileNameWithoutExtension(e.FullPath);
			string typeName = controllersNamespace+"."+fileName;
			CompilerParameters options = CreateCompilerOptions();

			CSharpCodeProvider provider = new CSharpCodeProvider();
			CompilerResults compilerResults = provider
				.CompileAssemblyFromFile(options, e.FullPath);

			// Always drop the old version; if compilation failed, the
			// controller simply stays unregistered until the code is fixed.
			container.Kernel.RemoveComponent(typeName);
			
			if(compilerResults.Errors.HasErrors)
				return;

			Type type = compilerResults.CompiledAssembly.GetType(typeName);
			container.AddComponent(type.FullName, type);
		}

		private CompilerParameters CreateCompilerOptions()
		{
			CompilerParameters options = new CompilerParameters();
			options.GenerateInMemory = true;
			options.GenerateExecutable = false;
			options.ReferencedAssemblies.Add(assembly.Location);
			foreach (AssemblyName name in assembly.GetReferencedAssemblies())
			{
				Assembly loaded = Assembly.Load(name.FullName);
				options.ReferencedAssemblies.Add(loaded.Location);
			}
			options.IncludeDebugInformation = true;
			return options;
		}
	}
}

And in the global.asax:

public class GlobalApplication : UnitOfWorkApplication
{
	public override void Init()
	{
		HotSwap swap = new HotSwap(directory, 
			Container, 
			Assembly.GetExecutingAssembly(), 
			"MyApp.Controllers");
		swap.Start();
	}
}

Introducing Boobs: Boo Build System

I hate XML. A long time ago, I also hated XML, but back then I had some free time, and I played with building a build system in Boo. To match NAnt, I called it NUncle.

It never really got anywhere, but Georges Benatti has taken the code and created the Boo Build System. I am just taking a look, and it is fairly impressive. It has the concept of tasks and the dependencies between them, as well as the actions they can perform.

Here is a part of Boobs' own build script:

Task "build boobs", ["build engine", "build extensions"]:
	bc = Booc(
		SourcesSet   : FileSet("tools/boobs/**/*.boo"),
		OutputFile   : "build/boobs.exe"
		)
	bc.ReferencesSet.Include("build/boobs.engine.dll")
	bc.ReferencesSet.Include("build/boo.lang.useful.dll")
	bc.Execute()

Task "build engine":
	Booc(
		SourcesSet  : FileSet("src/boobs.engine/**/*.boo"),
		OutputFile  : "build/boobs.engine.dll",
		OutputTarget: TargetType.Library 
		).Execute()

Task "build extensions", ["build io.extensions", "build compiler.extensions"]

Task "build io.extensions":
	Booc(
		SourcesSet  : FileSet("src/extensions/boobs.io.extensions/**/*.boo"),
		OutputFile  : "build/boobs.io.extensions.dll",
		OutputTarget: TargetType.Library 
		).Execute()

Task "build compiler.extensions":
	bc = Booc(
		SourcesSet   : FileSet("src/extensions/boobs.compiler.extensions/**/*.boo"),
		OutputFile   : "build/boobs.compiler.extensions.dll",
		OutputTarget : TargetType.Library 
		)
	bc.ReferencesSet.Include("build/boobs.io.extensions.dll")
	bc.Execute()

I don't know about you, but this makes me feel very nice.

The concept is pretty obvious, I feel, and the really nice thing is that extending it is a piece of cake. Here is how you check whether one file is up to date with respect to another:

def IsUpToDate(target as string, source as string):
	return true unless File.Exists(source)
	return false unless File.Exists(target)

	targetInfo = FileInfo(target)
	sourceInfo = FileInfo(source)
		
	return targetInfo.LastWriteTimeUtc >= sourceInfo.LastWriteTimeUtc

And its usage:

Cp("source.big", "dest.big") if not IsUpToDate("dest.big", "source.big")

Or, you know what, this is fairly routine, and it comes as part of the standard library for Boobs. Let us create something new, ConditionalCopy:

def ConditionalCp(src as string, dest as string):
	Cp(src, dest) if not IsUpToDate(dest, src)

Usage should be clear by now, I hope.

Playing with Boo's DSLs

Boo has gotten a lot better in terms of flexibility lately, a lot better. I was able to churn out this little DSL in about an hour:

OnCreate Account:
	Entity.AccountNumber = date.Now.Ticks

OnCreate Order:
	if Entity.Total > Entity.Account.MaxOrderTotal:
		BeginManualApprovalFor Entity

From the perspective of the DSL user, it is very easy to use, and from the implementer's perspective, it was ridiculously easy to implement.

The only halfway complex thing here is the introduce-base-class step, and since that is usually the first approach you take anyway, I don't really consider it complex. Then you can just create the base class:

abstract class DslBase:

	[property(Entity)]
	entity as object

	def OnCreate(entity as System.Type, action as callable()):
		Actions.RegisterOnCreate(entity.Name, action)

	abstract def Execute():
		pass

	def BeginManualApprovalFor(order as Order):
		print "Starting approval process for Order #${order.Id}"

In the base class you just do what you would normally do; in this case, it just registers the new action. Here is the implementation:

class Actions:
	
	static onCreateActions = {}
	
	static def RegisterOnCreate(entityName as string, action as callable()):
		onCreateActions[entityName] = action
		
	
	static def OnCreate(entity as object):
		entityName = entity.GetType().Name
		action = onCreateActions[entityName] as callable()
		if not action:
			print "No action defined for ${entityName}"
			return
		dsl = (action.Target as DslBase)
		dsl.Entity = entity
		action()

The client code is simply:

SampleDSL().Prepare("Sample.boo")

account = Account(MaxOrderTotal: 100)

print "Before: ${account.AccountNumber}"
Actions.OnCreate( account )
print "After: ${account.AccountNumber}"

order = Order(Account: account, Total: 500 )
Actions.OnCreate( order )

All you have left then is to build a rich enough library of methods on the DslBase for the business user to work with, document how it is done, and that is it.


ReSharper for Boo

Well, it has not gotten to that level, but I just hit ALT+Insert in #Develop, and I got this dialog:

image

Wow!

I will not be at the ALT.Net conference

Yes, it sucks, and I feel really bad about it, but I will not be able to be at the ALT.Net conference. The main reason is that I was stupid and didn't take into account how long it would take to get a visa, and I also forgot that right now there is a huge concentration of holidays, so the embassy is apparently swamped with requests and taking even longer than usual to process them.

:-(

Speaking at JAOO

Consider this a public service announcement:

I am going to be at JAOO next week, talking about building DSLs with Boo and about Castle along with Hammett.

Ayende is not your helpdesk

I routinely get a lot of emails with questions about the various stuff that I do. That was wonderful while I could keep up with the flow, but it is not something that can really scale up. As a rule, I have started to refer all such questions to the appropriate channels for them.

For NHibernate: The NHibernate forums

For Castle: Castle Forums or the Users Mailing List

For Rhino Mocks: The Rhino Mocks Mailing List

For Rhino Tools: The Rhino Tools Developers mailing list

The reasoning behind this decision is very simple: I don't have the time to help everyone, and in all those places there are other people who are doing a great job answering those questions.

This also means that over time, a wealth of knowledge gathers there, so do a search first, to see if your question has already been asked.

As an exception to this, if you have a question about one of my posts, go ahead and send me an email about it.

If you really want me to help you, contact me about commercial support.

They don't owe you anything

This post really annoys me. The author is talking about moving the project's source control from Bazaar to Subversion, because people didn't want to install Bazaar, and then he goes off and says this:

Unfortunately, this kind of laziness has become pervasive in the Free Software world, as compared to say 10 years ago. Back then it was all but expected that you’d have to fix the build to get something working. But it was fine, because you would fix it and send a quick patch. Believe it or not, this actually felt pretty awesome. You were helping to keep the train on the tracks, and that meant getting your hands messy.

I had to read it three times to take it in. This guy is talking about Open Source software, and he seems to think that putting more roadblocks in the way of participation in a project is a good thing. WTF?!

I don't know about that, but I do know that as a member of several OSS projects, the teams usually try to minimize the amount of work you need to do to either use or develop the project. Not only does it make sense, but it means that other people can get started easily.

To do otherwise is to make sure that people will not use the project. This may be some sort of "acceptance criteria", but it is a stupid one. You are not paying people to do this; you are asking them to give up valuable time to figure out your stuff. It is your duty to make that easier, not to complain about them not clearing the hurdle you put there.

Then there is this, in the comments, which is more relevant to the post:

By the way, saying how you “have to learn half a dozen different vcs tools” would mean that you’re working on half a dozen different projects, which is quite unlikely isn’t it?

Right now in my OSS folder there are 18 projects that I keep track of, and there are at least 10 others that I check out and delete from time to time. They all use Subversion. This means that the barrier to entry into a project is finding the project URL, checking out the trunk, and waiting for it to complete. The barrier to entry with any other system is a lot higher: I need to install a new SCM, learn how to use it, how to generate a patch, diff, etc. This is a non-trivial cost, and not something that I am willing to pay unless you have a really good reason.

If pressed, I would probably use CVS, but I don't really like it. Using anything else requires not only memorizing a new set of commands, but also grokking another SCM model entirely.

If you want to get contributors, you need to tempt, not to hunt.

The Performance Penalty of using ASP.Net

Bret is talking about tracking an issue with Watir that appeared after the application was migrated from ASP.Net to Ruby on Rails (IE issue, apparently).

I think the reason we haven’t seen this problem before is because our .Net apps have been a lot slower that Rails. Slow enough to keep this IE bug from showing up.

That is certainly something that I have seen before. The main problem is not the runtime performance; it is the startup performance. If I make a change to a page, I have to wait ~30 seconds for it to load. Contrast that with making a change to a MonoRail view, where the change appears instantly. The problem is that changing the controller requires re-compilation, which carries the same performance penalty.

This has annoyed me for a while, and more so lately, because now I can feel the difference between changing the view (Brail, instant) and the controller (C#, ~30 seconds), where before any change at all caused the same long delay.

As far as I can tell, this has to do with AppDomain load / unload, but I never really bothered to take a look. William was talking about this same problem a while ago. This may not affect the runtime production performance, but it plays hell with the developer's performance.

The half life of a project

The half-life of a radioactive substance is the time required for half of a sample to undergo radioactive decay.
en.wikipedia.org/wiki/Halflife

Scott Bellware is talking about the rate of decay of the maintainability of a project.

The rate of decay of maintainability is the measure of how fast your maintainable code looses it's maintainability after your agile development guru or consultant moves on to another project.

This is a subject of particular concern to me, since I have often been accused of building software that only I can work with. I am currently teaching an MSPD course with the express purpose of grokking the way the average developer goes about their work. It has given me a lot of insight into why Microsoft does some things (and I still vehemently disagree with the decisions).

I taught a data access approach using With and anonymous delegates:

With.Transaction(delegate(SqlCommand cmd)
{
   cmd.CommandText = "SELECT COUNT(*) FROM Customers";
   return (int)cmd.ExecuteScalar();
});

This was for people who learned C# as part of the course. It works, and while it is not something they would likely build on their own, once you set up the infrastructure and explain how it works, there weren't any particular issues with it.
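For context, here is roughly what that infrastructure looks like. This is a sketch, not the course code: the connection / transaction plumbing is injected as delegates (all three hooks are assumptions of mine) so the commit-on-success, rollback-on-exception flow stands on its own; the real version would wire these to SqlConnection, SqlTransaction, and SqlCommand.

```csharp
using System;

// Sketch of a With.Transaction-style helper. BeginTransaction, Commit, and
// Rollback are hypothetical hooks; in real infrastructure they would open a
// SqlConnection, begin a SqlTransaction, and create an enlisted SqlCommand.
public static class With<TCommand>
{
    public static Func<TCommand> BeginTransaction;
    public static Action Commit;
    public static Action Rollback;

    public static TResult Transaction<TResult>(Func<TCommand, TResult> action)
    {
        TCommand cmd = BeginTransaction();
        try
        {
            TResult result = action(cmd);
            Commit();   // only reached if the delegate succeeded
            return result;
        }
        catch
        {
            Rollback(); // never commit a failed unit of work
            throw;
        }
    }
}
```

The point of the pattern is exactly this: the student writes only the delegate body, and the try/commit/rollback ceremony lives in one place.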

The crux of Scott's post is this statement:

The keystone of intentional maintainability, nestled in the slot at the apex of an arch of XP practices, is code that is instantly understandable by the folks who have to do the maintenance.

That is an argument that I have had with quite a few people, quite often, because the problem is with the definition of "the folks who have to do the maintenance". At one time, I had a... discussion with someone who thought they could hire someone off the street (purposefully low-level, to keep costs down) and have them start working on a big (10 months of continuous development) project from day 1.

That meant, as far as they were concerned, that we were to use only the "standard" approaches, so we wouldn't break the wizards and make the application "unmaintainable". That is not maintainability in my book; quite the opposite, frankly.

From my point of view, an application is maintainable if I can take someone off the street, give them the overall view of how things work, walk them through a use case or two, and then set them loose on the code on their own.

This method belongs to the LoginController, and I am going to assume that it is instantly understandable by most people who can read code. But, and this is important, it is not maintainable unless you figure out what is going on all around it.

/// <summary>
/// Authenticates the specified user, will redirect to the destination page
/// if the login is successful.
/// </summary>
/// <remarks>
/// If the login is not successful, additional information may be found on:
/// Scope.Flash[Constants.ErrorMessage]
/// </remarks>
/// <param name="user">The user.</param>
/// <param name="password">The password.</param>
/// <returns>True if the user has authenticated successfully, false otherwise</returns>
public virtual bool Authenticate(string user, string password)
{
	bool isValidLogin = authenticationService.IsValidLogin(user, password);
	if (isValidLogin)
	{
		Usage.SuccessfulLogin(user);
		string returnUrl = Scope.Input[Constants.ReturnUrl];
		if (!string.IsNullOrEmpty(returnUrl))
			Context.AuthenticateAndRedirect(returnUrl, user);
		else
			Context.AuthenticateAndRedirect(Settings.Default.HomePageUrl, user);
		return true;
	}
	Usage.FailedLogin(user);
	Context.SignOut();
	Scope.ErrorMessage = Resources.BadUserOrPassword;
	return false;
}

I think that I have done a moderately good job of making my recent projects maintainable without me, and I am trying to make sure that there won't be many cases of "This is Oren's CodeTM, I can't touch that". We aren't there yet, but we are getting there. Another developer is going to take over a project that I have led, and I am confident that after the initial panic, things will settle down.

My criterion for maintainability is that if you know X, Y and Z (usually technologies that I use in the project, mainly NHibernate), you should be able to get everything from the code.

Then again, my current project's Guide to the New Developer does have two pages dedicated to dependencies and links to the documentation...

Line count & complexity

Dave Laribee is talking about measuring line counts and code quality. He mentions that in the Ruby community it is considered a virtue to express yourself in fewer lines of code.

Recently I had to gather some statistics on my project, and I was astonished to discover that it was rapidly closing in on 150,000 lines of code. That made me wonder what another project's line count was; I was pretty sure it would be closer to half a million lines, but I was floored to discover that it barely reached 30,000 lines of code.

There is no comparison between the ease of working with the two projects. The bigger one is a pleasure to work with, and the smaller one is just plain hard. I was the main developer of the small project, and I have learned a lot from it (mostly what not to do :-) ).

I don't think that a lower line count is really meaningful unless you also have the context of how it is arrived at. Here is another example: Rhino Mocks is at about ~25,000 lines of code. Most of those are tests; the core library is about 10,000 lines, but only 4,801 of those are actual lines of code (vs. xml comments, etc).
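For the curious, the raw-lines vs. lines-of-code distinction is just a filter over the source. A naive sketch (deliberately ignoring /* */ blocks and "//" inside string literals, which real counting tools handle) looks like this:

```csharp
using System;
using System.Linq;

public static class LineCounter
{
    // Counts lines that are actual code: not blank, and not a // or ///
    // comment line. Naive on purpose; see the caveats in the lead-in.
    public static int CodeLines(string source)
    {
        return source.Split('\n')
            .Select(line => line.Trim())
            .Count(line => line.Length > 0 && !line.StartsWith("//"));
    }
}
```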

To compare, I just checked the most complex use case in my application, which is composed of the following:

  • Controller: 400
  • Controller Finders: 531
  • Code Behind: 479
  • ASPX: 1,261 (roughly 500 of those are javascript)

So a bit over 2,600 lines of code (not including infrastructure), for a single use case. 

It is also the Murphy Zone of the application, the part we would rather not touch (it is about five times as large as any of the other use cases), because it touches so many other things. (And naturally, we had to fix a bug there yesterday :-). Not liking to touch and being afraid to touch are two different things.)

So, I don't want to simply agree that a lower LOC is a good thing, yet I do want to be more expressive when I write code. I am not sure where that puts me.

But I do know that the end result is that we want simple code at all costs. (Not stupid code; simple code.) This means that reducing the line count should be accompanied by an increase in the code's clarity.

At any rate, just a bit of my own experience in this debate.

Per Application Database

Scott Bellware is talking about application embedded databases, a database that is the sole domain of the application using it. I don't like the term, because embedded databases have another meaning, but I fully support the idea that the database should be an implementation detail of the application.

I had a project where we needed to use an existing database, and I want to cry when I think about the length and breadth of the effort that we had to go through to get it to really work.

On my last project, I at one time had to change the DB password daily, to keep [Some Other Party] from trying to integrate with the application through the database. (In the end, we created a separate database just for them, and they integrated through it.)

The problems with per-application databases are silos and getting the data out to other applications. This brings us back to "the data outlives the application", etc. I have a longer post in the pipeline that is going to talk about that.

Generated Code Quality

Jeremy Miller made the mistake of opening the DataSet.Designer.cs file and got:

You know the orange bars that ReSharper puts into the vertical scroll bar to denote warnings in the code?  In the DataSet code it looked like a solid bar of orange sherbert. 

The discussion in the comments is about the quality of generated code, and it is fairly interesting. As someone who does quite a bit of code generation, I wanted to note two things:

  • The quality of the generated code is the result of the generator, with the qualification that generated code may be used to offload nasty parts of the code base. The code generated by NHQG does some really crazy stuff with generics and operator overloading, and that can get very hairy very fast.
  • The other issue is that using CodeDOM means that you often end up specifying redundant things. In this case, the this. prefix in this.myName and the fully qualified namespaces could go away, and that is exactly what ReSharper detects and warns about.
public virtual Query_Post<T1> With() {
	Query_Post<T1> query = new Query_Post<T1>(this, this.myName, this.associationPath);
	query.joinType = NHibernate.SqlCommand.JoinType.InnerJoin;
	query.fetchMode = NHibernate.FetchMode.Default;
	return query;
}

It is not code generation in general that is problematic, in my view; the pain usually comes from using code generation to cover for problems in the underlying platform.

A most applicable approach to the Fertilizer pattern

I recently had a chance to go through several legacy applications, bug spotting in one and evaluating the possibilities of code reuse in another. Phrases like "a most applicable approach to the Fertilizer pattern" began to run through my head as I read the code, and at one point I had to walk away so I could curse in private (there were ladies present).

Some of the things that I ran into (across all the applications):

  • Goto statements inside a grid implementation; after a little fact finding, it was discovered that they had lost the original source, and had just copy / pasted the result of Reflector's decompilation and moved on.
  • SQL Injection problems in stored procedures that were found by reading the documentation.
  • Naming conventions like bo_showinui, id_file, nm_folder, tx_folderpath

We fixed the bug (which made me learn some stuff about double-hop Windows authentication and Citrix, of all the things in the world), and we will not be reusing the code.

In retrospect, I should have known; everyone else went to the far side of the room when I started reading the code...


Refactoring MonoRail Views

As I mentioned, I built a very quick & dirty solution to display a collection of scheduled task descriptions. The end result looked like this:

image

This works, and it should start producing value as of next week, but I didn't really like the way I built it.

Here is the original view code:

<table cellspacing="0" cellpadding="0">
	<thead>
		<tr>
			<th>Name:</th>
			<th>Occurances</th>
			<th>&nbsp;</th>
		</tr>
	</thead>
	<tbody>
		<% for task in tasks: %>
		<tr>
			<td>
			${Text.PascalCaseToWord(task.Name)}
			</td>
			<td>
			Every ${task.OccuranceEvery}
			</td>
			<td>
			${Html.LinkTo("Execute", "ScheduledTasks", "Execute", task.FullName)}
			</td>
		</tr>
		<% end %>
	</tbody>
</table>

This works; it is simple and easy to understand, but it still bothered me. So I replaced it with this:

<% component SmartGrid, {@source: tasks}  %>

Well, that was much shorter, but the result was this...

image

I am cropping things, because it is a fairly long picture, but it should be clear that this is not a really nice UI to use.

This was my second attempt:

<% component SmartGrid, {@source: tasks, @columns: [@Name, @OccuranceEvery] }  %>

And it produced this:

image

Better, but not by much. Let us try to have nicer names there, shall we?

<% 
component SmartGrid, {@source: tasks, @columns: [@Name, @OccuranceEvery] }:  
	section Name:
	%>
	<td>${Text.PascalCaseToWord(value)}</td>
	<%
	end
end
%>

And this produced:

image

That is much better on the name side, but we still have the "Occurance Every" column to fix...

<% 
component SmartGrid, {@source: tasks, @columns: [@Name, @OccuranceEvery] }:  
	section OccuranceEveryHeader:
	%>
	<th>Occurances</th>
	<%
	end
	section Name:
	%>
	<td>${Text.PascalCaseToWord(value)}</td>
	<%
	end
	section OccuranceEvery:
	%>
	<td>Every ${value}</td>
	<%
	end
end
%>

With the result being:

image

One last thing that we have left is the additional column at the end, which we can manage like this:

<% 
component SmartGrid, {@source: tasks, @columns: [@Name, @OccuranceEvery] }:  
	section OccuranceEveryHeader:
	%>
	<th>Occurances</th>
	<%
	end
	section MoreHeader:
	%>
	<th></th>
	<%
	end
	section Name:
	%>
	<td>${Text.PascalCaseToWord(value)}</td>
	<%
	end
	section OccuranceEvery:
	%>
	<td>Every ${value}</td>
	<%
	end
	section More:
	%>
	<td>${Html.LinkTo("Execute", "ScheduledTasks", "Execute", item.FullName)}</td>
	<%
	end
end
%>

So here is the final result:

image

Now that I did that, I am looking at both pieces of code and wondering:

  • What is the fuss about, anyway?
  • Which of those versions is more readable?

Granted, this is a fairly specialized case, but in terms of LoC, the second approach is actually longer. The "major" benefit here is that I get less HTML in the view, and that is not really a major consideration.

The SmartGrid will produce a pager if needed, but that is about it with regard to the differences in their abilities.

Convention over configuration: Structured approach

Hammett has a post that made me think. He is talking about configuring Spring for Java and has this to comment:

I can’t understand this uncontrolled passion that java programmers carry for xml.

I can certainly agree with him about that, and I got to the same point a long time ago with Windsor. I had an application that was rife with components. I am talking about many dozens of generic (as in MyComponent&lt;T&gt;) components, all of which required registration in the config file, and that got to be fairly annoying very fast.

I solved that by creating Binsor, a Boo DSL for configuring an IoC container. Initially it was very similar to the Batch Registration Facility, but I pushed it much further, to the point that today I do all my Windsor configuration using Windsor.boo, and its existence is a core part of my application architecture. Why? Because I can do things like this:

controllersAssemblies = (Assembly.Load("MyApp.Web"), Assembly.Load("Castle.MonoRail.ViewComponents") )
for asm in controllersAssemblies:
	for type in asm.GetTypes():
		continue if type.Name == "ScheduledTasksController"
		if typeof(Controller).IsAssignableFrom(type):
			IoC.Container.AddComponent(type.FullName, type)
		if not typeof(ViewComponent).IsAssignableFrom(type):
			IoC.Container.AddComponent(type.Name, type)

The immediate result of this type of behavior is that I don't feel any of the weight of the IoC; I just develop as I normally would, and it catches up with everything it needs to handle.

I recently had the chance to maintain a MonoRail application that used XML for configuring Windsor, and by the third time I added a controller, I was pretty annoyed. Each and every time, I had an additional step to perform before I could run the code. Compared to the approach I was used to, it was hardly zero friction.

Of course, my approach is not flawless by any means; there is a bug in the configuration above, can you spot it?

The point of this post, which I have been trying to get to for a while, while random electrical pulses moved my fingers in interesting ways, is that I have begun not only to use this to a great extent (all projects in the last year or so are based on this idea), but to actually design with this in mind.

The end result right now is that I am relying a lot more on command patterns and attributes for selection. This means that I often do things like:

[SelectionCriteria(Selection.FirstOption, Selection.SecondOption)]
public class SomeSpecficCommand : Command<ObjectActingUpon>
{
	public override void Execute()
	{
	}
}

You can see a more real-world sample here, but the idea is to declaratively express the intent, have the configuration figure out what I want, and make it happen. The nice thing is that, most of the time, It Just Works. And the nice thing about utilizing Binsor is that I can adapt the convention very easily, so I can have many nice conventions, each for a particular situation.
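The registration side of that convention can be sketched with plain reflection (Binsor does the equivalent in Boo). The attribute, base class, and registry names below are modeled on the snippet above, not taken from any real library, and the base class is non-generic here to keep the scan easy to follow:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public enum Selection { FirstOption, SecondOption }

[AttributeUsage(AttributeTargets.Class)]
public class SelectionCriteriaAttribute : Attribute
{
    public Selection[] Criteria { get; private set; }

    public SelectionCriteriaAttribute(params Selection[] criteria)
    {
        Criteria = criteria;
    }
}

// The real base class is generic (Command<T>); a non-generic stand-in
// keeps the registration pass short.
public abstract class Command
{
    public abstract void Execute();
}

public static class CommandRegistry
{
    // Index every concrete Command by the criteria it declares, so the
    // configuration can figure out what is wanted from the attribute alone.
    public static Dictionary<Selection, List<Type>> Scan(Assembly assembly)
    {
        var map = new Dictionary<Selection, List<Type>>();
        foreach (Type type in assembly.GetTypes())
        {
            if (!typeof(Command).IsAssignableFrom(type) || type.IsAbstract)
                continue;
            var attr = (SelectionCriteriaAttribute)Attribute.GetCustomAttribute(
                type, typeof(SelectionCriteriaAttribute));
            if (attr == null)
                continue; // no declared intent, nothing to register
            foreach (Selection criterion in attr.Criteria)
            {
                if (!map.ContainsKey(criterion))
                    map[criterion] = new List<Type>();
                map[criterion].Add(type);
            }
        }
        return map;
    }
}

[SelectionCriteria(Selection.FirstOption)]
public class SampleCommand : Command
{
    public override void Execute() { }
}
```

Adapting the convention then means changing the scan, not touching any of the commands.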

Development Platforms, again

Jeffrey Palermo commented about the applicability of SharePoint as a development platform, and the conversation in the comments is fantastic. Sahil Malik replied in a post that says that SharePoint is a terrific development platform.

I want to go through Sahil's points for a moment:

I haven't ever walked into a shop with a home grown ASP.NET app that was any easier to install than sharepoint is.

Here are the steps to install (on a bare bone server) my current project:

Prerequisites:

  1. IIS
  2. .Net 2.0 + 3.0
  3. SQL Server DB  

Steps:

  1. Open cmd.exe
  2. \\storage\tools\subversion\svn co svn://subversion/MyProject/trunk MyProject
  3. msbuild default.build /p:ConnectionString="database connection string" /p:RebuildDatabase=True

The whole thing takes about ten minutes, and that is from scratch. For a built environment, we can push an update by just executing the last command and waiting about 2 minutes. At last count I had four different environments bumping around.

Then there is the great fallacy of configuration by the UI:

You can configure the entire thing through a browser based UI. What else do ya want?

Well, source control would be a nice thing to have, as well as a way to do search / replace on the configuration, as well as a way to quickly scan through everything that was done in the last three days and see what happened there.

One thing I really don't agree with Jeffrey about is the XP/Vista limitations. I am running Win2003 on all my machines, from the production servers to the workstations to the laptop, and it has been more than pleasant enough. There really isn't any difference as far as I care.

You might want to check my criteria for evaluating a development platform.

A lot of the people in Jeffrey's comment thread repeated the claim that you can do anything in SharePoint. The problem I have with that statement is the relative cost of doing it inside versus outside the platform. A platform fails as a development platform if it is significantly harder to do things under the platform than outside it.

Scheduled Tasks in MonoRail: The Quick & Dirty solution

I need to develop a set of tasks that would run at intervals. We need to do several dozen of those, and at this point it is not important how we actually schedule them; we can defer that decision. But we really need to be able to start developing them soon.

So, I came up with this idea. The basic structure is this:

[OccuresEvery(Occurances.Day)]
public class SendBirthdayEmails : ScheduledTask
{
	public override void Execute()
	{
	foreach(Employee emp in Repository<Employee>.FindAll(Where.Employee.Birthday == DateTime.Today))
		{
			Email
				.From(Settings.Default.HumanResourcesEmail)
				.To(emp.Email)
				.Template(Templates.Email)
				.Parameter("employee", emp)
			.Send();
		}
	}
}

This is not really interesting, but the rest is. Remember that I don't want to deal with deciding how to actually schedule them, but we need to be able to run them right now for test / debug / demo purposes.

In my windsor.boo:

//Controllers
controllersAssembly = Assembly.Load("MyApp.Web")
for type in controllersAssembly.GetTypes():
	continue if type.Name == "ScheduledTasksController"
	continue if not typeof(Controller).IsAssignableFrom(type)
	IoC.Container.AddComponent(type.FullName, type)

//register scheduled tasks
scheduledTasksAssembly = Assembly.Load("MyApp.ScheduledTasks")
scheduledTasks = []
for type in scheduledTasksAssembly.GetTypes():
	continue if not typeof(ScheduledTask).IsAssignableFrom(type)
	IoC.Container.AddComponent(type.FullName, type)
	scheduledTasks.Add(type)
//register scheduled tasks controller independently, since it requires special configuration 
Component("ScheduledTasksController", ScheduledTasksController,
	scheduledTasks: scheduledTasks.ToArray(Type) )

So it will automatically scan the scheduled tasks assembly, and the only thing that I have left is to write the ScheduledTasksController. This is a very simple one:

(screenshot: the ScheduledTasksController code)
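Since the controller only appears as a screenshot, here is a hedged, standalone reconstruction of what its Execute action boils down to. The runner class, PingTask, and the use of Activator are my inventions to keep the sketch self-contained; the real controller resolves the task from Windsor:

```csharp
using System;

public abstract class ScheduledTask
{
	public abstract void Execute();
}

// Stand-in task so the sketch can be exercised.
public class PingTask : ScheduledTask
{
	public static bool Ran;
	public override void Execute() { Ran = true; }
}

public class ScheduledTasksRunner
{
	private readonly Type[] scheduledTasks;

	public ScheduledTasksRunner(Type[] scheduledTasks)
	{
		this.scheduledTasks = scheduledTasks;
	}

	// The Execute action: find the task by full name and run it.
	public void Execute(string taskName)
	{
		foreach (Type taskType in scheduledTasks)
		{
			if (taskType.FullName != taskName)
				continue;
			// The real controller resolves from Windsor; Activator keeps this standalone.
			((ScheduledTask)Activator.CreateInstance(taskType)).Execute();
		}
	}
}
```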

It has just two methods: one to list all the tasks, and one to execute them. This is strictly a dev-only part of the application, so I took every available shortcut. The UI looks like this:

(screenshot: the scheduled tasks list UI)

And the view code is:

<% for task in tasks: %>
<tr>
	<td>
	${Text.PascalCaseToWord(task.Name)}
	</td>
	<td>
	Every ${task.OccuranceEvery}
	</td>
	<td>
	${Html.LinkTo("Execute", "ScheduledTasks", "Execute", task.FullName)}
	</td>
</tr>
<% end %>

I really like the PascalCaseToWord helper; very handy.
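The helper's behavior can be approximated with a single regex. This is a guess at what it does, not MonoRail's actual implementation:

```csharp
using System.Text.RegularExpressions;

public static class Text
{
	// Inserts a space before each capital that follows a lowercase letter or
	// digit, e.g. "SendBirthdayEmails" becomes "Send Birthday Emails".
	// A sketch of the behavior, not the real MonoRail helper.
	public static string PascalCaseToWord(string input)
	{
		return Regex.Replace(input, "(?<=[a-z0-9])([A-Z])", " $1");
	}
}
```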

On the controller's side of things, I have this:

public ScheduledTasksController(Type[] scheduledTasks)
{
	scheduledTaskDescriptions = new ScheduledTaskDescription[scheduledTasks.Length];
	for (int i = 0; i < scheduledTasks.Length; i++)
	{
		scheduledTaskDescriptions[i] = new ScheduledTaskDescription(scheduledTasks[i]);
	}
}

public void Index()
{
	PropertyBag["tasks"] = scheduledTaskDescriptions;
}

Not best-practice code, but I did knock the whole thing out very quickly. ScheduledTaskDescription just takes a type and unpacks it in terms of attributes, name, etc.
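ScheduledTaskDescription is only described, not shown; here is a sketch of the shape the view implies (Name, FullName, OccuranceEvery), with the attribute redeclared so the sample stands alone. All of it is assumed, keeping the post's spelling of OccuresEvery/Occurances:

```csharp
using System;

public enum Occurances { Hour, Day, Week }

// Assumed shape of the attribute used on SendBirthdayEmails above.
[AttributeUsage(AttributeTargets.Class)]
public class OccuresEveryAttribute : Attribute
{
	public Occurances Occurance { get; private set; }
	public OccuresEveryAttribute(Occurances occurance) { Occurance = occurance; }
}

[OccuresEvery(Occurances.Day)]
public class SendBirthdayEmails { }

// Unpacks a task type into the bits the view binds to.
public class ScheduledTaskDescription
{
	public string Name { get; private set; }
	public string FullName { get; private set; }
	public Occurances OccuranceEvery { get; private set; }

	public ScheduledTaskDescription(Type taskType)
	{
		Name = taskType.Name;
		FullName = taskType.FullName;
		var attr = (OccuresEveryAttribute)Attribute.GetCustomAttribute(
			taskType, typeof(OccuresEveryAttribute));
		OccuranceEvery = attr.Occurance;
	}
}
```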

The end result is that the other developers on my team can add a new scheduled task by simply adding a class that inherits from ScheduledTask, go to the browser, hit F5 and start executing it.

Now that is RAD.

Employee Communication DSL

We have a client that is pro-Microsoft, big time. We are constantly running into SSIS issues on their end, and it has gotten to the point where I am willing to rewrite the entire ETL process from scratch; I am quite certain it would take me far less time than any other way.

Here is my boss' reply to the suggestion to use Rhino ETL:

If boo in configuration

   Explain to {John, Jane} onEach(day: is(Monday) )

If ETL in DataLoader

  While(true)

      Explain()

You meant going back to using SQL jobs + stored procedures, right?

In this case, SP and plain old SQL would do the trick, so that is indeed the direction we are going to take, but I was literally floored by the DSL.

* BTW, I feel mean because it bothers me that the If and the While are capitalized.

Microsoft Connect: Redefining bugs as features as a standard operation procedure

These two bugs are really pissing me off. Both are related to the same source, and one of them was originally a Rhino Mocks issue. They both stem from CLR runtime bugs. Basically, trying to inherit from a generic interface with generic method parameters that have constraints causes the runtime to puke with a TypeLoadException.

I am not one to cry that select() is broken, but in this case, it most certainly is. It is verified using both the C# compiler and Reflection.Emit, so it is definitely a runtime bug.
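For concreteness, this is the kind of shape that tripped the runtime. It is an illustrative reconstruction, not the code from the bug reports, and current runtimes load it without complaint:

```csharp
using System;

// A generic interface whose generic method constrains its own type parameter
// against the interface's type parameter; implementing this pattern is what
// blew up with a TypeLoadException on the runtime discussed here.
public interface IDuplicatable<T>
{
	TTarget Duplicate<TTarget>() where TTarget : T;
}

public class Document : IDuplicatable<Document>
{
	// The constraint must be restated to match the interface after substitution.
	public TTarget Duplicate<TTarget>() where TTarget : Document
	{
		return (TTarget)Activator.CreateInstance(typeof(TTarget));
	}
}
```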

The really annoying part is that they were acknowledged and verified as bugs by Microsoft, and then closed By Design. No communication whatsoever about the reasoning, just the customary "By Design" response that I have seen all too often in the past.

I personally had to track down the root cause when it was first submitted to me as a Rhino Mocks bug. If this is a By Design bug, then either the design is broken, or we deserve a lot more information about it, because any way you turn this, it should be legal to do.

As you can probably tell, I am very angry about this, not least because I spent many hours with it before admitting that it was a runtime bug that I had no way around. I am sure that some of the bugs on Connect are resolved to their reporters' satisfaction; I have just yet to see such a thing.

Fluent Refactoring

I have just made the following refactoring, starting from here:

Owner owner = GetAccountOwner();
Lookup to = new Lookup(EntityName.queue, Settings.Default.ControlQueueID);
Lookup from = new Lookup(EntityName.systemuser, owner.Value);
Guid regarding = PostState.new_parentpolicy.Value;
Picklist priority = PickLists.EmailPriority.Type.High;
SendMail(from, to, title, body, owner, regarding, priority);

And moved to here:

Owner owner = GetAccountOwner();
Email
	.Owner(GetAccountOwner())
	.From(EntityName.systemuser, owner.Value)
	.To(EntityName.queue, Settings.Default.ControlQueueID)
	.Regarding(PostState.new_parentpolicy.Value)
	.Priority(PickLists.EmailPriority.Type.High)
	.Title(title)
	.Body(body)
.Send();

Not much in terms of lines of code, but my sense of aesthetics is pleased. And it ensures that we won't have this:


SendMail( 
	new Lookup(EntityName.systemuser, GetAccountOwner().Value),
	new Lookup(EntityName.queue, Settings.Default.ControlQueueID), 
	title, 
	body, 
	GetAccountOwner(), 
	PostState.new_parentpolicy.Value, 
	PickLists.EmailPriority.Type.High);
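The Email builder itself isn't shown; stripped of the CRM types, the chaining boils down to each step returning the builder and a terminal Send(). A toy version, where the class name, methods, and backing storage are all assumptions:

```csharp
using System;
using System.Collections.Generic;

// Toy fluent builder: each step records a value and returns the builder,
// so the calls chain; Send() is the terminal step.
public class EmailBuilder
{
	private readonly Dictionary<string, object> values = new Dictionary<string, object>();

	public EmailBuilder From(string entity, Guid id) { values["from"] = entity; return this; }
	public EmailBuilder To(string entity, Guid id) { values["to"] = entity; return this; }
	public EmailBuilder Title(string title) { values["title"] = title; return this; }
	public EmailBuilder Body(string body) { values["body"] = body; return this; }

	public IDictionary<string, object> Send()
	{
		// The real implementation would build the CRM Lookups and call SendMail here.
		return values;
	}
}
```

The payoff of this design is that each value is named at the call site, which is exactly what prevents the positional-argument soup in the SendMail call above.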

Anti Corruption Layers: Striving for FizzBuzz level

I think I have mentioned that I don't really like the Microsoft CRM development options. Considering the typical quality of the code I see online when I search for samples, I certainly see the CRM as a corrupting influence.

That is why I pulled out the big guns and built a whole new layer on top of it. I assume that you are already aware of my... reservations about leaky abstractions, and considering my relative lack of expertise on the CRM itself, I don't think it would have been wise to diverge too far from the model that the CRM has, only to change the way we are handling it.

As a result, we have this piece of code, which is how I handle tasks that are usually handled by callouts. The major advantage is the ability to develop on the local machine without being tied to the server, plus a lot of smarts with regard to banishing XML parsing from the core business logic. The end result, as far as I am concerned, is that I should be able to write business logic using FizzBuzz-level code, if that. It is a bit mind numbing, but that is what the client wants: not to struggle mightily with the underlying platform.

[Handles(CrmOperations.PreCreate)]
public class OnCreateFriend_SetFullName : NorthwindCommand<new_friend>
{
	public override void Execute()
	{
		if(string.IsNullOrEmpty(PreState.new_name)==false)
			return;
		PreState.new_name = PreState.new_firstname + " " + PreState.new_lastname;
	}
}
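The plumbing that turns [Handles(...)] into an executed command isn't shown in the post; here is a self-contained sketch of what the dispatch side could look like, with every type name beyond the snippet assumed:

```csharp
using System;
using System.Linq;

public enum CrmOperations { PreCreate, PostCreate, PreUpdate }

// Assumed shape of the attribute in the snippet above.
[AttributeUsage(AttributeTargets.Class)]
public class HandlesAttribute : Attribute
{
	public CrmOperations Operation { get; private set; }
	public HandlesAttribute(CrmOperations operation) { Operation = operation; }
}

public abstract class CrmCommand
{
	public abstract void Execute();
}

[Handles(CrmOperations.PreCreate)]
public class SetFullNameCommand : CrmCommand
{
	public static bool Ran;
	public override void Execute() { Ran = true; }
}

public static class CommandDispatcher
{
	// Find every command marked as handling the operation and execute it.
	public static int Dispatch(CrmOperations operation)
	{
		var matches = typeof(CommandDispatcher).Assembly.GetTypes()
			.Where(t => !t.IsAbstract && typeof(CrmCommand).IsAssignableFrom(t))
			.Where(t => t.GetCustomAttributes(typeof(HandlesAttribute), false)
				.Cast<HandlesAttribute>()
				.Any(a => a.Operation == operation));
		int count = 0;
		foreach (Type commandType in matches)
		{
			((CrmCommand)Activator.CreateInstance(commandType)).Execute();
			count++;
		}
		return count;
	}
}
```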

And to forestall questions: the basis for this is not open source, although we may make a product out of it at some point. (Drop me a line if you are interested.)

The Right Metaphor: Software as a garden, not a building

Sometimes it is a small change in perspective that makes a lot of the difference. Chris Holmes just made one such observation that I am going to use in the future.

He argues that we should think about software as a garden that requires nurturing rather than a building which requires maintenance. The big thing here is that I can already think of some parallels that I can draw (would you really want to decide up front how the garden should look, and put concrete lines all over the place?) that are more useful than the usual building metaphor.

Putting things in perspective

Josh Robb is talking about the sense of accomplishment of doing anything non trivial in WebForms, and concludes with:

When this happens I experience this strange sense of achievement and even pride. The feeling that I have really done some “hardcore” programming today. Then I remember that I just managed to inject a string into the middle of another string and write the result to a stream.

:-)

Pushing the limits

Okay, I just finished writing this piece of code, and I am not quite sure how to treat it. It is either elegantly concise or horribly opaque.

(screenshot: the code in question)

I suspect both.

Somehow, you can probably see that I have a C++ background, and even though the CLR doesn't give me the entire thing, you can still push it fairly far.

Of course, it still makes my head feel like this sometimes:

(image)