Ayende @ Rahien

It's a girl

How to (not) develop big applications...

Check this out. This is a short explanation of how Microsoft manages to produce Windows.

After reading this:

  • four to six months to get the dependencies that you need for a feature?!
  • the same length of time until other people get to use your code and you hear some feedback?!
  • 64 people who have a say in the freaking Off menu?!
  • a year to develop a menu with seven options?!

I literally can't grasp how they manage to get anything done. This puts Big Bang integration to shame; it is a Continuous Big Bang, but six months too late. One of the immediate implications is that until very late into the product, there is no authoritative source for Windows. No wonder they had to do a recycle two years ago; they were trying to do that and build WinFX on top of the (moving) .NET 2.0 / 3.0. Exactly how many pieces do they think they can juggle at one time?

And why on earth are there 64 people with a say on a single feature? A feature should have at most two owners: the business-side owner and the tech owner (usually the tech lead / developer). Everyone else is a dependency, either you on them or they on you; they don't get to make decisions. At one point when I was in the army I had 9 people who were my direct commanders, and at one time I literally had 28 conflicting orders. I went home and let them deal with it (a very good way to deal with annoying people, by the way).

There are few (if any) software projects on the level of Windows, but I don't think that it is possible to develop software in this manner. There are far too many people in the way for it to work. Also, this information made Microsoft a much less attractive place to work at. I don't like having no-code / all-meetings days, and there are very few things that annoy me more than people telling me things that they already did.

No wonder they are bleeding good people. Those people are going to find places where they can actually work.


Vista: What Is The Killer App

So, I installed Vista, wasn't very happy with it, and went back to XP. This raises the question: what is the killer application for Vista? Even in my brief use of Vista, I could see a lot of stuff that I liked. But none of it was good at the level of "Oh, I must have this!"

Just repaving a system is something that takes me days/weeks to recover from. Moving from XP/2003 to Vista is something that is going to take even longer. There should be something that would make me (and other users) want to move. For Windows 3.0, that was Excel / Word. For Windows XP, it was a very long process of just getting it out of the box (I can't recall any reason to upgrade from 2000, and indeed, I didn't for a long time).

According to several people who have already jumped on the Vista wagon, the improvements to the OS are many incremental ones, not several big ones. This is a Good Thing, except that it takes a long while to get used to those improvements, and then you get annoyed when they are not there.

Is there a killer application for Vista?


Vista.Install.RollBack()

My Vista Install Experience:

  • I couldn't upgrade the machine, got some error about file corruption in:
  • I burned to DVD and installed from scratch, and then it worked.
  • Attempting to install a VPN client (CheckPoint, for what it's worth) caused a blue screen and resulted in an inoperable system.
  • I repaved the machine again with Vista, this time not trying to install the VPN.

My Vista User Experience:

  • It is pretty, and it has a lot of nice effects.
  • Just about every action that I took at the start of the system caused a security dialog. This security dialog is accompanied by a screen flicker, which looks exactly like the start of a blue screen. I imagine that after stabilizing the system, I won't run into those so often, but it is still highly annoying.
  • IE7 is not Firefox - it looks a lot like it, but it behaves differently; very annoying.
  • Microsoft has still not learned about Fitts' Law as it applies to the taskbar, or they never tried to use Vista with the taskbar set to more than one row.
  • There are a lot of UI features that make me say "Wow, this is nice", and some that have already made me say "Wow, this is useful".
  • Visual Studio / Sql Server / MSDN - Refused to install.
  • Couldn't get the screen (LG L1800P) to portrait mode.
  • I spent most of the day reading Agile Development With Rails, since Rails does work on Vista.
  • I kept waiting for it to crash, since applications did so with regular consistency. I never actually installed anything on it, just some Portable Applications, and that was it.

I am writing this on my "old" XP machine, where I can use VPN, use the screen in portrait mode, and actually do some useful work.

I might try it again when there is a consumer release, and the drivers/applications are more mature (or even exist). But for now, I don't see the benefit.


On Vista

That took quite a while. At first I got the 0x80070241 error midway through the install, which seems to only happen on upgrades. I finally just installed from scratch, which is very annoying, but at least it worked.

As a matter of fact, I installed twice. The first time, I tried to install CheckPoint SecureClient, and all hell broke loose. Blue screens all over the place; it got to the point where I couldn't even boot. I finally decided to scrap that install and install again from scratch.

The annoying security dialogs are indeed annoying. What is more, the second before they appear, the entire screen blinks; I assume that this is because of the new desktop being created. The problem with that is that it looks exactly like it is about to blue screen.

So far, I can't say that I am particularly impressed with anything there. The new webby Look & Feel drives me nuts because I don't know if I need to double-click or single-click, for instance.

It is prettier, I will give it that.

The problem is that until I get the VPN issue straightened out, I literally can't do anything with the computer. No Visual Studio, no Office, nothing.

In the meantime, I think that I will purp


Now THAT is integration

Vista doesn't like PowerShell, so I removed it. This is what I got midway through it.

[screenshot of the error dialog]

I must say that I was surprised to learn that the XML Parser is based on PowerShell :-)


Vista.TryInstall();

Wish me luck; I am going to try upgrading my machine from XP to Vista. I don't want to rebuild my environment from scratch (several days at least), so I am upgrading. I have no idea if I will end up with a usable machine.

The compatibility tool says I am fine, but I guess that I'll have to see...

Presenting Trac UI

So, after praising Trac so much, I decided that I need another thing from it, and that was a Windows client. Specifically, what I wanted was for QA to be able to enter bug reports that include screenshots without a lot of hassle. There is nothing that explains most bugs better than a screenshot with some markings.

The result is this:

[screenshots of the Trac UI client]

The easiest part of this application was capturing the screen, actually. The hardest was interfacing with Trac. XML-RPC is not as easy as it could be when you are used to WSDL-generated code, I am afraid.
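For the curious, talking to Trac over XML-RPC from .NET looks roughly like this. This is a sketch using the xml-rpc.net library; the method name comes from the Trac XmlRpcPlugin, but the proxy interface, URL and credentials here are my own illustration, not the actual Trac UI code:

using System;
using System.Net;
using CookComputing.XmlRpc;

// Hypothetical proxy interface; xml-rpc.net generates the implementation.
[XmlRpcUrl("http://server/trac/login/xmlrpc")]
public interface ITracTicket : IXmlRpcProxy
{
    // Maps to the Trac XmlRpcPlugin's ticket.create method.
    [XmlRpcMethod("ticket.create")]
    int Create(string summary, string description);
}

public class TracClientExample
{
    public static void Main()
    {
        ITracTicket trac = XmlRpcProxyGen.Create<ITracTicket>();
        trac.Credentials = new NetworkCredential("qa-user", "password");
        int id = trac.Create("Crash on save", "Steps to reproduce: see attached screenshot");
        Console.WriteLine("Created ticket #{0}", id);
    }
}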

Note: This is not public yet. The code is in the Rhino Tools repository, but I want to test it in more scenarios, specifically ones that involve Windows authentication.


I love Trac

After searching high & low for a bug tracker to use, I finally settled on Trac, but only recently have I had the time to really take a deep look at it. The #1 issue with Trac is that it is hard to set up for the first time. I am used to web applications that are either drop-in (ScrewTurn Wiki) or drop in, mess with the config, done (just about anything else). Trac has a very different approach, in which most of the management of the project is done via a command line tool. It took me a while to grasp that, and then it was a lot easier. (Yes, I know about web admin.)
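To give a feel for it, project management happens through the trac-admin tool, along these lines (from memory, so treat the exact syntax and paths as approximate):

trac-admin /var/trac/myproject initenv
trac-admin /var/trac/myproject permission add john TICKET_ADMIN
trac-admin /var/trac/myproject component add "Web UI" john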

The #2 issue with Trac is the version number; it is currently 0.10.2, for a product that is feature-rich and stable enough to be called 1.0. I am used to OSS projects having ridiculously low version numbers, so it is not surprising, but I do believe that this would be a turnoff for many people. I believe that the low version number is because they are making breaking changes to the code base fairly often.

So far I have been very impressed with the amount of stuff I can do with it. It is highly customizable via configuration, and there are all sorts of really interesting plugins for it. Gantt charts and export-as-PDF are the ones that are really cool. But there are others that seem to fit just about any combination of features that I can think of.


Mem Cached Information Util

I have been doing a lot of work with memcached lately, and I have been frustrated with the lack of information that I get from it as I try to use it. The main problem is not that the information isn't available; it is that telnetting to a service and querying its status using text commands (and then trying to turn bytes into MB in my head) feels a bit primitive.

This utility simply queries the memcached server every 5 seconds and displays a more friendly status. You can configure the server via the app.config. Needless to say, this is a one-off application, so it isn't meant to be very robust or maintainable (not that it does that much).
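The core of it is not much more than this; a sketch of the idea (not the actual utility's code), assuming the default memcached port:

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

public class MemCachedStatsSketch
{
    public static void Main()
    {
        while (true)
        {
            using (TcpClient client = new TcpClient("localhost", 11211))
            {
                StreamWriter writer = new StreamWriter(client.GetStream());
                StreamReader reader = new StreamReader(client.GetStream());
                writer.Write("stats\r\n"); // memcached's text protocol
                writer.Flush();
                string line;
                while ((line = reader.ReadLine()) != null && line != "END")
                {
                    // Each line looks like: STAT bytes 1048576
                    string[] parts = line.Split(' ');
                    if (parts.Length == 3 && (parts[1] == "bytes" || parts[1] == "limit_maxbytes"))
                        Console.WriteLine("{0}: {1:F2} MB", parts[1],
                                          long.Parse(parts[2]) / (1024.0 * 1024.0));
                    else
                        Console.WriteLine(line);
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}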

You can get it here. And the source (for the curious) is here.

Have fun...


NHibernate Query Analyzer 1.2 Beta: Now With Active Record Support

There is a new build of NQA out; this time it works against (and only against, sorry) NHibernate 1.2 Beta 2.

This time I also added support for Active Record. All you need to do is add the Active Record assembly to your project, add the app.config, and you are set. The only requirement is that the Active Record configuration section be named "activerecord".
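For reference, the app.config would look something like this (a sketch; the dialect and connection details are placeholders you would adjust for your own database):

<configuration>
    <configSections>
        <section name="activerecord"
                 type="Castle.ActiveRecord.Framework.Config.ActiveRecordSectionHandler, Castle.ActiveRecord" />
    </configSections>
    <activerecord>
        <config>
            <add key="hibernate.dialect" value="NHibernate.Dialect.MsSql2005Dialect" />
            <add key="hibernate.connection.provider" value="NHibernate.Connection.DriverConnectionProvider" />
            <add key="hibernate.connection.driver_class" value="NHibernate.Driver.SqlClientDriver" />
            <add key="hibernate.connection.connection_string" value="Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI" />
        </config>
    </activerecord>
</configuration>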

Now you should be able to execute queries against your objects directly. This release is marked as beta, since I have not yet been able to give it the testing that it needs. Let me know how it goes.

You can get the bits here.

Speed vs. Maintainability

I recently had several discussions about the usage of Castle / NHibernate in projects. The argument almost always revolves around the following points, which both sides agree on:

  • It can take time to start developing with Castle/NHibernate. To learn the libraries, to learn the things you should not do, etc.
  • After you learn to use the libraries, the cost of change goes down, usually radically.
  • It takes some time for a new developer to be productive*.

What do you think of this issue?

* I had three new developers trickle into my team and start working on an application that I wrote using NHibernate. I won't say that they now understand the full application stack, but they were productive after about a week, and most of that time was spent understanding the domain, not NHibernate.

Complex Queries With Active Record

There are some things that you simply can't do effectively with an abstraction. Take hierarchical queries as a good example. They are very hard in SQL; there is no good way to solve the issue in a portable way. Therefore, each database has its own method of handling it. In SQL 2000, you would build a table-valued function that you would query; in Oracle, you use CONNECT BY; and in SQL 2005 and DB2 (that I know of) you can use Common Table Expressions.

Out of the box, NHibernate can't handle this, but it exposes the hooks to allow it. A classic example is bringing all the messages in a thread in a single query. This is an issue because if you try to handle it by default, you'll get a query for each level in the hierarchy. Here is a simple message class, which has a parent message.

[screenshot of the Message class]
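Since the screenshot is missing, here is a sketch of what such a class might look like with Active Record; the table and column names are inferred from the query below, everything else is illustrative:

using Castle.ActiveRecord;

[ActiveRecord("Messages")]
public class Message : ActiveRecordBase<Message>
{
    private int id;
    private Message parent;
    private User author;

    [PrimaryKey(PrimaryKeyType.Native, "id")]
    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    [BelongsTo("parent")]
    public Message Parent
    {
        get { return parent; }
        set { parent = value; }
    }

    [BelongsTo("Author")]
    public User Author
    {
        get { return author; }
        set { author = value; }
    }
}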

And here is how we get all the messages in the hierarchy, for the message with id 43:

SimpleQuery<Message> msgs = new SimpleQuery<Message>(QueryLanguage.Sql, @"
WITH AllMsgs ( id )
     AS ( SELECT messages.id FROM Messages
          WHERE messages.id = ?
          UNION ALL
          SELECT messages.id FROM Messages
          INNER JOIN AllMsgs parent
              ON messages.parent = parent.id )
SELECT {msg.*}
FROM   Messages msg
WHERE  msg.id IN ( SELECT id FROM AllMsgs )
", 43);

msgs.AddSqlReturnDefinition(typeof(Message), "msg");
Message[] messages = msgs.Execute();

The SQL query is standard SQL for SQL Server 2005, with two slight modifications. The ? marks a positional parameter, in this case the 43 that we pass to the query. And {msg.*}, which NHibernate will expand for you into the appropriate columns. It is important to notice that we need to tell the query about the mapping between the Message type and the "msg" alias.

We brought the whole message hierarchy to the application in one shot, instead of N+1 queries. This is really good if you are interested only in the properties of the messages. A slight complication arises if you want to handle the author of each message. Now we need to bring not only the messages, but their authors, preferably in a single query. Let us see how we tackle this.

We will want to pull everything in one go, which means that we need to join the Messages and Users tables. This, in turn, means that we will get a list of tuples back. Here is the query itself:

SimpleQuery<object[]> msgs = new SimpleQuery<object[]>(typeof(Message), QueryLanguage.Sql, @"
WITH AllMsgs ( id )
     AS ( SELECT messages.id FROM Messages
          WHERE messages.id = ?
          UNION ALL
          SELECT messages.id FROM Messages
          INNER JOIN AllMsgs parent
              ON messages.parent = parent.id )
SELECT {msg.*}, {u.*}
FROM   Messages msg JOIN Users u ON msg.Author = u.id
WHERE  msg.id IN ( SELECT id FROM AllMsgs )
", 43);

msgs.AddSqlReturnDefinition(typeof(Message), "msg");
msgs.AddSqlReturnDefinition(typeof(User), "u");

object[][] messages_and_users = msgs.Execute();

Here is how you use this now:

foreach (object[] message_and_user in messages_and_users)
{
    Message message = (Message) message_and_user[0];
    Console.WriteLine("{0} - {1}", message.Author.Name, message.Id);
}

Notice that even though we brought the user from the database, we don't access it from the tuple; we access it normally, from the message object. NHibernate is smart enough to figure out that it has already loaded it, and can use it directly.

A note: NHibernate itself can use the mapping file to hide all of this from you, which you can later modify without touching the code (or even recompiling). Active Record has no such facility that I know of, though.

This post is dedicated to the localhost blogger.

Israeli Blogger Dinner

So, two days ago was the Israeli blogger dinner, and it was a blast. 16 bloggers and a single localhost blogger showed up, which was more than I thought there were, as a matter of fact.

Highlights of the evening:

  • Getting to tell jokes about the value of Heaps Of Meat vs. Stacks of Meat.
  • Waiting for the meat.
  • Trying to participate in three very interesting conversations at once.
  • Hearing quite a lot about the community in Israel and abroad.

I am definitely going to be at the next one, and it should definitely be a regular occurrence. You can get some pictures here (look for the goofy looking fellow, that is me).

The dinner itself was great, but the end was soured by the place - Papagaio Azrieli - trying to scam us. I don't have anything good to say about the service, the cost, or the quality of the food, but the last part was over the top (charging for drinks that we thought were supposed to be part of the meal).

Next time, we are going to do it someplace else (still a meat joint, fear not), and it is definitely going to need bigger tables; I think that I missed about 50% of the interesting talk because I was too far away.

Cheers to Omer for organizing it.



Book Review: In Search Of Stupidity, 2nd Edition
Note: I got a free copy of the book.


In Search of Stupidity: Over Twenty Years of High Tech Marketing Disasters, Second Edition

I don't generally read the 2nd edition of books that I have already read (I have a pretty good memory and a very low tolerance for boredom), and I read (and enjoyed) the first edition of this book very much.

I was very pleased to see that there is a lot of new stuff in the book, and that the stuff that I had read already is still extremely entertaining. I just went back and read my first review of this book, and I still stand behind every word there. It is a hilarious book, and probably something that should be mandatory reading for a lot of people in the tech world.

A lot of the stuff that I read is stuff that I can personally relate to. (Throwing out a working printer database and starting from scratch sounds a lot like the running after the latest tech that I see at a lot of customers.)

The comment I made previously about errors the author has made still stands, and it still bothers me. And there are several instances where I felt that the author ignores other factors that are relevant to the issue at hand. A good example of this is the positioning conflict between Windows 9x and Windows NT. There are a lot of technical / business reasons that affected that decision, which the author simply ignores. It was not malpractice that brought it about; it was very valid concerns.

The new stuff since the first edition seems to be case analysis that explains what he thinks would have been good solutions to the stupid actions taken, plus some new (and recent) perspectives. His treatment of Google's issues with "Do No Evil (for anyone with less than a billion)" was really good, for instance. And I fully agree about the stupidity of having 6 versions of Windows Vista.

You really should read this book, as an amusing pastime at the very least. It is comedy, history, and warning combined into one book. At the very least, you can now (hopefully) say: "This didn't work for XYZ when they tried it...", etc.

Highly recommended reading.


NHibernate 1.2 Beta 2 - Performance

I was asked to help an application that was suffering from sub-optimal performance with NHibernate today. The first thing that I did was upgrade the application to NHibernate 1.2 (which took some time; damn non-virtual methods).

After I was done, I ran the application, and was basically done. Just by upgrading to 1.2, we saw a major performance improvement. Now, to be fair, this is not just because NHibernate 1.2 is so much better (although it is :-) ). It is that it has different defaults (specifically about lazy loading) that make a huge difference in performance.
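To illustrate the change in defaults: in 1.2, classes are lazy by default, which is also exactly why those non-virtual methods were a problem (lazy loading proxies need virtual members). A minimal mapping sketch, with placeholder class and table names:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
    <!-- In 1.2, lazy="true" is now the default for classes, so this class
         will be proxied (hence the requirement for virtual members). -->
    <class name="Customer" table="Customers">
        <id name="Id" column="id">
            <generator class="native"/>
        </id>
        <property name="Name"/>
    </class>

    <!-- To get the old 1.0.x behavior back, you must opt out explicitly: -->
    <!-- <class name="Customer" table="Customers" lazy="false"> ... </class> -->
</hibernate-mapping>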

The incomprehensible NHibernate error contest

Did you get a hard to understand error from NHibernate and didn't know quite what to do?

I am planning on going over NHibernate's error messages to see if I can improve the amount of information that they give you, but I would like to hear from you what the most annoying error messages are, so I can attempt to improve them (or at least explain why they are the way they are).

Please note that I am only talking about errors that are hard to understand even when you view the entire exception stack. Not reading the inner exception is no excuse.


Geek Motivation

There is a lot of stuff that I have been intending to do for quite some time. This mostly includes various releases for updates and fixes in the OSS projects that I work on.

I'm going to work on that today. But I have said that before, about five or six times. This time it is going to be different. This time, I have a prize at the end: Vista RTM, just waiting to be installed. When I finish everything, it is time to hit the big blue button and see what happens to my PC...

Introducing RollingSqliteAppender

I have explained before what constraints I have when working with logging. I got several good comments about it, and several hit the nail directly on the head.

Just to recap, here is the list of my requirements:

  • The output from the logs should be deliverable by email by a layman. This basically means a file.
  • The output should allow me to slice / dice the data. The best way is if I could utilize existing SQL skills to do this.
  • The system may run unattended for weeks at a time, so the log file can't reach huge proportions.
  • I'm not really interested in historical data - something that happened a long time ago is not important.
  • Plug into my current logging infrastructure (log4net).
  • Nice to have - Convention over configuration

After giving this much thought, I decided to implement a RollingSqliteAppender. This is based on the existing ADO.Net appender, but is focused on utilizing Sqlite only. Sqlite needs only a single file for its database, so it answers the first requirement; it also has a runtime (for .Net) of less than 500Kb, so size / installation is not an issue. Despite writing to a file, it is a fully fledged database, meaning that I get to do all sorts of interesting things with it.

The third / fourth requirements are what prompted the creation of this appender, rather than using the existing ADO.Net one. I took a look at RollingFileAppender and implemented similar functionality for my appender as well, although I am checking rows, rather than size/date, since that is a bit easier to check. What it basically does is save all the logs to a Sqlite file until the row limit is reached, then rename the file and create a new one (see the sketch below). Another reason to create my own appender is that when creating a new file, the database structure is created for you automatically. From the name, you can figure out that this is a log4net appender, so integrating it is not an issue.
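The rolling logic itself boils down to something like this; an illustrative sketch, not the actual Rhino.Commons code:

using System;
using System.Data;
using System.IO;

// Illustrative sketch of the rolling behavior described above.
public class RollingSketch
{
    private readonly string fileName;
    private readonly int maxNumberOfRows;
    private readonly int maxNumberOfBackUps;

    public RollingSketch(string fileName, int maxNumberOfRows, int maxNumberOfBackUps)
    {
        this.fileName = fileName;
        this.maxNumberOfRows = maxNumberOfRows;
        this.maxNumberOfBackUps = maxNumberOfBackUps;
    }

    // Called after each buffer flush; the count is only approximate,
    // since rows arrive in buffered batches.
    public void RollIfNeeded(IDbConnection connection)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM logs";
            if (Convert.ToInt64(command.ExecuteScalar()) < maxNumberOfRows)
                return;
        }
        connection.Close();

        // Shift the backups; per the appender's convention, lower numbers are newer.
        string oldest = BackupName(maxNumberOfBackUps);
        if (File.Exists(oldest))
            File.Delete(oldest);
        for (int i = maxNumberOfBackUps - 1; i >= 1; i--)
        {
            if (File.Exists(BackupName(i)))
                File.Move(BackupName(i), BackupName(i + 1));
        }
        File.Move(fileName, BackupName(1));
        // A real implementation would now create a fresh database file
        // and run CreateScript against it.
    }

    private string BackupName(int index)
    {
        return index + "." + fileName;
    }
}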

The last part is about minimum configuration. I tend to use a pretty standard logging table fairly often, that is [date, thread, level, logger, message, exception], so the appender is set up to support this (the 90% scenario, I would guess) out of the box. Here is how you configure the logger:

<log4net>
    <appender name="rolling-sqlite-appender"
              type="Rhino.Commons.Logging.RollingSqlliteAppender, Rhino.Commons"/>
    <root>
        <appender-ref ref="rolling-sqlite-appender"/>
    </root>
</log4net>

This is it.

Now, it does provide a set of options that you can override (because I need that functionality fairly often).

Here is a list of the ones unique to the RollingSqliteAppender:

  • CreateScript - the script to generate the table(s) when creating a new database file. Default is to create the standard table (date, thread, level, logger, message, exception) named "logs".
  • TableName - must match to the name of the table in CreateScript. Default: "logs"
  • FileNameFormat - the format for the file name. Default is {0}.log4net, where {0} is the current process name (which yields correct results in most cases).
  • Directory - where to save the files. Default: Current Directory
  • MaxNumberOfRows - number of rows in the database that triggers rolling the file. Note that this is not a hard number, it depends on your BufferSize settings and flush options. It is close enough for my needs. Default: 50,000
  • MaxNumberOfBackUps - number of backup files to save. Backup files are given a numeric prefix where lower is newer. Default: 3
Note: The default buffer size is set to 512; you may want to change that. This is probably the most common reason why people think that log4net is not working.
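If you want entries to show up immediately (at some performance cost), you can lower it with the standard log4net parameter:

<appender name="rolling-sqlite-appender"
          type="Rhino.Commons.Logging.RollingSqlliteAppender, Rhino.Commons">
    <bufferSize value="1" />
</appender>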

In my tests, each file was about 5Mb - 10Mb in size, and they compressed nicely to a more emailable size.

By the way, I really like the log4net architecture.

The code is here.

Reading The Logs

For production, there is nothing that can replace well-thought-out logs. They are the #1 tool for understanding what is going on in a system. Often, they are the only real way to understand what is going on even when you are developing. This is often true in multi-threaded programs, where just debugging them is not very effective.

The problem with logging is that there is usually too much of it. Following the trail of logs can be a daunting task when your system produces 50 log messages per minute at idle, and can produce hundreds or thousands of messages per minute while working. Just playing with the levels is not enough to make sense of things; you often need to correlate several messages, often at various logging levels, from different loggers.

All of this points me to the not-so-surprising conclusion that logs are data, and that I should be able to handle them like I handle all other data sources (selecting, grouping, filtering, etc). Luckily for me, it is very easy to make log4net write to a database. I can't count how many times the ability to slice and dice the data has saved me. For instance, being able to find that between 10:05 - 10:15 the number of errors from the web service passed 50, which caused a trigger in another part of the system, which caused all deliveries to arrive an hour late. There is simply no way I could look at the messages for the whole day and find that out without this.
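To make "slice and dice" concrete, here is the sort of query I mean, sketched in T-SQL against the standard [date, thread, level, logger, message, exception] table; the logger filter is a placeholder name:

SELECT   DATEADD(minute, DATEDIFF(minute, 0, [date]) / 10 * 10, 0) AS [window],
         COUNT(*) AS errors
FROM     logs
WHERE    [level] = 'ERROR'
         AND logger LIKE '%WebService%' -- placeholder logger name
GROUP BY DATEADD(minute, DATEDIFF(minute, 0, [date]) / 10 * 10, 0)
HAVING   COUNT(*) > 50
ORDER BY [window]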

So, after extolling the value of databases, why do I bother to post this?

Well, there are two problems that I run into when logging to the database. The first is that if the database is down, you don't do any logging (and in log4net 1.2.9, if the database is down, you won't be doing any logging to the database afterward, even if it came back up). The second is that if the application is at a client's site and I get a call about an issue, there isn't much I can do about it without the logs.

The guy in charge of letting me know about problems can't read the issues by himself. Ideally, he could just email me the logs, and I would go over them and understand what the problem was. The problem is that getting an export from the DB requires a DBA to handle it, who is not always available. Writing to a file and getting that is possible, but I already explained how hard it is to get to the point from a file. Getting log4net to produce a file that can be imported into SQL is possible, but it probably isn't trivial.

Messy, messy situation. I became familiar with a lot of large-file viewers, and with keeping a lot of things in my head at the same time, while trying to figure out problems this way. I also learned to recognize when this is not a good idea, and to just drive over and take a look at the database directly.

I have a solution for this, which I will post shortly, but in the meantime, I am interested in what you think about this issue...


Active Metrics or Slapping the Sloppy Developers

Any performance advice you hear starts with "Measure". And indeed, I know of nothing worse than making "performance improvements" without numbers that you can check. But I am not going to talk about perf tuning in this post; I am going to talk about something a bit different: how to avoid the more major performance pitfalls, and in general, how to put (strict) guidelines in place for developers.

It is safe to assume that the #1 cause of performance issues in most applications is talking to the database. It is really easy (especially if you are using an abstraction) to do things that are atrociously bad for the database. But it is pretty easy to do a lot of damage even when you are just using the raw DB model.

The worst I have ever seen was in an Oracle-based application that was "hand optimized" for every single action. I am not sure what it was optimized for, because it was really hard to read and really bad from the database performance perspective (nested cursors on tables with many millions of rows, for instance, with no where clause and hidden assumptions about the ordering).

Other causes usually include a hotspot in an unexpected location, or simply a complex piece of work that takes time by its nature, but those aren't what I am talking about here.

The cause is usually oversight, something that is really easy to fix in place (or escalate early on) at the time you are writing it. Trying to fix the issue three months later, when you suddenly discover that loading a page results in unacceptable performance... well, that is a whole other story.

My method for this is very simple, actually: Measure and Fail. Set a limit on how many queries a page is allowed to perform to display its content. At the end of the request, check that it has not crossed that limit. If it has, fail. Throw an exception that will stop the developer from doing any more work until they fix the issue. I have shown how this can be done for NHibernate here, and you can see that it is really easy to check the number of queries per page using this method.
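A minimal sketch of what such a check might look like as an HttpModule; QueryCounter here is a hypothetical helper standing in for however you actually count queries (for example, an NHibernate interceptor feeding a per-request counter):

using System;
using System.Web;

// Thrown when a page crosses its query budget; the ASP.Net yellow screen
// then does the rest of the motivational work.
public class PerformanceViolationException : Exception
{
    public PerformanceViolationException(string message) : base(message) { }
}

// Hypothetical per-request counter; in practice it would be incremented
// on every SQL statement, e.g. from an NHibernate interceptor.
public static class QueryCounter
{
    public static int ForCurrentRequest()
    {
        object count = HttpContext.Current.Items["query.count"];
        return count == null ? 0 : (int) count;
    }
}

public class QueryLimitModule : IHttpModule
{
    private const int MaxQueriesPerRequest = 10;

    public void Init(HttpApplication application)
    {
        application.EndRequest += delegate
        {
            int count = QueryCounter.ForCurrentRequest();
            if (count > MaxQueriesPerRequest)
                throw new PerformanceViolationException(string.Format(
                    "Performance Violation - The page {0} executed {1} database queries, but only {2} are allowed.",
                    HttpContext.Current.Request.Path, count, MaxQueriesPerRequest));
        };
    }

    public void Dispose()
    {
    }
}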

This all ties back to the Early Feedback Principle. It is very easy to see that that call over there is hitting the database once per customer, instead of loading all the data at once, a few minutes after you wrote it. It is a lot harder to look at a perf chart and try to analyze why the contact information page is taking three minutes to load.

I would use this approach on the dev / test machines. This means that even if the developer didn't hit that limit during development, it is likely that QA would catch it and file a bug: "ContactInformation.aspx is throwing PerformanceViolationException when I do XYZ..."

A few notes about this approach:

  • When you fail, fail hard. This means throwing an exception, not showing something in the trace. The idea is that the developer gets that yellow screen telling them "fix me!".
  • When you fail, give exact details about what happened:
    Performance Violation - The page ContactInformation.aspx executed 56 database queries, but only 10 are allowed. Reduce the number of queries to the allowed limit, or talk to Oren about why this page should get a special extension.
  • Make sure that you have an easy way to shut it off (for production, for instance).
  • All the tests on the build server must have this setting on, so they would fail if this was violated.
  • Provide a simple way to temporarily disable it (to show functionality, to review all the steps of a problem, etc). A query string is usually enough: "hack=42"
  • You will also want a list of exclusions, most probably.
  • ASP.Net HttpModules are very well suited for this approach, by the way (as sketched above). And it is a really nice way to set up a set of these checks.

Off the top of my head, I can think of the following things that I would measure and fail on in this way:

  • Queries per page
  • Total page execution time - note that this requires that you take into account whether you are in the debugger or not.
  • Usage of session variables - which I tend to really dislike.
  • Total size of a page.
  • Total size of view state.
  • Validating XHTML / HTML output.

This is something that you would probably want to do from the start, but I have no compunction about enabling things like this mid-project. In my opinion, this is the simplest (and perhaps the only) way that you can maintain a set of standards in a team.

I have found that using this approach for cross-cutting changes is the best way of handling those issues. I can usually find 90% of the places to change, but the last 10% are elusive, and maybe a developer in the future will not remember that we need to do something like this because of this or that. Failing fast means that I get visible issues, not invisible ones.

A good example of that is security. I had to add column-level security to all the grids in the application, after starting out with page-level security. I solved this by adding the security logic to the grid itself, but the definition of the required operation for each column had to come from the column itself. This means that each and every (actionable) column in the application had to have an operation specified. I didn't get all of them when I implemented this approach. But when I try to load a grid with a column that doesn't have an operation, I get a very nice error message saying that I really should care about security, and what about putting an operation on the "Delete User" column?
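A sketch of the fail-fast part of that grid check; all type and member names here are illustrative, not the actual application's code:

using System;

// Illustrative stand-in for the real grid column type.
public class GridColumn
{
    public string HeaderText;
    public bool IsActionable;   // e.g. a "Delete User" button column
    public string Operation;    // the security operation the column requires
}

public static class ColumnSecurityValidator
{
    public static void Validate(GridColumn column)
    {
        if (column.IsActionable && string.IsNullOrEmpty(column.Operation))
            throw new InvalidOperationException(string.Format(
                "You really should care about security: column '{0}' performs an action " +
                "but has no security operation defined on it.",
                column.HeaderText));
    }
}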

This way, I ensure that either everything works like it should, or it doesn't work at all. There are no half-measures here. This happens while you code, so the feedback cycle is very short and the problem is easy to solve.

I am going to get sued for temporal harassment

Go read the title again, first. It is not what you think.

I mentioned before that I am working on a temporal system. Actually, it didn't start as a temporal system. It started as your everyday business system: fairly complex domain, very interesting problems. At the time, I had Evans' DDD heavy on my mind, and I had a really nice domain model that I could work with.

Just to make a point, the code I am going to show is a very simple piece of code, meant to sort a list of employees' contracts by name.

Using this domain model, I could sort the contracts list like this:

Algorithms.SortInPlace(templates, delegate(ContractTemplate x, ContractTemplate y)
{
    return x.Name.CompareTo(y.Name);
});

Very simple to grok, using Power Collections to make this even nicer.

Then the Change Request came*. All objects in the system are time-bound and may have multiple occurrences. What does this mean? It means that I can no longer ask a contract for any of its properties; I need to ask it for its properties at a certain point in time. That was a fairly big change, and I handled it using this approach. This meant that I had to go over all my code and change each property access to use the relevant date. Naturally, I mostly used DateTime.Today, so the above code turned into this:

Algorithms.SortInPlace(templates, delegate(ContractTemplate x, ContractTemplate y)
{
    return x.At(DateTime.Today).Name.CompareTo(y.At(DateTime.Today).Name);
});

But here is the kicker: in temporal systems, one of the key elements is the time at which you are looking at the system vs. the time of the objects. This means that I can very well be looking at an object that doesn't exist at this point in time.

To make this more obvious, consider the following case. I am currently looking at a list of employees as it was three years ago. Obviously, any new employees do not exist in that list, even though they are in the system. The way I implemented this is simply to throw an exception when you ask an object for its state at a date where it didn't exist.
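The guard itself is just a few lines; a sketch of the idea, where all the names are illustrative stand-ins for the real model:

using System;

// Illustrative sketch of the At() guard described above.
public class TimeRange
{
    public DateTime Start;
    public DateTime End;

    public bool Overlap(DateTime date)
    {
        return Start <= date && date < End;
    }
}

public class TemporalContractTemplate
{
    public TimeRange Validity = new TimeRange();

    // Returns this object's state as it was at the given date, or throws
    // if the object simply did not exist at that point in time.
    public TemporalContractTemplate At(DateTime date)
    {
        if (Validity.Overlap(date) == false)
            throw new InvalidOperationException(string.Format(
                "This object does not exist at {0:d}", date));
        return this; // a real model would return the version valid at 'date'
    }
}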

So, we are happily developing the system and trying to understand our code through 9 levels of different date intervals. But then we show the system to the customer, and the first thing that they do is create an object in the future and then try to view its state today. If you consider the case, you will see that of course it throws an exception. The problem is how to deal with it. Here is the current code to sort a list of contracts by their names:

Dictionary<ContractTemplate, Employee> employeeAndContracts = ...;
templates = Algorithms.Sort(employeeAndContracts.Keys, delegate(ContractTemplate x, ContractTemplate y)
{
    DateTime y_maybeValidDateToCheck = employeeAndContracts[y].Employment.Start;
    DateTime x_maybeValidDateToCheck = employeeAndContracts[x].Employment.Start;
    if (x.Validity.Overlap(x_maybeValidDateToCheck) == false)
    {
        if (y.Validity.Overlap(y_maybeValidDateToCheck) == false)
            return 0;
        else
            return -1;
    }
    string name_x = x.At(x_maybeValidDateToCheck).Name;
    if (y.Validity.Overlap(y_maybeValidDateToCheck) == false)
        return 1;
    string name_y = y.At(y_maybeValidDateToCheck).Name;
    return name_x.CompareTo(name_y);
});

Yes, I did refactor it out to a method, but the sheer complexity of doing a simple sort is annoying. You can imagine the complexity of working with the model on a day-to-day basis, and why I dislike System.DateTime so much...

* One important thing to mention: because of disclosure issues, I am not talking about the actual system, but a parallel one, which I can talk about safely. This is relevant here because the temporal decision is not an obvious one in this system, while employees' contracts are the classic domain for this type of system.