Ayende @ Rahien

It's a girl

Hug a developer

This is something that has been making the rounds lately. Just about any developer who has seen it has been greatly amused.

Check out the video.

This is so good, I intend to make this into a part of several of my presentations.

Does your application have a blog?

I was having dinner with Dru Sellers and Evan Hoff and Dru brought something up that really sparked my imagination. To put it in more concrete words, what Dru said started a train of thought that ended up with another mandatory requirement for any non trivial application.

Your application should have a blog.

Now, pay attention. I am not saying that the application team should have a blog; I am saying that the application itself should have one. What do I mean by that? As part of the deployment requirements for the application, we are going to set up a blog application that is an integrated component of the application.

Huh? I am building Order Management application, why the hell do I need to have a blog as part of the application? Yes, PR is important, and blogs can get good PR, but what are you talking about?

This is an internal blog, visible only to internal users, to which the application is going to post about interesting events that happen. For example, starting up would also cause the application to post to its blog, which can look like this:

image

This looks like log messages that were written by a PR guy. What is the whole point here? Isn't this just standard logging?

Not quite. This is an expansion of the idea of a system alert, where the system can proactively detect and warn about various conditions. The idea is anything but new; you are probably familiar with the term Operational Database. But this approach has one key factor that is different.

Social Engineering

Using a blog, and this style of writing, makes it extremely clear what should and should not go there as a message. You obviously are not going to want to treat this as a standard log, where you just dump stuff in. From the point of view of actually getting this through, it turns a task that is often very hard into a very simple "show by example".

From the point of view of the system as a whole, now business users have a way to watch what the system is doing, check on status updates, etc. More than that, you can now use this as a way to post reports (weekly summary, for example) and in general vastly increase the visibility of the system.

Using RSS allows syndication, which in turn also allows easy monitoring by an admin, without any real complexity getting in the way. For that matter, you can get the business users to subscribe to it with Outlook (if they don't already have a standard RSS reader) and get them on board as well.

Now, this is explicitly not a place where you want to put technical details. Those should be reserved for some other system; this is a high level overview of what the system is doing. Posts are written to be human readable and human sounding, to avoid boring the readers and to ensure that people actually use it.
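
To make this a bit more concrete, here is a minimal sketch of what posting such an entry could look like. The IApplicationBlog interface and its Post method are entirely hypothetical; the point is simply that the application calls something blog-shaped at interesting moments, and the blog engine takes care of persistence and the RSS feed:

// Hypothetical interface - any blog engine with an API (or a table plus an RSS endpoint) would do.
public interface IApplicationBlog
{
    void Post(string title, string body);
}

public class OrderManagementApplication
{
    private readonly IApplicationBlog blog;

    public OrderManagementApplication(IApplicationBlog blog)
    {
        this.blog = blog;
    }

    public void Start()
    {
        // ... the usual startup work ...

        // Human readable, PR style wording - not a log dump.
        blog.Post(
            "Good morning, I am up and running",
            "I started up at " + DateTime.Now + " and I am ready to process orders. " +
            "I will post the weekly summary on Friday, as usual.");
    }
}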

Thoughts?

Recursive Meta Programming

I am currently writing a DSL that is used to meta program another DSL that is used to do some action (it is actually turtles 7 layers deep, but we will skip that). It gets to be fairly interesting, although trying to draw it as a diagram is... a bit challenging.

Oh, and there are at least a few parts that rewrite themselves.

Ever tried to do incremental method munging? That is when you take code from several places and start applying logic to decide where to put it. It is only useful because of a lot of interesting constraints that we have to deal with in this project, and would probably be actively harmful in other scenarios. And that is only one of the techniques that I am using there.

But a damn elegant approach to solve a problem, wow!

Thinking about chapter 12 - DSL Patterns

I am giving a lot of thought to this chapter, because I want to be able to throw out as many best & worst practices as I can to the reader. Here is what I have right now:

  1. Auditable DSL - Dealing with the large scale - what the hell is going on?
  2. User extensible languages
  3. Multi lingual languages
  4. Multi file languages
  5. Data as a first class concept
  6. Code == Data == Code
  7. Strategies for editing DSL in production
  8. Code data mining
  9. DSL dialects

I am still looking for the tenth piece...

A legacy of conspiracy

Contrary to popular opinion, I have not been kidnapped, nor have I been hit on the head, nor have I started to seek that kind of job security.

I gave a talk about legacy code and refactoring, and I needed something concrete to talk about. Unfortunately, most legacy code is too intertwined to be extracted and discussed in isolation. So I set out to write my own.

To all the people who assume that I don't live in the real world, or that I don't deal with legacy systems... well, I think that this code shows that I do know what is going on out there.

And just to make it clear, no, I wouldn't write this type of code for any reason. Not for a spike or for a lark. But I think that this is a pretty good archeological fake, even if I say so myself. 

DSL Dialects

Let us take this fancy DSL:

image

And let us say that we want to give the user some sort of UI that shows how this DSL works. The implementation of this DSL isn't really friendly for the UI. It was built for execution, not for display.

So how are we going to solve the problem? There are a couple of ways of doing that, but the easiest solution that I know of is to create a new language implementation that is focused on making it easy to build the UI. A dialect can be either a different language (or version of the language) that maps to the same backend engine, or it can be a different engine that is mapped to the same language.

This is part of the reason that it is so important to create strict separation between the two.
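
As a rough illustration of the second kind of dialect (same language, different engine), here is a minimal sketch. The interface and class names are invented for this example; the point is that both engines consume the exact same script, one to execute it and one to build a model that is easy to render in the UI:

// Both dialects accept the same DSL script; only the backend engine differs.
public interface IDslEngine<TResult>
{
    TResult Process(string script);
}

// The execution dialect: compiles the script and runs it.
public class ExecutionEngine : IDslEngine<object>
{
    public object Process(string script)
    {
        // compile to delegates / IL and execute (details omitted)
        throw new NotImplementedException();
    }
}

// The UI dialect: parses the same script into something a view can bind to.
public class DisplayModelEngine : IDslEngine<DisplayModel>
{
    public DisplayModel Process(string script)
    {
        // walk the AST and build a list of human readable steps for the UI
        throw new NotImplementedException();
    }
}

public class DisplayModel
{
    public List<string> Steps = new List<string>();
}

Keeping the language definition separate from the engines is what makes it cheap to add the second (and third) dialect later.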

What I am working on...

I am just going to post that, and watch what happens. I will note that this is code that I just wrote, from scratch.

public class TaxCalculator
{
    private string conStr;
    private DataSet rates;

    public TaxCalculator(string conStr)
    {
        this.conStr = conStr;
        using (SqlConnection con = new SqlConnection(conStr))
        {
            con.Open();
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM tblTxRtes", con))
            {
                rates = new DataSet();
                new SqlDataAdapter(cmd).Fill(rates);
                Log.Write("Read " + rates.Tables[0].Rows.Count + " rates from database");
                if (rates.Tables[0].Rows.Count == 0)
                {
                    MailMessage msg = new MailMessage("important@legacy.org", "joe@legacy.com");
                    msg.Subject = "NO RATES IN DATABASE!!!!!";
                    msg.Priority = MailPriority.High;
                    new SmtpClient("mail.legacy.com", 9089).Send(msg);
                    Log.Write("No rates for taxes found in " + conStr);
                    throw new ApplicationException("No rates, Joe forgot to load the rates AGAIN!");
                }
            }
        }
    }

    public bool Process(XmlDocument transaction)
    {
        try
        {
            Hashtable tx2tot = new Hashtable();
            foreach (XmlNode o in transaction.FirstChild.ChildNodes)
            {
            restart:
                if (o.Attributes["type"].Value == "2")
                {
                    Log.Write("Type two transaction processing");
                    decimal total = decimal.Parse(o.Attributes["tot"].Value);
                    XmlAttribute attribute = transaction.CreateAttribute("tax");
                    decimal r = -1;
                    foreach (DataRow dataRow in rates.Tables[0].Rows)
                    {
                        if ((string)dataRow[2] == o.SelectSingleNode("//cust-details/state").Value)
                        {
                            r = decimal.Parse(dataRow[2].ToString());
                        }
                    }
                    Log.Write("Rate calculated and is: " + r);
                    o.Attributes.Append(attribute);
                    if (r == -1)
                    {
                        MailMessage msg = new MailMessage("important@legacy.org", "joe@legacy.com");
                        msg.Subject = "NO RATES FOR " + o.SelectSingleNode("//cust-details/state").Value + " TRANSACTION !!!!ABORTED!!!!";
                        msg.Priority = MailPriority.High;
                        new SmtpClient("mail.legacy.com", 9089).Send(msg);
                        Log.Write("No rate for transaction in tranasction state");
                        throw new ApplicationException("No rates, Joe forgot to load the rates AGAIN!");
                    }
                    tx2tot.Add(o.Attributes["id"], total * r);
                    attribute.Value = (total * r).ToString();
                }
                else if (o.Attributes["type"].Value == "1")
                {
                    //2006-05-02 just need to do the calc
                    decimal total = 0;
                    foreach (XmlNode i in o.ChildNodes)
                    {
                        total += ProductPriceByNode(i);
                    }
                    try
                    {
                        // 2007-02-19 not so simple, TX has different rule
                        if (o.SelectSingleNode("//cust-details/state").Value == "TX")
                        {
                            total *= (decimal)1.02;
                        }
                    }
                    catch (NullReferenceException)
                    {
                        XmlElement element = transaction.CreateElement("state");
                        element.Value = "NJ";
                        o.SelectSingleNode("//cust-details").AppendChild(element);
                    }
                    XmlAttribute attribute = transaction.CreateAttribute("tax");
                    decimal r = -1;
                    foreach (DataRow dataRow in rates.Tables[0].Rows)
                    {
                        if ((string)dataRow[2] == o.SelectSingleNode("//cust-details/state").Value)
                        {
                            r = decimal.Parse(dataRow[2].ToString());
                        }
                    }
                    if (r == -1)
                    {
                        MailMessage msg = new MailMessage("important@legacy.org", "joe@legacy.com");
                        msg.Subject = "NO RATES FOR " + o.SelectSingleNode("//cust-details/state").Value + " TRANSACTION !!!!ABORTED!!!!";
                        msg.Priority = MailPriority.High;
                        new SmtpClient("mail.legacy.com", 9089).Send(msg);
                        throw new ApplicationException("No rates, Joe forgot to load the rates AGAIN!");
                    }
                    attribute.Value = (total * r).ToString();
                    tx2tot.Add(o.Attributes["id"], total * r);
                    o.Attributes.Append(attribute);
                }
                else if (o.Attributes["type"].Value == "@")
                {
                    o.Attributes["type"].Value = "2";
                    goto restart;
                    // 2007-04-30 some bastard from northwind made a mistake and they have 3 months release cycle, so we have to
                    // fix this because they won't until sep-07
                }
                else
                {
                    throw new Exception("UNKNOWN TX TYPE");
                }
            }
            SqlConnection con2 = new SqlConnection(conStr);
            SqlCommand cmd2 = new SqlCommand();
            cmd2.Connection = con2;
            con2.Open();
            foreach (DictionaryEntry d in tx2tot)
            {
                cmd2.CommandText = "usp_TrackTxNew";
                cmd2.Parameters.Add("cid", transaction.SelectSingleNode("//cust-details/@id").Value);
                cmd2.Parameters.Add("tx", d.Key);
                cmd2.Parameters.Add("tot", d.Value);
                cmd2.ExecuteNonQuery();
            }
            con2.Close();
        }
        catch (Exception e)
        {
            if (e.Message == "UNKNOWN TX TYPE")
            {
                return false;
            }
            throw e;
        }
        return true;
    }

    private decimal ProductPriceByNode(XmlNode item)
    {
        using (SqlConnection con = new SqlConnection(conStr))
        {
            con.Open();
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM tblProducts WHERE pid=" + item.Attributes["id"], con))
            {
                DataSet set = new DataSet();
                new SqlDataAdapter(cmd).Fill(set);
                return (decimal)set.Tables[0].Rows[0][4];

            }
        }
    }
}

Legacy Driven Development

image

Here is an interesting problem that I ran into. I needed to produce an XML document for an external system to consume. This is a fairly complex document format, and there are a lot of scenarios to support. I began to test drive the creation of the XML document, but it turned out that I kept having to make changes as I ran into more scenarios that invalidated previous assumptions I had made.

Now, we are talking about a very short iteration cycle; I might write a test to validate an assumption (attempting to put two items in the same container should throw) and an hour later realize that it is legal, if strange, behavior. The tests became a pain point; I had to keep updating things because the invariants they were based upon were wrong.

At that point, I decided that TDD was exactly the wrong approach for this scenario. Therefore, I decided to fall back to the old "trial and error" method: in this case, producing the XML and comparing it using a diff tool.

The friction in the process went down significantly, because I didn't have to go and fix the tests all the time. I did break things that used to work, but I caught them mostly with manual diff checks.

So far, not a really interesting story. What is interesting is what happened when I decided that I had done enough work to consider most scenarios completed. I took all the scenarios and started generating tests for them. So for each scenario I now have a test that tests the current behavior of the system. This is blind testing. That is, I assume that the system is working correctly, and I want to ensure that it keeps working in this way. I am not sure what each test is doing, but the current behavior is assumed to be correct until proven otherwise.

Now I am back to having my usual safety net, and it is a lot of fun to go from zero tests to nearly five hundred tests in a few minutes.
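
For what it is worth, here is roughly the shape of the generated tests. This is a hedged sketch, assuming NUnit; XmlDocumentBuilder is a stand-in for the real code under test, and the file layout is invented for the example:

// One golden-master style test per scenario: rerun the scenario and diff the
// produced XML against the output that was approved when the tests were generated.
[TestFixture]
public class GeneratedScenarioTests
{
    [Test, TestCaseSource("Scenarios")]
    public void Scenario_produces_same_xml_as_before(string scenarioFile)
    {
        string approvedFile = Path.ChangeExtension(scenarioFile, ".approved.xml");

        string actual = XmlDocumentBuilder.BuildFrom(scenarioFile); // the code under test
        string approved = File.ReadAllText(approvedFile);

        Assert.AreEqual(Normalize(approved), Normalize(actual));
    }

    public static IEnumerable<string> Scenarios()
    {
        return Directory.GetFiles("Scenarios", "*.scenario");
    }

    private static string Normalize(string xml)
    {
        // normalize formatting so the comparison is about content, not whitespace
        return XDocument.Parse(xml).ToString(SaveOptions.DisableFormatting);
    }
}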

This doesn't prove that the behavior of the system is correct, but it does guard against regressions and makes sure that we have a stable platform to work from. We might find a bug, but then we can fix it in safety.

I don't recommend this approach for general use, but for this case, it has proven to be very useful.

Code Data Mining

I just wrote this piece of code:

class ExpressionInserterVisitor : DepthFirstVisitor
{
    public override bool Visit(Node node)
    {
        using(var con = new SqlConnection("data source=localhost;Initial Catalog=Test;Trusted_Connection=yes"))
        using (var command = con.CreateCommand())
        {
            con.Open();
            command.CommandText = "INSERT INTO Expressions (Expression) VALUES(@P1)";
            command.Parameters.AddWithValue("@P1", node.ToString());
            command.ExecuteNonQuery();
        }
        Console.WriteLine(node);
        return base.Visit(node);
    }
}

As you can imagine, this is disposable code, but why did I write that?

I ran this code on the entire DSL code base that I have, and then started applying metrics to the results. In particular, I was interested in trying to find repeated concepts that have not been codified.

For example, if this had shown 7 uses of:

user.IsPreferred and order.Total > 500 and (order.PaymentMethod is Cash or not user.IsHighRisk)

then this is a good indication that I have a business concept waiting to be discovered here, and I turn it into a part of my language:

IsGoodDealForVendor (or something like that)

Here we aren't interested in the usual code quality metrics; we are interested in business quality metrics :-) And the results were, to say the least, impressive.
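
The mining step itself is nothing fancy. Once the expressions are in the table, something along these lines is enough to surface the repeated ones (the table and column names match the disposable visitor above; the threshold is arbitrary):

// Find expressions that show up many times across the DSL code base - those are
// candidates for being promoted into named concepts in the language.
using (var con = new SqlConnection("data source=localhost;Initial Catalog=Test;Trusted_Connection=yes"))
using (var command = con.CreateCommand())
{
    con.Open();
    command.CommandText = @"
        SELECT Expression, COUNT(*) AS Occurrences
        FROM Expressions
        GROUP BY Expression
        HAVING COUNT(*) >= 5
        ORDER BY COUNT(*) DESC";

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine("{0} uses: {1}", reader["Occurrences"], reader["Expression"]);
    }
}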

JFHCI: Quote of the Day

This is probably the best quote that I can find about the motivation for JFHCI:

Enabling developers to write "bad" code quickly and safely.


So where were you?

This post has really pissed me off:

It makes me sick to my stomach to think of all the good .NET projects that are now abandoned (or soon will be) because Microsoft seduced their authors away from doing anything that would actually benefit the .NET community.

Excuse me!

Who exactly said that I owe something to anybody? Who exactly said that any of the guys who went to work for Microsoft (many of whom I consider friends) owes you something? The entire post is a whine about "I can't get the software I want for free".

Well, guess what, no one said it has to be free. Software has no right to be free. If anyone wants to stop dedicating a significant amount of their time to free stuff, that is their decision, for their own reasons. Rhino Mocks is estimated at nine million dollars by Ohloh; I might decide to stop working on it tomorrow, and you don't get a chance to protest that, or even to complain. Put simply, where exactly are your efforts? Where are your money and time?

Because unless you are a customer (in the sense that money exchanged hands), you got stuff for free, and now you complain because people aren't willing to do so anymore?

Now, leaving that aside, to the best of my knowledge, Castle, SubText, dasBlog and SubSonic are all alive and well and have received attention from the respective "seduced" authors.

Primitive Contextual Intellisense

The last time we looked at this issue, we built all the pieces that were required, except for the most important one: actually handling the contextual menu. I am going to be as open as possible: Intellisense is not a trivial task. Nevertheless, we can get pretty good results without investing too much time if we want to. As a reminder, here is the method that is actually responsible for the magic that is about to happen:

public ICompletionData[] GenerateCompletionData(string fileName, TextArea textArea, char charTyped)
{
    return new ICompletionData[] {
        new DefaultCompletionData("Text", "Description", 0),
        new DefaultCompletionData("Text2", "Description2", 1)
    };
}

Not terribly impressive yet, I know, but let us see what we can figure out now. First, we need to find the current expression that the caret is located on. That will give us the information we need to make a decision. We could try to parse the text ourselves, or use the existing Boo Parser. However, the Boo Parser isn't really suitable for the kind of precise UI work that we need here. There are various incompatibilities along the way (from the way it handles tabs to the nesting of expressions). None of them is a blocker, and using the Boo Parser is likely the way you want to go for the more advanced scenarios.

Reusing the #Develop parser gives us all the information we need, and we don't need to define things twice. Because we are going to work on a simple language, this is actually the simplest solution. Let us see what is involved in this.

public ICompletionData[] GenerateCompletionData(string fileName, TextArea textArea, char charTyped)
{
    TextWord prevNonWhitespaceTerm = FindPreviousMethod(textArea);
    if(prevNonWhitespaceTerm==null)
        return EmptySuggestion(textArea.Caret);

    var name = prevNonWhitespaceTerm.Word;
    if (name == "specification" || name == "requires" || name == "same_machine_as" || name == "@")
    {
        return ModulesSuggestions();
    }
    int temp;
    if (name == "users_per_machine" || int.TryParse(name, out temp))
    {
        return NumbersSuggestions();
    }
    return EmptySuggestion(textArea.Caret);
}

private TextWord FindPreviousMethod(TextArea textArea)
{
    var lineSegment = textArea.Document.GetLineSegment(textArea.Caret.Line);
    var currentWord = lineSegment.GetWord(textArea.Caret.Column);
    if (currentWord == null && lineSegment.Words.Count > 1)
        currentWord = lineSegment.Words[lineSegment.Words.Count - 1];
    // we actually want the previous word, not the current one, in order to make decisions on it.
    var currentIndex = lineSegment.Words.IndexOf(currentWord);
    if (currentIndex == -1)
        return null;

    return lineSegment.Words.GetRange(0, currentIndex).FindLast(word => word.Word.Trim() != "") ;
}

Again, allow me to reiterate that this is a fairly primitive solution, but it is a good one for our current needs. I am not going to go over all the suggestion methods, but here is the ModulesSuggestions method, which is responsible for the screenshot below:

private ICompletionData[] ModulesSuggestions()
{
    return new ICompletionData[]
    {
        new DefaultCompletionData("@vacations", null, 2),
        new DefaultCompletionData("@external_connections", null, 2),
        new DefaultCompletionData("@salary", null, 2),
        new DefaultCompletionData("@pension", null, 2),
        new DefaultCompletionData("@scheduling_work", null, 2),
        new DefaultCompletionData("@health_insurance", null, 2),
        new DefaultCompletionData("@taxes", null, 2),
    };
}

And this is how it looks:

image

It works, it is simple, and it doesn't take too much time to build. If we want to get more than this, we probably need to start utilizing the Boo parser directly, which will give us a lot more context than the text tokenizer that #Develop is using for syntax highlighting. Nevertheless, I think this is good work.

SQL Server 2008 Table Value Parameters and NHibernate

I just took a look at how this feature is exposed. I really want this feature. I have hit the 2,100 parameter limit of SQL Server too many times in the past, always when I had to do some large IN queries. So, I was very happy to hear about this feature, but I didn't really take a look until now.

Unfortunately, the way they are implemented requires a hard reference to them. You have to create the type on the server, and then you have to reference it by name. Annoying, to say the least, and it looks like there isn't any generic solution that I can accept. This is bad because I can think of quite a few uses for this feature, from applying batches to complex queries, but it is locked in its own safe, statically typed world. Urgh!
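
To show what I mean by the hard reference, here is the basic ADO.NET usage (this is a sketch of plain SqlClient code, not an NHibernate integration; the type name dbo.IdList is something you would have to create on the server yourself, and customerIds / connectionString are placeholders):

// Server side, created once:
//   CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL);
//
// Client side: build a DataTable that matches the type and pass it as a structured parameter.
var ids = new DataTable();
ids.Columns.Add("Id", typeof(int));
foreach (var id in customerIds) // the values that would otherwise go into a huge IN (...) list
    ids.Rows.Add(id);

using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT * FROM Customers WHERE Id IN (SELECT Id FROM @ids)", con))
{
    var parameter = cmd.Parameters.AddWithValue("@ids", ids);
    parameter.SqlDbType = SqlDbType.Structured;
    parameter.TypeName = "dbo.IdList"; // the hard, by-name reference to the server side type
    con.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // process the results
    }
}

It works, but every query that wants to use it has to know about a specific type that exists on a specific server, which is exactly the part that resists a generic solution inside NHibernate.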

NHibernate 2.0 Wiki

The NHibernate 2.0 Wiki can be found here. The wiki already includes the entire NHibernate documentation, so you can head there and start learning.

Have fun...

Looking for a WPF dev

I am currently working on an interesting application: basically a rule engine, data + DSL, and other fun stuff. Unfortunately, here is how the UI looks right now:

image

Yes, the disclaimer is in the UI.

Therefore, I am currently looking for a WPF dev / designer. I am currently in New York, but there is no location limitation.

If you are interested, please contact me.

Code or data?

Here is a question that came up in the book's forums:

I can't figure out how to get the text of the expression from expression.ToCodeString() or better yet, the actual text from the .boo file.

It appears to automagically convert from type Expression to a delegate. What I want is to be able to when a condition is evaluated display the condition that was evaluated, so if when 1 < 5 was evaluated I would be able to get the string "when 1 < 5" - Any way to do this?

Let us see what the issue is. Given this code:

when order.Amount > 10:
	print "yeah!"

We want to see the following printed:

Because 'order.Amount > 10' evaluated to true, running rule action
yeah!

The problem, of course, is how exactly to get the string that represents the rule. It is actually simple to do; we just need to ask the compiler nicely, like this:

public abstract class OrderRule
{
    public Predicate<Order> Condition { get; set; }
    public string ConditionString { get; set; }
    public Action RuleAction { get; set; }
    protected Order order;
    public abstract void Build();

    [Meta]
    public static Expression when(Expression expression, Expression action)
    {
        var condition = new BlockExpression();
        condition.Body.Add(new ReturnStatement(expression));
        return new MethodInvocationExpression(
            new ReferenceExpression("When"),
            condition,
            new StringLiteralExpression(expression.ToCodeString()),
            action
            );
    }


    public void When(Predicate<Order> condition, string conditionAsString, Action action)
    {
        Condition = condition;
        ConditionString = conditionAsString;
        RuleAction = action;
    }

    public void Evaluate(Order o)
    {
        order = o;
        if (Condition(o) == false)
            return;
        Console.WriteLine("Because '{0}' evaluated to true, running rule action",ConditionString);
        RuleAction();
    }
}

The key part is the when() static method. We translate the call to the when keyword into a call to the When() instance method. Along the way, we aren't passing just the arguments that we got; we are also passing a string literal with the code that was extracted from the relevant expression.
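
To make the transformation concrete, here is roughly what the rule above is turned into, written as ordinary C# rather than the generated Boo code (the Amount property on Order is assumed for the example):

public class OrderAmountRule : OrderRule
{
    public override void Build()
    {
        When(
            o => o.Amount > 10,              // the condition block, compiled as usual
            "order.Amount > 10",             // the code string captured at compile time
            () => Console.WriteLine("yeah!")  // the rule action
            );
    }
}

The string literal is the only thing the meta method adds; everything else is exactly what you would have written by hand.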

Strategies for editing DSL scripts in production

Another interesting question from Chris Ortman:

So I write my dsl, and tell my customer to here edit this text file?
How do I tell them what the possible options are? Intellisense?
This is a web app, and my desire to build intellisense into a javascript rich text editor is very low.
It might be a good excuse to try out silverlight but even then it seems a large task.
Or I put express or #Develop on the server and make that the 'admin' gui?

This is actually a question that comes up often. Yes, we have a DSL and now it is easy to change, how are we going to deal with changes that affect production?

There are actually several layers to this question. First, there is the practical matter of having some sort of a UI to enable this change. As Chris has noted, this is not something that can be trivially produced as part of the admin section. But the UI is only a tiny part of it. This is especially the case if you want to do things directly on production.

There is a whole host of challenges that come up in this scenario (error handling, handling frequent changes, cascading updates, debugging, invasive execution, etc.) that need to be dealt with. In development mode, there is no issue, because we can afford to be unsafe there. For production, that is not an option. Then you have to deal with issues such as providing auditing information: "who did what, why and when". Another important consideration is the ability to safely roll back a change.

As you can imagine, this is not a simple matter.

My approach, usually, is to avoid this requirement as much as possible. That is, I do not allow such things to be done in production. Oh, it is still possible, but it is a manual process that is there for emergency use only. Similar to the ability to log in to the production DB and run queries, it should be reserved, rare and avoided if possible.

However, this is not always possible. If the client needs the ability to edit the DSL scripts on production, then we need to provide a way for them to do so. What I have found to be useful is to not provide a way to work directly on production. No, I am not being a smartass here; I actually have a point. Instead of working directly on the production scripts, we decide, as part of the design, to store the scripts in an SVN server, which is part of the application itself.

If you want to access the scripts, you check them out of the SVN server. Now you can edit them with any tool you want, and finish by committing them back to the repository. The application monitors the repository and will update itself when a commit is done to the /production branch.

This has several big advantages. First, we don't have the problem of partial updates, we have a pretty good audit trail, and we have built-in reversibility. In addition to that, we avoid the whole problem of having to build a UI for editing the scripts on production; we use the same tools that we use during development.

As a side benefit, this also means that pushing script changes to a farm comes built in.

And yes, this is basically continuous integration as part of the actual application.
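
Just to show how little is involved on the application side, here is a very rough sketch of the monitoring piece. I am shelling out to the svn / svnversion command line tools and using a hypothetical callback for reloading the scripts; a real implementation would probably use a proper SVN client library and smarter change notification:

// Poll the working copy that holds the production scripts; when a new revision
// shows up on the /production branch, update and let the application reload.
public class ScriptRepositoryMonitor
{
    private readonly string workingCopy;
    private readonly Action reloadScripts; // e.g. recompile the DSL scripts

    public ScriptRepositoryMonitor(string workingCopy, Action reloadScripts)
    {
        this.workingCopy = workingCopy;
        this.reloadScripts = reloadScripts;
    }

    public void Poll()
    {
        string before = Run("svnversion", "");
        Run("svn", "update");
        string after = Run("svnversion", "");
        if (before != after)
            reloadScripts();
    }

    private string Run(string fileName, string arguments)
    {
        var startInfo = new ProcessStartInfo(fileName, arguments)
        {
            WorkingDirectory = workingCopy,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var process = Process.Start(startInfo))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return output.Trim();
        }
    }
}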

A new meaning to async communication

I took this picture about a year and a half ago on my phone, while visiting a client. I then emailed it to myself. I don't think it ever arrived, and I forgot about it.

This has just landed in my mailbox. It says, in Hebrew: "Mommy said that you can't put collections in the session or NHibernate will beat you up".

I don't even own the phone this picture was taken on anymore. How the hell did it arrive?

image

Meaningless code quality, part 2

Here is the next slide in my presentation.

image

Code quality is a hard metric; it can be backed up by numbers if you need to. Fairly often, though, it is measured by gut feeling and "this is a mess".

One interesting problem with code quality is that you generally don't have a good way to measure the maintainability of a particular solution, regardless of its current code quality.

NHibernate 2.0 Final is out!

Guys, gals and its. I am overjoyed to tell you that NHibernate 2.0 has been released.

You can get it directly from the download page.

I would like to thank the NHibernate project lead, Fabio Maulo, for doing such an awesome amount of work and getting this out.

Thanks, Fabio.

I am not going to list all the features; suffice it to say that we are now in a comparable position to Hibernate 3.2.

Have fun!

Boo Lang Studio 1.0 Alpha is out!

Jeffery Olson has just made the first release of Boo Lang Studio available.

This one comes with a "Yes, Dear" installer.

image

Yeah, we have Boo installed!

image

Let us create a new project:

image

And take a look at the code:

image

And intellisense works as well, whew!

image

Jeffery Olson and James Gregory: THANKS!

JFHCI: Considering Scale

There are some really good comments on this post, which I am going to respond to. Before that, I want to emphasize that I am still playing devil's advocate here.

Joe has two very insightful comments regarding the "just hard code everything and don't provide any admin interface for the parameters" approach:

When you're making software that will be used by more than one customer, you can't hardcode these things in.

And:

Again, why not? Why can't Company A use the same software as Company B and A wants to give 30% discount when amount is over $10K whereas B wants to give 20% when it's over $15K?
Why is that so hard to expose? It's just a matter a validation from the admin UI.

There is obviously no technical reason to do that, right?

image

Well, there is one good reason to avoid this at all costs. It just doesn't work when you have any significant number of rules. And by that, I mean anything over two dozen or so.

Let us consider what the implications of such a system would be when talking about a system with any kind of reasonable complexity.

The cost of adding a rule has gone up from a simple "add a class" to creating UI, validating values, saving values, adding to the current UI, and loading parameters from persistent storage. This translates to moving from a twenty minute task (including testing) to a much more complex endeavor. Consider a rule that should give a discount based on geographical location; you need to provide a way to select that. For that matter, you now need to have multi instance rules, where the same rule is bound to different parameters.

Are you willing to estimate the cost now? It is an order of magnitude or two more complex.

Now, let us say that we have solved the problem in some manner. It is not a hard problem to solve, admittedly. We are still left with operational issues that we have to deal with. How are we going to educate the ops team about all the rules? So far I have chosen a very simple set of rules to show, but real business rules are anything but trivial.

Then we have UI issues: how are we going to show the ops team the set of rules that they can edit? How are we going to represent hundreds or thousands of rules in a meaningful fashion?

My answer is very simple. Don't even try. Instead of trying so hard to cut developers out of the loop, start by assuming that there will be a developer along the way. Now you need to optimize the hell out of this approach.

Remember, we aren't talking about a system that has a very small set of rules; we are talking about a very large set of them. A competitive advantage is how fast we can go from a business requirement to that behavior running in production.

Take into account that a lot of the changes in the system are not just parameter changes. As an example, here are just a few business rules that come up as things that the business wants to do. This is not in the design phase; this is when the system is live:

  • A customer shopping on his birthday gets a 5% discount
  • A preferred customer that has been with us for over a year gets three [product name] for the price of one for this fall.

Again, pre-planning for this might give you a way to deal with those (if you thought about that scenario), but even so, you are going to incur a much higher cost in the system implementation phase. Pushing the changes to code makes the implementation much easier, and the ability to modify the system at a later date is greatly improved, because you have already built your system around the concept of a dynamically changing environment.
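
For instance, the birthday rule above stays a twenty minute job when it is just code. Here is a hedged sketch, reusing the OnOrderSubmittal base class from the JFHCI code samples below; the BirthDate property on User is assumed for the example:

public class BirthdayDiscount : OnOrderSubmittal
{
    public override void Execute()
    {
        var today = DateTime.Today;
        if (User.BirthDate.Month == today.Month && User.BirthDate.Day == today.Day)
            Order.AddDiscountPrecentage(5);
    }
}

Compare that with what the same rule costs once it has to be expressed as configuration values, an admin screen and validation logic.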

But, as Joe said, this doesn't really work if you have more than one customer. It actually does, in my experience. There are a lot of systems out there that have a dedicated person for their care, and giving someone a three hour course in how to do this kind of thing is fairly easy (with the requirement that they can program in C# or VB.Net, which is not an inconceivable demand).

If this is still not something that you can do, put a DSL in place. A DSL script is much easier to work with for business oriented stuff, and it incurs about the same cost from a design and implementation point of view. However, deployment is ever so much easier, since you are deploying a single script instead of a DLL. You can even provide an in-app UI to do this, if you really want.

In short, consider the scenario that I am describing: an oft-changing environment with a lot of rules. If you have an opinion on how stupid this is, please provide that opinion within this given context.

JFHCI: The evil that is configuration

 

Chris Ortman asks a very good question about my "just hard code it" post:

image

I really like the simplicity of the approach, but does this impose a requirement that in order to change the discount percentage of the system I must be able to write c# code? I think it would not take very long for someone to ask me for and 'admin' screen to do that.

I run into this all the time with new features that would be trivial to implement if not for the database table to hold the settings, the model to work with the table, and the admin screen to edit it. How do we get from 'enter 5 into this box in the admin screen' to write class that applies the discount percentage you want .

I have a very simple answer for that. You don't provide any sort of admin screen, and you don't provide any way for a business user to go into the system and make a change. To start with, the system administrator has no idea what the implications of changing such a configuration are. Next, we have the standard issues of when we reload the value, validation, etc. All sorts of things that we really don't want to deal with unless we have to. Not to mention that the idea of giving anyone the option to modify operational parameters without regard to testing and QA makes me very unhappy.

All of those are workable with enough effort, however. What is not workable is the end result of such a system. Here is what you'll end up with:

image

Remember, we expect to have a lot of such rules. So we expect to have a lot of configuration values. And don't get me started on reusing configuration values for different purposes, just because a dev saw something that looked even somewhat similar to what they were doing now.

Given this piece of code, then, how are we going to handle a changing business condition? Let us say that we want to offer this only to preferred members after the first 6 months:

public class DiscountPreferredMembers : OnOrderSubmittal
{
	public override void Execute()
	{
		if ( User.IsPreferred )	
			Order.AddDiscountPrecentage(5);
	}
}

This is trivially simple:

public class DiscountPreferredMembers : OnOrderSubmittal
{
	public override void Execute()
	{
		if ( User.IsPreferred && DateTime.Now > User.PreferredMemberSince.AddMonths(6) )	
			Order.AddDiscountPrecentage(5);
	}
}

Compile this rule (which will usually be in a separate project, along with a bunch of other tightly related rules) and push it to production. If you are doing an out of band release, you might want to do it in a completely separate DLL.

And yes, we are still talking about using the lowest common denominator, without introducing any concept more complex than a class and the if statement. A more advanced solution would be a DSL, in which the deployment unit would be the individual script instead of a full DLL.

But even so, I hope that I am demonstrating the concept.

All we have to do is stop thinking about the code as set in stone and accept that it is one of the richest ways we have to express semantics.