Ayende @ Rahien

It's a girl

What are you in for?

There is a level of tension between what a developer wants and what the client wants. As a developer, I want to use the coolest technologies, the freshest methodologies, to be so far on the bleeding edge that they have an ER dedicated just for my team.

As a client? I want to be as conservative as possible, for the most part, I don’t care for new technologies unless they have some significant feature that I can’t get anywhere else. I value speed of development over having the devs geek out.

I have spoken before about the tendency of developers to start building castles in the sky instead of getting things done. I call it Stealing From Your Client, because you are introducing additional things that aren’t necessary, won’t bring additional value, or just plain make things harder.

For example, in just about any project that I have seen, putting an IRepository in front of an OR/M or putting an IRepository in front of RavenDB was a mistake. It created no value and actually made things harder to use.

Another pet peeve of mine was raised in a discussion about architecture at a recent lecture that I gave. One of the participants asked me for my opinion about CQRS. But when I asked him for what sort of an application, his answer was somewhere along the lines of: “I want to build my next application using CQRS and I’d like to hear your opinion.”

This sort of thinking drives me bananas.

Dictating the architecture of an application, before you even have an application? Hell, making any decisions about the application (including what technologies to use) before you actually have a good idea about what is going on is a big no-no!

Here is the most important piece of this post. For the most part, all those cool technologies that you hear about? They aren’t relevant for your scenario. And even if you are convinced that they are a perfect fit, there is a cost associated with them, which you have to consider before attempting them. Especially if this is the first time that you are using something.

The really annoying part? Most people who come up with these exciting new technologies spend an inordinate amount of their time explaining not how to build apps using them, but when you shouldn’t. Evans’ DDD book quite explicitly states that DDD shouldn’t be your default architecture; it has a cost and should be carefully considered. The industry as a whole (me included) just ignored him and started to write DDD applications.

Greg Young just lectured at Oredev about why you shouldn’t use CQRS, for much the same reason.

In the end, we are there to provide value to the customer. Yes, it is cool to work on the newest thing on the block, but that is why you have hobby projects.

Tags:

Published at

Originally posted at

Comments (9)

Ask Ayende: Aggregates and repositories

With regards to my recommendation to avoid the repository, Stan asks:

You returned to explain why repository pattern is evil. Just interesting to know what are you doing when you need in your model to access another aggregate. Do you reference NH from your model? I prefer to leave my model POCOed and wrap DB calls by repository pattern. Sleeping good with it.

This is an interesting question, and there are several ways to answer it. To start with, assuming that we are using a DDD model (which the usage of a repository would imply), you don’t have references between aggregates.

But let us assume that we somehow need that. Stan seems to suggest something like this proposed solution:

public class Person 
{
    public static readonly DateTime ImportantDate;
    public BirthPlace BirthPlace { get; set; } 

    public DateTime BirthDate 
    { 
        get; private set;
    } 

    public void CorrectBirthDate(IRepository<BirthPlace> birthPlaces, DateTime date)
    {
        if (BirthPlace != null && date < ImportantDate && BirthPlace.IsSpecial) 
        { 
            BirthPlace = birthPlaces.GetForDate(date); 
        }
    }
}

Here we have a business rule that states that this is required.

But do we actually need a repository here? What if we just said that whoever calls us needs to provide us with a way to get the birth place by date?

public void CorrectBirthDate(
        Func<DateTime, BirthPlace> getBirthPlaceForDate, 
        DateTime date)
{
    if (BirthPlace != null && date < ImportantDate && BirthPlace.IsSpecial) 
    { 
        BirthPlace = getBirthPlaceForDate(date); 
    }
}

This can be done with a simple delegate; no need to introduce a heavyweight abstraction. This is a local solution for a local problem. It keeps the database out of your entities, and more importantly, it allows you to actually craft the appropriate response at the time of the call.
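To make the shape of that call concrete, here is a minimal, self-contained sketch. The ImportantDate value and the dictionary-backed lookup are made up for illustration; the point is that the caller decides how the lookup is satisfied, and the entity never sees a repository:

```csharp
using System;
using System.Collections.Generic;

public class BirthPlace
{
    public bool IsSpecial { get; set; }
}

public class Person
{
    // Hypothetical value; the original post leaves this uninitialized.
    public static readonly DateTime ImportantDate = new DateTime(2000, 1, 1);
    public BirthPlace BirthPlace { get; set; }

    public void CorrectBirthDate(Func<DateTime, BirthPlace> getBirthPlaceForDate, DateTime date)
    {
        if (BirthPlace != null && date < ImportantDate && BirthPlace.IsSpecial)
        {
            BirthPlace = getBirthPlaceForDate(date);
        }
    }
}

public static class Program
{
    public static void Main()
    {
        // The caller composes the lookup at the call site -- here a plain
        // dictionary stands in for whatever data access you actually use.
        var birthPlacesByYear = new Dictionary<int, BirthPlace>
        {
            [1990] = new BirthPlace { IsSpecial = false }
        };

        var person = new Person { BirthPlace = new BirthPlace { IsSpecial = true } };
        person.CorrectBirthDate(d => birthPlacesByYear[d.Year], new DateTime(1990, 5, 1));

        Console.WriteLine(person.BirthPlace.IsSpecial); // False
    }
}
```

In a real application the lambda would close over the session or whatever else is handy at the call site, which is exactly the point: the decision is made where the context exists.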


RavenDB: It was the weekend before the wedding…

…when all through the house. Not a creature was stirring, not even a mouse. Because everyone was away on a Stag/Hen weekend.

Phil Jones has just notified me that a site he has been working on for a while went live.  Escape Trips is a site that provides Stag and Hen trips in the UK. A Stag/Hen weekend is apparently a bigger version of a stag party, I assume that this is the origin for movies like this (well, not really).

At any rate, this site is actually powered by RavenDB throughout. We actually have several videos in the pipeline of me and Phil hashing out some details about the site while it was being built.

The site is fast, and Phil was kind enough to give me some interesting stats.

Some interesting performance stats compared to SQL Express which was running on the VPS. SQL Express was eating ~500MB of RAM (limited), RavenDB has been sat at ~80MB since launch last night! I think EF was eating CPU as well, CPU usage is way down as well.

Performance comment wise, I don’t know if EF was to blame but IIS process CPU usage is down even though traffic has doubled since the launch (mainly crawlers and a new adwords campaign). After running since the launch on Thursday night, the RAM usage has increased to 100MB, still a really great number though as I plan to scale down the VPS’s RAM saving money, RavenDB will actually be paying for itself!

The website looks quite simple from the public side but most of the development has gone into the private administrative website for dealing with sales, customer support and content editing. Performance wise, the system is more responsive and users are very pleased!

Pretty cool!


What is wrong here? Solution

In my previous post, I showed the database schema and the UI and asked what was wrong with that. Before we move on, here is what I showed.

image

image

If you looked carefully, you might have noticed that there are duplicate PO# and Tracking# in the UI. More than that, we somehow double charged the customer for shipping.

What is going on?

It is actually fairly obvious, when you think about it. Look at the schema: there isn’t actually any association between the Tracking # and the PO #. In most orders, we have only 1 PO #, so it was easy to just add this information by pulling it from the DB and adding a few columns. But when we got an order that has multiple POs… that is when all hell breaks loose.

This is a classic Cartesian Product problem.
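The effect is easy to reproduce in miniature. Assuming hypothetical data (two POs and two tracking numbers belonging to the same order, with nothing linking a PO to a tracking number), an uncorrelated join multiplies the rows:

```csharp
using System;
using System.Linq;

// Hypothetical child rows for a single order; nothing links a PO to a tracking number.
var purchaseOrders = new[] { "PO-1001", "PO-1002" };
var trackingNumbers = new[] { "TRK-A", "TRK-B" };

// Joining both child tables to the order without a correlating key is a
// cross join: every PO is paired with every tracking number.
var rows = (from po in purchaseOrders
            from trk in trackingNumbers
            select new { po, trk }).ToList();

// 2 POs x 2 tracking numbers = 4 rows, where the UI suggests there should be 2.
Console.WriteLine(rows.Count);
```

The SQL version of the query does the same thing: each extra uncorrelated child table multiplies the result set, which is also why the shipping charge shows up twice.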

The solution?  Actually model the UI to avoid suggesting that there is a relationship between the tracking and purchase orders, like this:

image


What is wrong here?

This time, it isn’t code that I am going to show; rather, I am going to show the final UI and the database structure, and let you figure out:

  • What is wrong.
  • How to fix this.

Here is the database schema:

image

And here is the problem:

image

Hints: this used to work for a long time, and suddenly it doesn’t, and the customer is pissed, annoyed, and threatening to sue.


Do you monitor negative events?

You are probably aware that you need to monitor your production systems for errors, and to add health monitoring for your servers.

But are you monitoring negative events? What is a negative event? Stuff that should have happened and didn’t.

For example, every week you have a process that runs to update the tax rates that apply to your customers. This is implemented as a scheduled process, but for some reason (the computer was just being rebooted, the user’s password expired, etc.) that process didn’t run. There isn’t an error, per se. You won’t get an error because nothing actually had a chance to happen.

Another example would be getting a callback confirmation that an order payment has been correctly processed. That usually happens within 1 – 5 minutes, and you get an OK/Fail notification. But what happens if that notification just never came?

This is a much more dangerous scenario, because you have to not only be prepared for handling errors, you have to be prepared for… nothing to happen.

What it means is that you have to have some way to set up expectations in the system, and act on them when you don’t get a confirmation (negative or positive) within a given time frame.
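As a sketch of what such a mechanism could look like (the ExpectationTracker name and API here are invented, not from any real system): register a deadline when you schedule the work or send the request, clear it on confirmation, and alert on whatever is left when a periodic sweep runs:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Minimal expectation watchdog -- names and shape are assumptions for
// illustration. Expect() records what should happen and by when; Confirm()
// clears it; Overdue() is run from a scheduled sweep and returns anything
// that was never confirmed, i.e. the "negative events".
public class ExpectationTracker
{
    private readonly ConcurrentDictionary<string, DateTime> pending = new();

    public void Expect(string key, TimeSpan within) =>
        pending[key] = DateTime.UtcNow + within;

    public void Confirm(string key) => pending.TryRemove(key, out _);

    public IReadOnlyList<string> Overdue(DateTime now) =>
        pending.Where(kvp => kvp.Value < now).Select(kvp => kvp.Key).ToList();
}

public static class Program
{
    public static void Main()
    {
        var tracker = new ExpectationTracker();
        tracker.Expect("payment-callback/order-42", TimeSpan.FromMinutes(5));
        tracker.Expect("weekly-tax-rate-update", TimeSpan.FromMinutes(5));

        // The payment callback arrived; the tax rate job never ran.
        tracker.Confirm("payment-callback/order-42");

        // Simulate the sweep running ten minutes later.
        var overdue = tracker.Overdue(DateTime.UtcNow.AddMinutes(10));
        Console.WriteLine(string.Join(",", overdue)); // weekly-tax-rate-update
    }
}
```

The important part is not the data structure but the habit: every "this should happen within X" in the system gets an explicit expectation that someone is responsible for clearing.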


RavenDB training: The euro tour

We are planning to do a lot of RavenDB training, and I wanted to make sure that everyone is aware of it.

Next week, we have a public event in London, where Itamar will explain all about indexing:

RavenDB Indexes explained (public event) - Feb 28, London, UK

Then there is the full two-day RavenDB workshop; here is the current schedule for the next few months.

There is also going to be a course in the States around August, probably New York again, and maybe one in Berlin around July.


Limit your abstractions: And how do you handle testing?

In my previous post, I explained the value of pushing as much as possible to the infrastructure, and then showed some code demonstrating how to do so. First, let us look at the business level code:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)
{
    var trackingId = ExecuteCommand(new RegisterCargo
    {
        OriginCode = originUnlocode,
        DestinationCode = destinationUnlocode,
        ArrivalDeadline = arrivalDeadline
    });

    return RedirectToAction(ShowActionName, new RouteValueDictionary(new { trackingId }));
}

public class RegisterCargo : Command<string>
{
    public override void Execute()
    {
        var origin = Session.Load<Location>(OriginCode);
        var destination = Session.Load<Location>(DestinationCode);

        var trackingId = Query(new NextTrackingIdQuery());

        var routeSpecification = new RouteSpecification(origin, destination, ArrivalDeadline);
        var cargo = new Cargo(trackingId, routeSpecification);
        Session.Save(cargo);

        Result = trackingId;
    }

    public string OriginCode { get; set; }

    public string DestinationCode { get; set; }

    public DateTime ArrivalDeadline { get; set; }
}

And the infrastructure code, now:

protected void Default_ExecuteCommand(Command cmd)
{
    cmd.Session = Session;
    cmd.Execute();
}

protected TResult Default_ExecuteCommand<TResult>(Command<TResult> cmd)
{
    ExecuteCommand((Command) cmd);
    return cmd.Result;
}

You might have noticed a problem with the way things are named: the names in the action and the infrastructure code do not match. What is going on?

Well, the answer is quite simple. Let us look at what our controller looks like (at least, the important parts):

public class AbstractController : Controller
{
    public ISession Session;

    public Action<Command> AlternativeExecuteCommand { get; set; }
    public Func<Command, object> AlternativeExecuteCommandWithResult { get; set; }

    public void ExecuteCommand(Command cmd)
    {
        if (AlternativeExecuteCommand != null)
            AlternativeExecuteCommand(cmd);
        else
            Default_ExecuteCommand(cmd);
    }

    public TResult ExecuteCommand<TResult>(Command<TResult> cmd)
    {
        if (AlternativeExecuteCommandWithResult != null)
            return (TResult)AlternativeExecuteCommandWithResult(cmd);
        return Default_ExecuteCommand(cmd);
    }

    protected void Default_ExecuteCommand(Command cmd)
    {
        cmd.Session = Session;
        cmd.Execute();
    }

    protected TResult Default_ExecuteCommand<TResult>(Command<TResult> cmd)
    {
        ExecuteCommand((Command)cmd);
        return cmd.Result;
    }
}

What?! You do mocking by hand and inject them like that? That is horrible! It is much easier to use a mocking framework and ….

Yes, it would be, if I were trying to mock different things all the time. But given that I have very few abstractions, it makes sense not only to build this sort of infrastructure, but also to build infrastructure for those things _in the tests_.

For example, let us write the test for the action:

[Fact]
public void WillRegisterCargo()
{
  ExecuteAction<CargoAdminController>( c=> c.Register("US", "UK", DateTime.Today) );
  
  Assert.IsType<RegisterCargo>( this.ExecutedCommands[0] );
}

The ExecuteAction method belongs to the test infrastructure, and it sets up the controller to run under the test scenario. Which allows me not to execute the command, but to actually capture it.

From there, it is very easy to get to things like:

[Fact]
public void WillCreateNewCargoWithNewTrackingId()
{
  SetupQueryResponse<NextTrackingIdQuery>("abc");
  ExecuteCommand<RegisterCargo>( new RegisterCargo
  {
    OriginCode = "US",
    DestinationCode= "UK",
    ArrivalDeadline = DateTime.Today
  });
  
  var cargo = Session.Load<Cargo>("cargos/1");
  Assert.Equal("abc", cargo.TrackingId);
}

This is important, because now what you are testing is the actual interaction. You don’t care about any of the actual dependencies; we just abstracted them out, but without creating a ton of interfaces, abstractions on top of abstractions, or any of that.
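For completeness, here is one way SetupQueryResponse could be implemented in the test base class. The posts don’t show it, so the names and shape here are guesses that follow the same pattern: canned results keyed by query type, consulted before any real infrastructure runs:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a test base class -- assumed, not from the original posts.
// Queries are intercepted by type and answered with a canned result.
public class TestBase
{
    private readonly Dictionary<Type, object> cannedResults = new();

    protected void SetupQueryResponse<TQuery>(object result) =>
        cannedResults[typeof(TQuery)] = result;

    // The infrastructure's Query() hook would consult the canned answers
    // first, falling back to the real database only when nothing matches.
    protected bool TryAnswer(Type queryType, out object result) =>
        cannedResults.TryGetValue(queryType, out result);
}

// Stand-in for the query type used in the test above.
public class NextTrackingIdQuery { }

public class ExampleTest : TestBase
{
    public string Run()
    {
        SetupQueryResponse<NextTrackingIdQuery>("abc");
        TryAnswer(typeof(NextTrackingIdQuery), out var result);
        return (string)result;
    }
}

public static class Program
{
    public static void Main() => Console.WriteLine(new ExampleTest().Run()); // abc
}
```

Because there are only a handful of abstractions (commands, queries), one small dictionary like this covers the whole system; that is the payoff of not having a separate interface per dependency.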

In fact, we kept the number of abstractions to a minimum, and we can change pretty much every part of the system with very little fear of cascading change.

We have similar lego pieces; all of them move together and interact with one another with complete freedom, and we don’t have to have an Abstract Factory Factory Façade Factory.


Limit your abstractions: The key is in the infrastructure…

In my previous post, I discussed actual refactoring to reduce abstraction, and I showed two very interesting methods, Query() and ExecuteCommand(). Here is the code in question:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)
{
    var trackingId = ExecuteCommand(new RegisterCargo
    {
        OriginCode = originUnlocode,
        DestinationCode = destinationUnlocode,
        ArrivalDeadline = arrivalDeadline
    });

    return RedirectToAction(ShowActionName, new RouteValueDictionary(new { trackingId }));
}

public class RegisterCargo : Command<string>
{
    public override void Execute()
    {
        var origin = Session.Load<Location>(OriginCode);
        var destination = Session.Load<Location>(DestinationCode);

        var trackingId = Query(new NextTrackingIdQuery());

        var routeSpecification = new RouteSpecification(origin, destination, ArrivalDeadline);
        var cargo = new Cargo(trackingId, routeSpecification);
        Session.Save(cargo);

        Result = trackingId;
    }

    public string OriginCode { get; set; }

    public string DestinationCode { get; set; }

    public DateTime ArrivalDeadline { get; set; }
}

Why are they so important? Mostly because those methods (and similar ones, like Raise(event) and ExecuteLater(task)) are actually the backbone of the application. They are the infrastructure on top of which everything rests.

Those methods basically accept an argument (and optionally return a value). Their responsibilities are:

  • Set up the given argument so it can run.
  • Execute it.
  • Return the result (if there is one).

Here is an example showing how to implement ExecuteCommand:

protected void Default_ExecuteCommand(Command cmd)
{
    cmd.Session = Session;
    cmd.Execute();
}

protected TResult Default_ExecuteCommand<TResult>(Command<TResult> cmd)
{
    ExecuteCommand((Command) cmd);
    return cmd.Result;
}

I have code very much like that in production, because I know that in this system, there are actually only one or two dependencies that a command may want.

There are very few other dependencies, because of the limited number of abstractions that we have. This makes things very simple to write and work with.

Because we abstract away dependency management, and because we allow only a very small number of abstractions, this works very well. The amount of complexity is way down, and code reviewing this is very easy, because there isn’t much to review and it all follows the same structure. The implementations of the rest are pretty much the same thing.

There is just one thing left to discuss, because it kept showing up on the comments for the other posts. How do you handle testing?


Limit your abstractions: Refactoring toward reduced abstractions

So in my previous post I spoke about this code and the complexity behind it:

public class CargoAdminController : BaseController
{
  [AcceptVerbs(HttpVerbs.Post)]
  public ActionResult Register(
      [ModelBinder(typeof (RegistrationCommandBinder))] RegistrationCommand registrationCommand)
  {
      DateTime arrivalDeadlineDateTime = DateTime.ParseExact(registrationCommand.ArrivalDeadline, RegisterDateFormat,
                                                             CultureInfo.InvariantCulture);

      string trackingId = BookingServiceFacade.BookNewCargo(
          registrationCommand.OriginUnlocode, registrationCommand.DestinationUnlocode, arrivalDeadlineDateTime
          );

      return RedirectToAction(ShowActionName, new RouteValueDictionary(new {trackingId}));
  }
}

In this post, I intend to show how we can refactor things. I am going to do that by flattening the architecture, removing useless abstractions and creating a simpler, easier to work with system.

The first thing to do is to refactor the method signature:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)

Those are the three parameters that we need; there is no need to create a model binder, custom command, etc. just for this. For that matter, if you already have a model binder, why on earth would you store the date as a string, and not a DateTime? The framework is quite happy to do the conversion for me, and if it can’t, I can extend the infrastructure to do so. I don’t need to patch this action with date parsing code.

Next, we have this notion of booking a new cargo. Looking at the service, that looks like:

public string BookNewCargo(string origin, string destination, DateTime arrivalDeadline)
{
    try
    {
        TrackingId trackingId = BookingService.BookNewCargo(
            new UnLocode(origin),
            new UnLocode(destination),
            arrivalDeadline
            );
        return trackingId.IdString;
    }
    catch (Exception exception)
    {
        throw new NDDDRemoteBookingException(exception.Message);
    }
}

The error handling alone sets my teeth on edge. Also, note that we have a complex type for TrackingId, which contains just a string (there is a lot of code there for IValueObject<T>, comparison, etc), all of which basically goes away if you use an actual string. The same is true for UnLocode (UN Location Code, I assume), but at least this one has some validation code in it.

Then there is the lovely forwarding call, which translate to:

public TrackingId BookNewCargo(UnLocode originUnLocode,
                               UnLocode destinationUnLocode,
                               DateTime arrivalDeadline)
{
    using (var transactionScope = new TransactionScope())
    {
        TrackingId trackingId = cargoRepository.NextTrackingId();
        Location origin = locationRepository.Find(originUnLocode);
        Location destination = locationRepository.Find(destinationUnLocode);

        Cargo cargo = CargoFactory.NewCargo(trackingId, origin, destination, arrivalDeadline);

        cargoRepository.Store(cargo);
        logger.Info("Booked new cargo with tracking id " + cargo.TrackingId);

        transactionScope.Complete();
        return cargo.TrackingId;
    }
}

Now we are getting somewhere; we actually have something meaningful. I’ll skip going deeper, I am pretty sure that you can understand what is going on.

From my point of view of the common abstractions in an application:

  1. Controllers
  2. Views
  3. Entities
  4. Commands
  5. Tasks
  6. Events
  7. Queries

Controllers are at the boundaries of the system; they orchestrate the entire system behavior. Note that there is no place for services or repositories in this list. That is quite intentional: I am not going that route.

Take a look at the code that I ended up with:

 [AcceptVerbs(HttpVerbs.Post)]
 public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)
 {
     var origin = Session.Load<Location>(originUnlocode);
     var destination = Session.Load<Location>(destinationUnlocode);

     var trackingId = Query(new NextTrackingIdQuery());

     var routeSpecification = new RouteSpecification(origin, destination, arrivalDeadline);
     var cargo = new Cargo(trackingId, routeSpecification);
     Session.Store(cargo);

     return RedirectToAction(ShowActionName, new RouteValueDictionary(new {trackingId}));
 }

As you can see, the entire architecture was collapsed into a single method.

And what kind of abstractions do we have here?

Well, we have the usual things from MVC, Controller, Action, parameter binding.

We have the session that we are using to load data by id, and to store the newly created cargo.

And we have the notion of a query. Generating a new TrackingId is a query that happens on the database (actually implemented as a hilo sequence). That is definitely not the responsibility of the controller action, so we moved it into a query. Note the Query() method there. It is defined as:

protected TResult Query<TResult>(Query<TResult> query)

And NextTrackingIdQuery is defined as:

public class NextTrackingIdQuery : Query<string>

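The Query<TResult> base class itself isn’t shown in these posts; as a sketch, it presumably mirrors the Command shape shown later, with the infrastructure injecting the session and Execute() producing the result. Something along these lines, where the counter-based id generation is just a stand-in for the real hilo sequence:

```csharp
using System;

// Sketch only -- Query<TResult> is not shown in the original posts, so this
// mirrors the shape of the Command class: the infrastructure injects the
// session, Execute() fills in Result.
public abstract class Query<TResult>
{
    public object Session { get; set; } // IDocumentSession in the real code
    public TResult Result { get; protected set; }
    public abstract void Execute();
}

// Stand-in implementation; the real NextTrackingIdQuery asks the database
// (a hilo sequence, per the post) for the next id.
public class NextTrackingIdQuery : Query<string>
{
    private static int counter;
    public override void Execute() => Result = "cargos/" + (++counter);
}

public static class Program
{
    public static void Main()
    {
        var query = new NextTrackingIdQuery();
        query.Execute();
        Console.WriteLine(query.Result); // cargos/1
    }
}
```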
Pretty simple, overall. And I can hear the nitpickers climbing over the fences, waving pitchforks and torches. “What happens when you need to reuse this logic? It is not in the UI and…”

There are a couple of things to note here.

First, there isn’t anywhere else that needs to book a cargo. And saying “and what happens when…” flies right into a wall of people shouting YAGNI.

Second, let us assume that there is such a need, to reuse the booking cargo scenario. How would we approach this?

Well, we can encapsulate the logic for the controller inside a Command. Which gives us:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Register(string originUnlocode, string destinationUnlocode, DateTime arrivalDeadline)
{
    var trackingId = ExecuteCommand(new RegisterCargo
    {
        OriginCode = originUnlocode,
        DestinationCode = destinationUnlocode,
        ArrivalDeadline = arrivalDeadline
    });

    return RedirectToAction(ShowActionName, new RouteValueDictionary(new { trackingId }));
}

And then we have the actual RegisterCargo command:

public abstract class Command
{
    public IDocumentSession Session { get; set; }
    public abstract void Execute();

    protected TResult Query<TResult>(Query<TResult> query); // body elided; supplied by the infrastructure
}
public abstract class Command<T> : Command
{
    public T Result { get; protected set; }
}

public class RegisterCargo : Command<string>
{
    public override void Execute()
    {
        var origin = Session.Load<Location>(OriginCode);
        var destination = Session.Load<Location>(DestinationCode);

        var trackingId = Query(new NextTrackingIdQuery());

        var routeSpecification = new RouteSpecification(origin, destination, ArrivalDeadline);
        var cargo = new Cargo(trackingId, routeSpecification);
        Session.Save(cargo);

        Result = trackingId;
    }

    public string OriginCode { get; set; }

    public string DestinationCode { get; set; }

    public DateTime ArrivalDeadline { get; set; }
}

Note that the Command class also has a way to execute queries; in fact, it is the exact same way that we used when we had the code in the controller. We just moved stuff around, without really making any major change, but we can easily start using the same functionality in another location.

I generally don’t like doing this, because most functionality is not reused; it is specific to a particular place and scenario. But I wanted to show how you can lift some part of the code and move it to a different location, otherwise people would complain about the “lack of reuse opportunities”.

In my next post I am going to talk about the Query() and ExecuteCommand() methods, and why they are so important.


RavenHQ goes beta–RavenDB reaches the cloud

One of the things that we have been doing lately was providing solutions for cloud hosted RavenDB. I am very proud to announce the public beta phase of RavenHQ, a cloud based, fully managed RavenDB service.

image

Currently it is available on AppHarbor only, and I must emphasize that this is still a beta, so you might run into some road bumps, but we have a really good team working on this.

Actually, here is an important detail, about this offering.

Hibernating Rhinos (the company that actually does the development of RavenDB) is at heart a development / consulting company. We didn’t want to try to break apart something good, so we set up a new company dedicated to running RavenDB on the cloud: RavenHQ.

Why do you care about this? Because it means that while the RavenDB development team is available for any problem that you might run into, RavenHQ is actually staffed by people whose job is merely to make sure that all your databases are humming along nicely, not a developer who spends 15% of his time watching what is going on in that server somewhere on the cloud.

I collaborated on RavenHQ with Jonathan Matheus (NServiceBus committer and an all around cool guy) to create something that I feel will be really awesome.

As I said before, we are currently offering RavenHQ on AppHarbor only, but we will soon open it for general registration. In the meantime, this is called a beta for a reason.

It is hard to test cloud based stuff in a lab, so after we have made sure that everything works, the next step is to see if you can break it. I am assuming the worst: that you will manage to break it in all sorts of creative ways. Please give us a short grace period to make sure that we can match our internal workings to how people are actually using us.

Have an awesome weekend!


RavenDB: Index Boosting

Recently we added a really nice feature, boosting the results while indexing.

Boosting is a way to give documents or attributes in a document weights. Attribute level boosting is a way to tell RavenDB that a certain attribute in a document is more important than the others, so it will show up higher in queries when other properties are involved. Document level boosting means that a certain document is more important than another (when using multi map indexes).

Let us see a few examples of where this is useful. The simplest scenario is a multi field search where we want one of the fields to be more important. For example, we decided that when you search by first name and last name, a match on the first name has higher relevance than a match on the last name. We can define this requirement with the following index:

public class Users_ByName : AbstractIndexCreationTask<User>
{
    public Users_ByName()
    {
        Map = users => from user in users
                       select new
                       {
                           FirstName = user.FirstName.Boost(3),
                           user.LastName
                       };
    }
}

And we can query the index using:

var matches = session.Query<User, Users_ByName>()
      .Where(x => x.FirstName == "Ayende" || x.LastName == "Eini")
      .ToList();

Assuming that we have a user with the first name “Ayende” and another user with the last name “Eini”, this will find both of them, but will rank the user with the name “Ayende” first.

Let us see another variant, we have a multi map index for users and accounts, both are searchable by name, but we want to ensure that accounts are more important than users. We can do that using the following index:

public class UsersAndAccounts : AbstractMultiMapIndexCreationTask
{
    public UsersAndAccounts()
    {
        AddMap<User>(users =>
                     from user in users
                     select new {Name = user.FirstName}
            );
        AddMap<Account>(accounts =>
                        from account in accounts
                        select new {account.Name}.Boost(3)
            );
    }
}

If we have a query that matches both users and accounts, this will make sure that the accounts come first.

And finally, a really interesting use case is ranking a document higher based on the entity itself. For example, we want to rank customers who ordered a lot from us higher than other customers. We can do that using the following index:

public class Accounts_Search : AbstractIndexCreationTask<Account>
{
    public Accounts_Search()
    {
        Map = accounts =>
              from account in accounts
              select new
              {
                  account.Name
              }.Boost(account.TotalIncome > 10000 ? 3 : 1);
    }
}

This way, we get the more important customers first. This is really one of those things that add polish to a system, the sort of thing that makes users sit up and take notice.


Limit your abstractions: So what is the whole big deal about?

When I started out, I pointed out that I truly dislike this type of architecture:

image

And I said that I would much rather have an architecture with a far more limited set of abstractions, and I gave this example:

  1. Controllers
  2. Views
  3. Entities
  4. Commands
  5. Tasks
  6. Events
  7. Queries

That is all nice in theory, but let us talk in practice, shall we? How do we actually write code that actually uses this model?

Let us look at some code that uses this type of architecture. This is from the C# port of the same codebase, available here.

public class CargoAdminController : BaseController
{
  [AcceptVerbs(HttpVerbs.Post)]
  public ActionResult Register(
      [ModelBinder(typeof (RegistrationCommandBinder))] RegistrationCommand registrationCommand)
  {
      DateTime arrivalDeadlineDateTime = DateTime.ParseExact(registrationCommand.ArrivalDeadline, RegisterDateFormat,
                                                             CultureInfo.InvariantCulture);

      string trackingId = BookingServiceFacade.BookNewCargo(
          registrationCommand.OriginUnlocode, registrationCommand.DestinationUnlocode, arrivalDeadlineDateTime
          );

      return RedirectToAction(ShowActionName, new RouteValueDictionary(new {trackingId}));
  }
}

Does this look good to you? Here is what is actually going on here.

image

You can click on this link to look at what is going on here (I removed some stuff for clarity’s sake).

This stuff is complex. More to the point, it doesn’t read naturally, and it is hard to make a change without modifying a lot of code. This is scary.

We have a lot of abstractions here: services and repositories and facades and what not. (Mind, each of those things is an independent abstraction, not a common one.)

In my next post, I’ll show how to refactor this to a much saner model.


Limit your abstractions: All cookies looks the same to the cookie cutter

One of the major advantages of limiting the number of abstractions you have is that you end up with a lot less “infrastructure” code. This is in quotes because a lot of the time, I see this type of code doing things like this:

public class BookingServiceImpl : IBookingService  
{

  public IList<Itinerary> RequestPossibleRoutesForCargo(TrackingId trackingId)
  {
    Cargo cargo = cargoRepository.Find(trackingId);

    if (cargo == null)
    {
      return new List<Itinerary>();
    }

    return routingService.FetchRoutesForSpecification(cargo.RouteSpecification);
  }
  
}

I don’t want to see stuff like that. Instead, I want to be able to go into any piece of code and figure out, from what it is, what it must be doing. All my code follows fairly similar patterns, and the only differences are actual business differences.

Here is the list of common abstractions that I gave before, this time, I am going to go over each one and explain it.

  1. Controllers – Stand at the edge of the system and manage interaction with the outside world. Can be MVC controllers, MVVM models, WCF services.
  2. Views – The actual UI logic that is being executed. Can be MVC views, XAML, or real UI code (you know, that old WinForms stuff).
  3. Entities – Data that is being persisted.
  4. Commands – A packaged command to do something that will execute immediately. (Usually invoked by controllers.)
  5. Tasks – A packaged execution that will be executed at a later point in time (usually async), after the current operation has completed.
  6. Events – Something that happened in the system that is interesting and requires action. A common place for business logic and interaction.
  7. Queries – A packaged query to be executed immediately. Usually only fairly complex ones get promoted to an actual query object.

There might be a few others in your system, but for the most part, you would see those types of things over and over and over again.

Oh, sure, you might have other things as well, but those should be rare. If you need to display things in multiple currencies, interacting with a currency service is something that you would need to do often; by all means, make it easy to do (how you do that is usually not important). But the important thing to remember is that those sorts of things are one-offs, and they should remain one-offs, not the way you structure the entire app.

The reason this is important is that once you have this common infrastructure and shape (for lack of a better word), you can start working at a very rapid pace, without being distracted, and making changes becomes easy. Because all of your architecture goes through the same central pipes, shifting where they are going is easy to do. You don’t have to drag along a rigid system made of a lot of small individual pieces, after all.
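To make the list above a bit more concrete, here is a minimal sketch of what one of those abstractions, a promoted query object, might look like. All the names here (`Query<TIn, TOut>`, `OverdueCargoQuery`, the `Cargo` shape) are invented for illustration; they are not from the post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A "fairly complex" query promoted to its own object, per the Queries
// abstraction above. In a real app the base class would carry session
// or caching infrastructure; here it only defines the shape.
var cargos = new List<Cargo>
{
    new Cargo { TrackingId = "A1", Eta = DateTime.UtcNow.AddDays(-2) },
    new Cargo { TrackingId = "B2", Eta = DateTime.UtcNow.AddDays(3) },
};

var overdue = new OverdueCargoQuery { AsOf = DateTime.UtcNow }.Execute(cargos);
Console.WriteLine(string.Join(",", overdue.Select(c => c.TrackingId))); // A1

public class Cargo
{
    public string TrackingId { get; set; }
    public DateTime Eta { get; set; }
}

public abstract class Query<TIn, TOut>
{
    public abstract TOut Execute(TIn source);
}

public class OverdueCargoQuery : Query<IEnumerable<Cargo>, List<Cargo>>
{
    public DateTime AsOf { get; set; }

    // Everything specific to this query lives in one named, testable place.
    public override List<Cargo> Execute(IEnumerable<Cargo> cargos)
    {
        return cargos.Where(c => c.Eta < AsOf).ToList();
    }
}
```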


Limit your abstractions: Commands vs. Tasks, did you forget the workflow?

image

On my last post, I outlined the major abstractions that I tend to use in my applications.

  1. Controllers
  2. Views
  3. Entities
  4. Commands
  5. Tasks
  6. Events
  7. Queries

I also said that I like Tasks much more than Commands, and I’ll explain why in the future. When talking about tasks, I usually talk about something that is based on this code. This gives us the ability to write code such as this:

public class AssignCargoToRoute : BackgroundTask
{
  public Itinerary Itinerary { get;set; }
  public TrackingId TrackingId { get;set; }

  public override void Execute()
  {
    // the actual work of assigning the cargo to the route goes here
  }
}

On the face of it, this is very similar to what we had before. So why am I so much in favor of tasks rather than commands?

Put simply, the promises that they make are different. A command will execute immediately; this is good when we are encapsulating a common or complex piece of logic. We give it a meaningful name and move on with our lives.

The problem is that in many cases, executing immediately is something that we don’t want. Why is that?

Well, what happens if this can take a while? What if this requires touching a remote resource (one that can’t take part in our transaction)? What happens if we want this to execute, but only if the entire operation has been successful? How do we handle errors? What happens when the scenario calls for a complex workflow? Can I partially succeed in what I am doing? Can you have a compensating action if some part fails? All of those scenarios basically boil down to “I don’t want to execute it now, I want the execution to be managed for me”.

Think about the scenario that we actually have here. We need to assign a cargo to a route. But what does that mean? In the trivial example, we do that by updating some data in our local database. But in a real world scenario, something like that tends to be much more complex. We need to calculate shipping charges, check the manifest, verify that we have all the proper permits, etc. All of that takes time, and usually collaboration with external systems.

For the most part, I find that real world systems require a lot more tasks than commands. Mostly because it is actually rare to have complex interactions inside your own system. If you do, you have to be cautious that you aren’t adding too much complexity. It is the external interactions that tend to make life… interesting.

This has implications for how we build the system, because we don’t assume immediate execution, or the temporal coupling that comes with it.
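A minimal sketch of that "managed execution" promise: tasks registered during an operation only run once the whole operation has completed successfully. The `UnitOfWork` and `LoggingTask` names here are invented for illustration; the actual implementation linked above is tied to the session infrastructure:

```csharp
using System;
using System.Collections.Generic;

// Sketch: tasks queued during an operation fire only on successful
// completion, which is the promise that distinguishes them from commands.
var executed = new List<string>();

var unit = new UnitOfWork();
unit.RegisterTask(new LoggingTask("assign cargo to route", executed));
unit.RegisterTask(new LoggingTask("recalculate shipping charges", executed));

// Nothing has run yet; tasks are deferred until the operation completes.
Console.WriteLine(executed.Count); // 0
unit.Complete(success: true);
Console.WriteLine(string.Join("; ", executed));

public abstract class BackgroundTask
{
    public abstract void Execute();
}

public class LoggingTask : BackgroundTask
{
    private readonly string name;
    private readonly List<string> log;
    public LoggingTask(string name, List<string> log) { this.name = name; this.log = log; }
    public override void Execute() => log.Add(name);
}

public class UnitOfWork
{
    private readonly List<BackgroundTask> tasks = new List<BackgroundTask>();

    public void RegisterTask(BackgroundTask task) => tasks.Add(task);

    // A real system would hand these to an async executor with error
    // handling and retries; inline execution keeps the sketch self-contained.
    public void Complete(bool success)
    {
        if (!success) return; // a failed operation discards its tasks
        foreach (var task in tasks) task.Execute();
        tasks.Clear();
    }
}
```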


Limit your abstractions: You only get six to a dozen in the entire app

Let us take a look at another part of the DDD sample application. This time, the booking service.

image

One thing that is really glaring at me is that we have a mix of both commands and queries in this interface. Also, just consider the name. It is a booking service, but it doesn’t seem to have an actual meaning in the application itself. There is no entity named Booking, and except for BookNewCargo, there is no mention of booking anywhere in the application.

You know what, maybe there is logic in the RequestPossibleRoutesForCargo that is actually meaningful beyond a simple query. Such as charging the customer for the calculation, or reserving space on the route that we selected, etc.

At any rate, I don’t like this service at all. In fact, any time that you have something that is called XyzService, you ought to suspect it.  Let me take this one step further. I don’t like that it is an interface, and I don’t like how it is composed. I don’t see it as an independent thing. Going further than that, I don’t really see a reason why we would need an interface here. You might have noticed what the title of this series is. I want to limit abstraction, and IBookingService is an abstraction, one that I don’t see any value in.

In most applications, I like to have a very small number of abstractions. Usually on the order of half a dozen to a dozen (tops!). I usually think about them like this:

  1. Controllers
  2. Views
  3. Entities
  4. Commands
  5. Tasks
  6. Events
  7. Queries

Sometimes you have a few more, but those are the major ones. Controllers, Views and Entities are fairly obvious, I would imagine. But what about the rest?

A command is a packaged “thing” that happens immediately. Tasks are very much like commands, except that they don’t have an explicit execution date, and events allow you to build smarts into the system. We have already seen how I handle events in the previous posts in this series. And queries should be fairly obvious as well.

Let us tackle Commands now, and then explain why they are bad later on.

When I said that I don’t want abstractions, I meant that I don’t want an interface and an implementation, and some way to connect the two, etc. I don’t really see a lot of value in that for the common case.

Let us break it apart into commands, which would give us this:

public class AssignCargoToRoute : Command
{
  public Itinerary Itinerary { get;set; }
  public TrackingId TrackingId { get;set; }

  public override void Execute()
  {
    // the actual work of assigning the cargo to the route goes here
  }
}

public class BookNewCargo : Command
{
  public UnLocode Origin {get;set;}
  public UnLocode Destination { get;set; }
  public DateTime ArrivalDeadline {get;set;}

  public TrackingId Result {get;set;}

  public override void Execute()
  {
    // the actual booking logic goes here
  }
}

I am very fond of having a base class for those types of things. The base class provides me with the infrastructure support for the command in question.
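The post does not show that base class, so here is a hedged sketch of what such infrastructure might look like. The `CommandExecutor` and the fabricated tracking id are assumptions for illustration only:

```csharp
using System;

// Sketch of a Command base class plus the infrastructure that executes it.
// Real infrastructure would open a session / transaction around Execute
// and roll back on error; this version just shows the shape.
var cmd = new BookNewCargo { Origin = "USNYC", Destination = "SESTO" };
CommandExecutor.ExecuteCommand(cmd);
Console.WriteLine(cmd.Result);

public abstract class Command
{
    public abstract void Execute();
}

public static class CommandExecutor
{
    public static void ExecuteCommand(Command command)
    {
        // session/transaction setup would go here
        command.Execute();
        // commit would go here
    }
}

public class BookNewCargo : Command
{
    public string Origin { get; set; }
    public string Destination { get; set; }
    public string Result { get; private set; }

    public override void Execute()
    {
        // real booking logic goes here; we just fabricate a tracking id
        Result = "TRK-" + Origin + "-" + Destination;
    }
}
```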

In my next post, I’ll go over why I don’t like this approach, and discuss other ways to structure things so it is more suitable for an actual application.


The economics of continuous deployment

One of the things that I did, almost by accident, when we started Hibernating Rhinos was to create a CI server and a public daily build server. And every single successful build ended up in customer hands. That was awesome in many respects, it removed a lot of the “we have got to make a new release” pressure, because we were making new releases, sometimes multiple times a day.

When we started with RavenDB, it was obvious to me that this was what we were going to do with it as well, because the advantages of this approach are so clear. With RavenDB, we needed a two stage system, but still, every single build gets into customers’ hands.

Awesome, great, outstanding, exceptional and other such synonyms. As long as you look at this from one angle, the one in which we are only concerned about the technical challenges of delivering software. The problem is that there are additional things to note here. Economic challenges.

Let us take the profiler as a good example. It was released in beta on Jan 1, 2009, and since then we have had 920 separate builds, adding a ton of new features and capabilities, improving performance, making things smoother and in general making it a better product.

That is over 3 years without a major release, mostly because we never had the need to do one; we kept delivering software on a day-to-day basis.

During that time, we delivered features such as viewing the result set, checking the query plan of a query (in all major databases), exporting the entire session to HTML so you can send it to your DBA, CI integration and so much more. It has been wonderful.

Except… this has one implication that I didn’t think of at the time. If you bought NH Prof on Jan 1, 2009, you got 3 years of product updates for no additional cost. And unless we create a new major version, you can keep using the software, including all the updates and improvements, without paying.

That is great for the very early customers, but not so good for the people who need to eat so they can work on the profiler. Let us think about the implications of this a bit more, okay?

In order for us to actually make money, we have to:

  • keep expanding our one-off customer base, which is going to hit a limit at some point.
  • create a new version, getting the old customers to purchase the updates.

Seems simple, right? This is what most companies do, and how most software is sold. You get a license for version 1 and you buy a license for version 2.

So far, so good. But let us consider the implications of that. In order to get the old users to buy the new version, I have to put some really nice stuff in it. Which means that I have to do a lot of “secret” development, because I can’t just release it in our usual continuous deployment mode. That sucks. And it also means that features that are already coded are actually disabled, because we defer them to the next version.

So, the next version of the profilers is going to have to have some interesting features to get people to buy it. One of them is production profiling. It has actually been around for quite a while. It has simply been #ifdef’ed out of the product, because it is something that we keep for the next version.

I just checked, and I was acutely surprised by what I found. The initial work for production profiling was done in Jan 2010, and it has been working since then. I got sidetracked with RavenDB, so I never had the chance to complete the rest of the features for 2.x and release them all.

In mid 2010 we started experimenting with subscriptions. Instead of having a one time payment model, we moved to pay as you go. So as long as you were using the profiler, you were paying for it, and in return, we provided all of those new features.

I have been thinking about this a lot lately. I strongly lean toward making the next version of the profiler (coming soon, and it will have a bunch of nice features) subscription only.

My current thinking is to allow two modes of buying the product: a monthly/yearly subscription, and a one time fee that gives you 18 months of usage (and doesn’t re-charge). That would allow us to keep producing software in incremental steps, without having to go away for a while and work in secret on big ticket features just so we can have enough stuff to put on the “why you should buy 2.x” list.

I would appreciate any feedback that you may have.


Limit your abstractions: Application Events–event processing and RX

In my last post, I mentioned that this is actually an event processing system, so we might as well use actual event processing and see what we can gain from it. I chose to use Rx (Reactive Extensions), which can turn a series of events into a LINQ statement. This is incredibly powerful, and has some interesting implications when you combine it with your architecture. In particular, let us see what we can get when we set out to replace this with an Rx based event processing style.

image

We can get to something like this very easily:

public class CargoProcessor : EventsProcessor
{
    public CargoProcessor()
    {
        On<Cargo>(cargos =>
            from cargo in cargos
            where cargo.Delivery.Misdirected
            select MisdirectedCargo(cargo)
            );

        On<Cargo>(cargos =>
            from cargo in cargos
            where cargo.Delivery.UnloadedAtDestination
            select CargoArrived(cargo)
        );
    }

    private object CargoArrived(Cargo cargo)
    {
        // handle event
        return null;
    }

    private object MisdirectedCargo(Cargo cargo)
    {
        // handle event
        return null;
    }
}

We use Rx to handle the LINQ processing over the events, and in EventsProcessor we have very little code, probably just:

    public class EventsProcessor
    {
        private readonly List<Func<IObservable<object>, IObservable<object>>>  actions = new List<Func<IObservable<object>, IObservable<object>>>();

        protected void On<T>(Func<IObservable<T>, IObservable<object>> action)
        {
            actions.Add(observable => action(observable.OfType<T>()));
        }

        public void Execute(IObservable<object> observable)
        {
            foreach (var action in actions)
            {
                action(observable).Subscribe();
            }
        }
    }

Elsewhere in the code, we set up the actual Observable that we pass to all the EventsProcessors. The major advantage that we have with this style is that we have a natural syntax to do selection on the events that interest us, including fairly complex ones. We still have an easy time creating new EventsProcessors if we want, but because the code for defining the selection is so compact, we can usually put related stuff together, which is going to be very helpful for making sure that the codebase is readable.
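That wiring might look something like the following fragment. It is a sketch, not a standalone program: it assumes the Rx (System.Reactive) package, and reuses the `CargoProcessor`, `EventsProcessor` and `Cargo` types from the code above; the `events` name is invented:

```csharp
using System.Reactive.Subjects; // from the Rx (Reactive Extensions) package

// A Subject acts as the Observable that we push domain events into,
// and that every EventsProcessor subscribes to.
var events = new Subject<object>();

var processors = new EventsProcessor[] { new CargoProcessor() };
foreach (var processor in processors)
    processor.Execute(events);

// Later, after a cargo has been inspected/updated, publish it; each
// processor's LINQ selections run over the pushed event.
events.OnNext(cargo);
```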

And, naturally, this method extends itself to handling events of multiple types in the same place. For example, if we want to also handle the HandlingEvent, we can do it in place, because it is very much related to the Cargo, it seems.


When you pit RavenDB & SQL Server against one another…

Here is how it works: I hate benchmarks, because they are very easily manipulated. Whenever I am testing performance stuff, I post numbers, but they are usually in reference to themselves (showing improvements).

That said…

Mark Rodseth is a .Net Technical Architect at Fortune Cookie in London, UK, and he did a really interesting comparison between RavenDB & SQL Server. I feel good about posting this because Mark is a totally foreign agent (hm… well, maybe not that) but he has no association with RavenDB or Hibernating Rhinos.

Also, this post really made my day.

Update: Mark posted more details on his test case.

Mark set up a load test for two identical applications, one using RavenDB, the other one using SQL Server. The results:

SQL Load Test
Transactions: 111,014 (Transaction = Single Get Request)
Failures: 110,286 (Any 500 or timeout)

And for RavenDB ?

RavenDB Load Test
Transactions: 145,554 (Transaction = Single Get Request)
Failures: 0 (Any 500 or timeout)

And now that is pretty cool.

Limit your abstractions: Application Events–Proposed Solution #2–Cohesion

In my previous post, I spoke about ISP and how we can replace the following code with something that is easier to follow:

image

I proposed something like:

public interface IHappenOn<T>
{
   void Inspect(T item);
}

Which would be invoked using:

container.ExecuteOnAll<IHappenOn<Cargo>>(i=>i.Inspect(cargo));

Or something like that.

Which leads us to the following code:

public class CargoArrived : IHappenOn<Cargo>
{
  public void Inspect(Cargo cargo)
  {
    if(cargo.Delivery.UnloadedAtDestination == false)
      return;
      
    // handle event
  }
}

public class CargoMisdirected : IHappenOn<Cargo>
{
  public void Inspect(Cargo cargo)
  {
    if(cargo.Delivery.Misdirected == false)
      return;
      
    // handle event
  }
}

public class CargoHandled : IHappenOn<HandlingEvent>
{
   // etc
}

public class EventRegistrationAttempt : IHappenOn<HandlingEventRegistrationAttempt>
{
  // etc
}

But I don’t really like this code, to be perfectly frank. It seems to me like there isn’t really a good reason why CargoArrived and CargoMisdirected are located in different classes. It is likely that there are going to be a lot of commonalities between the different types of handling events on cargo. We might as well merge them together for now, giving us:

public class CargoHappened : IHappenOn<Cargo>
{
  public void Inspect(Cargo cargo)
  {
    if(cargo.Delivery.UnloadedAtDestination)
      CargoArrived(cargo);
      
    
    if(cargo.Delivery.Misdirected)
      CargoMisdirected(cargo);
      
  }
  
  public void CargoArrived(Cargo cargo)
  {
    // handle event
  }
  
  public void CargoMisdirected(Cargo cargo)
  {
    //handle event
  }
}

This code puts a lot of the cargo handling in one place, making it easier to follow and understand. At the same time, the architecture gives us the option of splitting it into different classes at any time. We aren’t going to end up with a God class for cargo handling. But as long as it makes sense, we can keep them together.

I like this style of event processing, but we can probably do a better job at it if we actually used event processing semantics here. I’ll discuss that in my next post.


Limiting your abstractions: Reflections on the Interface Segregation Principle

I have found two definitions for ISP (in this post, I assume that you know what ISP is, if you don’t, look at the links below).

This one I can fully agree with:

Clients should not be forced to depend upon interfaces that they don't use.

And this one I strongly disagree with:

Many client specific interfaces are better than one general purpose interface

For fun, both of those links are from Object Mentor.

It would be more accurate to say that I don’t so much disagree with the intent of the second statement as disagree with the wording. One (literal) way of understanding the second statement is to say that we want specific interfaces over generic ones. And I would strongly disagree. An interface is an abstraction (for the most part), and I want to reduce the number of abstractions that I have in my system.

Let us look at an example, based on my recent posts in this series.

image

In my previous post, I said that I want to remove both CargoHasArrived and CargoWasMisdirected in favor of IHappenOnCargoInspection. But what about the two other methods?

Would ISP force me to have IHappenOnEventRegistrationAttempt and IHappenOnCargoHandling?

Sure, those are many specific interfaces, but are they really better than something like IHappenOn<T>? Is there something truly meaningful that gets lost when we have the generic interface?

I would say that this isn’t the case. Furthermore, I would actually state that having a single interface will lead to more maintainable code, because there are fewer abstractions in the code. The codebase is more cohesive and easier to understand.

But I’ll talk more about cohesion in my next post.


Send me a patch for that

This post is in reply to Hadi’s post. Please go ahead and read it.

Done? Great, so let me try to respond, this time, from the point of view of someone who regularly asks for patches / pull requests.

Here are a few examples.

To make things more interesting, the project that I am talking about now is RavenDB which is both Open Source and commercial. Hadi says:

Numerous times I’ve seen reactions from OSS developers, contributors or merely a simple passer by, responding to a complaint with: submit a patch or well if you can do better, write your own framework. In other words, put up or shut up.

Hadi then goes on to explain exactly why this is a high barrier for most users.

  • You need to familiarize yourself with the codebase.
  • You need to understand the source control system that is used and how to send a patch / pull request.

And I would fully agree with Hadi that those are stumbling blocks. I can’t speak for other people, but in our case, that is the intention.

Nitpicker corner: I am speaking explicitly and only about features here. Bugs get fixed by us (unless the user has already submitted a fix as well).

Put simply, there is an issue of priorities here. We have a certain direction in which we want to take the project. And in many cases, users want things that are out of scope for us for the foreseeable future. Our options then become:

  • Sorry, ain’t going to happen.
  • Sure, we will push aside all the work that we intended to do to do your thing.
  • No problem, we added that to the queue, expect it in 6 – 9 months, if we will still consider it important then.

None of which is an acceptable answer from our point of view.

Case in point: facets support in RavenDB was something that was requested a few times. We never did it because it was out of scope for our plan. RavenDB is a database server, not a search server, and we weren’t really sure how complex this would be or how to implement it. Basically, this was an expensive feature that wasn’t in the major feature set that we wanted. The answer that we gave people was “send me a pull request for that”.

To be clear, this is basically an opportunity to affect the direction of the project in a way you consider important. What ended up happening is that Matt Warren took up the task and created an initial implementation, which was then subject to intense refactoring and finally got into the product. You can see the entire conversation about this here. The major difference along the way is that Matt did all the research for this feature, and he had working code. From there, the balance changed. It was no longer an issue of expensive research and figuring out how to do it. It was an issue of having working code and refactoring it so it matched the rest of the RavenDB codebase. That wasn’t expensive, and we got a new feature in.

Here is another story, a case where I flat out didn’t think it was possible. About two years ago Rob Ashton had a feature suggestion (ad hoc queries with RavenDB). Frankly, I thought that this was simply not possible, and after a bit of back and forth, I told Rob:

Let me rephrase that.
Dream up the API from the client side to do this.

Rob went away for a few hours, and then came back with a working code sample. I had to pick my jaw off the floor using both hands. That feature got a lot of priority right away, and is a feature that I routinely brag about when talking about RavenDB.

But let me come back again to the common case: a user requests something that isn’t in the project plan. Now, remember, requests are cheap. From the point of view of the user, it doesn’t cost anything to request a feature. From the point of view of the project, it can cost a lot. There is research, implementation, debugging, backward compatibility, testing and continuous support associated with just about any feature you care to name.

And our options whenever a user makes a request that is out of line with the project plan are:

  • Sorry, ain’t going to happen.
  • Sure, we will push aside all the work that we intended to do to do your thing.
  • No problem, we added that to the queue, expect it in 6 – 9 months, if we will still consider it important then.

Or, we can also say:

  • We don’t have the resources to currently do that, but we would gladly accept a pull request to do so.

And at that point, the user is faced with a choice. He can either say:

  • Oh, well, it isn’t important to me.
  • Oh, it is important to me, so I had better do it myself.

In other words, it shifts the prioritization to the user, based on how important that feature is to them.

We recently got a feature request to support something like this:

session.Query<User>()
   .Where(x=> searchInput.Name != null && x.User == searchInput.Name)
   .ToArray();

I’ll spare you the details of just how complex it is to implement something like that (especially when it can also be things like searchInput.Age > 18). But the simple workaround for that is:

var q = session.Query<User>();
if(searchInput.Name != null)
  q = q.Where(x=> x.User == searchInput.Name);

q.ToArray();

Supporting the first one is complex, and there is a simple workaround that the user can use (and I like the second option from the point of view of readability as well).

That sort of thing gets a “a pull request for this feature would be appreciated”. Because the alternative is to slam the door in the user’s face.

Limit your abstractions: Application Events–Proposed Solution #1

In my previous post, I explained why I really don’t like the following.

image

public class CargoInspectionServiceImpl : ICargoInspectionService 
{
  // code redacted for simplicity

 public override void InspectCargo(TrackingId trackingId)
 {
    Validate.NotNull(trackingId, "Tracking ID is required");

    Cargo cargo = cargoRepository.Find(trackingId);
    if (cargo == null)
    {
      logger.Warn("Can't inspect non-existing cargo " + trackingId);
      return;
    }

    HandlingHistory handlingHistory = handlingEventRepository.LookupHandlingHistoryOfCargo(trackingId);

    cargo.DeriveDeliveryProgress(handlingHistory);

    if (cargo.Delivery.Misdirected)
    {
      applicationEvents.CargoWasMisdirected(cargo);
    }

    if (cargo.Delivery.UnloadedAtDestination)
    {
      applicationEvents.CargoHasArrived(cargo);
    }

    cargoRepository.Store(cargo);
 }
}

Now, let us see one proposed solution for that. We can drop the IApplicationEvents.CargoHasArrived and IApplicationEvents.CargoWasMisdirected, instead creating the following:

public interface IHappenOnCargoInspection
{
   void Inspect(Cargo cargo);
}

We can have multiple implementations of this interface, such as this one:

public class MisdirectedCargo : IHappenOnCargoInspection
{
   public void Inspect(Cargo cargo)
   {
        if(cargo.Delivery.Misdirected == false)
             return;

        // code to handle misdirected cargo.
   }
}

In a similar fashion, we would have a CargoArrived implementation, and the ICargoInspectionService would be tasked with managing the implementations of IHappenOnCargoInspection, probably through a container. Although I would probably replace it with something like:

container.ExecuteOnAll<IHappenOnCargoInspection>(i=>i.Inspect(cargo));

All in all, it is a simple method, but it means that now the responsibility to detect and act is centralized in each cargo inspector implementation. If the detection of misdirected cargo changes, we know that there is just one place to make that change. If we need a new behavior, for example for late cargo, we can do that by introducing a new class which implements the interface. That gives us the Open Closed Principle.
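The ExecuteOnAll call can be sketched quite simply. Here, `SimpleContainer` is an invented stand-in for a real IoC container, and `Cargo` is flattened (no Delivery object) just to keep the sketch self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: resolve every implementation of an interface from the container
// and invoke an action on each one. Each inspector decides for itself
// whether the event applies to it.
var container = new SimpleContainer();
var log = new List<string>();
container.Register<IHappenOnCargoInspection>(new MisdirectedCargo(log));
container.Register<IHappenOnCargoInspection>(new CargoArrived(log));

var cargo = new Cargo { Misdirected = true, UnloadedAtDestination = false };
container.ExecuteOnAll<IHappenOnCargoInspection>(i => i.Inspect(cargo));
Console.WriteLine(string.Join(",", log)); // misdirected

public class Cargo
{
    public bool Misdirected { get; set; }
    public bool UnloadedAtDestination { get; set; }
}

public interface IHappenOnCargoInspection
{
    void Inspect(Cargo cargo);
}

public class MisdirectedCargo : IHappenOnCargoInspection
{
    private readonly List<string> log;
    public MisdirectedCargo(List<string> log) { this.log = log; }
    public void Inspect(Cargo cargo)
    {
        if (!cargo.Misdirected) return;
        log.Add("misdirected"); // handle misdirected cargo
    }
}

public class CargoArrived : IHappenOnCargoInspection
{
    private readonly List<string> log;
    public CargoArrived(List<string> log) { this.log = log; }
    public void Inspect(Cargo cargo)
    {
        if (!cargo.UnloadedAtDestination) return;
        log.Add("arrived"); // handle arrived cargo
    }
}

public class SimpleContainer
{
    private readonly Dictionary<Type, List<object>> registrations = new();

    public void Register<T>(T instance)
    {
        if (!registrations.TryGetValue(typeof(T), out var list))
            registrations[typeof(T)] = list = new List<object>();
        list.Add(instance);
    }

    public void ExecuteOnAll<T>(Action<T> action)
    {
        if (!registrations.TryGetValue(typeof(T), out var list)) return;
        foreach (var item in list.Cast<T>()) action(item);
    }
}
```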

This is better, but I still don’t like it. There are better methods than that, but we will discuss them in another post.


Limit your abstractions: Application Events–what about change?

In my previous post, I showed an example of application events and asked what is wrong with them.

image

public class CargoInspectionServiceImpl : ICargoInspectionService 
{
  // code redacted for simplicity

 public override void InspectCargo(TrackingId trackingId)
 {
    Validate.NotNull(trackingId, "Tracking ID is required");

    Cargo cargo = cargoRepository.Find(trackingId);
    if (cargo == null)
    {
      logger.Warn("Can't inspect non-existing cargo " + trackingId);
      return;
    }

    HandlingHistory handlingHistory = handlingEventRepository.LookupHandlingHistoryOfCargo(trackingId);

    cargo.DeriveDeliveryProgress(handlingHistory);

    if (cargo.Delivery.Misdirected)
    {
      applicationEvents.CargoWasMisdirected(cargo);
    }

    if (cargo.Delivery.UnloadedAtDestination)
    {
      applicationEvents.CargoHasArrived(cargo);
    }

    cargoRepository.Store(cargo);
 }
}

This is very problematic code, from my point of view, for several reasons. Look at how it allocates responsibilities. IApplicationEvents is supposed to execute the actual event, but deciding when to execute the event is left to the caller. I said several reasons, but this is the main one; all the others flow from it.

What happens when the rules for invoking an event change? What happens when we want to add a new event?

The way this is handled is broken. It violates the Open Closed Principle, it violates the Single Responsibility Principle and it frankly annoys me.

Can you think about ways to improve this?

I’ll discuss some in my next post.


RavenDB workshop in NDC, Oslo 4-5th June

The RavenDB workshop is coming to Oslo, Norway in June!

Join us for an intensive RavenDB hands-on workshop just before the great NDC conference starts.

During the first day of this workshop we will get to know RavenDB and its core concepts, get comfortable with its API, learn how to build and customize indexes, and how to correctly model data for use in a document database.

After getting familiar with all the basics on the first day, during the second day we will build on that knowledge to properly grok Map/Reduce, Multi-maps and other advanced usages of indexes, learn how to extend RavenDB, and cover the various options for scaling out.

More details on the workshop and the conference can be found here.
