Adaptive Domain Models with Rhino Commons

time to read 4 min | 645 words

Udi Dahan has been talking about this for a while now. As usual, he makes sense, but I am working in a different enough context that it takes time to assimilate it.

At any rate, we have been talking about this for a few days, and I finally sat down and decided that I really need to look at it with code. The result of that experiment is that I like this approach, but I am still not 100% sold.

The first idea is that we need to decouple the service layer from our domain implementation. But why? The domain layer is under the service layer, after all; surely the service layer should be able to reference the domain. The reasoning here is that the domain model plays several different roles in most applications. It is the preferred way to access our persistent information (though the entities themselves should not be aware of persistence), it is the central place for business logic, it is the representation of our notions about the domain, and much more that I am probably leaving aside.

The problem is that there is a dissonance between those requirements. Let us take a simple example of an Order entity.

[Image: the Order entity and its operations]

As you can see, Order has several things that I can do with it. It can accept a new line, and it can calculate the total cost of the order.

But those are two distinct responsibilities that are based on the same entity. What is more, they have completely different persistence-related requirements.

I talked about this issue here, over a year ago.

So we need to split the responsibilities, so we can take care of each of them independently. But it doesn't make sense to split the Order entity itself, so instead we will introduce purpose-driven interfaces. Now, when we want to talk about the domain, we can view certain aspects of the Order entity in isolation.

This leads us to the following design:

[Image: the Order entity implementing its purpose-driven interfaces]
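In code, the design might look something like this. This is only a sketch of the idea: IOrderCostCalculator shows up later in this post, but IOrderLinesHolder, OrderLine, and the member signatures are names I made up for illustration.

using System.Collections.Generic;

public interface IOrderLinesHolder
{
	void AddOrderLine(OrderLine line);
}

public interface IOrderCostCalculator
{
	decimal CalculateTotalCost();
}

// The single Order entity implements all of its purpose-driven interfaces.
public class Order : IOrderLinesHolder, IOrderCostCalculator
{
	private readonly IList<OrderLine> lines = new List<OrderLine>();

	public virtual void AddOrderLine(OrderLine line)
	{
		lines.Add(line);
	}

	public virtual decimal CalculateTotalCost()
	{
		decimal total = 0;
		foreach (OrderLine line in lines)
			total += line.Cost;
		return total;
	}
}

public class OrderLine
{
	public virtual decimal Cost { get; set; }
}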

And now we can refer to the separate responsibilities independently. Doing this based on the type opens the door to the non-invasive API approaches that I talked about before. You can read Udi's posts about it to learn more about the concepts. Right now I am more interested in discussing the implementation.

First, the unit of abstraction that we work with is IRepository<T>, as always.

The major change is the introduction of a ConcreteType on the repository. Now it will try to use the ConcreteType instead of the typeof(T) it was created with. This affects all queries done with the repository (of course, if you don't specify a ConcreteType, nothing changes).
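As a rough sketch of what this enables (assuming the repository for IOrderCostCalculator was registered with Order as its ConcreteType, and using the hypothetical CalculateTotalCost member from the sketch above):

public decimal GetTotalCost(Guid orderId)
{
	// The caller only knows about the purpose interface; because the
	// repository's ConcreteType is Order, the query runs against the
	// Order mapping underneath.
	IRepository<IOrderCostCalculator> repository =
		IoC.Resolve<IRepository<IOrderCostCalculator>>();

	IOrderCostCalculator calculator = repository.Get(orderId);
	return calculator.CalculateTotalCost();
}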

The repository got a single new method:

T Create();

This allows you to create new instances of the entity without knowing its concrete type.
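For example (again just a sketch, assuming the repository for IOrderCostCalculator is registered the way the last snippet in this post shows):

IRepository<IOrderCostCalculator> repository =
	IoC.Resolve<IRepository<IOrderCostCalculator>>();

// The repository news up the concrete Order behind the purpose interface;
// the calling code never references Order directly.
IOrderCostCalculator newOrder = repository.Create();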

And that is basically it. Well, not really :-)

I introduced two other concepts as well.

public interface IFetchingStrategy<T>
{
	// Given the ICriteria the repository built for a query on T,
	// return the (possibly modified) criteria that should actually run.
	ICriteria Apply(ICriteria criteria);
}

IFetchingStrategy can interfere with the way queries are constructed. As a simple example, you could build a strategy that forces an eager load of the OrderLines collection when the IOrderCostCalculator is being queried.
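Such a strategy could look more or less like this. A sketch only: I am assuming the repository passes every ICriteria it builds for IOrderCostCalculator through the registered strategies, and that the collection is mapped under the name "OrderLines".

using NHibernate;

public class EagerLoadOrderLinesStrategy : IFetchingStrategy<IOrderCostCalculator>
{
	public ICriteria Apply(ICriteria criteria)
	{
		// Fetch the OrderLines collection eagerly whenever the cost
		// calculation aspect of an order is queried, so summing the
		// lines does not cause a lazy load per line.
		return criteria.SetFetchMode("OrderLines", FetchMode.Eager);
	}
}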

There is no complex configuration involved in setting up an IFetchingStrategy. All you need to do is register your strategies in the container, and let the repository do the rest.
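For example, with the Windsor container that Rhino Commons exposes through IoC.Container, the registration could be as simple as this (again a sketch; adapt it to however you normally register components):

IoC.Container.AddComponent(
	"order.cost.calculator.fetching.strategy",
	typeof (IFetchingStrategy<IOrderCostCalculator>),
	typeof (EagerLoadOrderLinesStrategy));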

However, doesn't this mean that we now need to explicitly register repositories for all our entities (and for all their interfaces)?

Well, yes and no. Technically we need to do that, but we have help: EntitiesToRepositories.Register. We can just put the following call somewhere in the application startup and we are done.

EntitiesToRepositories.Register(
	IoC.Container, 
	UnitOfWork.CurrentSession.SessionFactory, 
	typeof (NHRepository<>),
	typeof (IOrderCostCalculator).Assembly);

And this is it; you can start working with this new paradigm with no extra steps.

As a side benefit, this really paves the way for complex multi-tenant applications.