Ayende @ Rahien

It's a girl

Composite UI, the designer and IoC


I am trying to think of a way to make this view work. It is a single view that contains several child views, each of them responsible for a subset of the functionality in the view. This is the first time in a long while that I am dealing with smart client architecture.

It is surprising to see how much the web affects my thinking. I am having trouble dealing with the stateful nature of things.

Anyway, the problem that I have here is that I want to have a controller with the following constructor:

public class AccountController
{
	public AccountController(
		ICalendarView calendarView,
		ISummaryView summaryView,
		IActionView actionView)
	{
	}
}

The problem is that those views are embedded inside a parent view. And since those sub views are both views and controls inside the parent view, I can't think of a good way to deal with both of them at once.

I think that I am relying on the designer more than is healthy for me. I know how to solve the issue: break those into three views and compose them at runtime. I just... hesitate before doing this, although I am not really sure why.

Another option is to register the view instances when the parent view loads, but that means that the parent view has to know about the IoC container, and that makes me more uncomfortable.

It is a bit more of a problem when I consider that there are two associated calendar views, which compose a single view.

Interesting problem.

Purely declarative DSL

Let us assume that we need to build a quote generation program. This means that we need to generate a quote out of the customer's desires and the system requirements.

The customer's desires can be expressed in this UI:


The system requirements are:

  • The Salary module requires a machine per every 150 users.
  • The Taxes module requires a machine per 50 users.
  • The Vacations module requires the Scheduling Work module.
  • The Vacations module requires the External Connections module.
  • The Pension Plans module requires the External Connections module.
  • The Pension Plans module must be on the same machine as the Health Insurance module.
  • The Health Insurance module requires the External Connections module.
  • The Recruiting module requires a connection to the internet, and therefore requires a firewall from the recommended list.
  • The Employee Monitoring module requires the CompMonitor component.
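The requirements above are simple enough to check mechanically. As a sanity check, here is a minimal sketch in Python (used here as a neutral sketch language; the helper names are mine, the module names come from the list above) that resolves the transitive module requirements and computes the per-module machine counts:

```python
import math

# module -> modules it requires, straight from the requirements above
REQUIRES = {
    "Vacations": ["Scheduling Work", "External Connections"],
    "Pension Plans": ["External Connections"],
    "Health Insurance": ["External Connections"],
    "Employee Monitoring": ["CompMonitor"],
}

# module -> how many users a single machine can serve
USERS_PER_MACHINE = {"Salary": 150, "Taxes": 50}

def resolve(selected):
    """Transitively add every required module to the selection."""
    result = set(selected)
    pending = list(selected)
    while pending:
        for requirement in REQUIRES.get(pending.pop(), []):
            if requirement not in result:
                result.add(requirement)
                pending.append(requirement)
    return result

def machines_for(module, user_count):
    """A machine per every N users, rounded up."""
    return math.ceil(user_count / USERS_PER_MACHINE[module])

print(sorted(resolve({"Vacations", "Salary"})))
print(machines_for("Salary", 300))  # 300 users at 150 per machine -> 2
```

Selecting Vacations pulls in Scheduling Work and External Connections automatically, which is exactly what the DSL needs to express.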

The first DSL that I wrote for it looked like this:

if has( "Vacations" ):
	add "Scheduling"
number_of_machines["Salary"] = (user_count / 150) + 1
number_of_machines["Taxes"] = (user_count / 50) + 1

But this looked like a really bad idea, so I turned to a purely declarative approach, like this one:

specification @vacations:
	requires @scheduling_work
	requires @external_connections
specification @salary:
	users_per_machine 150
specification @taxes:
	users_per_machine 50

specification @pension:
	same_machine_as @health_insurance

The problem with this approach is that I wonder, is this really something that you need a DSL for? You can do the same using XML very easily.

The advantage of a DSL is that we can put logic in it, so we can also do something like:

specification @pension: 
	if user_count < 500:
		same_machine_as @health_insurance
		requires @distributed_messaging_backend
		requires @health_insurance

Which would be much harder to express in XML.

Of course, then we are not in the purely declarative DSL anymore.


Authorization DSL

Here is a tidbit that I worked on yesterday for the DSL book:

operation "/account/login"

if Principal.IsInRole("Administrators"):
	Allow("Administrators can always log in")

if date.Now.Hour < 9 or date.Now.Hour > 17:
	Deny("Cannot log in outside of business hours, 09:00 - 17:00")

And another one:

if Principal.IsInRole("Managers"):
	Allow("Managers can always approve orders")

if Entity.TotalCost >= 10_000:
	Deny("Only managers can approve orders of more than 10,000")
Allow("All users can approve orders less than 10,000")
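One plausible execution model for these rules is an ordered list of checks in which the first Allow or Deny verdict wins (an assumption on my part; the snippets above don't show how the DSL is driven). A small Python sketch of that interpretation, using the order-approval rules:

```python
def evaluate(rules, context):
    """Run the rules in order; the first Allow/Deny verdict wins.

    Each rule returns ('allow', reason), ('deny', reason), or None to
    pass through to the next rule.
    """
    for rule in rules:
        verdict = rule(context)
        if verdict is not None:
            return verdict
    return ("deny", "No rule explicitly allowed the operation")

# the order-approval rules from the second example above
rules = [
    lambda c: ("allow", "Managers can always approve orders")
        if "Managers" in c["roles"] else None,
    lambda c: ("deny", "Only managers can approve orders of more than 10,000")
        if c["total_cost"] >= 10_000 else None,
    lambda c: ("allow", "All users can approve orders less than 10,000"),
]

print(evaluate(rules, {"roles": ["Users"], "total_cost": 12_000}))
```

Keeping the reason next to the verdict is what makes messages like the ones above cheap to produce.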

There is no relation to Rhino Security, just to be clear.

I simply wanted a sample for a DSL, and this seemed natural enough.

Looking for a new development laptop

I am looking at getting a new laptop. This is something that should serve as a development machine, and I am getting tired of waiting for the computer to do its work. As such, I intend to invest in a high end machine, but I am still considering which and what.

Minimum requirements: 4GB RAM, dual core, fast HD, big screen.

Video doesn't interest me, since even the low end cards are more than I would ever need. Weight is also not an issue; I would be perfectly happy with a laptop that came with its own wheelbarrow.

I am seriously thinking about getting a laptop with a solid state drive, and I am beginning to wonder if quad core is worth the price. Then again, I am currently writing dream checks for it, which is why I feel like I can go wild with all the features.

Any recommendations?

ORM += 2

I am going to give a talk about the high end usages of OR/M and what it can do to an application design.

I have about one hour for this, and plenty of topics. I am trying to decide which topics are both interesting and important enough to be included.

Here are some of the topics that I am thinking about:

  • Partial Domain Models
  • Persistent Specifications
  • Googlize your domain model
  • Integration with IoC containers
  • Taking polymorphism up a notch - adaptive domain models
  • Aspect Orientation
  • Future Queries
  • Scaling up and out
    • Distributed Caching
    • Shards
  • Extending the functionality with listeners
    • Off the side reporting
  • Queries as Business Logic
  • Cross Cutting Query Enhancement
    • Filters
  • Dealing with temporal domains

Anything that you like? Anything that you would like me to talk about that isn't here?

From BooBS to Bake

Okay, I renamed Boo Build System to Bake. Now you can cut the jokes and actually integrate it into a PC environment.

The repository is here, although you can just grab the binaries.


Rhino Security: External API

When I thought about Rhino Security, I imagined it with a single public interface that had exactly three methods:

  • IsAllowed
  • AddPermissionsToQuery
  • Why

When I sat down and actually wrote it, it turned out to be quite different. It turns out that you usually want to edit permissions, not just check them. The main interface that you'll deal with is usually IAuthorizationService:


It has the three methods that I thought about (plus overloads), and with the exception of renaming Why() to GetAuthorizationInformation(), it is pretty much how I conceived it. That change was motivated by the desire to follow standard API conventions; Why() isn't really a good method name, after all.

For building the security model, we have IAuthorizationRepository:


This is a much bigger interface, and it composes almost everything that you need to do in order to create the security model that you want. It is getting just a tad too big; another few methods and I'll need to break it in two, I think. I am not quite sure how to do that and keep it cohesive.

Wait, why do we need things like CreateOperation() in the first place? Can't we just create the operation and call Repository<Operation>.Save() ?

No, you can't, this interface is about more than just simple persistence. It is also handling logic related to keeping the security model. What do I mean by that? For example, CreateOperation("/Account/Edit") will actually generate two operations, "/Account" and "/Account/Edit", where the first is the parent of the second.

This interface also ensures that we are playing nicely with the second level cache, which is also important.

I did say that this interface is almost everything, what did I leave out?

The actual building of the permissions, of course:


This utilizes a fluent interface to define the permission. A typical definition would be:



This allows the current user to edit all accounts, and denies all members of the Administrators group permission to edit accounts.
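The fluent definition itself is missing from this archived version of the post, so as a stand-in, here is a Python sketch of what a fluent permission builder along those lines could look like. Every method name here is hypothetical; the real Rhino Security API may differ:

```python
class PermissionBuilder:
    """Hypothetical fluent builder: each call fills in part of a
    permission record, and save() hands it to the permission store."""

    def __init__(self, store):
        self.store = store
        self.record = {}

    def allow(self, operation):
        self.record = {"operation": operation, "allow": True}
        return self

    def deny(self, operation):
        self.record = {"operation": operation, "allow": False}
        return self

    def for_user(self, user):
        self.record["user"] = user
        return self

    def for_group(self, group):
        self.record["users_group"] = group
        return self

    def on_everything(self):
        self.record["entities_group"] = "Everything"
        return self

    def save(self):
        self.store.append(dict(self.record))

permissions = []
PermissionBuilder(permissions).allow("/Account/Edit").for_user("current user").on_everything().save()
PermissionBuilder(permissions).deny("/Account/Edit").for_group("Administrators").on_everything().save()
print(len(permissions))  # 2
```

The point of the fluent style is that the definition reads like the sentence above: allow /Account/Edit for the current user on everything.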

And that sums up all the interfaces that you have to deal with in order to work with Rhino Security.

Next, the required extension points.

NHibernate Queries: Find all users that are members of the same blogs as this user

Let us assume that we have the following model:

  • User
    • n:m -> Blogs
      • n:m -> Users

Given a user, how would you find all the users that are members of all the blogs that the user is a member of?

It turns out that NHibernate makes it very easy:

// the sub query: ids of all users who share a blog with the given user
DetachedCriteria usersForSameBlog = DetachedCriteria.For<User>()
	.CreateCriteria("Blogs")
	.CreateCriteria("Users", "user")
	.Add(Expression.Eq("user.id", user.Id))
	.SetProjection(Projections.Id());

IList<User> users = session.CreateCriteria(typeof(User))
	.Add(Subqueries.PropertyIn("id", usersForSameBlog))
	.List<User>();

And the resulting SQL is:

SELECT this_.Id        AS Id5_0_,
       this_.Password  AS Password5_0_,
       this_.Username  AS Username5_0_,
       this_.Email     AS Email5_0_,
       this_.CreatedAt AS CreatedAt5_0_,
       this_.Bio       AS Bio5_0_
FROM   Users this_
WHERE  this_.Id IN (SELECT this_0_.Id AS y0_
                    FROM   Users this_0_
                           INNER JOIN UsersBlogs blogs4_
                             ON this_0_.Id = blogs4_.UserId
                           INNER JOIN Blogs blog1_
                             ON blogs4_.BlogId = blog1_.Id
                           INNER JOIN UsersBlogs users6_
                             ON blog1_.Id = users6_.BlogId
                           INNER JOIN Users user2_
                             ON users6_.UserId = user2_.Id
                    WHERE  this_0_.Id = @p0)

You need to control the stack

I have fairly strong opinions about the way I build software, and I rarely want to compromise on them. When I want to build good software, I tend to do it with the hindsight of what did not work before. As such, I tend to be... demanding of the tools that I use.

What follows is a list of commit logs that I can directly correlate to requirements from Rhino Security. All of them are from the last week or so.

NHibernate

  • Applying patch (with modifications) from Jesse Napier, to support unlocking collections from the cache.
  • Adding tests to 2nd level cache.
  • Applying patch from Roger Kratz, performance improvements on Guid types.
  • Fixing javaism in dialect method. Supporting subselects and limits in SQLite
  • Adding support for paging in sub queries.
  • Need to handle the generic version of dictionaries as well.
  • Override values in source if they already exist when copying values from dictionary
  • Adding the ability to specify a where parameter on the table generator, which allows to use a single table for all the entities.
  • Fixing bug that occurs when loading two many to many collection eagerly from the same table, where one of them is null.


Castle

  • Fixing the build, will not add an interceptor twice when it was already added by a facility
  • Generic components will take their lifecycle / interceptors from the parent generic handler instead of the currently resolving handler.
  • Adding ModelValidated event, to allow external frameworks to modify the model before the HBM is generated.
  • We shouldn't override the original exception stack

Rhino Tools

  • Adding tests for With.QueryCache(), making sure that With.QueryCache() can be entered recursively. Increased timeout of AsyncBulkInsertAppenderTestFixture so it can actually run on my pitiful laptop.
  • Adding support for INHibernateInitializationAware in ARUnitOfWorkTestContext
  • Adding error handling for AllAssemblies. Adding a way to execute an IConfigurationRunner instance that was pre-compiled.
  • Will not eager load assemblies any more, cause too many problems with missing references that are still valid to run

Without those modifications I would probably still have been able to build the solution I wanted, but it would have had to work around those issues. By having control of the entire breadth and width of the stack, I can make sure that my solution is ideally suited to what I think is the best approach. As an aside, it turns out that other people tend to benefit from that as well.

Future Query Of implemented

It took very little time to actually make this work. I knew there was a reason I liked my stack: it is flexible and easy to work with.

You can check the implementation here, it is about 100 lines of code.

And the test for it:

FutureQueryOf<Parent> futureQueryOfParents = new FutureQueryOf<Parent>(DetachedCriteria.For<Parent>());
FutureQueryOf<Child> futureQueryOfChildren = new FutureQueryOf<Child>(DetachedCriteria.For<Child>());
Assert.AreEqual(0, futureQueryOfParents.Results.Count);

// This also kills the database, because we use an in
// memory one, so we ensure that the code is not
// executing a second query
Assert.AreEqual(0, futureQueryOfChildren.Results.Count);

Interception as an extensibility mechanism

I got a request to allow system-mode for Rhino Security, something like this:

	// in here the security behaves as if you have permission 
	// to do everything
	// queries are not enhanced, etc.

It is not something that I really want to build into Rhino Security itself, so I started to think about how it could be supported from the outside, and I came up with the following solution:

public class AuthorizationServiceWithActAsSystemSupport : IAuthorizationService
{
	private readonly IAuthorizationService inner;

	public AuthorizationServiceWithActAsSystemSupport(IAuthorizationService inner)
	{
		this.inner = inner;
	}

	private bool IsActAsSystem
	{
		get { return true.Equals(Local.Data["act.as.system"]); }
	}

	public bool IsAllowed(IUser user, string operationName)
	{
		if (IsActAsSystem)
			return true;
		return inner.IsAllowed(user, operationName);
	}

	public void AddPermissionsToQuery(IUser user, string operationName, ICriteria query)
	{
		if (IsActAsSystem)
			return; // queries are not enhanced in system mode
		inner.AddPermissionsToQuery(user, operationName, query);
	}

	// .. the rest
}

Now, all we need to do is register it first:

component IAuthorizationService, AuthorizationServiceWithActAsSystemSupport

facility RhinoSecurityFacility

And that is it. This both answers the requirement and doesn't add the functionality that I don't like to the system.

Again, neat.

Is plagiarism the best compliment?

They say that plagiarism is the best compliment, although I have no idea who they are.

It was brought to my attention that this book seems to have lifted more than a few answers directly from my post, not to mention that nearly all the questions are lifted from Scott Hanselman's post.

I don't know if Scott was asked about it, but I certainly wasn't. I skimmed a bit in Google books and I couldn't find any credit section.

I really don't like it.


A while ago I added query batching support to NHibernate, so you can execute multiple queries against the database in a single roundtrip. That was all well and good, except that you need to know, in advance, what you want to batch. This is often the case, but not nearly often enough. Fairly often, I have disparate actions that I would like batched together. It just occurred to me that this is entirely possible to do.

In my Rhino Igloo project, I have a lot of places where I have code very similar to this (except it can go for quite a while):

Users.DataSource = Controller.GetUsers();
Issues.DataSource = Controller.GetIssues();
Products.DataSource = Controller.GetProducts();

I solved the problem of batching those by figuring out in advance what I need and then doing a single batch query for all of them, then handing out the result through each of those methods.

This is complicated and sometimes fragile.

What would happen if I could ignore that and instead built API that looked like this?

public FutureOf<User> GetUsers();
public FutureOf<Issue> GetIssues();
public FutureOf<Product> GetProducts();

Where FutureOf<T> would be defined as:

public class FutureOf<T> : IEnumerable<T>

Now, what is special here, I hear you say. Well, when you start enumerating over a future, it gathers all the future queries that have been created so far, executes them in a single batch against the database, and returns the results.

Where before I needed to take special care to get this result, now it is just a naturally occurring phenomenon :-)
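The mechanics are straightforward to sketch. In this Python toy model (the names are mine, not the Rhino Tools API), each future registers itself in a pending list, and the first enumeration executes everything pending as one batch:

```python
class Future:
    """A query whose execution is deferred until first enumeration."""

    def __init__(self, registry, query):
        self.registry = registry
        self.query = query
        self.results = None

    def __iter__(self):
        if self.results is None:
            self.registry.flush()  # first enumeration runs every pending query
        return iter(self.results)

class FutureRegistry:
    """Collects pending futures and executes them in a single batch."""

    def __init__(self, execute_batch):
        self.execute_batch = execute_batch  # list of queries -> list of result sets
        self.pending = []

    def future(self, query):
        f = Future(self, query)
        self.pending.append(f)
        return f

    def flush(self):
        batch, self.pending = self.pending, []
        for f, rows in zip(batch, self.execute_batch([f.query for f in batch])):
            f.results = rows

batches = []
def fake_db(queries):
    batches.append(queries)  # record the roundtrip
    return [[q + "-row"] for q in queries]

registry = FutureRegistry(fake_db)
users = registry.future("users")
issues = registry.future("issues")
products = registry.future("products")

print(list(users))   # enumerating the first future executes all three queries
print(len(batches))  # 1: a single roundtrip served every future
```

Exactly as described above: no special care is needed at the call site, and the batching falls out naturally.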

Neat, if I say so myself.

Rhino Security: Part II - Discussing the Implementation

I just finished testing an annoying but important feature: NHibernate's second level cache integration with Rhino Security. The security rules are a natural candidate for caching, since they tend to change on an infrequent basis but are queried often. As such, it is obvious why I spent time ensuring that the whole thing works successfully.

At any rate, what I wanted to talk about today is the structure of the framework. You can see the table layout below.

A few things to note:

  • The tables are all in the "security" schema, I find that it makes more sense this way, but you can set it to work with "security_Operation" if you really like (or the DB doesn't support schemas).
  • User is referenced a few times, but is not shown here. As I mentioned earlier, the user entity is an external concept. We are using the mapping rewriting technique that we discussed earlier.

(more below)


Here is the main interface for Rhino Security:


The tasks it performs are:

  • Enhance a query with security information.
  • Get an explanation about why permission was given / denied.
  • Perform an explicit permission check on entity or feature.

The last two are fairly easy to understand, but what does the first one mean? It means that if I want to perform a "select * from accounts" and I enhance the query, I will get this baby:

SELECT THIS_.ID          AS ID4_0_,

       THIS_.NAME        AS NAME4_0_

                       INNER JOIN SECURITY_OPERATIONS OP1_
                         ON THIS_0_.OPERATION = OP1_.ID

                         ON THIS_0_.ENTITIESGROUP = ENTITYGROU2_.ID

                         ON ENTITYGROU2_.ID = ENTITIES7_.GROUPID

              WHERE    OP1_.NAME IN (@p1,@p2)
                        AND (THIS_0_."User" = @p3
                              OR THIS_0_.USERSGROUP IN (@p4))

                              OR (THIS_0_.ENTITYSECURITYKEY IS NULL
                                  AND THIS_0_.ENTITIESGROUP IS NULL))
              ORDER BY THIS_0_.LEVEL DESC,
                       THIS_0_.ALLOW ASC);

Isn't it cute? This is much better than some home grown security systems that I have seen (some of which I have built, actually), since it is stable with regard to the cost of the query it will use. I mentioned before that it is possible to de-normalize the information into a single table, but since that requires a more invasive approach on the application side, and since I have not seen performance issues with this yet, I'll leave it at this point for the moment.

Advanced: Working with non DB backed users

In most applications, the user is a fully fledged entity, even if your authentication happens elsewhere. Just keeping things like preferences will necessitate that, but let us assume that we really do have a non DB user entity.

In this case, we would need to make modifications to the tables (which we generally do anyway, although automatically), and re-map the user references as an IUserType that can get the user from an id, instead of as a foreign key to the entity. I think I'll wait until someone really needs it to make a demo of it.

Anyway, this is the main magic. It was fairly hard to get it right, as a matter of fact. But the query enhancement was one of the things that made security a breeze in my last project. We had a custom security module, but many of the concepts are the same.

In the next part we will talk a bit about IsAllowed() and how it works. There is some nice magic there as well.

NHibernate and the second level cache tips

I have been tearing my hair out today, because I couldn't figure out why something that should have worked didn't (second level caching, obviously).

Along the way, I found out several things that you should be aware of. First of all, let us talk about what the feature is, shall we?

NHibernate is designed as an enterprise OR/M product, and as such, it has very good support for running in web farm scenarios. This support includes running alongside distributed caches, with immediate farm-wide updates. NHibernate goes to great lengths to ensure cache consistency in these scenarios (it is not perfect, but it is very good). A lot of the things that tripped me up today were related to just that: NHibernate was working to ensure cache consistency, and I wasn't aware of that.

The way it works, NHibernate keeps three caches.

  • The entities cache - the entity data is disassembled and then put in the cache, ready to be assembled to entities again.
  • The queries cache - the identifiers of entities returned from queries, but not the data itself (since that is in the entities cache).
  • The update timestamp cache - the last time a table was written to.

The last cache is very important, since it ensures that the cache will not serve stale results.

Now, when we come to actually using the cache, we have the following semantics.

  • Each session is associated with a timestamp on creation.
  • Every time we put query results in the cache, the timestamp of the executing session is recorded.
  • The timestamp cache is updated whenever a table is written to, but in a tricky sort of way:
    • When we perform the actual write, we write a value that is somewhere in the future to the cache. All queries that hit the cache now will not find it, and will hit the DB to get the new data; since we are in the middle of a transaction, they will wait until we finish it. If we are using a low isolation level, and another thread / machine attempts to put the old results back in the cache, it won't hold, because the update timestamp is in the future.
    • When we perform the commit on the transaction, we update the timestamp cache with the current value.
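These semantics are easier to see in code. Here is a Python toy model (a sketch of the behavior described above, not NHibernate code) of the update timestamp cache:

```python
FAR_FUTURE = float("inf")  # stands in for "a timestamp somewhere in the future"

class UpdateTimestampCache:
    """Toy model of the update timestamp semantics described above."""

    def __init__(self):
        self.last_write = {}  # table -> timestamp of the last (visible) write
        self.clock = 0

    def now(self):
        self.clock += 1
        return self.clock

    def begin_write(self, table):
        # during the transaction, the write is recorded as happening in the
        # future, so no cached query touching this table can look fresh
        self.last_write[table] = FAR_FUTURE

    def commit_write(self, table):
        # on commit, record the real timestamp
        self.last_write[table] = self.now()

    def is_up_to_date(self, tables, query_timestamp):
        # a cached result is valid only if no table it touches was
        # written after the result was cached
        return all(self.last_write.get(t, 0) < query_timestamp for t in tables)

cache = UpdateTimestampCache()
session_ts = cache.now()  # a session is stamped at creation

cache.begin_write("Users")  # a transaction starts writing to Users
print(cache.is_up_to_date(["Users"], cache.now()))  # False: the write is "in the future"

cache.commit_write("Users")
print(cache.is_up_to_date(["Users"], session_ts))   # False: the session predates the commit
print(cache.is_up_to_date(["Users"], cache.now()))  # True: a fresh session sees the new data
```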

Now, let us think about the meaning of this, shall we?

If a session has performed an update to a table, committed the transaction, and then executed a cached query, the cache is not valid for it. That is because the timestamp written to the update cache is the transaction commit timestamp, while the query timestamp is the session's timestamp, which obviously comes earlier.

The update timestamp cache is not updated until you commit the transaction! This ensures that you will not read "uncommitted values" from the cache.

Another gotcha is that if you open a session with your own connection, it will not be able to put anything in the cache (all its cached queries will have invalid timestamps!)

In general, these are not things that you need to concern yourself with, but I spent some time today just trying to get tests for the second level cache working, and it took me a while to realize that in the tests I didn't use transactions, and I used the same session for querying as for performing the updates.

Convention based security: A MonoRail Sample

I was asked how I would go about building real world security with the concept of securing operations instead of data.

This is a quick & dirty implementation of the concept by marrying Rhino Security to MonoRail. This is so quick and dirty that I haven't even run it, so take this as a concept, not the real implementation, please.

The idea is that we can map each request to an operation, and use the convention that the "id" parameter has a special meaning, in order to perform operation security that pertains to specific data.

Here is the code:

public class RhinoSecurityFilter : IFilter
{
    private readonly IAuthorizationService authorizationService;

    public RhinoSecurityFilter(IAuthorizationService authorizationService)
    {
        this.authorizationService = authorizationService;
    }

    public bool Perform(ExecuteWhen exec, IEngineContext context, IController controller,
                        IControllerContext controllerContext)
    {
        string operation = "/" + controllerContext.Name + "/" + controllerContext.Action;
        string id = context.Request["id"];
        object entity = null;
        if (string.IsNullOrEmpty(id) == false)
        {
            Type entityType = GetEntityType(controller);
            entity = TryGetEntity(entityType, id);
        }

        if (entity == null)
        {
            if (authorizationService.IsAllowed(context.CurrentUser, operation) == false)
                return false; // returning false from a filter aborts the request
        }
        else
        {
            if (authorizationService.IsAllowed(context.CurrentUser, entity, operation) == false)
                return false;
        }
        return true;
    }
}

It just performs a security check using the /Controller/Action names, and it tries to get the entity from the "id" parameter if it can.

Then, we can write our base controller:

[Filter(ExecuteWhen.BeforeAction, typeof(RhinoSecurityFilter))]
public class AbstractController : SmartDispatcherController
{
}

Now you are left with configuring the security, but you already have a cross cutting security implementation.

As an example, hitting this url: /orders/list.castle?id=15

will perform a security check that you have permission to list the orders of customer 15.

This is pretty extensive, probably overly so. A better alternative would be to define an attribute with the ability to override the default operation name, so you can group several actions into the same operation.

You would still need a way to bypass that, however: there are some things where you would have to allow access and perform custom permission checks, no matter how flexible Rhino Security is, or you may be required to do multiple checks to verify access, and this system doesn't allow for that.

Anyway, this is the overall idea, thoughts?

Designing the Security Model

Right now I want to talk more deeply than merely the security infrastructure, I want to talk about how you use this security infrastructure.

There are several approaches for this. One of them, which I have seen used heavily in CSLA, is simply to make the check in the properties. Something like this:

public class Comment
{
	public virtual IPAddress OriginIP
	{
		get { CanReadProperty("OriginIP", true); return originIP; }
		set { CanWriteProperty("OriginIP", true); originIP = value; }
	}

	public virtual bool CanDelete() { ... }
}

We can move to a declarative model with attributes, like this:

public class Comment
{
	[SecuredProperty] // hypothetical attribute name
	public virtual IPAddress OriginIP
	{
		get { return originIP; }
		set { originIP = value; }
	}
}

We can take it a step further and decide that we don't want to explicitly state this, and just assume it by convention.

A while ago I decided to make a system work in this exact fashion. Of course, I am a big believer in convention over configuration, so we defined the following rules for the application:

  • Need to secure Read / Write / Delete operations.
  • Need to support custom operations as well, like "Assign work", "Authorize time sheet", etc.
  • Developers will forget to make security calls; we need a system that protects us from that.

We built a really nice implementation that hooked directly into the container and the data access layer; you literally could not create a security breach, because you were always running under the context of a user, and the data was filtered for you by the data access layer in a generic fashion. For that matter, we didn't have to worry about authorization in the business code: we made in-memory modifications and then persisted them. If we had a security violation, we simply showed an error to the user.

We didn't have to define anything in the code either; it was all convention based and implicit. The only complexity was in configuring the security system itself, but that was the nature of the beast, after all. We thought that we had found the ultimate security pattern, and were quite pleased with ourselves. Until we started user testing. Then we ran into... interesting issues.

We had the idea of a rule in the system, which could be used by the administrators to set policies. It worked just fine in our testing, until we started to get impossible errors from the users. I think that you can understand what it was by now, right? (Yes, the usual, developers always test as admin)

Normal users didn't have permission to read those rules, but the system really needed them to perform core parts of its tasks. We were always running under the user context, and we were always making the security check; the system was designed, up front, to be very explicit about it. Now we found that there really were cases where the system needed access to the entities without performing those security checks.

Another "interesting issue" that came up was the issue of information aggregation. As it turns out, it is always the wrong idea to report that there is only a single entity in the system just because the user has access to just that one. I'll let you draw the conclusion about the HR users who found out that they really couldn't see how many hours an employee worked this month.

It was a big problem, and we had to scramble around and hack-a-lot the ultimate security solution that we so loved.

The security system had two major issues:

  • It was too granular.
  • It didn't have any operating context.

Since then, I have learned a lot about how to design and implement security modules, and my current design has the following requirements:

  • Security decisions are made on a use case level
  • Have a standard, enforced, approach of dealing with security
  • Make security decisions easy to code.

How does this translate to code?

Well, first I need to define what a use case is in the code. For a web application, it is almost always at the page / request level, and that makes it very easy to deal with.

In my last project, we had the following base class defined:

public abstract class AbstractController
{
	public abstract void AssertUserIsAllowed();
}

Since all the controllers in the application inherited from AbstractController, we had a single place where we put all the security checks. We still had to deal with security in other places, such as when we loaded data from the database or wanted partial views, but this approach meant that we always remembered what was going on. If we needed to make security decisions elsewhere, we commented on that in the assert.

But this was in a Rhino Igloo application, where by necessity we had to have a controller per page. Using MonoRail, we usually have a single controller that handles several actions, in which case I would tend to write something like this:

// attribute names are illustrative
[AllowNonAuthenticatedAccess]
public void Login()

[RequiredOperation("/customer/index")]
public void Index()

[RequiredOperation("/customer/save")]
public void Save(...)

And then I would mandate that all actions have security attributes on them (mandate as in: if it doesn't have a security attribute, you can't access it).

Well, actually, right now I would probably route it through Rhino Security, which would give me more flexibility than this model, and would keep it out of the code.

Nevertheless, I would put security at the use case level. This is where I have the context to ask the correct questions. Anything else tend to lead to an increasingly complex set of special rules.

Rhino Security Overview: Part I

A few months ago I spoke about how I would build a security infrastructure for an enterprise application; I went over the design goals that I had and the failings of previous attempts.

Since then, I got a few requests to implement it, and since this is a really neat piece of work, I decided to go ahead and build it.

The main goals, allow me to remind you, are:

  • Flexible
  • Performant
  • Does not constrain the domain model
  • Easy to work with
  • Understandable
  • No surprises

You can read the post about it to get a feeling about what I had in mind when I thought it up.

When the time came to actually implement it, however, it turned out that, as usual, the implementation is significantly different from the design in all the details. Here is the overall detail of the entities involved.

The main part that we have here is the security model, but notice that the application domain model is off to the side, and has very little to do with the security model. In fact, there is only a single requirement from the application domain model: the User entity needs to implement an interface with a single property, and that is all. (more below)



From the point of view of interactions with the security system, we have two extension points that we need to supply: a User entity implementing IUser, and a set of services implementing IEntityInformationExtractor&lt;TEntity&gt;:


The use of IEntityInformationExtractor&lt;TEntity&gt; is an interesting one. Rhino Security is using all my usual infrastructure, so it can take advantage of Windsor's generic specialization to do some nice tricks. Components implementing IEntityInformationExtractor&lt;TEntity&gt; are registered in the container, and resolved at runtime when there is a need for them.

This means that you can either implement a generic one for your model, or specialize as needed using generics. I like this approach very much, frankly.
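To make the two extension points concrete, here is a sketch of what they might look like. The member names are my guesses based on the code fragments elsewhere on this page, not necessarily Rhino Security's exact signatures:

```csharp
using System;

// Sketch only: shapes inferred from the usage shown in this post.
public interface IUser
{
    // The single property the application's User entity must expose;
    // the code below uses user.SecurityInfo.Identifier to find the user.
    SecurityInfo SecurityInfo { get; }
}

public interface IEntityInformationExtractor<TEntity>
{
    // Given an entity's security key, return a human readable description.
    string GetDescription(Guid securityKey);
}

// Windsor registration sketch: a closed, specialized extractor wins over a
// generic one, which is the "nice trick" generic specialization buys you.
// container.Register(
//     Component.For<IEntityInformationExtractor<Account>>()
//         .ImplementedBy<AccountInformationExtractor>());
```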

From the point of view of the security model, we have the following:

  • User - the external User entity that is mapped into the security domain. Permissions are defined for users.
  • Users Group - a named grouping of users. Permissions are defined for users groups.
  • Entity - an external entity. Permissions are defined on entities.
  • Entities Group - a named group of entities. Permissions are defined on entities groups.
  • Operation - a named operation in the system, using the "/Account/Financial/View" pattern.
  • Permission - Allow or deny an operation for user / users group for an entity / entities group.

Here are a few examples of the kind of rules that we can define with this model:

  • Allow "/Account/Financial/View" for "Ayende" on "All Accounts", level 1
  • Allow "/Account/Financial/View" for "Accounting Department" on "All Accounts", Level 1
  • Allow "/Case/Assign" for "HelpDesk" on "Standard cases", level 1
  • Allow "/Employee" for "Joe Smith" on "Direct Reports of Joe Smith", level 1

The major shift here is that we treat both entities groups and users groups as very cheap resources. Instead of having just a few and structuring the code around them, we define the rules we want and then structure the groups around them. The burden then moves from complex code to maintaining the structure. I find this a very reasonable tradeoff for a simple security model and the flexibility that it gives me.

Next time, setting up the security model...

SQLite vs. SQL CE

These two seem to be the most common embedded databases in the .NET world. This is important to me, since I want to run my tests against an embedded database.

SQL CE can be used with SQL Management Studio, which is nice, but it has three major issues for me so far:

  • It doesn't support memory-only operation. SQLite does, and it makes the difference between 12 seconds and 40 seconds when running ~100 tests that hit the DB. This is important, especially since spinning everything up seems to take about 10 seconds anyway (using TestDriven.Net).
  • It doesn't support paging (WTF!)
  • It doesn't support comparing to a sub query, so this is not legal:
    select * from account where 1 = (select allow from permissions)
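The memory-only mode is the big win for test runs. A minimal sketch of what that looks like with the System.Data.SQLite ADO.NET provider (connection string details may vary by provider version):

```csharp
using System.Data.SQLite; // assumes the System.Data.SQLite provider is referenced

// An in-memory SQLite database lives only as long as this connection is open,
// which is exactly what you want for an isolated test fixture.
using (SQLiteConnection connection = new SQLiteConnection("Data Source=:memory:"))
{
    connection.Open();

    using (SQLiteCommand command = connection.CreateCommand())
    {
        command.CommandText = "CREATE TABLE Items (Id INTEGER PRIMARY KEY)";
        command.ExecuteNonQuery();

        command.CommandText = "INSERT INTO Items (Id) VALUES (1)";
        command.ExecuteNonQuery();
    }
    // When the connection closes, the database vanishes; no cleanup needed.
}
```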

Right now I am experimenting with just how much I can twist things around to get everything to break.

I am on the third NH bug this week, and counting :-)

Dealing with hierarchical structures in databases

I have a very simple requirement: I need to create a hierarchy of users groups, so that you can do something like:

  • Administrators
    • DBA
      • SQLite DBA

If you are a member of the SQLite DBA group, you are implicitly a member of the Administrators group.

In the database, it is trivial to model this:


Except that then we run into the problem of dealing with the hierarchy. We can't easily ask questions that involve more than one level of the hierarchy. Some databases have support for hierarchical operators, but that support differs from one database to the next. That is a problem, since I need this to work across databases, and without doing too much fancy stuff.

We can work around the problem by introducing a new table:


Now we move the burden of the hierarchy from the query phase to the data entry phase.
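The extra table is essentially a closure table: one row per (ancestor, descendant) pair, maintained whenever a group is re-parented. A sketch of that bookkeeping follows; the entity shape and collection names are my own simplification, not necessarily what Rhino Security uses:

```csharp
using System.Collections.Generic;

// Simplified sketch of closure-table maintenance at data entry time.
public class UsersGroup
{
    public UsersGroup Parent { get; private set; }
    public ICollection<UsersGroup> AllParents = new HashSet<UsersGroup>();
    public ICollection<UsersGroup> AllChildren = new HashSet<UsersGroup>();

    // Setting the parent pays the hierarchy cost up front, so queries later
    // never need to walk the tree. (Re-parenting a group that already has
    // children of its own would need extra bookkeeping, omitted here.)
    public void SetParent(UsersGroup parent)
    {
        Parent = parent;
        AllParents.Clear();
        UsersGroup current = parent;
        while (current != null)
        {
            AllParents.Add(current);       // record this group's ancestor...
            current.AllChildren.Add(this); // ...and register as its descendant
            current = current.Parent;
        }
    }
}
```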

From the point of view of the entity, we have this:


Please ignore the death star shape and concentrate on the details :-)

Here is how we are getting all the data in the tree:

public virtual UsersGroup[] GetAssociatedUsersGroupFor(IUser user)
{
    DetachedCriteria directGroupsCriteria = DetachedCriteria.For<UsersGroup>()
        .CreateAlias("Users", "user")
        .Add(Expression.Eq("user.id", user.SecurityInfo.Identifier))
        .SetProjection(Projections.Id()); // project the id for use in the subquery

    DetachedCriteria allGroupsCriteria = DetachedCriteria.For<UsersGroup>()
        .CreateAlias("Users", "user", JoinType.LeftOuterJoin)
        .CreateAlias("AllChildren", "child", JoinType.LeftOuterJoin)
        .Add(
            Subqueries.PropertyIn("child.id", directGroupsCriteria) ||
            Expression.Eq("user.id", user.SecurityInfo.Identifier));

    ICollection<UsersGroup> usersGroups = 
        usersGroupRepository.FindAll(allGroupsCriteria, Order.Asc("Name"));
    return Collection.ToArray<UsersGroup>(usersGroups);
}

Note that here we don't care whether we are associated with a group directly or indirectly. This is an important consideration in some scenarios (mostly when you want to display information to the user), so we need some way to chart the hierarchy, right?

Here is how we are doing this:

public virtual UsersGroup[] GetAncestryAssociation(IUser user, string usersGroupName)
{
    UsersGroup desiredGroup = GetUsersGroupByName(usersGroupName);
    ICollection<UsersGroup> directGroups =
        GetDirectlyAssociatedUsersGroupsFor(user); // (assumed helper: groups the user is a direct member of)
    if (directGroups.Contains(desiredGroup))
        return new UsersGroup[] { desiredGroup };
    // as a nice benefit, this does an eager load of all the groups in the hierarchy
    // in an efficient way, so we don't have SELECT N + 1 here, nor do we need
    // to load the Users collection (which may be very large) to check if we are associated
    // directly or not
    UsersGroup[] associatedGroups = GetAssociatedUsersGroupFor(user);
    if (Array.IndexOf(associatedGroups, desiredGroup) == -1)
        return new UsersGroup[0];
    // now we need to find the path to it
    List<UsersGroup> shortest = new List<UsersGroup>();
    foreach (UsersGroup usersGroup in associatedGroups)
    {
        List<UsersGroup> path = new List<UsersGroup>();
        UsersGroup current = usersGroup;
        while (current != null && current != desiredGroup)
        {
            path.Add(current);
            current = current.Parent;
        }
        if (current != null)
            path.Add(current); // we stopped at the desired group, so include it
        // Valid paths are those that contain the desired group
        // and start in one of the groups that are directly associated
        // with the user
        if (path.Contains(desiredGroup) && directGroups.Contains(path[0]))
            shortest = Min(shortest, path); // Min: keep the shorter non-empty path (helper not shown)
    }
    return shortest.ToArray();
}

As an aside, this is about as complex a method as I can tolerate, and even that just barely.

I mentioned that the burden was when creating it, right? Here is what I meant:

public UsersGroup CreateChildUserGroupOf(string parentGroupName, string usersGroupName)
{
    UsersGroup parent = GetUsersGroupByName(parentGroupName);
    Guard.Against<ArgumentException>(parent == null,
                                     "Parent users group '" + parentGroupName + "' does not exist");

    UsersGroup group = CreateUsersGroup(usersGroupName);
    group.Parent = parent;
    return group;
}

We could hide it all inside the Parent property setter, but we would still need to deal with it.

And that is all you need to do in order to get cross-database hierarchical structures working.

Performance, threading and double checked locks

A very common pattern for lazy initialization of expensive things is this:

GetDescriptionsHelper.Delegate getDescription;
if (GetDescriptionsHelper.Cache.TryGetValue(entityType, out getDescription) == false)
{
	MethodInfo getDescriptionInternalGeneric = getDescriptionInternalMethodInfo.MakeGenericMethod(entityType);
	getDescription = (GetDescriptionsHelper.Delegate)Delegate.CreateDelegate(
		typeof(GetDescriptionsHelper.Delegate), getDescriptionInternalGeneric);

	GetDescriptionsHelper.Cache.Add(entityType, getDescription);
}

This tends to break down when we are talking about code that can run on multiple threads. So we start writing this:

lock (GetDescriptionsHelper.Cache)
{
	GetDescriptionsHelper.Delegate getDescription;
	if (GetDescriptionsHelper.Cache.TryGetValue(entityType, out getDescription) == false)
	{
		MethodInfo getDescriptionInternalGeneric = getDescriptionInternalMethodInfo.MakeGenericMethod(entityType);
		getDescription = (GetDescriptionsHelper.Delegate)Delegate.CreateDelegate(
			typeof(GetDescriptionsHelper.Delegate), getDescriptionInternalGeneric);
		GetDescriptionsHelper.Cache.Add(entityType, getDescription);
	}
}

Except this is really bad for performance, because we take the lock even when the value is already in the cache. So we start playing with it and get this:

GetDescriptionsHelper.Delegate getDescription;
if (GetDescriptionsHelper.Cache.TryGetValue(entityType, out getDescription) == false)
{
	lock (GetDescriptionsHelper.Cache)
	{
		MethodInfo getDescriptionInternalGeneric = getDescriptionInternalMethodInfo.MakeGenericMethod(entityType);
		getDescription = (GetDescriptionsHelper.Delegate)Delegate.CreateDelegate(
			typeof(GetDescriptionsHelper.Delegate), getDescriptionInternalGeneric);
		GetDescriptionsHelper.Cache.Add(entityType, getDescription);
	}
}

This code has a serious error in it: under the right conditions, two threads will evaluate the if at the same time, and then try to enter the lock one after the other. The end result is an exception, as the second of them will try to add the same entity type again.

So we come up with the double-checked lock, like this:

GetDescriptionsHelper.Delegate getDescription;
if (GetDescriptionsHelper.Cache.TryGetValue(entityType, out getDescription) == false)
{
	lock (GetDescriptionsHelper.Cache)
	{
		// check again inside the lock; another thread may have added it already
		if (GetDescriptionsHelper.Cache.TryGetValue(entityType, out getDescription) == false)
		{
			MethodInfo getDescriptionInternalGeneric = getDescriptionInternalMethodInfo.MakeGenericMethod(entityType);
			getDescription = (GetDescriptionsHelper.Delegate)Delegate.CreateDelegate(
				typeof(GetDescriptionsHelper.Delegate), getDescriptionInternalGeneric);
			GetDescriptionsHelper.Cache.Add(entityType, getDescription);
		}
	}
}

Now we handle this condition as well. But my preference of late has been to use the following code in multi-threaded environments.

Update: I should be wrong more often; I got some very good replies to this post. The code below happened to work by chance and luck alone, apparently. The solution above is the more correct one.

Actually, it is more complex than that. It is still possible for readers to read the Cache variable (which is of type Dictionary&lt;Type, Delegate&gt;) while it is being written to inside the lock. There is the potential for a serious bit of mayhem in that case.

The safe thing to do in this case would be to always lock before access, but for things that are going to be used frequently (hot spots) this can be a problem. A reader/writer lock is much worse in terms of performance than the usual lock statement, and ReaderWriterLockSlim is .NET 3.5 only.

An interesting dilemma, and I apologize for misleading you earlier.

GetDescriptionsHelper.Delegate getDescription;
if (GetDescriptionsHelper.Cache.TryGetValue(entityType, out getDescription) == false)
{
	MethodInfo getDescriptionInternalGeneric = getDescriptionInternalMethodInfo.MakeGenericMethod(entityType);
	getDescription = (GetDescriptionsHelper.Delegate)Delegate.CreateDelegate(
		typeof(GetDescriptionsHelper.Delegate), getDescriptionInternalGeneric);

	GetDescriptionsHelper.Cache[entityType] = getDescription;
}

Can you spot the main difference? We are not using locks, and we are using the indexer instead of the Add() method.

Empirical evidence suggests that using the indexer in a multi-threaded environment is safe, in that it doesn't corrupt the dictionary, and one of the values will remain there.

This means that, assuming the cost of creating the value isn't very high, it is just fine to have it created twice in those rare cases. The end result is that after the initial flurry, we have the value cached, even if it was calculated more than once. In the long run, it doesn't matter.
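One way out of the dilemma raised in the update, not from the original post, is copy-on-write: writers clone the dictionary, add to the clone, and publish it with a single reference swap, so readers never observe a dictionary mid-mutation. A sketch (on .NET 4 and later you would simply use ConcurrentDictionary instead):

```csharp
using System;
using System.Collections.Generic;

// Copy-on-write cache sketch. Readers touch 'cache' without locking; writers
// serialize on 'writeLock', mutate a copy, and swap the reference.
public static class DelegateCache
{
    private static readonly object writeLock = new object();
    private static volatile Dictionary<Type, Delegate> cache =
        new Dictionary<Type, Delegate>();

    public static Delegate GetOrCreate(Type entityType, Func<Type, Delegate> factory)
    {
        Delegate result;
        if (cache.TryGetValue(entityType, out result)) // lock-free read
            return result;

        lock (writeLock)
        {
            if (cache.TryGetValue(entityType, out result)) // double check inside the lock
                return result;

            result = factory(entityType);
            // Readers see either the old complete dictionary or the new complete
            // one, never a half-written state.
            Dictionary<Type, Delegate> copy = new Dictionary<Type, Delegate>(cache);
            copy[entityType] = result;
            cache = copy;
            return result;
        }
    }
}
```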

Challenge: Strongly typing weakly typed code

How would you make the following code work?

public static class Security
{
	public static string GetDescription(Type entityType, Guid securityKey)
	{
		Guard.Against<ArgumentException>(securityKey == Guid.Empty, "Security Key cannot be empty");
		IEntityInformationExtractor<TEntity> extractor = IoC.Resolve<IEntityInformationExtractor<TEntity>>();
		return extractor.GetDescription(securityKey);
	}
}

You can't change the entityType parameter to a generic parameter, because you only know the type at runtime. This is usually called like so:

Security.GetDescription(Type.GetType(permission.EntityTypeName), permission.EntitySecurityKey.Value);
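One way to make it work, hinted at by the caching fragments in the locking post above, is to reflect into a private generic helper. The wiring below is my reconstruction, not necessarily the original solution; in practice you would also cache the closed method (or a delegate built from it) per entity type, which is exactly what the GetDescriptionsHelper.Cache fragments were doing:

```csharp
using System;
using System.Reflection;

// Reconstruction sketch: route the runtime Type through MakeGenericMethod so
// the compile-time generic method can do the actual IoC resolution.
public static class Security
{
    private static readonly MethodInfo getDescriptionInternalMethodInfo =
        typeof(Security).GetMethod("GetDescriptionInternal",
                                   BindingFlags.NonPublic | BindingFlags.Static);

    public static string GetDescription(Type entityType, Guid securityKey)
    {
        MethodInfo generic = getDescriptionInternalMethodInfo.MakeGenericMethod(entityType);
        return (string)generic.Invoke(null, new object[] { securityKey });
    }

    private static string GetDescriptionInternal<TEntity>(Guid securityKey)
    {
        IEntityInformationExtractor<TEntity> extractor =
            IoC.Resolve<IEntityInformationExtractor<TEntity>>();
        return extractor.GetDescription(securityKey);
    }
}
```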

Donjon - Hammett's Teasing

Hammett has just posted a teaser image on his blog.

Go check it out. I have some idea about how long ago it started, and getting to this point is amazing (and no, I don't have any extra knowledge about what it does, except maybe that it will free us from OutOfMemoryErrors and manual restarts).

Serializable isolation level on rows that do not exist

Recently I was asked how to solve this problem: an external service makes a call into the application to create or update an entity. This call can arrive at one of several endpoints. The catch is that sometimes the external service sends the call to create a new entity to all the endpoints at the same time. This obviously causes issues when trying to insert the same row twice.

I suggested using the serializable isolation level to handle this scenario, but I wasn't sure what kind of guarantees it makes for rows that do not exist. I decided to write this simple test case (warning: test code / quick hack, don't write real code like this!).

// requires System.Data, System.Data.SqlClient and System.Threading
private static void InsertToDatabase(object state)
{
    int value = 0;

    while (true)
    {
        value += 1;
        try
        {
            using (SqlConnection connection =
                    new SqlConnection("data source=localhost;integrated security=sspi;initial catalog=test"))
            {
                connection.Open();
                using (SqlTransaction transaction =
                        connection.BeginTransaction(IsolationLevel.Serializable))
                using (SqlCommand command = connection.CreateCommand())
                {
                    command.Transaction = transaction;
                    command.CommandText = "SELECT Id FROM Items WHERE Id = @id";
                    command.Parameters.AddWithValue("id", value);
                    if (command.ExecuteScalar() != null)
                        continue; // row already exists; transaction rolls back on dispose
                    command.CommandText = "INSERT INTO Items (Id) VALUES(@id)";
                    command.ExecuteNonQuery();
                    transaction.Commit();
                    Console.WriteLine("{1}: Wrote {0}", value, Thread.CurrentThread.ManagedThreadId);
                }
            }
        }
        catch (SqlException e)
        {
            if (e.Number == 1205) //transaction deadlock
                Console.WriteLine("{0}: Deadlock recovery", Thread.CurrentThread.ManagedThreadId);
            else
                throw;
        }
    }
}

As a note, performance for this really sucks if you have contention, and I got a lot of transaction deadlocks when running it with multiple threads.

What I observed was that it indeed inserted only a single row per id. All other isolation levels (including snapshot) produced duplicate rows in this scenario.

Assumption proven; now I only need to find out what kind of lock it takes on a row that doesn't exist.
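For what it's worth, SQL Server guards serializable ranges, including the gap where a missing row would go, with key-range locks. One way to see them (my addition, not part of the original experiment) is to query sys.dm_tran_locks from a second connection while the serializable transaction is still open:

```csharp
// After the SELECT has run under the serializable transaction (and before it
// commits), list the locks held in the test database. Expect request_mode
// values such as RangeS-S or RangeX-X, SQL Server's key-range locks.
using (SqlConnection monitor =
        new SqlConnection("data source=localhost;integrated security=sspi;initial catalog=test"))
{
    monitor.Open();
    using (SqlCommand command = monitor.CreateCommand())
    {
        command.CommandText =
            @"SELECT resource_type, request_mode, request_status
              FROM sys.dm_tran_locks
              WHERE resource_database_id = DB_ID('test')";
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
                Console.WriteLine("{0} {1} {2}", reader[0], reader[1], reader[2]);
        }
    }
}
```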