Ayende @ Rahien

It's a girl

Didja know? Merging Windsor configuration with automatic registration

One of the most annoying things that we have to do during development is updating configuration files. That is why convention over configuration is such a successful concept. The problem is what to do when you can mostly use the convention, but need to supply configuration values as well.

Well, one of the nice things about Windsor is the ability to merge several sources of information transparently. Given this configuration:

<configuration>
	<configSections>
		<section name="castle"
			type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
	</configSections>
	<castle>
		<facilities>
			<facility id="rhino.esb" >
				<bus threadCount="1"
						 numberOfRetries="5"
						 endpoint="msmq://localhost/demo.backend"
             />
				<messages>
				</messages>
			</facility>
		</facilities>
		<components>
			<component id="Demo.Backend.SendEmailConsumer">
				<parameters>
					<host>smtp.gmail.com</host>
					<port>587</port>
					<password>*****</password>
					<username>*****@ayende.com</username>
					<enableSsl>true</enableSsl>
					<from>*****@ayende.com</from>
					<fromDisplayName>Something</fromDisplayName>
				</parameters>
			</component>
		</components>
	</castle>
</configuration>

And this auto registration:

var container = new WindsorContainer(new XmlInterpreter());
container.Register(
	AllTypes.Of(typeof (ConsumerOf<>))
		.FromAssembly(typeof(Program).Assembly)
	);

We now get the benefit of both convention and configuration. We can let the convention pick up anything that we need, and configure just the values that we really have to configure.
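To make the merge concrete, the component being configured might look something like this. This is a hypothetical sketch: the real consumer would implement ConsumerOf&lt;T&gt; from Rhino Service Bus, which is omitted here to keep the sketch self-contained. The point is that Windsor matches the &lt;parameters&gt; entries in the XML to the constructor parameters by name:

```csharp
// Hypothetical sketch of the component being configured. Windsor resolves
// the constructor parameters (host, port, enableSsl, ...) from the
// <parameters> element of the matching <component> in the XML configuration.
public class SendEmailConsumer
{
	public string Host { get; private set; }
	public int Port { get; private set; }
	public bool EnableSsl { get; private set; }

	public SendEmailConsumer(string host, int port, bool enableSsl)
	{
		Host = host;
		Port = port;
		EnableSsl = enableSsl;
	}
}
```

So the convention (scanning for consumers) registers the component, and the configuration supplies only the values that cannot be derived.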

Fighting the architecture astronaut

It is surprising how many other things I have to deal with in order to get a product up and running. Right now I am working on the web site for NH Prof, and I found myself tackling a problem in an interesting way.

The problem itself is pretty simple. I need to show a user guide, so people can lookup help topics, view alerts, etc.

My first instinct was to create a HelpTopics table and do a simple data-driven help site: literally just showing a list of the topics and allowing the user to view them. I got that working, and was busy creating my second help topic when I found out that I had to have some way of referencing another help topic.

Obviously I could just use a link, but that would tie the content of one help topic to the id of another, in a completely non-obvious manner. Not to mention that when I deploy, it may very well have a different id.

At the time I was also fighting having to do updates to the content of the site using UPDATE statements (if the content is in the DB, then I need to update it somehow, and I figured that building a UI for that was too much of a hassle). I started thinking about solutions, using the title of the page for a link, wiki style, when I realized that I was out of breath because I was architecting a trivial FAQ system as if it were Wikipedia.

I decided to take a step back, and try to do it all over again. This time, using such useful tools as HTML and pages. Here is what the current user guide looks like:

 

<h1>NHibernate Profiler User Guide</h1>
<h2>Topics</h2>
<ul>
	<li><%=Html.ActionLink<LearnController>(x => x.Topic("ProfileAppWithConfiguration"),
		 "Configuring the application to be profiled without changing the application code")%></li>
</ul>
<h2>Alerts</h2>
<ul>
	<li><%=Html.ActionLink<LearnController>(x => x.Alert("SelectNPlusOne"), 
		"Select N+1")%></li>
	<li><%=Html.ActionLink<LearnController>(x => x.Alert("TooManyDatabaseCalls"), 
		"Too many database calls per session")%></li>
	<li><%=Html.ActionLink<LearnController>(x => x.Alert("UnboundedResultSet"), 
		"Unbounded result set")%></li>
	<li><%=Html.ActionLink<LearnController>(x => x.Alert("ExcessiveNumberOfRows"), 
		"Excessive / Large number of rows returned")%></li>
	<li><%=Html.ActionLink<LearnController>(x => x.Alert("DoNotUseImplicitTransactions"),
		"Use of implicit transactions is discouraged")%></li>
	<li><%=Html.ActionLink<LearnController>(x => x.Alert("LargeNumberOfWrites"), 
		"Large number of individual writes")%></li>
</ul>

And here is the implementation of Topic and Alert:

public ActionResult Topic(string name)
{
	return View("~/Views/Learn/Topics/" + name + ".aspx");
}

public ActionResult Alert(string name)
{
	return View("~/Views/Learn/Alerts/" + name + ".aspx");
}

Now, if I want to edit something, I can do it in a proper HTML editor, and managing links between items has turned into a very trivial issue, like it is supposed to be.

NH Prof Alerts: Use statement batching

This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

This warning is raised when the profiler detects that you are writing a lot of data to the database. Similar to the warning about too many calls to the database, the main issue here is the number of remote calls and the time they take.

We can batch together several queries using NHibernate's support for Multi Query and Multi Criteria, but a relatively unknown feature of NHibernate is the ability to batch a set of write statements into a single database call.

This is controlled using the adonet.batch_size setting in the configuration. If you set it to a number larger than zero, you can immediately start benefiting from a reduced number of database calls. You can even set this value at runtime, using session.SetBatchSize().
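For instance, the setting might be declared like this in the session-factory configuration (a sketch; the exact location of the property element depends on how you configure NHibernate):

```xml
<property name="adonet.batch_size">25</property>
```

With this in place, NHibernate can group up to 25 write statements into a single round trip to the database.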

NH Prof new features: Overall usage and aggregated alerts

Hat tip to Chad Myers for requesting those features.

Here is an example of the overall usage report for the entire application.

[Image: overall usage report for the entire application]

The aggregated alerts give you a view of how your application is doing. As you can probably see, this isn't a very healthy application. But since this is actually showing the test suite for the profiler, I am happy with that. At the bottom, you can also see which entities were loaded throughout the entire profiling session, although this is not their final look & feel.

We can also inspect the details of a particular session.

[Image: details of a particular session]

Again, this is probably not a healthy code base :-) But then, that is what I created the profiler for.

NH Prof Alerts: Too many database calls per session

This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

One of the most expensive operations that we can do in our applications is to make a remote call. Going beyond our own process is an extremely expensive operation. Going beyond the local machine is more expensive yet again.

Calling the database, whether to query or to write, is a remote call, and we want to reduce the number of remote calls as much as possible. This warning is raised when the profiler notices that a single session is making an excessive number of calls to the database. This is usually indicative of a potential optimization in the way the session is used.

There are several possible reasons for this:

  • A large number of queries as a result of a Select N+1.
  • Calling the database in a loop.
  • Updating (or inserting / deleting) a large number of entities.
  • A large number of (different) queries executed to perform our task.

For the first reason, you can see the suggestions for Select N+1. Calling the database in a loop is generally a bug, and should be avoided. Usually you can restructure the code in a way that doesn't require you to do so.

Updating a large number of entities is discussed in Use statement batching, and mainly involves setting the batch size to reduce the number of calls that we make to the database.

The last issue is the more interesting one: we need to get data from several sources, and we issue multiple separate queries for that, which has the aforementioned cost.

NHibernate provides a nice way of avoiding this, by using Multi Query and Multi Criteria, both of which allow you to aggregate several queries into a single call to the database. If this is your scenario, I strongly recommend that you take a look at Multi Query and Multi Criteria and see how you can use them in your application.

Required: Debugging / Crash recovery assistance in NJ / NY

A client of mine has run into a production problem, and we need to dig into the application to understand what is going on. Unfortunately, I am not on site, so the level of help that I can contribute currently is not that great.

We are looking for someone that can come onboard and help us figure out what the exact problem is.

I don't have all the details about the production issue yet, but the symptom is a very high connection count to the database, and the OS refusing connections with TIME_WAIT. We are pretty sure that there is a connection leak somewhere, but so far we have not been able to identify it.

The application in question is composed of classic ASP, ASP.NET, and .NET Windows Services talking to a SQL Server 2000 database.

My client is interested in someone who can come onsite and diagnose the issue. They are located in Fort Lee, NJ, near the GWB. Obviously we would like to get to the root of the issue ASAP. If you think that you can help, please contact me and I will put you in touch with my client.

And yes, this does seem like a scenario right out of Release It!

Software as a Service is a matter of trust

There is a whole world of services out there that one can take advantage of, from online office suites (Google Docs is surprisingly functional) to email, from financial services to hosting your own data center in the cloud. Some of them are free, and some are commercial.

I am taking advantage of many of them, mostly because I am one of those people who keep moving around between machines, and anything that isn't web based isn't practical for me.

Whenever the option of using someone else's services comes up, there are a few questions that we need to deal with:

  • Security - Do I trust these guys to handle my data?
  • Longevity - Are they going to be around for a long time?
  • How do I get out - Are they locking all my data inside? Can I get it out in a reasonable way?
  • Reliability - Can I rely on them to be up? How often do they have issues?
  • Support - If I have an issue, who deals with it? How do I contact them? How responsive are they? What kind of treatment do I get?
  • Transparency - If there is an issue, do I get to know what happened? How can I be sure that it will not happen again?

None of these are idle questions.

Right now, I cannot work on NH Prof because of an issue with Assembla's Subversion. And Ken, yes, I know this wouldn't have happened if I used Git. When I used Google Docs to manage my finances, there were several incidents of me logging into the system and not seeing any of my spreadsheets. They always came back several hours later, but that was a very unwelcome feeling.

I have run into issues with non responsive (or even hostile) support before, and this is one of the reasons that I would immediately terminate all association with a particular provider.

I also prefer services that have a commercial option over the completely free ones. With the completely free ones, unless they are backed by someone like Google or Microsoft, there is a strong likelihood that some day it will all just be gone.

Now, for many of those services, I can get a server and host them myself. The problem is that there is a non trivial management cost for that. And if I want to do that, I would need to invest quite some time in doing so. The plus side is that if there is an issue, I can fix this. The down side is that if there is an issue, I need to fix it.

In the end, it comes down to how much you trust the service provider, more than anything else. I don't think it is feasible to ignore those services, or even in any way smart. But you do have to be cautious about whom you are using, and have a planned strategy to back out of a service, or move to another one. In my case, some of those strategies mean falling back to good ol' Excel and digging through my email.

For some things, like my email, I don't have a fallback strategy. If Gmail is down, I am going to be pissed, but the features that I get from Gmail are so important to my workflow that I can't imagine doing it any other way.

Trust, that is all there is.

Discussion: OO 101 Solutions and the Open/Closed Principle at the architecture level

This post is based on the transcript of a conversation between me and Luke Breuer, regarding the application of the Open/Closed Principle to the system architecture. The quoted sections are Luke's, and my own responses are shown underneath them. This is a different style from my usual posts, because this is a conversation, which was transcribed and only slightly edited.

I hope that this will help all those who wanted to know more about what I meant about using OCP at the architecture level, and how to structure systems where you only ever add new things.

Let us get started:

The topic of the conversation – well, I mean, the most general topic is how you make software better. The more specific one is that you’re talking about using the Open/Closed Principle for the core architecture, and how that enabled you to extend it very easily. It just turned out that that allowed you to do a lot and not have much of a maintenance problem. And it also meant that you didn’t need to test, or get 100‑percent test coverage like a lot of people would advocate almost blindly.

I’m not sure that I would say that I don’t want to have tests. I think tests can be very helpful in many scenarios. But there are scenarios where tests are awkward to use or just plain annoying. And in the scenario that I talked about in the blog post, I had an application that was a WebForms application, so it’s inherently hard to test.

We tried to test it using WatiN for a while, but that was very slow, and it was very fragile because of the JavaScript – we did a lot of Ajax. So trying to synchronize between the test, the server, and the JavaScript code was just literally an impossible task. When I found myself writing threading code in JavaScript, I knew that I had gone too far.

So basically, we had a problem of trying to synchronize things while running the tests. We had a lot of Ajax in the pages, so we had three different execution environments. One of them was the test. One of them was the actual code on the server. And one of them was the JavaScript running in the browser. And I was writing threading code in JavaScript.

That was already difficult to do, so we gave it up. I suppose that we could have tested the inner layers of the application, but at that point I felt that it wouldn’t be worth it – it wouldn’t have enough ROI. So we basically didn’t. And I had just come out of a project where I tried to do everything right. I had this rich domain model, and I had these rich semantics of how it was working. And I had this really nice testable solution. Then we actually hit changing requirements.

And the tests that I was so proud of went from being an enabler of change to actually being a hindrance to change. They shackled us to the old design, because when I tried to change the way the system worked – because a business requirement had changed to the level that old assumptions I had made were no longer valid – I now had to drop the old test suite and write new tests as I went along. And then I had to try to port the old tests and see if they made sense. This was not a routine thing; this was a case where we basically changed something in the core of the domain.

But what I found, at least for my style of work, is that I tend to have a lot of drafts along the way. And drafts change very rapidly. So usually the way I try to build software is to say, “Okay, I’m going to write it with tests,” but these aren’t sacred. That is, if I find that I have a significant enough change, I’m going to blow away the tests and start over. Usually we start over from scratch and reboot the test project.

Then I’m going to salvage from the original production code and the original test code. But this is now salvage mode. I’m not going to try to port everything. This is too big a change in many cases. Now, this is for draft mode. This is where you start a project. Now, I had a team of people that didn’t really do TDD, and I wasn’t confident in my ability to impart this knowledge in the time we had. So what I decided to do was take a different approach to how we were going to build the system.

I wanted to be able to build the system in a way that we didn’t actually have to go and do integration all the time. This was a web application, so we had a very well‑defined scope of change: we only changed one page. Now, to explain what I mean by a web application in which we only changed one page – we had this one feature, the search feature.

And that search feature took roughly – let me think – roughly a month and a half. A single feature, a month and a half. And this is calendar time, so maybe four or five man-months just from the point of view of the effort that we dedicated to this thing. This was one of the main features of the application. And basically it had requirements ranging from “I want to be able to search by just about every attribute on every piece of the entire domain model,” to “I want to be able to export that to Excel, or to Word,” or “I want to be able to go from the search page to a particular resort.”

That sounds like exactly the software that my company makes.

Yeah, and then I want to go from that resort to its children, and so on and so forth. And now, from a page five levels down from the search page, I want to hit, “Okay, back to search,” and go back to the page that I started with – with all the settings, all my search, all my paging, sorting, whatever I had there.

That was a very nice page to work on. A very, very complex page. We actually managed to do it quite nicely, I think. Anyway, the base architecture stated that we wanted to have only add-only features, or add-only code. Because we had the web model, we were able to say that the page is the unit of change; if we wanted to make a change to a particular feature, then we had to look for the page or the set of pages that comprised this feature.

But we never had to change a feature “just because,” since we always had a very well-defined scope of what we wanted to do. For example, the search page had seven or eight features in it. But the way that we built the search page meant that there was isolation between every feature on the page. Take the export to Excel as a good example. The export to Excel was literally built as: “Okay, you click this button. You are redirected to another page that is going to generate the actual Excel file that you’ll download.”

The unit of change from our perspective was, “Okay, let’s create the new URL that will generate the Excel file from the search results.” And the only change that we actually had to make to code that was already written was to go to the search page and change the link for the export to Excel. And that was it. We didn’t actually have a way to break existing features, because we never touched them. And once we had this working, we could move on to new features.

And if the customer found a bug, or if they wanted to change a feature, then we were able to say, “Okay, this is my feature.” Because we built it using an always-add architecture, the features were inherently small. The biggest feature that we had was the search functionality, which was, if I remember correctly, spread across three files and had a total of 3,000 lines.

By the way, just to give you an appreciation: Rhino Security, which is a full‑blown framework for security in NHibernate, is fifteen hundred lines or so. That is how complex it was. But even this feature was mostly just dealing with building the queries, not actually doing some complex logic that we could somehow break.

Now, another thing that I’m fond of – I had another project in which I improved on this idea. Basically, I’m calling it “just hard code it.” This came out of the notion that we already have a really great way of configuring the system. We write code! And the tool that we use to configure the system is essentially the compiler.

So let’s say that I want to write a business rule: if you are a preferred customer, you are going to get 5 percent off some item – all the chocolates that you buy. Okay? Something like that. This is a sale. This is an offer, or whatever you want to call it. Now, one way of doing that is by configuring it via XML, creating a rule engine, or whatever you want. All of those options are pretty complex. An easy option would be to just write the bloody thing inline. If this is the problem, write “if you are a preferred customer...” and just do it.

Now, the problem with doing that inline – well, it is very easy to do, but then you are going to run into maintenance issues, because now you have logic scattered all over the place. So the solution for that is to say, “Okay, I’m going to formalize the process of hard coding things.” I mean, why is it that people always think that hard coding is a bad idea? And why is it that so many people treat hard coding as a sin?

From my perspective, it is because people are drawn to hard coding because it’s so easy, and because it creates maintenance problems down the road, we treat it like a sin. So the idea is: let’s formalize hard coding into an approach that is maintainable. Now we take the idea of order processing, of changing the way that we process orders. Like, if you are a preferred customer, you get a 5 percent discount on chocolates.

We create an extension point, an abstraction, called IOrderProcessingRule. Now, in order to apply a new business rule, you write something like “give preferred customer 5 percent on chocolates” – this is the class name, by the way: Give_preferred_customer_5_percent_on_chocolates.

Okay. So now we are going to formalize this and say something like this:

public class Give_preferred_Customer_5_percent_discount_on_chocolate : IOrderProcessingRule
{
	public void Process(Order order)
	{
		order.OrderLines
			.Where(x => x.Product.Name.Contains("chocolate"))
			.Apply(x => x.AddDiscount(5.Percent()));
	}
}

So we write this piece of code, and we drop it in a particular namespace in our application. And now we are done. Because now the system will automatically pick up this rule and apply it in specific places. And you can literally forget about everything else that you need to do, because you have written the code to do it. The order processing code only knows about IOrderProcessingRule. That piece of code is very generic, and it is very easy to prove that it is correct. And you don’t really have to manage complexity, because you are just doing that one thing. You write everything inline, in a formal fashion, and we overcome complexity by splitting it into tiny pieces.
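The automatic pickup can be sketched roughly like this (my illustration, not the code from the project; the type and member names are invented): the engine scans an assembly for concrete IOrderProcessingRule implementations, instantiates them, and applies each one to the order.

```csharp
using System;
using System.Linq;
using System.Reflection;

public interface IOrderProcessingRule
{
	void Process(Order order);
}

public class Order
{
	public decimal Discount;
}

// Hypothetical example rule, picked up automatically by the engine below.
public class Give_everyone_5_percent : IOrderProcessingRule
{
	public void Process(Order order)
	{
		order.Discount += 5m;
	}
}

public static class OrderProcessingEngine
{
	// Scan the assembly for concrete rule classes, instantiate them,
	// and apply each one to the order. In the real system the scan would
	// presumably be restricted to the agreed-upon namespace, and a
	// container would handle instantiation.
	public static void Process(Order order, Assembly assembly)
	{
		var rules = assembly.GetTypes()
			.Where(t => typeof(IOrderProcessingRule).IsAssignableFrom(t)
						&& t.IsClass && !t.IsAbstract)
			.Select(t => (IOrderProcessingRule)Activator.CreateInstance(t));
		foreach (var rule in rules)
			rule.Process(order);
	}
}
```

Adding a new rule is then just adding a new class; nothing else in the system changes.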

There is a blog post that I wrote about that; basically we wrote an entire system in this approach. We had a somewhat more elaborate approach to how we did things. But this literally reduced the time for new features from a day to half an hour, something like that.

Okay. So the deployment story – I’ve kind of written my questions here. I’m guessing you compile it and you just drop the DLL in a web server directory somewhere, and it picks it up.

That is one option. Another option is to just compile it on the fly and treat it as a script. Now, this is still using C#, because we assume that we may want to use such tools as refactoring or whatever on it. Another option that we may have is to just use a DSL for this and give the user a very specific, nice tool to handle it.

So you’re actually talking about shipping the C# compiler to the client’s site.

The C# compiler is already deployed there. You don’t have to do anything; the compiler is bundled with the .NET runtime.

Oh, okay. Okay. So you – alright. So you can use CodeDom and all that. On any machine, you can access CodeDom and do compilation.

This is how I do it in the hot code swapping webcast. So your option is basically: now you have the order processing engine that takes order processing rules – and it may take several order processing rule types, let’s say, for different steps in the execution of processing the order. So you may have an IAfterAuthorizationRule or something like that, or an IPreAuthorizationRule, all sorts of stuff like that, so you can literally select what you are going to do.

I handled that by having a common IOrderProcessingRule with an attribute called [Handle] that takes the list of operations the rule can handle. And this told the engine which action executor to point the rule at. Usually I also say that the rules that I’m creating inherit from an abstract class and not from an interface, because I’m pounding the hell out of the common scenarios in order to get them into a set of utility methods inside the abstract class, so the code that I actually have inside my order processing rules is extremely simple and clear – something like the code that we have above. That’s it.

So how do you handle ordering if five different classes implement this?

I don’t. Ordering – in this case, ordering doesn’t matter. If order does matter, then I am creating a composite and explicitly ordering them.

So you create a class that composes other classes?

Yes, although I would strongly suggest not relying on ordering, because then you’ll get into more complex scenarios.

Okay. What about detecting when business logic conflicts with other business logic, when there are possible contradictions in what’s being done?

Define – give me an example.

Well, I’m just thinking that if you have a bunch of business logic, are you requiring someone to just have it all in their head, or can you do some sort of analysis on it?

Oh, now you’re getting into why you want to have a DSL. So usually that really depends on the type of things that you’re doing. Let’s say that we are back in the order processing scenario. And I’m sorry – if you have a better scenario that fits your domain, we can talk about that. Do you have one? Something that fits your own domain better?

Well, we have an extremely complex problem that I don’t know if it’s useful to discuss. Hospitals, at least in America, have contracts with insurance companies. A contract defines: the patient came in with this problem, got diagnosed, and had these procedures and drugs and whatnot provided to them. And there is someone who hates software developers working for some of these insurance companies – maybe a lawyer, maybe not –

And they give you these crazy rules?

Insane rules. Actually, I have a friend at the company, and he has actually identified ambiguities in the legal language, so that things could legally be interpreted either way.

I once went back to a company because I tried to implement something and wasn’t able to. And we figured out that their business model was literally not possible, not the way that they had stated it.

It’s very interesting, because your approach here might work well for the contract processing. But right now it’s all just straight imperative code, because you have to do it a few times to detect a pattern and figure out how best to effectively parameterize it. Which is what the DSL does to the business logic.

I would actually say that if you have – how many rules do you expect to have in the system? To the nearest hundred...

Three hundred and eighty.

Three hundred and eighty. So at this point – I mean, 380 is now beyond your ability to remember off the top of your head. And it’s also usually more than anyone can remember, period. So now you have a system that you can’t understand, by design. Okay? Yeah. This is why, when people do audits, they get very, very scared.

So anyway, one of the things that you do in this situation – when you have literally thousands of rules – is start building traceability and visibility into the system. So give me an example of a rule that you have.

An example of a rule is that if a patient has a certain kind of operation, say an appendectomy or something like that, that procedure sets a maximum price on how much the insurance company will pay.

By the way, I was wrong. I actually haven’t looked at the data structure, but we have what looks like 3,700 rules.

Thirty‑seven hundred, 3‑7‑0‑0.

Yes.

Wonderful. Wonderful. Okay. But – and here is the funny part. Only a few of them are valid in each particular scenario.

Correct. And then the 380 does tell you that we have ’em in groups somehow. So it’s not completely crazy. It’s just mostly crazy.

Okay. So given this scenario, here is what I would try to do. At this point, the sheer number of rules says that you have enough justification to try to do it using a DSL. But even if you don’t want to do it using a DSL, a lot of the same ideas apply.

You have to have some sort of rule evaluation context. So when you execute the rules on top of some scenario, every rule that evaluates as valid has to be recorded. Along with recording that the rule is valid, you also record the condition that evaluated as valid.

And that gives you a whole world of options just from the visibility standpoint. Why did we say that this guy is not valid for insurance? Well, what are the rules that came out valid? Okay, I see that he’s a heavy smoker, and he had an appendicitis operation in the last six weeks. So those two are valid. I see that the rule’s name is "don’t give insurance if patient is a heavy smoker". Okay. Now I know what happened.

So just from that point of view, you are being very clear about what’s going on.
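A minimal sketch of such an evaluation context might look like this (an illustration with invented names, not code from the conversation): every rule whose condition evaluates as valid is recorded by name, which gives you the audit trail for "why was this decision made".

```csharp
using System;
using System.Collections.Generic;

public class Patient
{
	public bool HeavySmoker;
}

public class Rule
{
	public string Name;
	public Func<Patient, bool> Condition;
}

// Records every rule whose condition evaluated as valid, for traceability.
public class RuleEvaluationContext
{
	public List<string> MatchedRules = new List<string>();

	public void Evaluate(IEnumerable<Rule> rules, Patient patient)
	{
		foreach (var rule in rules)
		{
			if (rule.Condition(patient))
				MatchedRules.Add(rule.Name); // the audit trail
		}
	}
}
```

After evaluation, MatchedRules is exactly the list you would show an auditor: which rules fired, by name, for this patient.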

Can I brainstorm a little bit on your code there? So it seems that what you’re allowing is, first of all, that you can definitely create classes that take lambda expressions in the constructor, and pass instances of those to your processing engine, to parameterize the stuff that can be parameterized. Maybe you don’t even need a lambda.

I’m not sure. Let's talk in code, okay? That is usually better.

// Luke's suggestion to parameterize
public class Give_preferred_Customer_discount : IOrderProcessingRule
{
	public Give_preferred_Customer_discount(
		Expression<Func<OrderLine, bool>> does_discount_apply, decimal discount)
	{
		...
	}

	public void Process(Order order)
	{
		order.OrderLines
			.Where(does_discount_apply)
			.Apply(x => x.AddDiscount(discount.Percent()));
	}
}

You’re being overly generic.

Well, wait a second. Let’s see... so then we can say where the discount applies. So theoretically it could be parameterized that way. So what’s bad about that?

Because now you’re being overly generic. Who is going to configure this?

Who’s gonna use it?

Yeah. Who is going to instantiate this class?

I don’t know. That depends on the execution model. What I’m saying is that I’m trying to figure out a way to parameterize the rules where it makes sense to, because that’s a lot of hard coding, and if you have the same rule, slightly different, many times, that’s a lot of copy pasting.

Don’t copy/paste. Use inheritance.

// a better way to parameterize by Oren
public class Give_preferred_Customer_5_precent_discount_on_chocolate : IOrderProcessingRule
{
     public Expression<Func<OrderLine, bool>> Condition
     {
                get
                {
                        return line => line.Product.Name == "Chocolate";
                }
     }
     
     public Expression<Action<OrderLine>> Action
     {
                get
                {
                        return line => line.AddDiscount(5.Percent());
                }
     }
}

Basically the idea is the same, but now instead of having code that executes, we split the rule into two parts. First we have the condition, and the condition is an expression tree, so we can process it at run time.

We have a rule that contains two properties, condition and action. Both of these properties return a lambda expression, an expression tree, which means we are able to process it at run time. Now we are able to take a look at what the code actually does. Even if we just perform a ToString() on the expression tree, we still have a lot more information at run time than we would have any other way. Same thing for the action. Because we are –
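As an aside, here is a minimal, self-contained sketch of what that run-time visibility buys you. This is plain C# against the BCL only; the "Chocolate" string mirrors the rule above, everything else is illustrative:

```csharp
using System;
using System.Linq.Expressions;

public static class ConditionDemo
{
    public static void Main()
    {
        // the condition is an expression tree, not an opaque delegate
        Expression<Func<string, bool>> condition =
            name => name == "Chocolate";

        // so we can inspect it at run time, even if only via ToString()
        Console.WriteLine(condition.ToString());

        // ...and still compile and execute it when we need to
        Func<string, bool> compiled = condition.Compile();
        Console.WriteLine(compiled("Chocolate")); // True
        Console.WriteLine(compiled("Vanilla"));   // False
    }
}
```

The same two-sided nature, inspectable data and executable code, is what the Condition and Action properties above rely on.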

Now that’s a lot of code to write. I’m guessing you would use a DSL so you can actually specify it on one line and not get what you have there?

In the DSL example, we will have something like this:

// this is a DSL equivalent, which is compiled directly to IL
when line.Product is Chocolate:
    apply_discount 5.percent

This is what that code would translate to in the DSL.

Okay. So would you use the DSL to generate that C# code?

No. This DSL is directly compiled into IL code.

It’s kinda funny. I had no idea that out of this conversation I would get a possible way to make our hospital contract processing much better.

Here is the example hospital rule:

// example rule of a rule that a US insurance company would 
// have that defines how much it will pay a hospital for its 
// client; the contracts insurance companies have with hospitals
// can be _very_ complex
when patient_arrived:
    had_a inflamed_appendix
    
when patient_left:
    did_not_have inflamed_appendix
    
set_max_payable_cost 14_000

Something like that. So now this is a single rule. Now let’s write another rule. Let’s say it’s when patient arrived and patient is a premium member. And let’s say had appendix. If you wanted to be really nasty, you could do something like this: when a patient went in not being a premium member and left being a premium member, then he is payable for the higher amount. Stuff like that.

// another rule

when patient_arrived and patient is premium_member:
    had_a inflamed_appendix
    
set_max_payable_cost 19_000

Now this is very initial language design, obviously, so take it with a grain of salt. But this is what you’re talking about. This is the type of rules that you have. Now, the moment we start having rules like that, we can start asking questions like, “What are my rules for an inflamed appendix? What am I supposed to do?”

And it’s readable.

Not just readable, queryable. Now you can ask what are my rules for an inflamed appendix and you will get an answer. For premium members, if the recuperation – and you should probably – we can do something better, like saying had an operation for.

//better domain semantics
had_an_operation_for inflamed_appendix

is_a premium_member

set_max_payable_cost 19_000

Okay? So this is now a much better concept than when_left, when_arrived. So now you can start doing queries on top of that, like saying, “Okay. Show me everything that we do for an inflamed appendix,” and you get an answer. Ask the system what happens if a patient has an operation for an inflamed appendix and the system will tell you all the different possibilities; in the above case, it would tell you that:

  • if the person is not a premium member, the maximum cost will be 14,000
  • if the person is a premium member, the maximum cost will be 19,000

You get, “Okay, here is what we do.” And because we have a structure for how we build things, then we say, “Okay. For a premium member, this is what we’re doing. For a non‑premium member, this is what we are doing. Here are the limitations. Here is the max payable. Here is the duration in which we are willing to pay. We are not willing to pay if you did it in a hospital that isn’t one of our own. If this was an emergency procedure, blah, blah, blah.” So the idea here – this is a rule, but it’s not write only. It’s queryable as well, and now you can start doing business intelligence on top of your own business rules.

So we actually do some of that. One of the things is that the hospitals renegotiate these contracts every year. And what they wanna know is, “Well, how can I change my prices, given the very complex rules that I have, so I can make more money?”

Oh, wait. Now we need to have simulation capabilities as well. This is really powerful stuff.

This is actually probably a pretty good example to both explain why your approach here is really good, and I can actually – if this turns out well enough, I could actually implement this.

Okay, yeah, yeah. That depends – what you’re writing now depends on the structure of your DSL, on the way you build things. But because – now another issue, this is actually compiled down to IL. So from your perspective, this is insanely fast. You can also do things like – okay, let’s try to do speculative analysis. What’s going to happen if you raise prices, based on last year’s history?

Let’s execute all of last year’s cases on top of this set of rules and see how we can optimize what we actually get. Let’s say that we know we had 60 people with appendicitis come to the hospital. What happens if we actually tweak this rule to say that they only get this amount, like if they have a bad credit rating or something like that?

Our process of applying the rules actually records when each rule has been created and which rule has been invoked. So it doesn’t give you full quantitative analysis capability here, but we can ask how many times this rule was invoked.

Yeah, but what we’re trying to say, you can actually try to do it the other way around. You have the history of the cases that you had, right? Now let’s say you are in the renegotiation period with the hospitals and you want to be able to tell them, “Okay, I’m going to reduce the cost for an appendix operation by $1,000.00. But I’m going to give you three pregnancy checks or whatever,” because based on past history, you can say, “Okay, this will – if next year is the same as this year, I know that I’m actually going to have a lot more – it’s more cost effective to do this thing than that thing."

And then you actually get into the point where your actual technical expertise and ability to mine and manipulate the data that you have is allowing you to give your hospital what they want, but at the same time, remain competitive and actually make more money.

Oh, yeah. And an advantage of this right here – each time a patient goes in and gets something done, we have a throughput of about 1,000 accounts a second right now. If this approach could significantly improve on that, you go from a what‑if process of several hours to a day, down to maybe ten minutes.

How many accounts do you have?

In the system, we have several millions.

I would say that you can do that in under three minutes.

That would be fantastic.

You can just throw it on a grid. The main problem would be just loading the data to process. But those are all just in‑memory rules that you can just spit out and execute immediately, basically.

I mean, strictly speaking, if you get enough machines out there, you could type in a rule and see what it changed in the picture.

Actually, a million records is something that you can handle very easily. I mean, you can just preload them into memory and just execute. So let me think for a second. Assuming that you don’t have a – yeah. Drop the three minutes, maybe less. You can probably do on‑the‑fly changes.

I don’t doubt it. I mean, I’ve actually wanted to do this from the beginning, but I had too much else to do so...

Okay. So how much of the system can you implement using this approach?

Well, the thing with this is, with the code you have above there, you’re basically saying, “I’m going to assume that the rule has a pretty specific form: a set of conditions for whether the rule applies, and a set of actions to perform if the rule applies.” That gets you some of the amounts, but it doesn’t get you rules interacting with each other.

What do you mean by that?

Which is something that we can't do without. What I mean is that if a certain procedure happens, it changes the evaluation model. Sometimes there will be no max price. Let’s see. That is probably covered by the logic.

When you have a rule that changes the evaluation context for the rules, you re‑execute the rules.

Okay. I’m going to get smart about which ones are...

No. You actually don’t really need to get smart in most cases, because usually performance wouldn’t be an issue. But usually what I’m doing is – okay, let’s execute all the rules, and a rule doesn’t actually change the state of the system. A rule produces output that I can then process – okay. Let’s say that there is a rule like: if you’re under this clause, then you get everything for free. So now the rules need to be re‑executed. And now the rule output is, “Everything is possible; go ahead.”

Now at that point in time, we re‑execute the rules, and every rule that actually tries to set a maximum amount is now invalid, because you have the visibility into what a rule is actually trying to do.
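A toy sketch of that evaluation model might look like this. All the names here are invented for illustration; the point is that rules emit declarative outputs instead of mutating state, which is what makes the invalidation visible to the engine:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Invented names, for illustration only: rules emit declarative
// outputs instead of mutating shared state directly.
public class RuleOutput
{
    public string Kind;       // e.g. "SetMaxCost" or "EverythingIsCovered"
    public decimal Amount;
}

public static class RulesEngine
{
    public static List<RuleOutput> Evaluate(IEnumerable<Func<RuleOutput>> rules)
    {
        // first pass: run every rule and collect its output
        var outputs = rules.Select(rule => rule())
                           .Where(output => output != null)
                           .ToList();

        // if some rule changed the evaluation context ("everything is
        // covered"), every attempt to set a maximum amount is invalid,
        // which we can see because outputs are data, not opaque code
        if (outputs.Any(o => o.Kind == "EverythingIsCovered"))
            outputs.RemoveAll(o => o.Kind == "SetMaxCost");

        return outputs;
    }
}
```

A real engine would obviously be richer than a string-keyed list, but the shape is the same: evaluate, inspect, invalidate, re-evaluate.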

Now what happens if I, for some reason, have a rule that cannot be expressed in the DSL? Is there a way to write it in C# and then invoke it?

Yeah, it’s just IL. That’s the fun part.

Yes. That’s the – that’s probably what I would consider almost the most powerful capability, because usually you can parameterize between 50 and 99 percent of a system. But you always have that edge case which requires something extra complicated. And supporting that in the DSL might make the DSL that much more complicated. But being able to just inject C# instead can be very powerful.

Okay, one note, however. The DSL that I have in mind is based on Boo, and Boo is a fully functional programming language. So your DSL is actually Turing complete. I would say that if you have to make it look ugly with a bit of code, you are probably missing an abstraction in your DSL that could facilitate it. Like when we moved from when patient arrived and when patient left to had_an_operation_for.

Yeah. This has a lot of promise.

Yeah, I think so. It’s certainly a very cool way of handling complex systems.

I’m telling you, if we can pull this off, and if we could get that distributed processing of rules down so that as the user is typing in contract rules he gets useful information back, that could turn part of our sector upside down. That’s a very powerful ability that I don’t think anyone has at this point in time.

Okay, cool. Very cool. So just going back to the original topic, this is what I’m talking about when I’m talking about an open/closed architecture. The idea here is that you’re literally able to take the system and say, “Okay, this is the core part of the system. This is how I evaluate whether a patient should get paid and how much he should get paid.”

And at this point in time, we have a very limited set of operations that we can execute: set maximum amount, set exceptions, set unlimited amounts, stuff like that. This is very limited. This is very small, and it is highly testable in isolation.

The part that usually isn’t testable, or is hard to test, is all the crazy business rules that they keep making up. I refuse to call this business logic because there isn’t any logic in it.

My head hurt when I was trying to understand. I actually went to our client and they were just – it’s just they kept adding onto different ways the contract could be calculated. It’s just absolutely ridiculous. I mean I honestly do not envy my coworker for having to figure this stuff out.

But now that he’s figured out all the different possibilities of what could happen, now comes the fun part, ’cause you can actually start building the DSL like this and – so what you’re saying is that once you have a good core that can support something like this, you don’t often get disruptive feature requests that require major rearchitecting?

No, you don’t. You generally go down the same path over and over and over again. Just “Okay, now I have this one. Now I have this one.” Sometimes you will get something like, “I want to check if – I need to make a check to see if his wife is also in the danger group,” or something like that. Okay, new feature in the core. This is something that we need to add.

But generally you stabilize on a feature set that satisfies all the parameterization that your customers have. And now you’re just churning out very similar Lego bricks that all look the same. And now your value is in the type of additional services that you can provide on top of this, like on‑the‑fly calculation, like let me tell you how I can do speculative analysis on top of historical data and your current rule set, and try to do on‑the‑fly optimization.

Like imagine that you’re trying to do something like what you just said about, “Let’s change a rule and see what’s happened.” What happen if I could just, “Okay, let’s look at all the rules and let’s try some optimization strategies to see what’s going on.” And now I’m going to tell you that, “You know what? If we can reduce this and up this, then we have a much better success chance about making 60 percent more profit this year.”

Well, and you – you almost get enough power that you have to be careful about changing things around too much, or they get suspicious.

Yeah. Yes and no, because this depends on how you market it. This is, by the way, why you don’t want to make the decision automatically, because it will always end up optimizing toward a place that you generally don’t want to go.

Well, if you wanna be really evil, you sell the tool to both the insurance company and the hospital.

And then they just optimize the client away. Yeah, and so at that point, no one uses them, so you basically optimize away the client. So yeah, that’s why I’m saying that you need to be careful not to make this automated, because at some point, you really want to make sure that the customer actually gets what they want. Yeah.

So jumping out a little bit, it seems like this is pursuing the kinds of ideas that we’ve known we should design for, for ages: you only have the parts of the system coupled that need to be coupled. It’s basically the same thing you learn in Software Design 101. But for some reason, when it comes to the real world, people just can’t seem to follow those ideas. Any idea why that is?

A lot of focus on infrastructure. A couple of hours ago, actually, I got a feature request for Rhino Security. It was, “I need to attach type information to the operations that they have. Can you please add an int field to the operation and call it AppSpecific, so I can add my own stuff there and anyone else can add their own stuff there?” And I looked at that and I said, “You know what? I think this makes me look like Win32. And I don’t really like that. I have another option. You can derive your class from my operation and add your own custom behavior, and that’s it.”

Oh, do you mean inheritance like it was supposed to be?

Yes, like add your own specific stuff. The software will pick up that you are using this new derived operation, and will use your own stuff. So you have extended my software. My software has allowed you to extend itself, and now you’re done. Create your class, register this new class in the mapping, and now you have this new class everywhere.
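In code, the extension model he is describing looks roughly like this. The class and member names here are illustrative, not Rhino Security's actual API:

```csharp
// Illustrative sketch only: "Operation" stands in for the framework's
// base class, and "AppSpecific" is the field the feature request asked
// for, now owned by the application's own subclass instead.
public class Operation
{
    public virtual string Name { get; set; }
}

public class MyAppOperation : Operation
{
    // application-specific data lives on the derived class, without
    // turning the framework's type into a Win32-style grab bag
    public virtual int AppSpecific { get; set; }
}
```

With an ORM like NHibernate, registering the subclass in the mapping means the framework persists and loads the derived type transparently, which is what makes this extension point practical.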

You’re effectively saying you do the minimum amount of work that could possibly be required, just information‑wise to accomplish what you need accomplished, and you’re providing a system where that’s actually built in.

Yes, so the system literally has an IoC in place, has something like NHibernate in place, which actually understands the concepts of OO and can apply them. In most typical systems – let’s take a typical system of – you know what? Even something like Animal, Dog and Cat. Yeah, you can apply a typical OO design and have everything that isn’t specific to a dog or cat take an animal as a parameter.

The problem is that you have no way of introducing a lion into the system, because the system is static, frozen at the time at which it was compiled. We no longer live in a world where this is true, because we can introduce new types into the system on the fly. So if we have a tool that supports that, we can literally apply a classic OO design solution in our approach without having to do anything complex. The problem is that people have been so ingrained with the notion that this is hard, that it is not possible to do on the fly and dynamically – this is such an ingrained concept that people just don’t think about it.

It’s historical baggage. And this is historical baggage mostly in the minds of the people who are working on that. This is no longer a technical problem.

So why doesn’t anyone else do this? They don’t know?

People are doing that, if you look at all the advanced stuff. I did an advanced NHibernate talk and an advanced IoC talk. Frankly, I was slightly ashamed of doing these talks because, “Look, inheritance. Look, polymorphism.”

And these were my advanced talks, like, “Look, I can add a new property to a derived class.” Wow. And, “Oh, look how I can override behavior using the override keyword,” stuff like that.

And that was in the advanced IoC and advanced NHibernate talks. And I’m slightly queasy about calling them advanced because they’re literally OO 101. But the application of these 101 principles in real‑world scenarios is what makes them advanced. And then you move to the really advanced stuff like, “Okay, let’s select the appropriate behavior or the appropriate class that we want to load at runtime, based on business logic.” Let’s say that you’re working for... Prudential is a good example of an insurance company, and AIG is another one, just random stuff that pops into my head.

Well, AIG isn’t a good example of an insurance company right now.

Yeah. Okay, those are just two insurance companies that popped into mind. So let’s say that both of them run on your software. They have different rules, different requirements, different behaviors, different data they are gathering. But if you want them to run on the same system – now when a request comes in, calculating whether this is a valid insurance claim for an AIG customer goes through a different process than for a Prudential customer.

So for that reason, your system is set up to handle: okay, when I’m loading a Prudential customer, it’s different than an AIG customer, both in the fields and in the behavior of the system.

It’s interesting that you say that, because we actually have our hospital, and there are probably about 40 different hospitals in the system.

And all of them have different rules.

I didn't mean hospitals, 40 different insurance companies.

And all of them have slightly different rules on top of the same basic offering, right?

Oh, some have – some are very similar. And some have very different ways of evaluating that definitely need to be accounted for.

And the problem with this type of system is, how do you actually manage to do that and keep your velocity over time, especially if you want to release a new version?

I mean, the question is can you use this operating engine for this company and this other engine for the other one?

No, you don’t. You use the same engine and specialize the rules. Well – you could actually have something like: these guys do something so crazy, I don’t want to introduce it to anyone else. So I’m going to specialize the engine – again, sub‑class and extend. And when I’m processing something of theirs, I’m using their engine. And I am using their engine by virtue of registering it in their own child container, and now I’m done.

And now each of the customers has their own rule set, basically just scripts that they’re running. And they’re separated by the directory that they’re in, and that’s it. You’re done.
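With Windsor, that per-customer specialization might be sketched like this. The engine types here are invented for illustration; the container calls are Windsor's actual fluent registration and child-container APIs:

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// illustrative types standing in for the real rules engine
public interface IRulesEngine { }
public class StandardRulesEngine : IRulesEngine { }
public class CrazyCustomerRulesEngine : IRulesEngine { }

public static class ContainerSetup
{
    public static IWindsorContainer BuildFor(string customer, IWindsorContainer parent)
    {
        // the parent holds the default engine for everyone
        parent.Register(
            Component.For<IRulesEngine>().ImplementedBy<StandardRulesEngine>());

        // the special customer gets a child container with a specialized
        // engine; anything not overridden falls through to the parent
        var child = new WindsorContainer();
        parent.AddChildContainer(child);
        if (customer == "CrazyCustomer")
            child.Register(
                Component.For<IRulesEngine>().ImplementedBy<CrazyCustomerRulesEngine>());
        return child;
    }
}
```

Resolving IRulesEngine from the child container picks up the specialized registration when one exists, and the parent's default otherwise.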

So there are two major advantages I see. One is overriding only the functionality you need to override, instead of duplicating it. And the other is Turing completeness – sometimes you really do need that, and your environment makes this trivial. Which is virtually unheard of, unless you simply don’t have any business rule engine in there at all.

Yeah. One of the major problems in the industry is that we try to solve complexity with complexity, so you end up with complex solutions for complex problems. But if you don’t reply to complexity with complexity, you generally start with a simple solution. Often, that simple solution is wrong. But it usually will point you in the right direction.

Yeah? Oh, yeah. I knew there was some way to make it simpler. I just had too much else to do and... Well, I think I have a pretty good idea of what you’re suggesting here. What I would suggest is you do whatever you want with this recording, but I would like to type it up, just because I find it much easier to read than to listen to. I don’t really like watching the PDC videos because they’re linear, and I have to pause and rewind if I wanna hear something again. Whereas, if you have written material, you can scan it at your own speed and do all that.

And there is where the interesting bits are over. Assuming that you got this far, please let me know what you think about this format.

NH Prof Alerts: Excessive number of rows returned

This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

The excessive number of rows returned warning is generated by the profiler when a query returns a large number of rows. The simplest scenario is that we simply loaded all the rows in a large table, using something like this code:

session.CreateCriteria(typeof(Order))
	.List<Order>();

This is a common mistake when you are binding to a UI component (such as a grid) that performs its own paging. This is a problem on several levels:

  • We tend to want to see only part of the data
  • We just loaded a whole lot of unnecessary data
  • We are sending more data over the network
  • We have higher memory footprint than we should have
  • In extreme cases, we may crash as a result of out of memory exception

None of those are good things, and like the discussion on unbounded result sets, this can be easily prevented by applying a limit at the database level to the number of rows that we will load.

But it is not just simple queries without limits that can cause this issue; another common source of this error is a Cartesian product when using joins. Let us take a look at this query:

session.CreateCriteria(typeof(Order))
	.SetFetchMode("OrderLines", FetchMode.Join)
	.SetFetchMode("Snapshots", FetchMode.Join)
	.List<Order>();

Assuming that we have ten orders, with ten order lines each and five snapshots each, we are going to load 500 rows from the database. Mostly, they will contain duplicate data that we already have, and NHibernate will reduce the duplication to the appropriate object graph.

The problem is that we still loaded too much data, with the same issues as before. Now we also have the problem that a Cartesian product doesn't tend to stop at 500 rows, but escalates very quickly to a ridiculous number of rows returned for the trivial amount of data that we actually want.

The solution for this issue is to change the way we query the data. Instead of issuing a single query with several joins, we can split it into several queries, and send them all to the database in a single batch using Multi Queries.
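Using NHibernate's multi query support, that split might look roughly like this (a sketch; the HQL strings are assumptions based on the entities above):

```csharp
// sketch: three queries, one round trip, no Cartesian product
var results = session.CreateMultiQuery()
    .Add("from Order o")
    .Add("from Order o left join fetch o.OrderLines")
    .Add("from Order o left join fetch o.Snapshots")
    .List();
// each item in results is the result list of the corresponding query;
// NHibernate wires the fetched collections onto the same Order
// instances tracked by the session
```

With ten orders, ten order lines and five snapshots each, this loads roughly 10 + 100 + 50 rows instead of 500.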

Architecture Advice: Scale the architecture to fit the application

I was just reviewing an application that was obviously built upon a lot of the best practices advice for using an NHibernate application. I am currently in the process of ripping apart much of that application and then putting it back together again.

This involves things like collapsing projects, removing abstractions and deleting tests. The main reason for that is quite simple: the architecture is too big for the application. We can create a much more streamlined application if we don't burden it with an architecture that is suitable for bigger and more complex applications.

Selecting an inappropriate architecture is a big burden on an application, and it is a choice where you should be very careful. Choose an overly simplistic architecture, and you can only scale the application's complexity with great difficulty. Choose an overly complex architecture, and you can't even get the baseline working easily, because of the complexity involved.

Don't try to make an application fit the architecture, and don't try to apply architecture blindly. Think and always start from the simplest thing that can possibly work.

My baseline ASP.Net MVC modifications

We start by killing ViewData. I don't want it, and I want to fail hard if someone is trying to use it:

image

Likewise in the views, I hate ViewData.Model, so I remove it from there as well:

image

For Javascript, I don't want to use <script/> tags, they are prone to path problems, and I might need to go back and change them (using compression, combining scripts, etc). I use this approach:

image

The page title is set by the view, the way it should be:

image

Or, I just set it up in the master page, if I don't care that much about changing it.

Do not put presentation concerns in the controllers!

I thought it would be obvious, but I am currently cleaning up an application that has mixed them in a very annoying way.

I decided to track the reason for this coupling and I found this on the sample controller in ASP.Net MVC:

image

I am pretty sure that setting the page title is a presentation concern, as such, I don't want to see this in the controller, and I certainly don't want to see it in the base sample from which everyone is going to start.

NH Prof Alerts: Unbounded result set

This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

An unbounded result set is when we perform a query without explicitly limiting the number of returned results (using SetMaxResults() with NHibernate, or TOP or LIMIT clauses in the SQL). Usually, this means that the application assumes that a query will only return a few records. That works well in development and testing, but it is a time bomb in production.

The query suddenly starts returning thousands upon thousands of rows and in some cases, it is returning millions of rows. This leads to more load on the database server, the application server and the network. In many cases, it can grind the entire system to a halt, usually ending with the application servers crashing with out of memory errors.

Here is one example of a query that will trigger unbounded result set warning:

session.CreateQuery("from OrderLines lines where lines.Order.Id = :id")
       .SetParameter("id", orderId)
       .List();

If the order has many line items, we are going to load all of them, which is probably not what we intended. A very easy fix for this issue is to add pagination:

session.CreateQuery("from OrderLines lines where lines.Order.Id = :id")
	.SetParameter("id", orderId)
	.SetFirstResult(0)
	.SetMaxResults(25)
	.List();

Now we are assured that we only need to handle a predictable number of rows, and if we need to work with all of them, we can page through the records as needed. But there is another common occurrence of the unbounded result set: directly traversing the object graph, as in this example:

var order = session.Get<Order>(orderId);
DoSomethingWithOrderLines(order.OrderLines); 

Here, again, we are loading the entire set (in fact, it is identical to the query we issued before) without regard to how big it is. NHibernate does provide robust handling of this scenario, using collection filters:

var order = session.Get<Order>(orderId);
var orderLines = session.CreateFilter(order.OrderLines, "")
	.SetFirstResult(0)
	.SetMaxResults(25)
	.List();
DoSomethingWithOrderLines(orderLines);

This allows us to page through a collection very easily, and saves us from having to deal with unbounded result sets and their consequences.

NH Prof Alerts: Use of implicit transactions is discouraged

This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

A common mistake when using a database is to think that transactions are only needed to orchestrate several write statements. In fact, every operation that the database performs is done inside a transaction. This includes both queries and writes (update, insert, delete).

When we don't define our own transactions, we fall back into implicit transaction mode, in which every statement sent to the database runs in its own transaction, resulting in a higher performance cost (database time to build and tear down transactions) and reduced consistency.

Even if we are only reading data, we want to use a transaction, because using a transaction ensures that we get consistent results from the database. NHibernate assumes that all access to the database is done under a transaction, and strongly discourages any use of the session without one.

Example of valid code:

using(var session = sessionFactory.OpenSession()) 
using(var tx = session.BeginTransaction()) 
{ 
	// execute code that uses the session 
	tx.Commit(); 
} 

Leaving aside the safety issues of working with transactions, the assumption that transactions are costly and that we need to optimize them is a false one. As already mentioned, databases are always running in a transaction, and they have been heavily optimized to work with transactions. The question is whether this happens per statement or per batch. There is some amount of work that needs to be done to create and dispose of a transaction, and having to do it per statement is actually more costly than doing it per batch.

It is possible to control the number and type of locks that a transaction takes by changing the transaction isolation level (and indeed, a common optimization is to reduce the isolation level).
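With NHibernate, the isolation level is chosen when opening the transaction; for example:

```csharp
// Example: opening the transaction with a reduced isolation level
// (IsolationLevel comes from System.Data)
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
{
    // queries here take fewer and shorter-lived locks than
    // they would under the Serializable isolation level
    tx.Commit();
}
```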

NHibernate treats the call to Commit() as the time to flush all changed items from the unit of work to the database; without an explicit Commit(), it has no way of knowing when it should do that. A call to Flush() is possible, but it is generally strongly discouraged, because it is usually a sign that you are not using transactions properly.

I strongly suggest that you use code similar to that shown above (or use another approach to transactions, such as TransactionScope, or Castle's Automatic Transaction Management) in order to handle transactions correctly.

NH Prof Alerts: Select N + 1

This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

Select N+1 is a data access anti pattern, in which we access the database in one of the least optimal ways. Let us take a look at a code sample, and then discuss what is going on. I want to show the user all the comments from all the posts, so they can delete all the nasty comments. The naïve implementation would be something like:

// SELECT * FROM Posts
foreach (Post post in session.CreateQuery("from Post").List()) 
{
     //lazy loading of comments list causes: SELECT * FROM Comments where PostId = @p0
    foreach (Comment comment in post.Comments) 
    {
        //do something with comment
    }
}


In this example, we can see that we are loading a list of posts (the first select) and then traversing the object graph. However, when we access the lazily loaded collection, we cause NHibernate to go to the database and bring back the results one row at a time. This is incredibly inefficient, and NHibernate Profiler will generate a warning whenever it encounters such a case. The solution for this example is simple: we just force an eager load of the collection up front.

Using HQL:

var posts = session
	.CreateQuery("from Post p left join fetch p.Comments")
	.List();

Using the criteria API:
session.CreateCriteria(typeof(Post)) 
	.SetFetchMode("Comments", FetchMode.Eager) 
	.List();


In both cases, we will get a join and only a single query to the database. Note that this is the classic appearance of the problem; it can also surface in other scenarios, such as calling the database in a loop, or in more complex object graph traversals. In those cases, it is generally much harder to see what is causing the issue.

NHibernate Profiler will detect those scenarios as well, and give you the exact line in the source code that caused this SQL to be generated. Another option for solving this issue is MultiQuery and MultiCriteria, which are also used to solve the issue of Too Many Queries.
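As a rough sketch of the MultiCriteria approach (reusing the Post and Comment entities from the example above; the exact result shapes depend on your mappings), batching several queries into a single database round trip looks like this:

```csharp
// 'session' is an open NHibernate ISession.
// Both queries are sent to the database in one round trip,
// instead of one round trip per query.
var results = session.CreateMultiCriteria()
    .Add(DetachedCriteria.For<Post>())
    .Add(DetachedCriteria.For<Comment>())
    .List();

// results contains one IList per criteria, in the order they were added
var posts = (IList)results[0];
var comments = (IList)results[1];
```

This doesn't join the two result sets the way fetch does; it simply collapses several independent queries into one trip over the wire.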

NH Prof Deep Dive: Implemented the Unbounded Result Set warning

So, I talked a bit about the architecture and the actual feature, but let us see how I actually built and implemented this feature.

I want to point out that this is the actual code that goes into the actual product. And it is actually one of the more complex processors, because of the possible state changes.

public class UnboundedResultSetStatementProcessor : IStatementProcessor
{
	public void BeforeAttachingToSession(SessionInformation sessionInformation, 
		FormattedStatement statement)
	{
	}

	public void AfterAttachingToSession(SessionInformation sessionInformation, 
		FormattedStatement statement, OnNewAction newAction)
	{
		if(statement.CountOfRows!=null)
		{
			CheckStatementForUnboundedResultSet(statement, newAction);
			return;
		}
		bool addedAction = false;
		statement.ValuesRefreshed += () =>
		{
			if(addedAction)
				return;
			addedAction = CheckStatementForUnboundedResultSet(statement, newAction);
		};
	}

	public bool CheckStatementForUnboundedResultSet(FormattedStatement statement,
		 OnNewAction newAction)
	{
		if (statement.CountOfRows == null)
			return false;

		// we are discounting statements returning 1 or 0 results because
		// those are likely to be queries on either PK or unique values
		if (statement.CountOfRows < 2)
			return false;

		// we don't check for select statement here, because only selects have row count
		var limitKeywords = new[] { "top", "limit", "offset" };
		foreach (var limitKeyword in limitKeywords)
		{
			//why doesn't the CLR have Contains() that takes StringComparison ??
			if (statement.RawSql.IndexOf(limitKeyword, StringComparison.InvariantCultureIgnoreCase) != -1)
				return true;
		}

		newAction(new ActionInformation
		{
			Severity = Severity.Suggestion,
			Title = "Unbounded result set"
		});
		return true;
	}

	public void ProcessTransactionStatement(TransactionMessageBase tx)
	{
	}
}

And now the test:

[TestFixture]
public class Ticket_51_UnboundedResultSet : IntegrationTestBase
{
	[Test]
	public void Will_issue_alert_for_unbounded_result_sets()
	{
		ExecuteScenarioInDifferentAppDomain<LoadPostsUsingCriteriaQuery>();

		var statement = observer.Model.RecentStatements.Statements
			.OfType<StatementModel>()
			.First();
		Assert.AreEqual(1, statement.Actions.Count);
		Assert.AreEqual("Unbounded result set", statement.Actions[0].Title);
	}
}

And, just for fun, the scenario that we are testing:

public class LoadPostsUsingCriteriaQuery : IScenario
{
    public void Execute(ISessionFactory factory)
    {
        using (var session = factory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.CreateCriteria(typeof(Post))
                .List();

            tx.Commit();
        }
    }
}

And that is it. That is all you have to do to implement a new feature. This makes building the application much easier, because at each point in time, we have to deal with only one thing. It is the aggregation of everything put together that is actually of value.

Also, notice that I heavily optimized my workflow for tests and scenarios. I can write just what I want to happen, without caring about how it actually happens. Optimizing the ease of testing is another architectural concern that I consider very important. If we don't deal with that, the tests would be a PITA to write, so they either wouldn't get written, or we would get tests that are hard to read.

Also, notice that this is a full integration test: we execute the entire backend, and we test the actual view model that the UI is going to display. I could have tested this using standard unit testing, but in this case, I chose to see how everything works from start to finish.

NH Prof Deep Dive: Applying the Open Closed Principle at the architecture level

The back end in NH Prof is responsible for intercepting NHibernate's events, making sense of all the mess, applying best practices suggestions and forwarding to the front end for display.

It is also a good example of how I apply the Open Closed Principle at the architecture level. With NH Prof, there are multiple extension points that I can use to add new features.

Here is a schematic of how things work:

image

Not shown here is the NHibernate Listener (of which, of course, I have several), which is publishing events to the bus. Those events are first handled by the low level message handlers, which publish new events on the bus.

Those are interesting only in the sense that they translate the low level details into events with semantics that we can use in the app itself. Most of those events, as you probably guessed, end up in the model building part, which is responsible for turning a set of unstructured events into a coherent structure.

Along with the model building, we have another extension point here: best practices analysis. Those are implemented as a set of classes that we plug into the model building part. If we want to add a new best practice, we need to create a new class, register it, and we are done.
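A convention-based registration makes that last step nearly free. As a hypothetical sketch (the real registration lives in the BusFacility; the container variable and the use of IStatementProcessor here are illustrative), Windsor can pick up every analyzer in the assembly automatically:

```csharp
// Hypothetical sketch, not the actual BusFacility code:
// register every best-practice analyzer by convention, so adding
// a new analyzer class is all it takes to extend the product.
container.Register(
    AllTypes.Of<IStatementProcessor>()
        .FromAssembly(typeof(UnboundedResultSetStatementProcessor).Assembly)
);
```

With that in place, "register it" in the checkin below amounts to the new class simply existing in the scanned assembly.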

Here is the checkin for implementing the unbounded row set (which is ticket #51):

image

We add a new class (and a test for the class) and register it in the BusFacility; in this case I also had to fix a bug in the tested scenario, which loaded the wrong item.

I'll post more details about the actual implementation of Unbounded Result Set soon, but I wanted to talk about the architecture that enables this. Because we structured the whole thing around a common core that we can use, anything that fits the core (and most things do) doesn't require any special effort. Add a new behavior, and you are done.


NH Prof new feature: Unbounded result sets

One of the things that I am doing with NH Prof is not only giving you visibility into what NHibernate is doing, but also trying to automate my own experience in analyzing best practices and problematic usages of NHibernate.

NH Prof will detect usage patterns and warn against bad practices and suggest how to deal with them. The first one that I implemented was detecting Select N+1, and the feedback from the beta group was "Wow! I didn't even know that we had this problem, but casual use with the profiler immediately showed it."

Here is the newest feature, detecting & warning about unbounded result sets:

image

And the actual warning:

image

NH Prof new feature: Row Counts

That was actually a hard feature to implement, since this is not something that NHibernate just gives out. Nevertheless, by trawling through the codebase long enough, I was able to figure out how to do it.

image

As an aside, one of the requested features for NH Prof was to be able to get DB level stats. I am not going to do this for v1.0, but I now have a pretty firm idea about how to implement this. We will have to see how many people request this information.

Hidden Windows Gems: Extensible Storage Engine

Did you know that Windows came with an embedded database?

Did you know that this embedded database is the power behind Active Directory & Exchange?

Did you know that this is actually part of Windows' API and is exposed to developers?

Did you know that it requires no installation and has zero administration overhead?

Did you know there is a .Net API?

Well, the answer to all of that is that you probably didn't know, but it is all true!

The embedded database is called Esent, and the managed library for this API was just released.

This is an implementation of ISAM DB, and I have been playing around with it for the last few days. It isn't as nice for .Net developers as I would like it to be (but Laurion is working on that).

I think making this public is a great thing, and the options that this opens up are quite interesting. I took it for a spin and came up with this tiny bit of code that allows me to store JSON documents:

https://rhino-tools.svn.sourceforge.net/svnroot/rhino-tools/branches/rhino-divandb

It is not done, not nearly done, but the fact that I could rely on the embedded DB made my life so much easier. I wish I had known about it when I played with Rhino Queues; it would have made my life so much simpler.

Longest time to first test pass, but it now works

public class DivanDatabaseTest
{
	[Fact]
	public void Can_add_document_to_database()
	{
		using (var instance = new Instance("test"))
		{
			instance.Init();

			using (var session = new Session(instance.JetInstance))
			{
				var database = new DivanDatabase(instance, session, "test.divan");
				DocumentId[] add = database.Add(
					JObject.Parse("{'name': 'oren', 'email': 'ayende@ayende.com'}")
					);

				using (var view = database.OpenDocumentsView())
				{
					var doc = view.FindById(add[0].Id);

					Assert.Equal("oren", (string) doc["name"]);
					Assert.Equal("ayende@ayende.com", (string) doc["email"]);
				}
			}
		}
	}
}

The data access challenge: Implement Rhino Security

Rhino Security is an awesome little framework that provides security infrastructure for applications. I created it after having to rebuild a security infrastructure five times, due to changing requirements. It is implemented on top of NHibernate.

I would like to challenge you to implement Rhino Security in your data access strategy of choice. Here are the design, intro and implementation notes. And of course the code itself is accessible here.

If you think that your data access strategy is awesome, show me the code. Rhino Security is a non trivial example, but it is still quite small, about 1,700 lines of code. So that is quite doable.

Oh, and I would love to see implementations on non RDBMS platforms.

I don't expect anyone to step up and do this, by the way. For the simple reason that I don't think that this is possible. And yes, that is said in the vain hope that someone will actually show me one.

When your extensibility strategy is OOD...

You get to have really simple solutions.

One of the reasons that I like NHibernate so much is that it allows me to use Object Oriented solutions to my problems. Case in point: we have the Rhino Security library, which provides a facility for asking security questions about your domain.

Bart had an issue with Rhino Security: he wanted to extend the library to also contain a type. The original idea was to add an int field called AppSpecific, which would let each app define additional information on top of the existing domain model.

That made me feel so Win32 that I had to go and sleep for a while. I suggested the following OO solution:

public class BartOperation : Operation
{
	public OperationType OperationIsStronglyTyped { get; set; }
}

I mean, if you want to extend Operation, why not... extend operation?

The problem is, I think, that most people have been so brainwashed by the impedance mismatch that this wouldn't occur to them. Bart went away and implemented that suggestion; the whole exchange took less than a day.

Choose your tools carefully, and use them well, for they will reward you aplenty.

Accessibility Concerns

Kelly Stuard called me out with regards to the NH Prof user interface:

I'd just like to weigh in and say that I am completely against the WPF-murder of this application's UI.  There are several hundred reasons to go away from the OS's colors and standard dialogs; I do not see a single one here.

In choosing to shove your choices upon your users you are potentially alienating people who may need a particular color contrast or text size(color blind / low vision) or need to only use the keyboard (blind with reader).

It looks very sexy, I will admit.  However, it is a piece of software I want to *function* very sexy; that is its only job, in my mind.  I want it to look just like every other application and respond to my OS changes just like every other application.

There are a few reasons that we chose this particular look & feel for the application. One was branding, I wanted the application to remind people of my blog*. The second was that this UI allows us to express a lot of things in a concise fashion while keeping it pretty. The third was that it allows us to create pretty UI.

That may seem like an inconsequential point, but it is actually very important. UI sells, period. And good UI sells more than the black on gray that going with the OS scheme would give us. I am building this application to sell it, and as such, anything that increases the sales count is a huge plus for me.

Now, am I alienating people that would prefer to have a different color scheme? Am I alienating people with screen readers?

Probably so, I am afraid to say. But that is, again, a conscious decision on my part.

There are ways that we can improve the situation for that scenario, such as providing an alternative color scheme, or one that picks up the user's system colors. WPF also contains the hooks to make applications friendly to screen readers, so that is possible as well.

But that isn't going to be in version 1.0. I am not sure that it will be in the product at all, for that matter. It is a very simple matter of considering the ROI. Doing something like that is going to take resources that I could invest in other features. In order to justify doing this, I need a big enough user base that requires this functionality.

At the end of the day, I need to be able to pay the bills. And considering the target audience of the NHibernate Profiler, I don't think that this is a Must Have feature for v1.0, and it is a feature that will be triaged along with the others for vNext.

* Funny how what seemed like a trivial decision four or five years ago, choosing a color scheme for the site, turned out to have such huge implications for the rest of my software.

Should you be able to define new abstractions in a Domain Specific Language?

Fowler has a post about DSLs, which contains the following:

The first is in language design. I was talking with a language designer who I have a lot of respect for, and he stressed that a key feature of languages was the ability to define new abstractions. With DSLs I don't think this is the case. In most DSLs the DSL chooses the abstraction you work with, if you want different abstractions you use a different DSL (or maybe extend the DSL you're using). Sometimes there's a role for new abstractions, but those cases are the minority and when they do happen the abstractions are limited. Indeed I think the lack of ability to define new abstractions is one of the things that distinguishes DSLs from general purpose languages.

I disagree with the statement that this is an exception. This is something that comes up again and again in a DSL. You start with a given concept of what you want to do, and you give it to the users. Rinse & repeat a couple of times and you have a language that the users can start playing with.

The main problem is that not giving the users the facilities for abstraction usually means that they will work around your system, or that the DSL scripts you end up with will suffer from copy & paste, unnecessary complexity, etc.

A trivial example would be how to define customer roles. You want to be able to process orders based on customer role, and as such, it is important to be able to define how you choose a customer. Not only that, but even the customer selection criteria should be abstracted.

Here are a few examples:

  • Bad customer:
    • Bad credit rating
    • OR More than 5 returns last year
    • OR More than 5 helpdesk calls last 6 months
  • Silver Customer
    • More than 5 purchases
  • Gold Customer
    • Total purchases over 5,000$
  • Preferred customer:
    • Not a bad customer
    • Gold customer

We want to define roles using other roles. Because that is how the business thinks about it. We naturally create abstractions for ourselves, because this is how we think.

If we need to go back to the development team for each new abstraction, we have a heavy weight process that the users will work around.

Most of my DSLs contain some form of user-defined abstractions, usually in the form of:

define bad_customer:
	customer.HasBadCredit or 
		customer.TotalReturnsIn(1.year) > 5 or
		customer.TotalHelpdeskCallsIn(6.months) > 5

define silver_customer:
	customer.TotalPurchases > 5

define gold_customer:
	customer.TotalPurchasesAmount > 5000

define preferred_customer:
	gold_customer and not bad_customer
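Under the hood, one simple way to support this kind of layered abstraction is to turn each define into a named predicate, and let later defines reference earlier ones. This is purely an illustrative sketch in plain C# (the BusinessTerms class and the Customer shape are my assumptions here, not the actual DSL implementation):

```csharp
using System;
using System.Collections.Generic;

public class Customer
{
    public bool HasBadCredit { get; set; }
    public decimal TotalPurchasesAmount { get; set; }
}

public class BusinessTerms
{
    readonly Dictionary<string, Func<Customer, bool>> terms =
        new Dictionary<string, Func<Customer, bool>>();

    // 'define bad_customer: ...' registers a named predicate
    public void Define(string name, Func<Customer, bool> predicate)
    {
        terms[name] = predicate;
    }

    // later defines can compose earlier ones by name
    public bool Is(string name, Customer customer)
    {
        return terms[name](customer);
    }
}

class Program
{
    static void Main()
    {
        var terms = new BusinessTerms();
        terms.Define("bad_customer", c => c.HasBadCredit);
        terms.Define("gold_customer", c => c.TotalPurchasesAmount > 5000);
        terms.Define("preferred_customer",
            c => terms.Is("gold_customer", c) && !terms.Is("bad_customer", c));

        var customer = new Customer { TotalPurchasesAmount = 9000 };
        Console.WriteLine(terms.Is("preferred_customer", customer)); // True
    }
}
```

The point is that "preferred_customer" is built out of the business terms the users themselves defined, which is exactly the kind of abstraction the defines above express.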

This is usually user editable, and we plug it into the intellisense of the system. In one project, I even provided refactoring support: Extract Business Term (bound to CTRL+ALT+V, go R#!)

Abstractions are important, and building them into the language is just as important. You want to empower the users to extend your language, even if it is a language that is very limited in scope. Because then you give them the option to take the language to realms that you never dreamed of.