I have the strong feeling that I am missing something. We have just run into a major slowdown in one of our pages as a result of adding the CalendarExtender. This led me to do some searching, and I am not sure that I can believe the results.
Let us take this simple page:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="test.aspx.cs" Inherits="ExplodingMonkeys.Web.test" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
    <form id="form1" runat="server">
        <asp:TextBox ID="Date" runat="server" />
    </form>
</body>
</html>
When rendered to the browser, it weighs 528 bytes. Now, let us add an <asp:ScriptManager runat="server" /> to the page. The size of the page exploded to 79 kilobytes, from this change alone.
I then added the following to the page:
<ajax:CalendarExtender runat="server" TargetControlID="Date" CssClass="ClassName" Format="MMMM d, yyyy" />
The page size reached 143 kilobytes. I am pretty sure that I am missing something here. I enabled both caching and compression in the web config:
<scriptResourceHandler enableCompression="true" enableCaching="true" />
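For reference, that element sits under the system.web.extensions section; this is a sketch of the surrounding web.config structure from the ASP.NET AJAX 1.0 schema (verify against your own config, as the exact nesting can differ between versions):

```xml
<configuration>
  <system.web.extensions>
    <scripting>
      <scriptResourceHandler enableCompression="true" enableCaching="true" />
    </scripting>
  </system.web.extensions>
</configuration>
```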
And then tried again. The results were better, but not really encouraging.
With just script manager: 79 KB (56 KB from cache)
With calendar extender: 143 KB (115 KB from cache)
Just to make it clear, there is a noticeable delay on our pages; they went from being instantaneous to taking four or five seconds. And I am talking about localhost testing, on powerful machines. (Again that HTTP limit of two connections per server, I think.)
Am I missing something, or are those the real numbers?
The issue of the EntLib licenses just came up in the mono dev list. Quoting from the license:
That really surprised me. I suppose it makes some sort of sense for Microsoft, but I don't like it.
I was asked this question, and I think that the answer is interesting enough to post here.
There are two sets of technologies that MS is currently pushing.
There is Linq, which is additional support for querying directly from the language, and there is the various ORM efforts that Microsoft is pushing.
When the Linq bits stabilize to the point where it is viable to start projects using them, there will be support for querying NHibernate with Linq.
I have looked in detail into the four (or is it five now?) ORM efforts that Microsoft is currently pushing, and I am not seeing anything that excites me there.
Early feedback that I got from Microsoft confirmed that even ADO.Net for Entities, which is the only ORM effort that tries to match NHibernate's capabilities, is not going to be extensible enough to support what I and my customers need. This is the usual 80% solution, with the hood welded shut in all the interesting locations.
In addition to this, I find the whole configuration schema to be an order of magnitude more complex than it needs to be, with the additional complexity that this would bring later on when trying to understand what it means.
So no, I do not believe that Microsoft pushing their ORM efforts will have a bad effect on NHibernate, and having Linq would just make our lives that much easier.
That said, an ORM that comes from Microsoft is probably going to be popular because it comes from Microsoft. I believe that this will merely confirm for many in the .NET world that ORM is a valid way to work.
The same people that are currently using or evaluating NHibernate will keep it; those that would never use a non-Microsoft tool will use the MS ORM. Not that much different from now. People trying to migrate all but the simplest projects from NHibernate to MS' ORM will run into the aforementioned brick walls, and will either keep using NHibernate, or will have to invest significant effort porting the application.
Jeff Atwood has posted about the difficulties some so-called programmers have with programming. The interesting facts are here. An extremely simple task is given, and quite a few people simply can't handle it, or take an undue amount of time to solve it.
A while ago I posted all sorts of interesting questions that I would not put in interviews; it was the sort of interview questions from hell. I have done a lot of interviews since then, and I have gotten sick of the level of the people that I meet.
My current favorite question is to give them this code:
public class Program
{
    public static void Main(string[] args)
    {
        //print the input string in reverse
    }
}
And ask them to solve it. It has gotten to the point that this is literally the first thing that I will ask a candidate to do. I am not picky; I would accept a solution in any imperative language except Ook#. And quite a few of them simply can't do it. I am usually interviewing people that are supposed to have two years or more of experience building applications in .Net. And they simply can't do it.
Let me tell you how vicious I am with this horrible question: I give them a laptop (mine, actually) loaded with Visual Studio + ReSharper, full access to the net (and yes, that includes Google), and they still can't do it. In one particular case, I watched the candidate go to Google, skip the first result, which was titled "Reversing a String in C#", and go directly to a code sample in C(!). He then spent the next five minutes shocking me by trying to get it to compile.
I can think of at least five different ways of doing it, depending on whether you know that a string can be iterated over or that it has an indexer. I would expect that even if you knew neither of those things about strings, you would realize that you can use Substring() to do it in a horribly inefficient way.
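For the record, a few of those ways sketched in C# (these are just the obvious approaches, not anything a candidate actually produced):

```csharp
using System;
using System.Text;

public static class StringReversal
{
    // Reverse via the character array
    public static string ReverseWithArray(string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }

    // Walk the indexer backwards
    public static string ReverseWithIndexer(string s)
    {
        StringBuilder sb = new StringBuilder(s.Length);
        for (int i = s.Length - 1; i >= 0; i--)
            sb.Append(s[i]);
        return sb.ToString();
    }

    // The horribly inefficient Substring approach: O(n^2) string concatenation
    public static string ReverseWithSubstring(string s)
    {
        string result = "";
        for (int i = 0; i < s.Length; i++)
            result = s.Substring(i, 1) + result;
        return result;
    }
}
```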
I can accept people with a lack of knowledge, I can accept newbies (no prejudice there), but I can't accept someone who can't program.
Sasha Goldshtein just posted some very interesting stuff about the way the JIT optimizes method dispatch. Despite the interesting topic, what caught my eye more than everything else was:
I found it hilarious.
All too often, I see patterns used indiscriminately. "What do you mean I should use a Memento here? It is in the Book." Patterns are invaluable as a way of expressing complex ideas in a simple, consistent manner; the problem is that this value is cheapened by misuse all over the place.
The idea of patterns is great; the implementation of it... could use some refactoring.
A developer I am working with added a method Customer.FindOrdersByCity(), and we got into a discussion about why that wasn't the correct place for the method. This discussion made it clear that I have never really sat down and written out what I think about the guidelines I follow with regard to good application design.
Let us start with the definitions:
- Entity - a (usually persistable) object that is part of the application model. Usually the application is based around actions on entities or by entities. Examples: Customer, Order, Employee
- Infrastructure Service - part of the underlying infrastructure of the application, usually handles concerns that are common to many applications. Examples: DatabaseService, LoggingService
- Application Service - part of the core application, handles business concerns that are usually specific for the application. Examples: OrdersService, CustomerService, BillingService
- Controller - responsible for coordinating a cohesive part of the application, usually a single user interaction, but can be larger than that. Examples: NewOrderController, CustomersController
- Views - responsible for converting the data from the controller into something that the consumer can understand. This is not necessarily a user; I have WCF services as views, among other things.
There isn't much to talk about with regards to infrastructure services, I put them in Rhino-Commons and forget about them :-) Controllers/Views I have already talked about at length.
In general, I would like to keep my entities clean of dependencies on services. I like to put business logic only in the entities, and use the surrounding framework to get what is needed (usually this means lazy loading) without the explicit involvement of the entity with the service layer. This means that they are lightweight, independent of the infrastructure, and can be tested independently.
Application Services are interesting, they generally use both the infrastructure services and the model to do useful stuff. They may contain business logic, if that is the best place to put it, although I would rather have it in the entities. They are usually responsible for setting things up so the entities can do their work.
As usual, the best examples are code, so let us take a look at the Order class:
public class Order
{
    public virtual Money CalculateCost()
    {
        Money cost = Money.Zero;
        foreach (OrderLine line in OrderLines)
        {
            cost = cost.Add(line.Cost);
        }
        cost = cost.Add(this.ShippingCosts);
        return cost;
    }
}
The business logic is in the entity, but what does the service do, then?
public class OrderService
{
    public virtual Money CalculateCostForOrder(int orderId)
    {
        Order order = Repository<Order>.FindOne(
            (Where.Order.Id == orderId).ToDetachedCriteria());
        return order.CalculateCost();
    }
}
This is a very rough draft, but you get the idea, I hope. The service loads the entity, making sure to load the OrderLines collection with it (to save another query later), and delegates the calculation to the entity.
What did I gain here? I could have just put the calculation directly in the service, couldn't I? I could do that, but then I would have lost the OO capabilities that I have with entities (vs. Data Transfer Objects). If I wanted to have an OnSaleOrder, with a special discount, I could handle this polymorphically, instead of adding conditions to the service.
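To illustrate the polymorphism point, here is a minimal sketch; OnSaleOrder, DiscountRate, and the stand-in Money and OrderLine types are my assumptions for the example, not the real domain model:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the Money and OrderLine types used above
public class Money
{
    public static readonly Money Zero = new Money(0m);
    public readonly decimal Amount;
    public Money(decimal amount) { Amount = amount; }
    public Money Add(Money other) { return new Money(Amount + other.Amount); }
    public Money Multiply(decimal factor) { return new Money(Amount * factor); }
}

public class OrderLine
{
    public Money Cost { get; set; }
}

public class Order
{
    public IList<OrderLine> OrderLines = new List<OrderLine>();
    public Money ShippingCosts = Money.Zero;

    public virtual Money CalculateCost()
    {
        Money cost = Money.Zero;
        foreach (OrderLine line in OrderLines)
            cost = cost.Add(line.Cost);
        return cost.Add(ShippingCosts);
    }
}

// The hypothetical sale order: the discount logic lives in the subclass,
// so the service never needs a condition for it.
public class OnSaleOrder : Order
{
    public decimal DiscountRate = 0.1m; // assumed 10% discount, for illustration

    public override Money CalculateCost()
    {
        return base.CalculateCost().Multiply(1 - DiscountRate);
    }
}
```

The service keeps calling order.CalculateCost() and never knows whether it got a plain Order or an OnSaleOrder.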
There is a strong relation between a service and an entity (but hopefully not the other way around). This means that there usually will be a service per entity, but that is not always the case (OrdersService and NewOrderService, for instance), and there certainly will be cases where there are entities that have no service (those will usually not be the aggregate roots of the system, of course).
The service in general has a very procedural interface, although I am starting to consider fluent interfaces and method objects for services as well.
A while ago I extolled the benefits of using an in-memory database for tests. Now, the only in-memory database that I know of (that has an ADO.Net provider) is SQLite. SQLite is a great database, except for one tiny issue: it has really weak support for dates, requiring you to jump through multiple hoops to do anything even slightly interesting with them.
Since I am mostly working on business applications, strong support for dates is a crucial issue for me. With great reluctance, I moved my tests to use the SqlCE embedded database. This is still an order of magnitude faster than going to a remote server, but it was much slower than running the whole test in memory, never touching the disk.
The speed difference became acute when I tried to run many Unit Tests (vs. Integration Tests) using the embedded DB, it was far too slow to be a real unit test framework.
For this reason, I now have two ways of using a database from unit tests: In Memory for most tests, and the Embedded DB when dates are important. You can check the code here.
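To make the "two ways" concrete, here is a hedged sketch of a helper that picks the NHibernate connection settings per test fixture. The helper and the exact property keys are illustrative, not the real Rhino-Commons code linked above (older NHibernate versions prefix the keys with "hibernate."):

```csharp
using System;
using System.Collections.Generic;

// Which database should back this test fixture?
public enum TestDatabase
{
    SQLiteInMemory,   // fastest, but weak date support
    SqlCEEmbedded     // slower, but real DATETIME semantics
}

public static class TestDbConfig
{
    public static IDictionary<string, string> BuildNHibernateProperties(TestDatabase db)
    {
        var props = new Dictionary<string, string>();
        if (db == TestDatabase.SQLiteInMemory)
        {
            props["dialect"] = "NHibernate.Dialect.SQLiteDialect";
            props["connection.driver_class"] = "NHibernate.Driver.SQLite20Driver";
            // The connection must stay open for the lifetime of the test,
            // or the in-memory database vanishes.
            props["connection.connection_string"] = "Data Source=:memory:;Version=3;New=True;";
        }
        else
        {
            props["dialect"] = "NHibernate.Dialect.MsSqlCeDialect";
            props["connection.driver_class"] = "NHibernate.Driver.SqlServerCeDriver";
            props["connection.connection_string"] = "Data Source=TempTest.sdf;";
        }
        return props;
    }
}
```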
Can anyone recommend a .NET-accessible in-memory database with good date handling?
It took a bit longer than I anticipated, since people found some really hard bugs with regard to the generics support. I now know a lot more than I ever wanted to about generics, and I considered myself an expert beforehand.
Anyway, the new bits have:
- Improved support for wacky generics scenarios. Thanks to James and Thierry.
- Better support for types having non-inheritable attributes that cannot be easily replicated. Thanks to Aaron.
- Better support for types implementing interfaces by using a base class that is located in another assembly.* Thanks to Aaron.
- Included XML Documentation files.
- All merged assemblies are now marked internal, to save you from collisions if you are using the Castle bits yourself.
CallOriginalMethod() has been deprecated; you are now encouraged to use CallOriginalMethod(OriginalMethodOptions.NoExpectation) or CallOriginalMethod(OriginalMethodOptions.CreateExpectation) instead. The reasoning behind the change is that CallOriginalMethod currently looks like it creates an expectation, but it doesn't really do so, so it is better to be explicit about the whole thing. Thanks to Ernst for pointing it out to me.
As usual, the binaries and source are here.
* If you understood what that meant...
Take a look here to see what it takes to solve my previous challenge. It was extremely hard to get it to work in all scenarios, and I am pretty sure that there are additional edge cases that I have not thought of, but for now, all the tests are green. I also have a 100% reproducible (on several machines) VS crashing bug, which is about the fifth that I know of.
There is probably a bug in Dynamic Proxy that causes it to output overrides to methods in such a way that when you reflect over the generated types, you get two methods for each overridden method, instead of a single overridden method. I looked into it in detail a while ago, and I couldn't figure out how to make it work like I expect it to. The code that it generates is verifiable, so this is something that the CLR supports, but I am not quite sure what the semantics of this are.
So, what does this have to do with Brail? NHibernate makes extensive use of Dynamic Proxy in order to increase performance and to defer loading, which means that any application using NHibernate had this issue. Now, Brail is a dynamically typed language (well, not really, but that is close enough), which means that it uses Reflection to resolve properties and methods that are called. Because Dynamic Proxy generated what is basically a duplicate method, that failed.
At the basic level, it meant that a call like this:
Would turn into:
(This is actually very far from how it is working, but again, that is close enough to make sure you understand the problem.) Now, GetProperty("Name") would throw an AmbiguousMatchException, since it actually found two properties called "Name", one on the original class, and the second on the proxied class.
Why am I boring you with this? The fix for this was fairly simple, but it involved replacing the type system of Boo (which is what Brail is built on) with my own. The fact that I could do that with roughly 350 LoC is quite amazing to me. This also means that I can actually preserve the dynamic nature of Brail, but behind the scenes generate strongly typed accessors (think Dynamic Methods), which would alleviate any concerns about reflection costs.
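To give an idea of what such a strongly typed accessor could look like, here is a sketch that emits a property getter with a DynamicMethod instead of calling PropertyInfo.GetValue on every access. The factory and the Person demo class are my illustrations, not the Brail code, and the cast assumes the declaring type is a reference type:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public static class AccessorFactory
{
    // Build a delegate that reads the property without per-call reflection cost.
    // Assumes the declaring type is a class (castclass would fail for structs).
    public static Func<object, object> CreateGetter(PropertyInfo property)
    {
        var method = new DynamicMethod(
            "get_" + property.Name,
            typeof(object),
            new[] { typeof(object) },
            property.DeclaringType,
            true); // skip visibility checks, like a proxy generator would

        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Castclass, property.DeclaringType);
        il.Emit(OpCodes.Callvirt, property.GetGetMethod());
        if (property.PropertyType.IsValueType)
            il.Emit(OpCodes.Box, property.PropertyType); // box value-typed results
        il.Emit(OpCodes.Ret);

        return (Func<object, object>)method.CreateDelegate(typeof(Func<object, object>));
    }
}

// Demo class for the example
public class Person
{
    public string Name { get { return "Rose"; } }
}
```

In a real template engine you would cache the delegate per (type, property) pair, so the emit cost is paid once.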
The code is in the Castle's trunk now, so you can start using it.
So yesterday I sat down to see what I could do about the remaining Rhino Mocks bug, when I was suddenly and viciously attacked by a wild beast. After the initial confusion, it turned out that the mad barking and the slavering wasn't, as I concluded at first, an indication that I am edible, but rather that Rose had found a bug in Dynamic Proxy.
It is not often that I turn to a canine for a bit of advice about runtime IL generation, but Rose is something special. Below you can see her concentrating on the part that handles generic method invocations.
And here is me trying to get a fair share of the keyboard. I believe in Pair Programming, but Rose just won't let go of the keyboard. She has a remarkable words-per-minute count, although her spelling is a bit off at times (but then again, so does mine).
I don't blame her for the spelling, she is just a puppy, five months old and still growing. She is a Caucasian Shepherd, which means that she is likely to get much bigger (she is about the size of a full grown German Shepherd now).
Well, if you are reading this I have successfully moved to orcs web.
The transfer was fairly smooth, except that they don't support MySQL. That wasn't an issue, as a matter of fact; since I am using Cuyahoga, it was a simple matter to change the config and make it work with SQL Server. I was worried about transferring the data, but it looks like MySQL has fairly strong export capabilities, including into formats that SQL Server can consume.
Beyond that, it was a breeze...
Now I only have to wait for the DNS records to update.
I am posting this because I am extremely annoyed. I moved from my previous host because of reliability issues, and I had to do it fairly quickly, so I didn't have time to really ask around and find the best host. I went to webhosting4life because it was fast to set up, and I believed that they were big enough to be competent.
I do not have a very big site, or a very complex setup. I run two applications, one needing MySQL, the other needing SQL Server. Neither of which is particularly taxing on the server.
As you can see, I had 6 outages in the last 20 days, some of them lasting multiple hours. In nearly all cases, the reason that I was given was:
I could accept it once or twice, but when it gets to this level... below is a chat transcript between me and a tech from webhosting4life. It is complete with spelling mistakes and everything. As a result of this, and the evasive answers that I got, I am now on the lookout for a new host.
Notice that the operator simply left the chat when I tried to get additional information about the root cause of the problem. The chat equivalent of hanging up in my face! They have an "Email our CEO" link on the help desk page, which I used to send a question regarding the frequent outages; I never even got a "Thank you for emailing us".
You can bet that I was a bit surprised when I saw this in the list of heavy methods in dotTrace:
Then I took a look at the number of calls... Did I mention that I don't like temporal data?
BTW, dotTrace rocks!
- I had to go through a similar process when porting Brail from Boo to C#. I have to say that the code afterward has a lot more cruft than before.
- Scott is making a reference to my Boo.Reflector project; I will try to update it to Reflector 5.0, but I can't make any promises. Getting coherent code is not a trivial task, and there are a lot of smarts that went into the C#/VB languages in Reflector to make it look seamless.
The most important point, though, is about Scott's comment regarding Mono's licensing:
Mono is not GPL'ed. It uses several licenses over several components, but only the compiler itself is GPL'ed. You can see the details here, but basically, the libraries (including Mono.Security) come freely, with no strings attached (MIT X11 licenses, to whom it may concern).
The guys from Eleutian have done it again, with a post that explains how you can use Windsor and Rhino Mocks to make it easy to create the tests.
Jacob is raising some concerns about this approach:
Windsor setup is generally fast enough, but doing it per test is probably going to slow the tests down. It certainly makes the tests a lot easier to write.
Oh, and Beta 2 of Rhino Mocks 3.0 should be out later today...
I feel that I have given it too much time already, and I am giving up. A workaround will have to be good enough for this issue.
Given the following class definition:
public class Dog
{
    public static MethodInfo LastCall = null;

    public class BarkInvocation<T>
    {
        public T Bark()
        {
            LastCall = (MethodInfo)MethodInfo.GetCurrentMethod();
            return default(T);
        }
    }

    public T Bark<T>()
    {
        return new BarkInvocation<T>().Bark();
    }
}
Can you make this print true?
public class Program
{
    static void Main(string[] args)
    {
        Dog d = new Dog();
        Type returnType = Dog.LastCall.ReturnType;
        List<string> strings = new List<string>();
    }
}
A friend of mine has been harping about the accuracy of live.com in comparison to Google for quite some time. I just ran a search in Google that gave me the exact answer that I wanted. Running it on live.com gave me results that I would have had to dig through.
I literally had to merely copy/paste the command and I was done.
The results are good, but I would have to start looking inside for...
Take a look at Ken's post, his company is hiring, and the job looks really good.
Should I mention that I am looking for people with the same qualities?
I am using WATIN for integration tests, and I really like it. The only problem that I have with it now is that I can't get tests run via WATIN to be covered using NCover. This is understandable, since the tests are exercising a different appdomain (and process; they are run on IIS). This is really sad, since I now have a great deal of focus on the UI, and this means that I can't get the proper stats about it.
I can't think of any way to make NCover profile another process, but I do think that it is possible to profile another appdomain, which brings up the possibility of hosting ASP.Net myself and running the tests against that. (This is basically what MonoRail TestSupport does, but I never checked its coverage.)
I'll give it a try tomorrow, any comments before I try this?
All I want is something in the order of:
You would think that this would be elementary, but apparently we get things like this, which involves tricking the UpdatePanel into triggering.
It is generally accepted that reusability is not something that you should strive for in your tests. However, I have found that reusing tests is an interesting way to save time when you are testing a full process. Take for instance this piece of code:
public void SuccessfullySavingTemplateWillShowConfirmation()
{
    AssertTextPresent("Template saved successfully", "Should have gotten a confirmation message");
}

public void CannotAddTemplateWithDuplicateName()
{
    //Create a template called "Template1"
    SuccessfullySavingTemplateWillShowConfirmation();
    AssertTextPresent("There is already a template called 'Template1', please choose another name.",
        "Should warn when trying to create template with duplicate name");
}
Some of the methods are utility methods, but in the second test, the first line of code actually calls another test to do the initialization. I can see this kind of stacking being useful in cases where I would usually want ordered tests.
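A self-contained sketch of that stacking, with an in-memory TemplateApp standing in for the real application driven through WATIN (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Fake application: remembers template names and the last message shown.
public class TemplateApp
{
    private readonly HashSet<string> templates = new HashSet<string>();
    public string LastMessage { get; private set; }

    public void CreateTemplate(string name)
    {
        if (!templates.Add(name))
            LastMessage = "There is already a template called '" + name + "', please choose another name.";
        else
            LastMessage = "Template saved successfully";
    }
}

public class TemplateTests
{
    private readonly TemplateApp app = new TemplateApp();

    private void AssertTextPresent(string expected, string message)
    {
        if (app.LastMessage == null || !app.LastMessage.Contains(expected))
            throw new Exception(message);
    }

    public void SuccessfullySavingTemplateWillShowConfirmation()
    {
        app.CreateTemplate("Template1");
        AssertTextPresent("Template saved successfully", "Should have gotten a confirmation message");
    }

    public void CannotAddTemplateWithDuplicateName()
    {
        // The previous test doubles as the setup step: it creates "Template1".
        SuccessfullySavingTemplateWillShowConfirmation();
        app.CreateTemplate("Template1");
        AssertTextPresent("There is already a template called 'Template1', please choose another name.",
            "Should warn when trying to create template with duplicate name");
    }
}
```

The trade-off is the usual one: a failure in the reused test now fails two tests, which is exactly the coupling the "don't reuse tests" advice warns about.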