Ayende @ Rahien

It's a girl

Dynamic Language Runtime on top of the CLR: This Is BIG

Check this out. Even though Jim says that it is on the IronPython site, I can't find it, but I am still loving it. Making the CLR friendlier to dynamic languages is a Good Thing in general, but what excites me is the possibility that it can be leveraged from my code as well.

I paid my IL taxes, and while Linq expressions are nice, I would like to get solid support for runtime code generation without having to do it at the assembly level.

Watch what you say, Mister!

I am writing a document, and I just had a hilarious typo. The subject of the document is managing a development environment; the title of the document, however, was: "Managing Seduction Environment". It was the first time I realized that in Hebrew, development and seduction are literally one typo apart.

Considering the target audience, it is a good thing that I caught that in time.

JavaScript Race Conditions

About a week ago I posted about handling Multiple Cascading Drop Downs using MS Ajax. That worked, except when it didn't.

Took me a while to figure out that there was a race condition there. Let me tell you, JavaScript in the browser is not the language for tracking down threading issues. At any rate, here is the fix.

function changeOnParentChangeOnProductsToAvoidRaceConditions()
{
      var behaviors = Sys.UI.Behavior.getBehaviors($('<%=Benefits.ClientID%>'));
      if(behaviors == null || behaviors.length == 0)
      {
            //wait for the behavior to be defined.
            setTimeout("changeOnParentChangeOnProductsToAvoidRaceConditions()", 10);
            return;
      }
      for(var i = 0; i < behaviors.length; i++)
      {
            //required to get around the race condition; the wrapping function
            //makes sure each iteration captures its own behavior, instead of
            //every handler closing over the last one in the loop.
            (function(behavior)
            {
                  behavior._oldBenefitsOnParentChange = behavior._onParentChange;
                  behavior._onParentChange = function()
                  {
                        if( $('<%= Insurances.ClientID %>').selectedIndex == 0)
                        {
                              return;
                        }
                        behavior._oldBenefitsOnParentChange();
                  };
            })(behaviors[i]);
      }
}

Event.observe(window, 'load', function() {
      changeOnParentChangeOnProductsToAvoidRaceConditions();
});

Individuals and Interactions over Processes and Tools

This post from Sam Gentile has made me realize that I need to clarify a few things about the recent TFS vs. XYZ discussion.

This post isn't really aimed at him [that would be me] but I do find a post by him seeming to suggest that you only can use OSS tools to be "Agile" to be, well, quite disappointing

I guess that I didn't notice the conversation digress, but I really should clarify that it has never been my intention to suggest, or seem to suggest, any such thing. Tools help, and I think that they are important. I have my own preferences, based on my experience and the way I like to work.

I am mostly talking about OSS tools because I am familiar with them, and that makes it easy to point things out and show them. The best project management system that I have worked with is JIRA, but Trac does quite a bit of the work, and doesn't require as complex a setup.

There is no correlation or dependency between the tools that you use and the way that you work. And you most assuredly can use TFS to be agile.

I do think that tools can be of significant help; you can certainly be agile without them, but it is easier with them. My issues with TFS have nothing to do with agility, and everything to do with seamless usage, a whole separate issue altogether.

 

TFS vs. OSS - Round 4

Aha, my dear colleague. Yet you continue to use Visual Studio .NET, which, even without Team System, has its share of usability problems for you, I'm sure. Why not use Notepad or Eclipse for writing the code, then run a command line to compile it, then use an XML editor to change the config files? Why are you using a suite of integrated tools to develop code, in an environment which you probably aren't that crazy about?

Seeing the bigger picture perhaps? There is some value to it after all.

I actually do quite a bit of development using Notepad :-). I am not using Eclipse because I was never able to really grok it, I am afraid. A simple case of being lazy about it, I admit. I don't think that I ever said that there isn't big value in an integrated environment. You won't find me running for kdbg to debug my CLR program (ahem, yes, I was talking to that shady guy, right over there, not debugging the kernel to find an ArgumentException).

I think that it is safe to say that it is public knowledge that I am not crazy in love with VS (maybe crazy because of it). There are two reasons that I use it over Notepad / #Develop / Eclipse: it has an excellent debugger, and I get ReSharper. Oh, and I would be willing to degrade the debugger experience; there is a reason why I would like a JetBrains IDE so much.

I should also point out that for a very long time, I developed in Boo, which entailed working in Notepad2 with Python syntax colors, compiling via NAnt and debugging with the CLR Debugger. Boo's syntax made it possible, and I worked on projects small enough that I could live without R#.

And don't get me started on the "zone" thing. You develop on a Windows machine, right? Your "zone" gets disrupted a million times a day by tiny little operating system mishaps, dialogs, crashes and whatnot, yet you continue to use it and not another operating system. I guess sometimes you compromise to get what you need.

And how about working with people? People sometimes take me out of the "zone", and cellphones too. Gosh, I should just program in a cave with a rock and a wall in front of me. Oh, wait, there are advantages to doing the opposite.

While taming Windows is not a task for the faint of heart, I do believe that I have managed to do so. My system doesn't interrupt me, and if I want to do a bit of serious coding, I close Outlook, phone and messenger, growl at anything that comes near, and work. That cave sounds charming; I would need a whiteboard and someone to bounce ideas off, though.

Did I really paint myself as a binary guy? "My way or none"?  I hope not.

About the people comment, I can get quite a bit done when I am working alone, but it is much more fun to work with other people.

ReSharper has its slow moments inside VS.NET. It sometimes makes you aware that it is working. Yet I still use it. The sum is greater than the parts, etc.

There is a big difference here, actually. ReSharper almost never does anything unless I tell it to. If it takes time to refactor, or to search for a reference, etc., that is okay; I just told it to do so. The problem with TFS taking me out of the zone is that it surprises me, and not in a good way.

Roy makes an excellent point when he asks who this is good for:

I'm wondering if the word "I" is to blame here. Yes, for you, you get all that you want, while still paying the price of configuration, forking, patching, learning curve etc. What about "us"? Would it be just as easy for an entire team or a full dev group to do such a move? Assuming that this magical suite of OSS tools that provide exactly what you could get in TFS exists (yes, I'm sure you can make it happen, Oren), what is the cost of:

  • Learning that you can actually do something like this without having an OSS guru in the house? (Solution Discoverability)
  • Writing the documentation of how you did it and keeping it all as one big happy maintainable system?

Again,  not for you personally, but for an organization?

Let us leave the forking / patching out of the loop, please. I don't do that, and it muddies the waters with irrelevant stuff. And configuration / learning curve exist for TFS as well, I would say.

Roy, I assume that when you do a TFS project, you rarely get to go in, install it, and go home. Usually there is customization that needs to be done to make it work the way that organization works. There is configuration, there is training an admin who can watch over it, there is documenting what you did and how / why it was done, etc. That is no different for any stack that you choose.

For discoverability, that is another matter. Assuming that I am in the market for a new development environment, that is a big decision, and I need to make an informed one, so I would look at what I need, and then see what set of tools would give me the best solution. It has long been heralded as the Unix tradition that a collaboration of focused tools is often greater than the sum of its parts. The integrated solution is tempting indeed, but most people have learned that integrated solutions often carry their own set of problems, and would evaluate separate solutions working together as well.

Would it be easy for the entire team? Yes, I believe so. You would need to teach them the way that it is going to work, which you need to do anyway, and off they go. I didn't find any of the tools that I have mentioned to be high maintenance; zero friction, again. And here I am not talking about me, I am talking about team members who don't have my experience in working with those tools.

I am certainly not seeing any issues in my organization, and as I mentioned, my time spent troubleshooting those tools in the last 6 months was ~15 minutes. And they are certainly in active use.

TFS may not be your thing, but it sure as hell saves lots of time for lots of people who learn to work with what they have and optimize it, extend it, and not go and work on something that may be the "perfect" thing for them, but less perfect for other people. If you plan on saying "just say no" you should at least know that people might find out that what they turned down actually has more value than the little things that annoy you.

I am not "just saying no"; I have made multiple attempts to use TFS, and in all cases I have run into those issues, which were unacceptable to me and my team. I am not saying that TFS is worthless, or that it doesn't bring value. I am saying that in the grand scheme of things, I have found that TFS has issues that prevent me and my team from using its core functionality, and that without the SCM integration, I don't see the value in TFS over other tools. I also think that you are underestimating how big those "little things" are.

I have seen people who work with it and accept those flaws as given. If they can manage that, fine, and I hope that they will leverage TFS to its full potential. That is not the way I feel; from the attempts that we made, we found that after getting used to zero friction tools, tolerating friction gets very annoying, very fast.

When you aim wide, you have to aim a little lower. I think that's just the way it is.

And I disagree with this completely. You have a different set of concerns when you aim wide, but that doesn't mean that you have to aim lower. That may work as long as there isn't a competitor that aims just as wide, but keeps better quality. I think the Japanese proved that with their cars thirty years ago.

I may sound a bit harsh, but this is all in a good mood. I respect and like Oren very much, and much beer will be had together at DevTeach next month!

I am not my code, nor am I my opinions; I can enjoy a good debate even if at the end I won't "convert" you to my way of thinking. Frankly, I don't even expect to. The fun is in the debate itself...

TFS: Potshots

Jeremy Miller just commented on my previous post, and I couldn't help responding:

I've heard pro-VSTS folks slam the OSS tools for being tinker toys and difficult to integrate (not in my experience, but it's their story), but many of these same pro-VSTS folks sell consulting services to set up VSTS.  If VSTS is so easy to get up and going, why are people able to make a living doing just that?

Because when you are integrating OSS tools, you are wasting your time. When you integrate TFS, you are being enterprisey.

(Yeah, cheap shot, sorry, can't help it)

Extensibility: Ask, and you shall receive

Bil Simser has a few things to say about the discussion Roy and I have been having about TFS vs. the OSS stack:

Yes, other packages out there are extensible by nature (Subversion for example) but require coding, architectural changes, hooking into events, all of which are nice but systems like this were not designed for it.

That depends on what you want, but usually coding is not involved; scripting usually suffices. Architectural changes, etc., are usually not involved either.

Was subversion really designed at the start to be extensible so I could maybe have my storage be in a relational database rather than the file system?

I am not familiar with SVN's internals, but I would answer yes; it already has two storage providers, and adding a third should be possible. Let me turn that question around for a moment: can I extend TFS to use Oracle as the backend storage?

Could I crack open subversion to support a way to link checkins to an external feature list? Sure. Why would I when TFS has this already.

And since Subversion has this already, the point is moot at any rate.

And finally, the piece that caused me to post it here:

As for modifying open source systems to do your bidding, you enter into a fork scenario. Unless the system supports a plug-in pattern and you can just add a new assembly (like, say, the TFS policy sub-system) I really can't do much with a tool even if I have the source code, unless I want to run the risk of being in a maintenance nightmare from Hell scenario. Do I really want to do diffs of new releases of NUnit with my own code to support new extensions?

Forking isn't nice for you, and I would usually recommend against it. But the point is, most OSS projects already have an extension model. NUnit certainly does, Rhino Mocks does, Subversion does, NHibernate has them all over the place, Castle Windsor is a big (and beautiful) extensibility model all in itself, log4net lets you plug in at any point in the pipeline, etc.

In fact, that is one of the reasons that I like OSS, because so often they have this really nice model of safely extending the functionality. Bil mentioned extending TFS to include a "steps to reproduce" field, here is the same for Trac:

[ticket-custom]
repro = textarea
repro.label = Steps to reproduce
repro.cols = 60
repro.rows = 30

Very simple to extend, and if I wanted to add logic as well, that is simple to do as well.
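For instance, here is the kind of logic I could hang off that field. This is my own sketch, not something out of the Trac documentation: the `validate_repro` function is plain runnable Python, while the commented wiring below it is a rough outline based on Trac's `ITicketManipulator` plugin interface (Trac itself is written in Python), and `repro` is just the custom field name from the config above.

```python
import re

# Pure validation logic: require at least two numbered steps in the
# "steps to reproduce" field before a ticket is accepted.
def validate_repro(repro_text):
    steps = re.findall(r'^\s*\d+[.)]', repro_text or '', re.MULTILINE)
    if len(steps) < 2:
        return 'Please enter at least two numbered steps to reproduce.'
    return None

# Hooking it into Trac would look roughly like this (sketch only,
# assuming Trac's ITicketManipulator plugin interface):
#
# from trac.core import Component, implements
# from trac.ticket.api import ITicketManipulator
#
# class ReproStepsValidator(Component):
#     implements(ITicketManipulator)
#
#     def validate_ticket(self, req, ticket):
#         error = validate_repro(ticket['repro'])
#         if error:
#             yield 'repro', error
```

Drop the component into the plugins directory, enable it in trac.ini, and the field goes from a dumb text area to something with actual behavior.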

And while it might have deficiencies in various places I can plug in new features or introduce entirely new concepts to the repository so that I can make it match whatever business process I use.

TFS makes me wait for it; that doesn't mesh with the way I do business. Please instruct me as to the values I should put in the "MakeItFast.config" file.

Is the source control system in Team Foundation Server extensible or replaceable? No, but I'm willing to live with a few problems while we get to version 3.0.

I think that you underestimate just how critical this deficiency is to me. It makes me snap out of the zone. It makes me aware of the tool, and not in a good way. If a tool gets in my way, I either improve it or throw it away.

TFS Vs. Open Source tools

This is getting fun, another reply from Roy in our discussion about TFS vs the other alternatives.

Regarding my last post, Oren (Ayende) points out that regarding TFS's features, he can either find a match for them in open source land, or he doesn't really care about them.

What bugs me is whether, assuming you can find and create such a solution out of a package of open source applications working together that operate with the same level of integration as TFS, can you still handle the things that matter to the organization using your solution.

I believe that I can, and we have been doing this for the last several projects that we built. Roy lists some interesting points about choosing the development stack:

Maintainability: Granted, it's not easy to maintain a TFS installation, but it is sure as hell just as hard, if not harder, to maintain a full range of open source tools, each one from a different publisher, with different versions and compatibilities, documentation and support services (if at all).

About 6 months ago I had a go at installing and configuring Trac for wiki / issue tracking. It took me a few days to grok the way it works, but we have been using it ever since. The last time that I had to administer it was three months ago, when we had to open a new project, and it took three minutes on the phone to deal with that.

There is a reason why I keep coming back to Zero Friction. That is what I get from my stack of OSS tools. They are there, and they are working, I use them when I need their services, at all other times, they don't get in my way.

TFS gets in my way. I accidentally clicked on the Team Explorer band on the right side, and it hung VS until it connected to the server and did something that was of absolutely no interest to me at the time. Annoying in the extreme, it shoves me out of the zone and makes me feel out of control. Controlling Your Environment Makes You Happy!

Learning curve: I have no doubt that 9 times out of 10, it is harder to get up to speed on an open source product than a commercial one (there are always exceptions). Double that by the number of different products you are using and see what you get. Usually the documentation is not as good, and the support contract (if it exists) is very poor as well. The efforts to maintain the source code and create your own version might actually be higher than what you are trying to save. And no, I don't think it will still be less than the cost of a commercial product (TFS or otherwise)

I am probably in no position to disagree (biased), but I still disagree. I could come up with a few examples, but they would fall under the exceptions clause, so I will just say that it is usually rare to maintain a forked version for a long period of time, and usually not necessary. OSS usually has better solutions for this (well-defined extension points, for instance).

In fact, I can say that synergy between OSS projects can produce some really sweet results. I have profiling and tracing for NHibernate built on top of log4net, and the NHibernate Search deal is really sweet. That said, I am using a lot of commercial tools because they are better / easier than a comparable OSS project. My reasons may be different, though. I am using SQL Server because Management Studio is a nice UI (although I wish it would steal some features from PL/SQL Developer), WCF because it has a good UI for logging, etc.

Ease of use: Usually I find that Open source products tend to be less usable than commercial ones. Money does matter. Yes, there are always exceptions. Actually, Eclipse is an IDE I find more usable than VS.NET. But then again, Eclipse's community is funded as far as I know. Money matters.

No argument here. The final coat of paint can make a lot of difference, and it usually takes some money to really get to all those nooks and crannies.

That said, there is another element of ease of use that should be considered, and that is the simplicity of the architecture, the assumptions being made about the amount of time that is going to be invested in it, etc. The Entity Framework (to take an example at random) can afford to be complex and ungainly, because they can cover the ugly stuff with a pretty designer; an OSS project is usually not at liberty to do so, and as a result would arrive at a much simpler solution to the same problems.

If you found subversion to be working too slow for you, what are the odds you'd go hack the code for yourself to get a better version? then update it with every new drop? what would be the cost of that?

I don't think that Subversion being slow is the case here, but I will answer anyway.

I know C and C++, so yes, I would. I wouldn't need to update it with every new drop, I would submit the patch to the core team, and then have it there forever.

What is the cost of that vs. a commercial product (which is not slow in theory only)? Well, 0 for the software, and some high amount for spending time hacking at the source to make it faster. What is the cost of doing the same with a commercial product? Some initial high amount (although probably less than the hacking would be), then a continual bleeding of time as a result of problems that I can't fix. But you know that this comparison is flawed, because Subversion is not working too slowly for me (and it is known to scale much better than TFS).

It's quite easy to say "I'm not using this because of X, Y, Z". It's quite a different story to say "I know it has its problems, but looking at the bigger picture I owe it to myself to see if I can find a way to work with it despite the shortcomings"

And it is another story yet again to say, "I know a different solution, which can give me all that I need / want, so I don't have to deal with those shortcomings." The bigger picture as I see it is that I get everything that I want, and don't have to waste my time patching holes in the tools that I use. Tools should be transparent, not road blocks.

But slamming a tool just because some parts of it are not the fastest is just wrong. The source control is still one of the strongest ones I've seen in terms of features. Discounting it would mean underestimating a product that actually has much value.

Let me put it in the simplest terms that I can: TFS puts me out of the zone. That is simply unacceptable, period. If I need to be aware of "don't go near the Team Explorer tab, it will hang VS for 5 seconds", I am not in the zone. If I am not in the zone, I rarely get to code well. Beyond that, the source control in TFS doesn't offer anything that I haven't seen before, not in terms of features, and not in terms of the UI for them.

TFS, Zero Friction and living in an imperfect world

Roy responded to my post about disliking TFS:

I think that Oren is making one big mistake: he's throwing the baby out with the bath water. Just because the source control is not as zero-friction as some open source alternatives, does not mean that TFS is not a valuable suite of tools, with more added value than most open source tools that I know of.

[List of advantages that TFS has snipped, will cover them later]

I mean, Oren, c’mon! You don’t like a part of a part of team system – the source control aspect is not perfect for you, but what alternative do you have for a suite of tools that works together so powerfully?

First of all, I am grateful that I am only making one big mistake :-) That said, I don't really have an issue with the rest of TFS, but the problem is that without the source control integration, it loses quite a bit of its charm. The source control is the most visible and annoying part of TFS. I would love to give it a shot with SVN integration, but I don't really see that coming.

Roy goes on to list a few points, which I will cover in turn:

The ability to associate a work item with a check-in action is very powerful in determining and reporting “delta” between failing builds

Just about any bug tracker has this ability; it is not unique to TFS by any means. Usually it is a matter of a few config options for SVN and a post-commit hook.
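To make the SVN side concrete, here is a minimal sketch of such a post-commit hook, in Python. The `#123` reference format and `link_commit_to_issue` are placeholders for whatever your tracker actually exposes (Trac's XML-RPC interface, for example); `svnlook` itself ships with Subversion.

```python
import re
import subprocess
import sys

# Pull work item references such as "#123" out of a commit message.
def extract_work_items(message):
    return re.findall(r'#(\d+)', message)

def main(repos, revision):
    # svnlook ships with Subversion; it reads the log message
    # for the revision that was just committed.
    log = subprocess.check_output(
        ['svnlook', 'log', '-r', revision, repos]).decode()
    for item in extract_work_items(log):
        # link_commit_to_issue is a placeholder for your tracker's
        # API (Trac XML-RPC, a database insert, an HTTP call, etc.)
        link_commit_to_issue(item, revision)

# Subversion invokes post-commit hooks as: post-commit REPOS REV
if __name__ == '__main__' and len(sys.argv) == 3:
    main(sys.argv[1], sys.argv[2])
```

Dropped into (or called from) `hooks/post-commit`, this runs after every commit; the zero-friction part is that developers just type "fixes #42" in their commit message and never think about it again.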

Builds are also connected to checkin, and build history allows you to ‘drill down” to see all the source differences between the last success build and the current failing one

This is nice, but it won't work with any other SCM, so that is not very good. I can get that with Trac as well, so it is not something that really bothers me.

The ability to use “workspaces”, which map current source control into local directories is powerful because it allows you to work concurrently on multiple versions of the same product, and switching between them in a couple of clicks from the IDE

Bug, not a feature. You can't move the folder around, you have cruft left over in your system, deleting the folder won't get rid of the remains, etc. Besides, what is the problem with "MyProd\trunk", "MyProd\Branch-1.0", etc.?

  • Powerful automated reporting
  • Distributed and extensible build capabilities
  • Task and bug management

All of which are nice, but by no means unique or even very impressive on their own.

Also, it’s still a version 1.0 product, and you know that version 3.0 is usually the one to remember, but as a version 1.0 product, and compared to most OSS tools (working together), VSTS gives me something that I find hard to get anywhere else – true collaboration.

Actually, no, I don't. And I flat out refuse to suffer the initial pain for the promised land. Feel free to call me a heretic.

The promise of TFS is that you get everything integrated, in one package. The problem of TFS is that you get everything integrated, in one package. I actually like the work items, and if I could get them outside of VS (don't you dare make the IDE any slower), I would like them better. The problem is that I can get as much and more from freely available projects that work with zero friction.

And you know what, I can get them to work together in the time it takes to set up TFS.

OSS Weekend

I have been slacking on my OSS duties recently, spending a whole lot more time relaxing than anything else. It was fun, but it had to end sometime. It ended when the "to do" list got over 50 items. At that point I knew that I had to do something or drown under the sheer amount of stuff that I had postponed. I decided to dedicate some time over the weekend to weed out the todo list. Not surprisingly, quite a few of the todos were related to OSS projects I contribute to.

There are at least three major things here:

• Cloning / traversal of criteria queries, which, in conjunction with...
• Multi Criteria Queries - can produce some really interesting results, and deserve their own post.
• NHibernate Query Analyzer UI upgrade (thanks to Sheraz Khan)

Here is a taste of what is new in NQA:

[screenshot of the new NQA UI]

Anyway, here is the full list; releases will have to come at a later time, I am feeling tired. Off to a non-technical meeting, which should give me enough reason to want to write the code that I actually get paid for :-)

NHibernate:

• NH-987 - SQL 2005 views used in SQL 2000 dialect
  Now using select 1 from sysobjects where id = OBJECT_ID(N'{0}') AND parent_obj = OBJECT_ID('{1}')
• NH-831: Adding MultiCriteria
  Fixing a bug with MultiQuery and the second level cache, where each query's pagination status wasn't taken into account when retrieving from the cache.
• Moving to Dynamic Proxy 2 (patch by Adam Tybor)
  Removed the TODO in NHibernateProxyHelper, at last :-)
• NH-924 - inspection of criteria.
• Fixing spelling error from previous commit
• NH-988 - Proxy validator should complain on non-virtual internal members
  Applied patch by Adam Tybor

Castle:

• (Validator) Added a way to override message definitions using an external resources file.
• Fixing the build (that I broke in the last commit)
• Allowing getting the interceptors for a proxy (NHibernate needs this).
• Revert unnecessary change to BasePEVerifyTestCase
• DYNPROXY-58 - Inherited interfaces - FIXED.
• DYNPROXY-56 - workaround for CLR serialization bug, applied patch from Fabian Schmied
• (Windsor) Fixing spelling mistake
• Adding passing test case for IoC-73.
• (Brail) Fixing MR-248 - Pre-Processor Issue
• MR-247 - Brail uses different keys for putting and getting items from the compilation cache.

NHibernate Query Analyzer:

• Much better support for AR:
   - Version insensitive
   - Can handle merged assemblies
   - Can handle pluralizeTableNames
• Applied patch from Sheraz Khan - adding a tree of entities and properties.
• Updating NHibernate to trunk
  Updating tests to work against the new version.
• Final touches to tests.

Rhino Mocks:

• Applied patch from Ross Beehler, adding more smarts to Validate.ArgsEqual
• Fixing an issue with mocking objects with protected internal abstract methods.
• Adding (passing) test case for mocking interfaces with generic methods that have constraints.
• Removing 1.1 legacy collections.
• Fixing an issue where internal interfaces could not be mocked in strongly named assemblies.
• Adding a license header to all the files.

CodePlex, TFS and Subversion

Interestingly enough, Subversion support is the most requested feature for CodePlex. I suggest reading the discussion, it is very interesting. The major points against TFS and for SVN seem to be:

• Weak offline access support
• No patching
• No anonymous access

The points above make TFS a poor choice for an OSS project, which requires all three (but especially anonymous access and patching). For corporate scenarios, there is another advantage to Subversion over TFS: Subversion is a Zero Friction tool, TFS is anything but. A memorable quote can be found here:

Source control is a utility. It's a tool. It should help you do what you need and stay out of the way when it's not needed. I love developer tools, which is probably why so many of my open source projects involve building new ones, but I firmly believe that if you ever have to think about the tool then the tool is not doing its job.

CodePlex SNAFU

I found this mostly by accident, but it looks like a few weeks ago CodePlex lost the source code for some of the projects hosted on the site. I would like to address the "free means no guarantees" part that came up in the post:

CodePlex is a free service. They've provided complete source code hosting along with one heck of a website and never asked for a cent in return. So, I really can't hold it against them.

I would.

Regardless of the legalese involved, I have certain expectations from such a service, and having reliable backups is one of them. I don't care if it is free; SourceForge has ensured that there wouldn't be much point in charging for source control for open source projects, so I am not surprised that it is offered as a free service.

CodePlex's stated goals are to host and support open source projects, and I assume that it is also meant to show off Microsoft's technologies. That is nice, except that the fact that a project is OSS doesn't mean that its source is not mission critical data that really should be kept safe.

Passionate about Team System? Depends on the direction

Roy is hiring Team System people, and I was amused by the post title:

We're hiring Team System People - Are you passionate enough?

The answer in my case would be "Yes, to avoid it" :-)

And the reason would be that it is absolutely not a zero friction tool.

As long as we are talking about Roy, he posted a poll about mocking frameworks for his upcoming book about unit testing. I am impressed by the number of votes (and the comments) for Rhino Mocks, thanks.

    A developer so retarded

    I just run into this post, that talks about a presentation about mocking, and included this statement (about using Rhino Mocks):

    I once heard a story about a developer so retarded that he and his team spent a good amount of time trying to debug a mock only to find out that he forgot about ReplayAll()*.  Who hires guys like that anyway?

    That developer, it would be me, and it can happens quite often. And I wrote the tool, so I should know what I am doing when I am using it.

    Stuff happens, it is easy to forget a line of code and waste some time as a consequences. The most common sentence from me when I am developing is "Oh, man, I am so stupid that I did that".

    To give an example that would be a bit easier to grok, what is wrong with this code:

using(TransactionScope scope = new TransactionScope())
{
   Appointment[] appointments = SpanOnCalendar(appointmentSpec, DateTime.Today, DateTime.Today.AddDays(7));
   foreach(Appointment appointment in appointments)
       appointment.Save();
}
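In case you want to check your answer, here is a sketch of the corrected version, using the same hypothetical names as the snippet above (Appointment and SpanOnCalendar are from the post, not a real API):

```csharp
// The bug above: TransactionScope.Dispose() rolls the transaction back
// unless Complete() was called first, so every Save() is silently undone.
using (TransactionScope scope = new TransactionScope())
{
    Appointment[] appointments = SpanOnCalendar(
        appointmentSpec, DateTime.Today, DateTime.Today.AddDays(7));
    foreach (Appointment appointment in appointments)
        appointment.Save();
    scope.Complete(); // the forgotten line
}
```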

     

    Paged data + Count(*) with NHibernate: The really easy way!

Christian Maslen pinged me about this article, which shows how to use NHibernate to execute multiple statements in a single round trip. Christian suggests a much neater solution:

SELECT  C.*,
        COUNT(*) OVER() AS TotalRows
FROM    Customers AS C

I was sure that it wouldn't work, but it does, and I consider this extremely cool. So, now I needed to figure out how to make NHibernate understand this. There are several options, but extending HQL is the simplest one in this case.

NHibernate uses a dialect to bridge the gap between the Hibernate Query Language, which is a database-agnostic relational/object querying language, and the target database. This allows NHibernate to work against multiple databases easily. The key here is that one of the extension points NHibernate offers is the ability to define your own custom functions, which can translate to arbitrary SQL.

    In this case, here is the query that I want to end up with:

    select b, rowcount() from Blog b

    Here is the dialect extension:

public class CustomFunctionsMsSql2005Dialect : MsSql2005Dialect
{
       public CustomFunctionsMsSql2005Dialect()
       {
              RegisterFunction("rowcount", new NoArgSQLFunction("count(*) over",
                     NHibernateUtil.Int32, true));
       }
}

We register a new function, called rowcount, which translates to the "count(*) over" string. The final "()" is added by NHibernate when rendering the function. Now we need to register our new dialect:

    <property name="hibernate.dialect">MyBlog.Console.CustomFunctionsMsSql2005Dialect, Blog.Console</property>

    And here is the code we end up with:

IList list = session.CreateQuery("select b, rowcount() from Blog b")
              .SetFirstResult(5)
              .SetMaxResults(10)
              .List();

foreach (object[] tuple in list)
{
       System.Console.WriteLine("Entity: {0}", ((Blog)tuple[0]).Id);
       System.Console.WriteLine("Row Count: {0}", (int)tuple[1]);
}

    The generated SQL is:

WITH query AS (
     SELECT TOP 15 ROW_NUMBER() OVER (ORDER BY CURRENT_TIMESTAMP) as __hibernate_row_nr__,
          blog0_.Id as Id4_,
          blog0_.Title as Title4_,
          blog0_.Subtitle as Subtitle4_,
          blog0_.AllowsComments as AllowsCo4_4_,
          blog0_.CreatedAt as CreatedAt4_,
          blog0_.Id as x0_0_,
          count(*) over() as x1_0_
     from Blogs blog0_)
SELECT * FROM query
WHERE __hibernate_row_nr__ > 5
ORDER BY __hibernate_row_nr__

Oh, and thanks to Fabio Maulo for helping me figure out the correct usage of custom functions.

    By foul moon

    Because I know that I will need it...

# Not very accurate, but apparently good enough for most purposes
# source: http://www.faqs.org/faqs/astronomy/faq/part3/section-15.html
def IsFullMoon(dateToCheck as date):
      two_digit_year as decimal = dateToCheck.Year - ((dateToCheck.Year/100)*100)
      remainder as decimal = two_digit_year % 19
      if remainder > 9:
            remainder -= 19
      phase as decimal = (remainder * 11) % 30
      if dateToCheck.Month == 1:
            phase += 3
      elif dateToCheck.Month == 2:
            phase += 4
      else:
            phase += dateToCheck.Month
      phase += dateToCheck.Day
      if dateToCheck.Year < 2000:
            phase -= 4
      else:
            phase -= 8.3
      phase = phase % 30
      return 14.5 < phase and phase < 15.5

    Expected usage:

if IsFullMoon(DateTime.Now):
      raise SqlException("""Transaction (Process ID 179) was deadlocked on lock resources with another
process and has been chosen as the deadlock victim. Rerun the transaction.""")

Update: I am still struggling with the code for WasChickenSacrificed(), I will be glad to get suggestions...

    Open source and the programmer's dilemma

Nick Carr is quoting an IEEE article about OSS economics. I am going to respond to the article later; right now I wanted to comment on this piece:

    Given the natural imbalance between employers and employees, this aspect of open source is likely to increase competition for jobs and drive down salaries.

I have one word to say to that: rubbish! I have stated it before, working on open source software means that you have credentials. Nothing speaks louder than code for developers. And experienced developers are worth quite a bit, regardless of their choice of platform or license.

This statement seems to assume that the only thing separating developers from one another is proprietary knowledge acquired in mystic rites at the dark of the moon.

    MSDN vs. Google

Here is a small experiment: I want to read the documentation for IDispatchMessageInspector.

    Just to give an idea, here is the wget output for this address:

    wget http://msdn2.microsoft.com/en-us/library/system.servicemodel.dispatcher.idispatchmessageinspector.aspx
    --00:59:48--  http://msdn2.microsoft.com/en-us/library/system.servicemodel.dispatcher.idispatchmessageinspector.aspx
               => `system.servicemodel.dispatcher.idispatchmessageinspector.aspx'
    Resolving msdn2.microsoft.com... 207.46.16.251
    Connecting to msdn2.microsoft.com|207.46.16.251|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 122,022 (119K) [text/html]

    100%[========================================================>] 122,022        5.74K/s    ETA 00:00

    01:00:12 (5.74 KB/s) - `system.servicemodel.dispatcher.idispatchmessageinspector.aspx' saved [122022/122022]

That is right, about half a minute just to download the HTML content of a single page, and over two minutes to get even this simplest of pages to fully load.

    Let me contrast that with the comparable experience:

    • Go to google.com
      Load time: faster than I can measure
    • Type IDispatchMessageInspector and hit enter
      Load time: ~2 Seconds
    • Go to the first result's cached content
      Load time: ~7 seconds

You know what the really sad part is? The Google approach is faster than the locally installed MSDN!

    Maintainability as a first level concern

    I have no idea how I missed this post from Anders Norås. It is talking about some of the problems in using traditional software factories (code-gen) vs. using smarter frameworks. It is a long read, but it is excellent.

    Maintainability is the enabler of all the other "itities" of an architecture. Experience shows that typical EJB applications often was hard to maintain even if code generation made them easy to develop. The IDE driven code generation for EJB paved way for vendor lock-in.

    The software factories of today will be part of Visual Studio "Orcas", but I would not count on this to lock users into the IDE and Microsoft Patterns & Practices way. Even with million-dollar investments in high-end application servers the enterprise Java community has largely moved from EJBs to light-weight frameworks. This has proven to be a economically healthy choice because of improved productivity and flexibility.

     


    LINQ to SQL - Dynamically Constructing Queries - Um... No!

Mike Taulty is talking about constructing queries dynamically in LINQ to SQL. Sorry, but my response to that is: yuck! Here is the code that he shows:

NorthwindDataContext ctx = new NorthwindDataContext("server=.;database=northwind");

var query = from c in ctx.Customers
            where c.Country == "Germany"
            select c;

if (RuntimeCriterionOneApplies())
{
    query = from c in query
            where c.City == "Berlin"
            select c;
}

if (RuntimeCriterionTwoApplies())
{
    query = from c in query
            where c.Orders.Sum(o => o.Freight) > 100
            select c;
}

foreach (Customer c in query)
{
    Console.WriteLine(c.CustomerID);
}

This is not constructing queries dynamically, it is choosing a query dynamically. This isn't what I mean when I talk about constructing a query dynamically. You can look here at what I consider a complex query that is being constructed dynamically.

    Mike, how would you handle that scenario in Linq?

Update: I really should pay more attention when I am reading code. Sorry Mike, the code that you have will work for the scenario that I have. I don't think that the syntax is very elegant for this scenario, but it will work. I missed the "from c in query" and read it as if a new query were constructed in each if.
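To illustrate the kind of dynamic construction I have in mind, here is a minimal sketch: composing an arbitrary, runtime-selected list of predicates over an IQueryable. The Customer class and filter list are hypothetical, not taken from Mike's code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical entity for illustration only.
public class Customer
{
    public string CustomerID;
    public string Country;
    public string City;
}

public static class DynamicQueryBuilder
{
    // Each Where() call layers another condition onto the query;
    // the provider translates the whole composed expression at once.
    public static IQueryable<Customer> Apply(
        IQueryable<Customer> query,
        IEnumerable<Expression<Func<Customer, bool>>> filters)
    {
        foreach (var filter in filters)
            query = query.Where(filter);
        return query;
    }
}
```

The point is that the set of conditions does not need to be known at compile time; it can come from user input, configuration, or a search form.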

    NHibernate and WCF

I have run into some conceptual issues using NHibernate with WCF. At the most basic level, I sought to duplicate the Session-Per-Request functionality that I currently use for the web. This relies on the BeginRequest/EndRequest events to create and close NHibernate's session.

Apparently there is nothing in WCF that is similar to this, at least not globally. It has been suggested that I can use IDispatchMessageInspector* to open/close the NHibernate session. This is good enough for the common case, although it is still more work than I would like (you need to add the behavior to each service, instead of doing it globally).

One interesting thing that occurred to me is using WCF PerSession services with NHibernate. Since NHibernate is now much more aggressive about releasing database connections, it makes much more sense to use WCF PerSession while utilizing the Session Per Conversation pattern for NHibernate. This, however, I am not sure how to handle transparently to the service...

* By the way, is anyone else struck by how similar WCF is to the way Castle Windsor works?
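The suggested inspector approach can be sketched roughly like this. This is an assumption-laden sketch, not a tested implementation; the static session factory holder is my invention, and how the service implementation actually gets hold of the session (thread-static holder, operation context extension) is left open:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using NHibernate;

// Open an NHibernate session when a message arrives, and dispose of it
// just before the reply is sent. WCF hands whatever AfterReceiveRequest
// returns back to BeforeSendReply as the correlation state.
public class NHibernateSessionInspector : IDispatchMessageInspector
{
    // Assumed to be initialized once at application startup.
    public static ISessionFactory SessionFactory;

    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        return SessionFactory.OpenSession();
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        ((ISession)correlationState).Dispose();
    }
}
```

The remaining friction is exactly what the post complains about: this inspector still has to be attached to each service via a behavior, rather than registered once globally.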

    Multi Table Entities in NHibernate

A while ago I posted about the ability to map several tables to a single entity in the Entity Framework. I didn't like it then, and I quoted from the Hibernate documentation, which discourages this behavior:

    We consider it an essential element of good object model design that the object model be at least as granular as the relational model. If the original data modeller decided that certain columns should be split across two different tables, then there must have been some kind of conceptual justification for that. There is no reason not to also make that distinction in the object model.

I still believe that this statement is true, except... I just ran into an issue with my model. I have a case where I am importing data from another database, and I need to add additional data to it. I could add additional columns to the primary table, but that would make the import process much more complex than I would like it to be. I would rather make it work by splitting the data by table, rather than by columns.

    With that in mind, I headed to NHibernate's JIRA, and found this issue about the problem. Conveniently, a patch was supplied as well.

A big thanks to Karl Chu for doing all the work of porting the functionality from Hibernate. I love Open Source.

At any rate, you can now map several tables to a single entity in NHibernate. You can get the full details here (the new tests), but let us walk through a simple example first.

[Image: the Person and Address tables]

Name and Sex are defined in the Person table, but everything else is defined on the Address table. We map it like this:

<class name="Person">
       <id name="Id" column="person_id" unsaved-value="0">
              <generator class="native"/>
       </id>

       <property name="Name"/>
       <property name="Sex"/>

       <join table="address">
              <key column="address_id"/>
              <property name="Address"/>
              <property name="Zip"/>
              <property name="Country"/>
              <property name="HomePhone"/>
              <property name="BusinessPhone"/>
       </join>
</class>

Obviously address_id is a FK to person_id (not the best names for them, come to think of it). Trying to load a person would result in this SQL query (reformatted):

SELECT
     p.person_id,
     p.Name,
     p.Sex,
     a.Address,
     a.Zip,
     a.Country,
     a.HomePhone,
     a.BusinessPhone
FROM dbo.Person p inner join dbo.Address a
     on p.person_id = a.address_id

There is quite a bit more that it can do (optional joins, etc.), and you can check it out in the tests.
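For completeness, here is a sketch of the entity class that the mapping above would hydrate. The property names come straight from the mapping file; the exact class shape (virtual members, auto-properties) is my assumption:

```csharp
// NHibernate maps both tables onto this one class; from the object
// model's point of view, the split across Person and Address is invisible.
public class Person
{
    public virtual int Id { get; set; }

    // Columns from the Person table
    public virtual string Name { get; set; }
    public virtual string Sex { get; set; }

    // Columns from the joined Address table
    public virtual string Address { get; set; }
    public virtual string Zip { get; set; }
    public virtual string Country { get; set; }
    public virtual string HomePhone { get; set; }
    public virtual string BusinessPhone { get; set; }
}
```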

    Note: this is on NHibernate trunk, so it won't be in the 1.2 release, which is currently in feature-freeze.

    JetBrains C# IDE, where did it go?

Andrew is asking about the JetBrains C# IDE. I did some research just now, and literally all the information about it dates to 2005. There is no official statement saying so, but I guess that JetBrains decided there just isn't a point in trying to compete with Microsoft in this area. I find that very sad, since by and large I am very unhappy with VS itself, and I have been continually impressed by what JetBrains' stuff enables me to do.

    I would really like to see an IDE from JetBrains for .NET. It is a big undertaking, but I have faith in their abilities to make it work. JetBrains, please give us a reasonable IDE, I am so tired of fighting VS, I want an IDE to be my friend not a foe.

    It is the WSDL, Stupid!

    In my current project, I need to talk to BizTalk, and have BizTalk talk to me. I usually don't like to work with things such as BizTalk and SSIS, because almost invariably, they make simple things more complex than they should be.

At any rate, the scenario is simply consuming web services, and exposing web services that BizTalk can call. I had two options here: I could use the usual ASMX web services, or I could use WCF. I did some testing with WCF, and it seemed fairly straightforward to use, but what made me decide to use it (besides the wish to try it out) was the logging support. It makes debugging so much easier when not only does it write everything to file, but it also provides a tool that allows easy browsing of messages and conversations.

At any rate, predictably, I ran into problems. I had taken the test code for the web services that I needed to expose so BizTalk could call me, and converted it from ASMX to WCF. Everything seemed to work fine, I would get the message, but I wouldn't get the values.

    Consider this:

    public class AddOrderLineConfirmMessage
    {
       public string WhoAuthorized { get { .. } set { .. } }
       public string OrderId { get { .. } set { .. } }
       public string OrderLineId { get { .. } set { .. } }  
    }

I am skipping the attributes here; I think that you get the message :-).

    Well, I managed to get the message just fine, and the OrderId and the OrderLineId were filled with the correct values, but the WhoAuthorized field (which is the most important one here) was null.

Naturally, I blamed BizTalk for this, and called them to have it fixed. They swore up and down that the value was being sent from their end, and that the problem was on my side. Since I don't believe that my code can be flawed, I decided to prove them wrong, and took the sample code that used ASMX, and tried that.

That worked; not only did it work, but it also had the oh-so-important WhoAuthorized field. I then pulled the logs from the WCF service and saw that indeed, the message was something like:

    <AddOrderLineConfirmation>
       <WhoAuthorized>foo</WhoAuthorized>
       <OrderId>1</OrderId>
       <OrderLineId>2</OrderLineId>
    </AddOrderLineConfirmation>

    At that point I was getting annoyed by the whole "WCF can't even work for my simple scenario" and decided that I would solve this issue if I had to write my own XML parser to do it. I began to dig into the WCF configuration options (a world of its own), and find out why it was ignoring the value that was clearly there.

I tried this, I tried that, and I couldn't figure it out. Eventually I pulled out the WSDL and looked at it, trying to see if there was a namespace difference that could cause it to ignore the value, or something of that order, but everything looked fine.

After quite a bit of head banging, I finally noticed something odd. The fields in the WSDL were ordered alphabetically, so the WhoAuthorized field came last. That was when I knew that I had the issue solved.

The problem was with field ordering and versioning.

Basically, an ASMX service generates a message where the fields are ordered by their source code order (unless you explicitly specify otherwise). WCF, however, orders the fields alphabetically by default.

By chance, the OrderId and OrderLineId were placed in the source code in an order that matched their alphabetical ordering. That meant that when WCF was parsing the message, it encountered the WhoAuthorized field at the beginning of the message and discarded it, because it wasn't valid for the first field. It continued to discard fields until it reached the OrderId field, after which it found the OrderLineId field. Both of them matched the definition of the service message, so they were filled, but anything else turned out to be out of order and was thus ignored.

The solution was to put Order=n in all the DataMember attributes, which lets WCF know the expected ordering of the fields in the document.
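The fix can be sketched like this (the member names come from the message class above; the attribute bodies in the original post were elided, so treat the exact values as illustrative):

```csharp
using System.Runtime.Serialization;

// Pin the wire ordering explicitly so it matches what the other side
// sends, instead of relying on WCF's alphabetical default.
[DataContract]
public class AddOrderLineConfirmMessage
{
    [DataMember(Order = 1)]
    public string WhoAuthorized { get; set; }

    [DataMember(Order = 2)]
    public string OrderId { get; set; }

    [DataMember(Order = 3)]
    public string OrderLineId { get; set; }
}
```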

    The lesson, always look one level down, and make sure that you are looking, not staring.

    The cost of upgrade

Scott Bellware started it, and Sam Gentile continued: Windows is not usable out of the box for power users. Right now I quote a figure of about three days for me just to set up a new machine so I can start working, and there is going to be a period of reduced productivity until I get the machine the way I want it.

    It is not just installing software, it is also setting up path variables, letting Windows know that I am not a dummy, bringing over shortcuts and extensions, etc.

Sometimes people don't realize why I hate moving machines, or working at their machine, even for a short while. I am sorry, but WinKey+R, N should bring up Notepad2, and Ctrl+B should take me to the implementation of a method. I should see file extensions, and typing booish should give me an interpreter into the framework.