I am one of those fellows who tend to run on the trunk.
What does that mean? It means that I am using the latest version of the source for my projects. So, I am on the trunk for Boo, NHibernate and Castle (which can be tricky, since Castle isn't on the NHibernate trunk).
This has significant advantages for an active OSS project: NHibernate's trunk has the multi criteria support, MonoRail's trunk has the testing support, and since stuff is moving in response to the needs of the users, it evolves fast. Sean Chambers was impressed by how fast this can happen in many cases. (In others, it is a slower process, but still mostly visible.)
There are some disadvantages: people (recently it was mostly me, sadly) can break the trunk, or commit functionality that breaks your code. In general, I have not found this to be the case for the projects that I am working on. A lot of the committers are running on the trunk as well, and there is strong pressure to make sure that the trunk is stable (ready to ship), because I, at least, do ship from it.
The major disadvantage, from my point of view, is using a piece of code that is still under development or review. The recent case was Criteria cloning in NHibernate, which went through several modifications to get to its current form. Personally, I don't find it onerous to keep up, and such cases are usually rare.
The big advantage is that you get continuous improvements. I do reviews of the changes every now and again (these days limited to scanning the logs and looking at what has changed), and it keeps amazing me how much good stuff is going in there.
Alex has caught me red handed. Oops; I went home last night and dropped into bed, doing only the minimal necessary email maintenance. I actually have something prepared that I want to post, but I just couldn't find the strength to finish it. (Later today, I promise.)
(Yeah, I got the joke)
This month I am at 114 posts (likely more by the time today is finished), which translates to ~3-4 posts a day, every day.
Do people actually read them? I have an unusual "is it worth a post" standard, and it is likely that I am producing noise.
I have been thinking about doing some delay posting, so you will get a post every time interval, rather than whenever I feel like posting. Thoughts?
Jamie Cansdale has just published the email correspondence between him and Microsoft about TestDriven.Net on the Express SKU. Among other things, this has led to him losing his MVP status, pulling the TD.Net Express edition, and more than a bit of angst. The clause that they are using is: "You may not work around any technical limitations in the software."
I strongly suggest reading the entire set of emails, and considering that the clause they chose to use could cover anything they don't like, from keyboard shortcuts to VS macros.
With eerily similar timing, Fowler manages to hit a lot of the stuff that I am feeling, and he is doing it much better than I could.
This paragraph certainly struck a chord:
That is one of the more frustrating points of working in the Microsoft space: Microsoft is actively competing against its own community, and that diminishes the entire community (and Microsoft as well).
Okay, here is another example of using Cecil, but I think that you can figure out what I think about having to deal with raw IL.
Feel free to figure out what this does, and why I would be interested in such a thing. BTW, in Boo to Boo, that sort of thing is painless. Having to deal with the raw IL takes all the fun out of it.
I know about Cecil.FlowControl, but as far as I can find, it is mostly (only?) for reading assemblies. I am looking for a way to have a reasonable AST over Cecil. This is not scalable no matter how you cast it.
No, it is not a rant against some non OSS tool.
When I was at DevTeach, I talked with a few people who mentioned that their company has an... aversion to OSS, and they wanted a way around that. Being technically minded and a "rules, what rules?" type of guy, I suggested that they take an OSS project (assuming the license allows it) and just rename the namespaces.
This is a possible approach, but it may be too much work in many cases. I suggested post-compile weaving, and people looked at me as if I had grown two heads (I hadn't, I checked). I just now remembered this conversation, and wrote this little script with Boo and Mono.Cecil, which takes an existing assembly and renames everything from an old value to a new one.
booi AsmReplace.boo Castle.ActiveRecord.dll Castle Tower
Now enjoy your Tower.ActiveRecord.dll :-)
Oh, and written in notepad on the command line while I waited for #Develop to be installed on a new machine.
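The Boo script itself isn't reproduced in this post, but a rough C# sketch of the same idea looks like the following. Note the hedges: this uses the current Mono.Cecil API (`AssemblyDefinition.ReadAssembly` / `Write`), which differs from the older `AssemblyFactory` API the original script would have used, and it only handles types defined in the main module.

```csharp
// Sketch only: rename a namespace prefix in an assembly with Mono.Cecil.
// Assumes the modern Mono.Cecil API; the original Boo script may have differed.
using Mono.Cecil;

class AsmReplace
{
    static void Main(string[] args)
    {
        // e.g. AsmReplace Castle.ActiveRecord.dll Castle Tower
        string path = args[0], oldPrefix = args[1], newPrefix = args[2];

        AssemblyDefinition asm = AssemblyDefinition.ReadAssembly(path);
        foreach (TypeDefinition type in asm.MainModule.Types)
        {
            // References inside the module point at these same definitions,
            // so renaming the definition renames the uses as well.
            if (type.Namespace.StartsWith(oldPrefix))
                type.Namespace = newPrefix + type.Namespace.Substring(oldPrefix.Length);
        }
        asm.Name.Name = asm.Name.Name.Replace(oldPrefix, newPrefix);
        asm.Write(path.Replace(oldPrefix, newPrefix));
    }
}
```

A real rename would also need to chase references in other modules and in string-based type lookups, which is part of why this is "too much work in many cases".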
Okay, I mentioned that I am working on a fairly complex piece of query code, something that is completely off the charts, even for me. As such, I have test driven it completely. How do I unit test a complex query? By using an in-memory database (in this case, SQLite). NHibernate makes it very easy to switch databases, and I never gave it much thought after making it work for the first time.
Until today, when it was time to make an additional tweak to my query. It worked perfectly in the tests, running against SQLite, but failed when running the application against SQL Server. Here is the query that demonstrates the problem:
The generated query looked like:
This fails with an unknown "addresses0_.Street" error. I have run into this problem before, and it looks like either a bug in SQL Server or a nasty (apparently undocumented) surprise.
This was my critical query; I had to run it. In desperation, I opened the NHibernate source and started debugging into it, in order to see how the query was being composed. I haven't looked very closely at this area in a while, and things have changed somewhat, but I had that sinking feeling that this was one of those really big changes that I am really not ready to make close to the time that I want to go home.
Then I did something that I was sure would fail, I changed the query to:
It worked, and produced SQL that made SQL Server happy; SQLite was able to process this SQL without any issues either. Problem solved, and it will hopefully remain in memory for the next time that I need it.
Oh, and if you didn't notice, today was a jumpy day. I am going to do some deep end stuff to relax.
Chris, this looks like it can take a long time :-) Keep it up, I enjoy the debate.
I disagree with this statement. Leaving aside the fact that I am dealing with compiler stuff on a routine basis (NHQG, Brail, Dynamic Proxy, NHibernate - all have some aspect of compilers in them)*, I think that learning something by going to the lowest level of abstraction that can be had is the best way to really learn it, and that is regardless of whether you are doing it in academia or at work.
I should qualify that by saying that I am also a lazy person by nature, and therefore I tend to learn just enough to get myself out of a problem. The problem is that I appear to be a contrary person as well, so I run into a lot of problems :-) I have been trying to learn WPF for the last two months, but can't find the strength to go through an excellent book when I already know the basics. (XAML has routed events and dependency properties; everything else is lazily learned.)
That is usually a reflection of the percentage of time that I need to devote to something, and I am not doing a lot of (any) WPF.
I seem to recall several episodes where the computer took things too literally, and things got really bad. Do you feel that you would like to be the person on call to debug this?
* Yes, I do know that I am probably not a representative sample of the developer population. I was already told to REPENT.
I had just left work with a potentially big issue left open (big defined as "we may need to re-write the entire UI layer"), and I was very pissed off, to say the least. I had a problem with the ASP.Net Session not working in an Ajax callback. I had just made a significant architecture change that required me to store some information in the ASP.Net session, and I didn't feel like having to re-write either it or the UI layer.
Googling wasn't really helpful, which only made me angrier. At that point, I got sick of it and went home, thinking very bad thoughts about some @#$#@%%. On the ride home, I suddenly had the answer. The issue was only happening when I made a CascadingDropDown call to the server; everything else was behaving normally. CascadingDropDown uses a WebService to get the data from the server. Now, I can't think of a good reason why a Web Service should have an ASP.Net session by default.
The moment I knew what the underlying problem was, it took seconds to find the solution; until then, it was nearly impossible.
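For reference, ASP.Net web services do not get a session by default; each web method has to opt in via the WebMethod attribute. A minimal sketch of that opt-in follows (the service and method names here are illustrative, not from my actual code):

```csharp
// Sketch: opting a web service method into the ASP.Net session.
// DropDownService and GetChildValues are hypothetical names.
using System.Web.Services;
using System.Web.Script.Services;

[ScriptService] // allows Ajax client-side calls to reach this service
public class DropDownService : WebService
{
    // EnableSession = true is the key part; without it, the Session
    // property is unavailable inside the web method, which is what
    // made the Ajax callback misbehave while normal pages worked.
    [WebMethod(EnableSession = true)]
    public string[] GetChildValues(string parentValue)
    {
        Session["lastParent"] = parentValue; // session is now usable
        return new string[0];
    }
}
```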
Chris Holmes commented on my thinking on tools, but I think that he missed a crucial distinction that I made (though not clearly enough).
If you require a tool in order to work effectively with something, that is an issue. By requiring a tool I mean something beyond VS + R#, which are the bare minimum you need to be effective in C# (that does say something about C#, doesn't it?). The Entity Framework Designer is a good example of something where I think the dependence on the tool indicates a problem. I have been corrected about requiring the SCSF to work effectively with the CAB, which was good to hear.
Tools can make you more effective, but I think that you should always learn without them. Case in point, I am currently teaching people about .Net (I'll have a separate post about it soon, I promise), and teaching them stuff like NHibernate or Active Record would be actively harmful at this stage, when they don't yet completely understand the basics of ADO.Net. In my opinion, it is always a good idea to know what the tool is doing, why, and where its shortcomings are.
Oh, and about the "program in notepad" thing: I don't program C# in notepad (usually), but I have written fully fledged applications in Boo from notepad and the command line. Boo makes it easy to do so, and it is a pleasure (and painless) to work with.
Probably the best part of being at DevTeach was meeting and talking with a lot of people. That was also the biggest frustration. The moment there were more than three of us, the discussion split into several interesting avenues, leaving me really wishing that I could take part in at least three conversations at once.
I can (usually) take part in two discussions at once, but it is confusing (both for me and for the participants in the discussions), and there is some prioritization going on that decides who I am most interested in listening to right now. Part of the reason that I prefer email for communication over IM/voice in many cases is that it allows me to handle concurrent discussions better.
That said, there is nothing better to communicate ideas than a face to face meeting where you can easily interact with the person(s) you are talking to. A lot of the discussions that we had at DevTeach would be simply impossible to take to a written form and keep the same tone (which is what made it interesting in the first place.)
In the comments to my OR/M Smackdown post Adam Tybor noted:
To which Ted Neward has replied:
Which reminds me of a conversation that I had with Udi Dahan recently, which we concluded with this great quote from him:
All of which brings me to the following conclusion: performance tuning in the microseconds is a waste of time until you have profiling data in place, but that doesn't mean that you shouldn't think about performance from the get go. You can code your way out of a lot of things, but an architecture that is inherently slow (for instance, chatty on the network) is going to be very hard to modify later on.
Udi had an example of a Customer that has millions of orders; in this case, the performance consideration has a direct effect on the domain model (yes, I know about filters). From a design perspective, it basically means that the entity contains too much data and needs to be broken up. From a performance perspective, it makes explicit that a potentially very costly call is being made (and is obviously filtered as needed).
A good rule of thumb for performance is that you should plan for an order of magnitude increase in the number of users/transactions before you need to consider a significant change in the architecture of the application.
That is absolutely not to say that you should consider everything in advance (I had my greatest performance success by simply batching a few thousand remote calls into a single one), but architecture matters, and it should be considered in advance and built accordingly. (And no, that doesn't necessitate a Big Architecture Up Front either, although where I would need to scale very high I would spend a while thinking about the way I am going to build the app in advance, probably with some IT/DBA/network guys as well, to get a good overview.)
Oh, nevertheless, premature optimization...
A few of the comments to my post had an unpleasant tinge to them with regard to Ted, and after listening (just today!) to the podcast, I have to say that I disagree with them.
I think that the debate took an unforeseen approach (which may have made it less exciting to listen to) in that it didn't rehash all the old OR/M vs. SP debates*, but instead focused on stuff that (at least to me) is not really discussed much. As a result, much of the discussion focused on the points of failure of each approach in most applications.
I think that such a discussion cannot really be settled one way or the other, especially since it looks like the major difference between Ted and me is where we would draw the line on when to move from one approach to the other. There is a big difference there, but it is a matter of value judgment, more than anything.
As someone who is deeply involved with OR/M, it was interesting to hear about the approaches from db4o and the other 2nd generation OODBs, even though I still have my doubts about such systems. You can check here for some of the details. I would be interested in learning how versioning, refactoring, deployment, scaling, optimization, etc. apply to an OODB project, but I am already slicing time into minutes; I don't really have the time to do more. (If you are interested in hiring my company for a project that uses an OODB system, taking into account that I have 0 experience with them, I would be delighted to hear about it :-) )
When I listened to the podcast today, I kept thinking, "Oh, I wish I had said XYZ", and then I listened to myself saying something to that effect. Overall, I am very pleased with the "smackdown", although it may have been less of a smackdown than anticipated.
A few things about what I have to term "logistics", because I can find no better word for it. I don't like how I sound when I speak English; I think much more clearly than I can speak, and I am sorry for all those whom I subjected to my English. Both Ted and I interrupted each other when we had something that had to be said, but Ted sounds so much smoother when he does it... I can understand (although I disagree) why it was thought that he tried to hijack the conversation.
As Ted mentioned, barely an hour before that, I was reminded that gesturing with the hand that holds the mike is not very productive for good sound quality. I listened to that advice, switched hands, and immediately started gesturing with that hand. I probably need more experience there as well.
To conclude, I had a great time participating in the debate, and I would like to thank Carl & Richard for giving us the chance to do it.
P.S: And for those who wanted solid truths, I have the consultant answer to you: It Depends.
P.P.S: I have seen the object models that Ted talks about; they make my 31,438-table DB look almost (but not quite) ordinary.
* Sorry Ken, ain't going to bite this one again.
Juxtapose the usage and the reasoning.
Hint: Customer is an aggregate in DDD terms.
I am on a new computer, which I hate (the newness, not the computer :-) ). I am trying to do the usual things, and I keep encountering all sorts of ads in places where I have never seen them before. I guess that explains why some sites have mysterious blank areas sometimes.
It only took solving something I was dreading dealing with, with a much easier approach than I believed possible. The problem right now is convincing myself that I don't really want to finish this feature right now. I can't believe how well success can turn around a day that was shaping up to be rotten.
Anyway, I am off for home now.
I need help preventing the hero syndrome, anyone?
It looks like I still have a lot to learn about NHibernate. Today I took a foray into criteria projections, which are new and exciting to me, but what excites me right now is this little puppy. It is not that the query is terribly complicated; it is that it is traversing six tables as easily as can be. From the Multi Query reference, you can probably understand why I am excited: I need to get ~6 result sets for this to work, and it is looking like this is going to be very easy to do in a performant way.
You have to realize, I am at the point where I am running passing tests in the debugger, just to check out how cool this is. In the time that it took me to write this post, I have implemented the bare bones of what is most probably the hardest part of the system. This particular functionality requires data from 30(!) tables. The final query is over 5Kb in size, and goes on for 3 pages. (I am talking about the final query, not the one above.)
And to forestall the obvious questions: yes, I use this color scheme sometimes; it amuses me.
I am teaching beginners right now, and we started out from:
using (SqlConnection connection = new SqlConnection(Settings.Default.Database))
{
    connection.Open(); // the connection must be open before BeginTransaction
    using (SqlTransaction tx = connection.BeginTransaction())
    using (SqlCommand command = connection.CreateCommand())
    {
        command.Transaction = tx; // the command must be enlisted in the transaction
        command.CommandText = "SELECT COUNT(*) FROM Books;";
        result = (int)command.ExecuteScalar();
        tx.Commit();
    }
}
But they said that they already know this stuff and I am boring, so we moved to this:
return With.Transaction<int>(delegate(SqlCommand command)
{
    command.CommandType = System.Data.CommandType.Text;
    command.CommandText = "SELECT COUNT(*) FROM Books;";
    return (int)command.ExecuteScalar();
});
Where the implementation is:
public static void Transaction(Proc exec)
{
    using (SqlConnection connection = new SqlConnection(Settings.Default.Database))
    {
        connection.Open();
        using (SqlTransaction tx = connection.BeginTransaction())
        using (SqlCommand command = connection.CreateCommand())
        {
            command.Transaction = tx;
            exec(command);
            tx.Commit();
        }
    }
}
They didn't think it was boring any longer :-)
That made for some interesting diagrams, just to show the flow of the code. The piece of code above covers a lot of topics, next step is to introduce Unit of Work.
They are beginners, so they need to understand the basics well before they can do anything with frameworks, but I just find it so frustrating to work on the naked CLR. It is like a construction worker who arrives at a building site with an anvil and a hammer, and then needs to build all his tools before he can start working.
Zabasho - (Hebrew: זבש"ו) - can be literally translated to "It is his problem", but is usually meant to also express complete disinterest in the result of an action done by someone else.
Example: "If he will send that email, he is getting fired, I told him that. Zabasho."
Ayende's Example: "The user wants all the screen aqua on pink, he will go blind. Zabasho."
StoryVerse is an agile management web application, based on MonoRail & Active Record. The UI is very googlish in nature, and I really like it. Still very early in the project, but showing a lot of promise.
The way the guys from LunaVerse are using the criteria objects had me scratching my head for a while, until I saw how it is used in the controller; then I had a "Wow! That is cool!" moment.
Here is a sample:
PropertyBag["projects"] = Project.FindAll( projectSearchCriteria.ToDetachedCriteria() );
You really need to check out the code in order to appreciate it. It is like placing a lot of domino pieces and watching them all fall into place, no extra work required. Here is one technique that I am so going to take advantage of next time I write a search screen.
Quite a few parts of NHibernate are serializable. ISession and ICriteria are of particular interest to this post. Serializing the session (which necessitates that the entities are serializable, of course) is very useful for the session-per-conversation mode. But it is the criteria that I would like to talk about right now.
Criteria in NHibernate is the OO interface for making queries. So, why is it important that it is serializable? Well, it opens up some interesting scenarios. Saved searches or reports are just one of many. I have used similar functionality in the past to build a rule engine, where saved criteria were the basis of selecting what a rule should process.
The only problem with that is that you usually want to display the query to the user. I have recently added inspection capabilities to the criteria, but they are focused more on infrastructure stuff rather than ease of extraction for the UI. For those cases, you might want to add a way for the UI to restore itself to the appropriate state for the selected search.
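As a minimal sketch of the saved-search idea: a DetachedCriteria can be round-tripped through standard .Net serialization. The Customer entity here is hypothetical, and the `NHibernate.Criterion` namespace is from later NHibernate versions (in 1.2 the equivalent types live in `NHibernate.Expression`).

```csharp
// Sketch: persisting a saved search by serializing a DetachedCriteria.
// Customer is a hypothetical entity; adjust namespaces to your NHibernate version.
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using NHibernate.Criterion;

public static class SavedSearches
{
    // Save the criteria so the search can be re-run later.
    public static void Save(string path, DetachedCriteria criteria)
    {
        BinaryFormatter formatter = new BinaryFormatter();
        using (FileStream stream = File.Create(path))
            formatter.Serialize(stream, criteria);
    }

    // Load it back; it can then be executed against any open ISession.
    public static DetachedCriteria Load(string path)
    {
        BinaryFormatter formatter = new BinaryFormatter();
        using (FileStream stream = File.OpenRead(path))
            return (DetachedCriteria)formatter.Deserialize(stream);
    }
}

// Usage (hypothetical):
// SavedSearches.Save("search.bin",
//     DetachedCriteria.For<Customer>().Add(Restrictions.Like("Name", "A%")));
```

This is exactly why the lack of UI-friendly inspection matters: the blob round-trips fine, but turning it back into a human-readable search form takes extra work.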
Okay, here it is. This is a little different in style from the ones I have made before. This isn't scripted at all. This is literally a recording of me trying to solve the Event Broker issue. As I have mentioned, I had spiked the issue previously, but not in any serious manner.
As a result, you can see me stumbling over issues in the implementation, and it is much less professional sounding. It turns out to be less than one hour recorded (+ 5 minutes spent checking the Rhino Mocks source code "off stage"), and I think that I have a good solution for the Event Broker issue.
I am afraid that at times I was reduced to unintelligible muttering, but I hope that it is still valuable.
Stuff that is covered in the screen cast:
- Event Broker
- Declarative Event Wiring
- Registering to events from classes we don't own
- Avoiding memory leaks
The code starts at 2:30 minutes, and it is pretty much just code (and my mumbling) from then on.
As usual, the code is supplied, and the download page is here.