IM is not really the medium to express complex thoughts, and I thought that it may be helpful to expand a bit on the subject.
I am currently working on a very big project that uses the ASP.Net Web Forms model. There isn't a lot of fancy UI stuff in it, and I still suffer from the model that ASP.Net tries to force me into.
- ViewState is a problem that we have lost days on, mostly because we do a lot of dynamic control generation and sometimes got the ViewState from the previous page carried into the new page, causing much havoc, but also because it is just plainly so different from the way that the web works.
- It strongly encourages the PostBack model.
- Simple things are not simple. Take, for example, needing to disable links on certain rows in a grid because of their state: capture RowDataBound, get the cell by its numeric index, find the right control, disable it (see the sketch after this list).
- Complex things are complex: page life cycle issues, nested child control initialization order, event firing order, view state corruption when building multi-view controls, etc.
- It is not fun to work with.
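To make the "simple things are not simple" point concrete, here is a minimal sketch of the grid scenario above. It lives in the page's code-behind (types come from System.Web.UI.WebControls); the Order type, its IsClosed flag, the cell index, and the "EditLink" control id are all hypothetical, picked only to show how much ceremony is involved.

protected void OrdersGrid_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType != DataControlRowType.DataRow)
        return;

    // The bound item and its state flag are assumptions for this example.
    Order order = (Order)e.Row.DataItem;
    if (order.IsClosed == false)
        return;

    // Dig into the cell by its numeric index and find the link by id.
    HyperLink editLink = (HyperLink)e.Row.Cells[4].FindControl("EditLink");
    if (editLink != null)
        editLink.Enabled = false;
}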
It got to the point where, for a demo in which I needed to present a simple hierarchy of messages, I first tried to do it in ASP.Net, but stopped when I realized how complex it was going to be, and wrote a little demo that did it all in WinForms. Using WinForms, it took me half an hour to get to 70% of what I wanted; using ASP.Net WebForms, it would have taken me a day.
Frankly, at this point, I am not going to do any ASP.Net WebForms work unless I am paid for it, and I am going to push hard to make sure that any incoming web projects will use MonoRail.
When asked, I listed the following as things that make MonoRail hard to embrace:
Not Microsoft - some people see that as a con; I don't.
Potshot: at least this way I know that it will not have a hard dependency on the Office 2007 Beta 3 Refresh 1, and that it will continue working if I install IE7 on the server machine.
Not something that everyone knows.
But it is very easy to learn how to start, and it doesn't punish novices by letting them build something in completely the wrong way. The learning curve is very smooth as well. And frankly, if you can't learn MonoRail without a 400-hour course that chews the material for you, I don't want you on my team.
Tool support not that great
If only I could get Brail syntax highlighting in Visual Studio... I don't even need IntelliSense; I am not doing anything complex in the views. For that matter, I usually edit my views in SharpDevelop anyway, so this isn't that big of a deal.
No 3rd party controls
There aren't any 3rd party controls for MonoRail that I know of. But it can (easily and painlessly) use all the HTML controls that are out there. No, I'm not talking about <input> ;-). I'm talking about everything that the non-ASP.Net world uses, which is more than powerful enough to create beautiful sites.
No designer - a plus for me.
I am not using the designer at all, mostly because it is horrendously slow and likes to put in stuff like style="height=29px", which is a killer to remove afterward.
So I gave the lecture to some guys at work (reminder: I am giving an Active Record lecture on Monday at the Tapuz User Group meeting at Microsoft), and it went very well.
I found some interesting holes (NHQG doesn't work on an exe because of an annoying, and soon to be fixed, bug), and some stuff that I wasn't sure about with regard to some of the more recent additions to Active Record.
I set the database dialect to SQL 2005 by mistake and was fascinated by the way NHibernate did paginated queries. I had forgotten how hard it is to do this in SQL Server.
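For context, here is a hedged sketch of what the paging looks like from the calling code; the Message entity, the ordering, and the page sizes are made up for illustration, and session is assumed to be an open NHibernate ISession. With the dialect set to MsSql2005Dialect, NHibernate pages this with a ROW_NUMBER() based query instead of the painful TOP tricks that SQL Server 2000 requires.

// Skip the first two pages and fetch the third page of 25 messages.
// Order here is NHibernate.Expression.Order (NHibernate 1.2).
IList messages = session.CreateCriteria(typeof(Message))
    .AddOrder(Order.Desc("PostedAt"))
    .SetFirstResult(50)
    .SetMaxResults(25)
    .List();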
For some reason, my laptop refused to replicate the display to the second LCD, but would happily work with it in a dual monitor setup, so I had to code with my head to the wall just to see what was going on. My laptop is a Thinkpad R50e, and I usually use it with a dual monitor setup, so it may be related to that.
It also ran about 15 minutes over the planned time (and I can certainly see it easily taking 3 hours), but I think that I can safely cut some of the more advanced stuff to deal with it.
The guys that I lectured to weren't mainly programmers, so they didn't really get how much pain had just gone away, but I still got a wow from them when I showed them how they can use strongly typed queries.
Okay, hopefully some ambulance-chasing lawyer will hear about this and put it into action. Here is a new way to go after spammers: don't try to sue them for sending spam, sue them for sexual harassment. I mean, sending a person 5,000 emails suggesting that they can't keep it up has got to be at least slander, hasn't it? I still carry deep emotional scars from the sex change operation photos spam.
In the US, you can sue over hot coffee that you spilt on yourself, and win. This has got to have some merit...
I got a couple of disappointed responses about the time of my User Group talk, since not everybody can make it. Tomorrow I am going to give a trial run of the talk to some of the guys at work, and if you are going to miss the User Group talk, you are welcome to join. It is going to be tomorrow (Wednesday) at 17:00 in We!'s office (Herzliya, Maskit 27).
Just drop me a call if you want to come...
Do you care about performance? Do you want to avoid those last three weeks of intensive performance work that require you to break apart a beautiful domain model and carve wide swathes of optimization horrors into your code?
How to do this? Set up the system so it fails if a given performance objective is exceeded. A simple example is to add a check at the end of the request that verifies that the number of database queries falls beneath a pre-defined threshold*. If the page executes more than X queries, it fails, immediately, on the developer machine, with a clear signal saying why. Take this approach to the test / QA machines as well. Make sure that when a performance issue hits, it is something that can't be ignored, and that it can be traced directly to the developer responsible.
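Here is a minimal sketch of that idea, assuming something else (your data access layer, or an NHibernate interceptor) increments the per-request counter for every query it issues; the threshold and the counter key are arbitrary choices for illustration.

using System;
using System.Web;

public class QueryBudgetModule : IHttpModule
{
    public const string CounterKey = "QueryCount";
    private const int MaxQueriesPerRequest = 30; // the pre-defined threshold

    public void Init(HttpApplication application)
    {
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    private static void OnEndRequest(object sender, EventArgs e)
    {
        object value = HttpContext.Current.Items[CounterKey];
        int queries = value == null ? 0 : (int)value;
        if (queries > MaxQueriesPerRequest)
        {
            // Fail immediately, with a clear signal saying why.
            throw new InvalidOperationException("This request executed " + queries +
                " queries; the budget is " + MaxQueriesPerRequest + ". Fix the data access before moving on.");
        }
    }

    public void Dispose()
    {
    }
}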
The only way to do this reliably is to fail as early as possible, which forces the developer to fix the issue on the spot, while it is still a single, isolated issue.
I can think of two or three very easy to measure stats that you can get:
- Number of queries per page.
- Number of remote calls (web service, remoting, etc) per page.
- Page processing time (only when debugger is not attached!).
I can't think of any way to associate CPU usage with the current request, otherwise I would have included that as well.
The rapid response cycle has the same effect here that it has when fixing bugs: a localized, immediate fix is much faster and better than a global optimization stage later on.
* You do have a way to associate this data with the specific request, right?
- You concatenate strings to build dynamic queries.
- You have no consistent way of querying the database.
- You need to create a bad relational model to satisfy the object model that you want.
- You have a hard time mapping between the database and the objects, and you are not a beginner.
- You go against well-defined concepts such as layering, separation of concerns, etc.
- You need to implement functionality that is already part of the OR/M.
- You need to do a lot of extra work in order to make the OR/M work correctly.
If any of the above is true for you, STOP! Take a deep breath, and go take a look at the manual. There is a better way.
I believe that I can make OR/M work for 95% of the possible schemas that are out there, and that is a bold statement, since I sometimes work with schemas defined for mainframes. I also believe that when it is hard to do, it is hard for valid reasons. A good OR/M will steer you in the direction of good database design; when you fight it, it means that there is a problem.
Most Israelis should get it*, but for the non Israelis among my readers, I would like to introduce my pal, Silver Bullet, AKA "The 5Kg Hammer".
An amazingly versatile tool in many environments. Anyone who thinks that computers don't have emotions should see how they quake when the two of us approach them. Refactoring is tiring work, but very satisfying :-)
* IDF Joke, "If I can't solve it, then a 5Kg Hammer would".
Paul Stovell talks about defaulting to making your code public (accessible to developers).
I said it before, many times, and I fully agree with him on that.
Scott Hanselman responded:
Yes, and when you give me six months and real-world use cases, then maybe you can get something out of it. You aren't hitting the need for those until you are really deep in complex scenarios, scenarios that aren't going to be covered in any shorter time frame. It is not the scenarios that you thought of when you designed the framework that worry me. It is the scenario of needing to integrate the way I work with a new framework, finding out that I need to modify the existing functionality just a little bit, and then finding the hood welded shut.
Scott, this is for an internal application, used by several dozen developers, mostly building the same kind of applications, right? What Paul is talking about, and I fully concur with him, is that when you don't have a release cycle measured in days/weeks, and when your target audience is far more diverse, you need all those extension points.
In the comments to Scott's post, Joshua Flanagan suggests:
Widget x = new Widget();
x.$InternalHelper(); // call a non-public method on an object
I have issues with the syntax, but I agree with the idea; for myself, I use the [ThereBeDragons] attribute.
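A minimal sketch of what such a marker attribute could look like; this is an illustration of the idea, not necessarily the actual [ThereBeDragons] implementation.

using System;

[AttributeUsage(AttributeTargets.All, Inherited = false)]
public class ThereBeDragonsAttribute : Attribute
{
    private readonly string warning;

    // The member stays public, but the attribute documents, in a way that
    // tooling and code reviews can spot, that you are off the supported path.
    public ThereBeDragonsAttribute(string warning)
    {
        this.warning = warning;
    }

    public string Warning
    {
        get { return warning; }
    }
}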
A good point that was raised in this discussion was that exposing everything limits extensibility and will hurt forward compatibility. As much as I like those terms, they are not sacred. In the name of the Holy Backward Compatibility, Microsoft refuses to fix this bug:
Is there anyone reading this who thinks that this is a valid approach?! In the name of forward compatibility, this approach hurts the developers far more than it helps. In the commercial world there is such a fuss about Supported vs. Not Supported anyway, so it is not an issue. The framework vendor should have the balls to say "Breaking Changes" and deal with it.
Thinking that you can control software once it is outside the gate is silly. If I need this functionality, I will get it. Take a look at my SqlCommandSet, a huge performance improvement that the ADO.Net team didn't see a reason to expose to the outside world. Of course they didn't see a reason; no one tried to build an OR/M framework on 2.0 Alpha/Beta/Gamma until very late into the game, so they didn't encounter the batching performance implications until then.
Microsoft is very good at saying "this is an unsupported scenario" (a recent example: putting Atlas' assemblies in the GAC, rather than in the application base, which completely breaks continuous integration scenarios). They should learn to say "unsupported extensibility" and move on, instead of slapping sealed and internal over everything and anything in sight. Strangely, in the Java world, where everything is virtual by default, people still manage to ship a second version of their software. So I don't see a big issue there.
Note that I am not saying not to use internal; I use it myself, but usually for very different reasons: to hide complexity or to remove things from direct reach. In 95% of the cases, if I have something internal, I have a public way of safely overriding it (and if I don't, it is a bug, page me).
Let me give an example of a scenario that you wouldn't think about until late into development. I have a static IoC class that holds the reference to the IoC container that I am using. Part of the initialization process of my container recursively uses the container itself to load stuff. I need to replace the container dynamically, but then I hit a problem: I can't fully initialize the new container, because if I do, it will use the previous container and not its own, and I can't replace the global container, because other threads are running and might access the still-initializing container and break. This is not something that you are going to run into until very late in the game, and if you can't fix this issue, then it is Game Over: replace everything from scratch.
Until I ran into this scenario, I would never have thought that I would need a separate container (accessible globally) side by side with the global container. But I did, and if I had been limited to the external interface, there would have been hell to pay for it.
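For illustration only, here is one way such a side-by-side arrangement could look; this is a hedged sketch, not necessarily how it was actually solved, and both the IWindsorContainer type and the thread-local override are assumptions made for the example.

using System;
using Castle.Windsor;

public static class IoC
{
    private static IWindsorContainer globalContainer;

    [ThreadStatic]
    private static IWindsorContainer localContainer;

    // Code running while a new container initializes itself resolves against
    // that container; every other thread keeps using the global one.
    public static IWindsorContainer Container
    {
        get { return localContainer ?? globalContainer; }
    }

    public static void Initialize(IWindsorContainer container)
    {
        globalContainer = container;
    }

    public static void UseLocalContainer(IWindsorContainer container)
    {
        localContainer = container;
    }

    public static void Reset()
    {
        localContainer = null;
    }
}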
Make everything public, and only internalize stuff after a lot of careful thought and an agonizing decision process. Other people will use your stuff; be polite to them.
NCache has a new version out, and while the feature list seems really impressive, they have added support for clustered, fail-over caching. I have strong issues with any application where caching is mission critical to the point where you need to cluster it for reliability.
To me, this points to one of two things:
- You are Google (and they have other solutions to this).
- You rely on the cache to an extent that the application can't perform at all without it.
I get handed a lot of resumes, and I have developed a system for speed-evaluating them.
- Quickly scan the recruiter's stuff, see if there is anything personal about the candidate, but ignore the fluff about the technologies that the candidate used (according to them).
Reason: they tend to put everything there. I suspect that the method they use is to ask a candidate if he has ever heard of SQL Server, and then put "Worked with SQL Server, Stored Procedures, Triggers, Indexes" instead. Perhaps they fail to understand the difference.
I consider myself proficient in SQL Server, and I have written two triggers in my life. There is no need to put stuff like this in every resume; it just ends up being skipped.
- Go to the projects part and see if they have done anything interesting, interesting being defined as hard / rare. Usually I look for work with multiple threads, AI, big systems, etc. No hard rule here.
- Technologies: I don't really care about those. It is nice if they have experience there, but we can just teach an employee that ourselves.
I am interested in seeing that they have a diverse background (at least it shows that they can learn well)
- Mentioning unit testing, agile experience or continuous integration increases the chance of an interview tenfold.
- Working with open source software - just about any experience with open source tools / frameworks is going to get noticed by me. The reason is that it means they managed to escape the evil clutches of the Microsoft Is The Source Of All Software mantra, which is at the very least very helpful.
- Being a member of an open source project - this one gets you an interview, period.
This is not an altruistic move on my side, by the way. If you work on open source software, it usually means that you are doing it on your own time. This, in turn, means that you care about the software. It also means that I can instantly see your code and check what kind of a developer you are. And, of course, that you are confident enough in your abilities to make your code public*. It also makes you part of the community, which is very valuable.
- The candidate has a blog about software - also gets an interview, period. For much the same reasons that an OSS project does.
Sadly, I get a lot of really bad resumes / people. I would like to take this chance to announce again, if you are in Israel, and you are looking for a job as a software developer, give me a call or send me your resume**. Both my email and my phone are on the web site, and I would love to hear from you.
* As an aside, it also means that you probably don't suffer from "That is my code that you just touched!"
** And if I don't get back to you within one day, it means that your email is in the spam folder or I'm dead. Try sending it again.
I look at a lot of resumes, and one of the things that catches my eye is "Design Patterns". So, several times, when I interviewed a candidate who had this on his resume, I asked what design patterns they know. Invariably, the question produced the following response:
"Uh... (deep, frantic thought) Singleton!"
A guy interviewing for a team lead job managed to recall Factory after very long thought.
Bzzt, Wrong! I don't care what GoF says, Singleton is not a design pattern. If that is the only thing you know, don't put it in your resume.
If you do, expect to be asked about the good/bad of it, and giving me "only a single instance" isn't going to help you any.
Putting design patterns on your resume means that you know what Decorator is, when to use Observer (and why it is rarer in .Net than in Java), how to implement Strategy properly, that you can convince me that I should use Front Controller instead of Page Controller, that you can tell me why Model View Controller is a good approach for this app, and why I should avoid using Transaction Script for a complex application.
It means that you can talk knowledgeably about them. Preferably, that you have implemented (and understood) them.
Singleton is not a design pattern; if that is all you know, don't bother putting design patterns on your resume.
The very simple example is finding a US SSN:
Which turns out to be:
Regex socialSecurityNumberCheck = new Regex(Pattern.With.AtBeginning
Here is the output from The Regulator about the above regex:
^ (anchor to beginning of string)
\d (digit), exactly 3 times
- (literal dash), ? (zero or one time)
\d (digit), exactly 2 times
- (literal dash), ? (zero or one time)
\d (digit), exactly 4 times
$ (anchor to end of string)
Comparing the two, they are nearly identical. Which is impressive, since I use Regulator all the time to deal with regexes.
What is more, for about the first time ever, this allows me to consider building a regular expression on the fly. I can't think of a good use case for doing this, of course, but now I can.
I mentioned before that I hate infrastructure, but right now I am building the continuous integration facilities for my projects, and I can't escape it. One of the biggest hurdles so far has been the install for NHibernate Query Generator. I don't know zilch about MSI, installers, etc. So I tried to take WixEdit, Votive and SharpDevelop and create a very simple WiX script that can install NQG. It took me longer than it would have taken to write an installer myself, I suspect, but I finally managed to do it.
I then turned to the skinning of the installer. I just have an issue with the default computer & disk images that 99% of installers use. I dug up my logo psd file, got the latest Photoshop trial (I tried using the GIMP; horrible, horrible results), and produced the following:
Pretty good for three hours' work by someone who can't draw a straight line even with a ruler.
As an administrator, I get an error:
As a mere user, I barely warrant a warning:
Sorry if you thought that you would find any content here :-)
Watch this, and think about the amount of history involved.
I should start by mentioning that my WiX knowledge is infinitely approaching zero. But I want to be able to build MSI packages as part of the daily build process, and you can't do that with VS Setup Projects.
So I chose NHibernate Query Generator as my model, downloaded WiX and Votive, cracked open the WiX tutorial, and started experimenting. I am using version 3.0.2211.0, by the way, which is currently under development.
I had no issues building the simple MSI itself (part 1 of the tutorial), but I ran into some problems when trying to add UI to the MSI. There are a couple of things needed when using Votive that the tutorial doesn't mention. First, you must add a reference to WixUIExtension, and you need to add the wixui_en-us.wxl file to your project. After you download the file, make sure to edit the WixLocalization tag and add a Culture attribute; I added en-US, and it seems to work fine.
I am using WixUI_Mondo UI, and it seems to work just fine, except... I can't figure out how to change the license type.
All the documentation says that it is just a matter of placing a license.rtf file in the current directory, but everything I have tried so far has failed :-( and it keeps using the CPL license. Furthermore, I can't find where it is coming from.
I wanted to run a Wiki at work, to handle all sort of issues.
I am already using MediaWiki, but it requires a MySQL database, and I didn't want to have to install that. I tried to install Trac, and I did manage to install it, but afterward it was... "Okay, now what?"
I then found out about ScrewTurn via Avery, and decided to give it a shot. It has an installer that I downloaded, and in 5 minutes I had a fully functional wiki. The only issue that I had was that the installer didn't set up the file permissions, but that took a second to fix. I got it running, and it is a lot like MediaWiki, except that I don't have to do anything to get it to work. I am very impressed.
Now all that remains to see if I can put it to good use :-)
These are the profiler results from trying to understand why I was getting very slow responses from the server...
Notice what is costing so much time here. The first issue was that the system was generating huge amounts of queries, and the quick & dirty solution was to throw caching on top of it, which made a difference of three to five orders of magnitude. The image above is after applying the caching, by the way, and there is no database activity in this call graph.
But performance continued to suffer, and I wasn't quite sure why. A quick look at the profiler and then at the mapping showed the reason quite clearly. In this case, the issue is a God Class (you know those, the one that is connected to all the entities in the database) that has no lazy collections.
As a result of this, per request, NHibernate had to construct the entire database in memory. Often just because the developer wanted to display a single (simple) property from the God Class.
When I removed the access to this class, I got about a 10x performance boost...
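For reference, the longer-term cure for a God Class like this is to mark its collections as lazy, so that touching the entity does not drag the whole object graph into memory. Here is a minimal sketch, using Castle ActiveRecord attributes rather than the raw hbm.xml mapping (where the equivalent is lazy="true" on the collection element); the entity and column names are made up.

using System.Collections.Generic;
using Castle.ActiveRecord;

[ActiveRecord("Organizations")]
public class Organization : ActiveRecordBase<Organization>
{
    private int id;
    private IList<Employee> employees = new List<Employee>();

    [PrimaryKey]
    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    // Lazy = true means the employees are loaded only when actually enumerated,
    // not every time an Organization happens to be referenced.
    [HasMany(typeof(Employee), Table = "Employees", ColumnKey = "OrganizationId", Lazy = true)]
    public IList<Employee> Employees
    {
        get { return employees; }
        set { employees = value; }
    }
}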
This is a 101 (introductory) talk, showing what you can do with Active Record and how much more productive it makes you.
There are a couple of nice tools for code generation that I have reviewed recently, but I don't think that I am going to go for the flashy screens and the wizards and reams of generated code.
I am going to give a RAD talk without a single wizard...
If you are in Israel, come and tell me how spectacular a failure this is going to be... :-)
P.S: Some poor guy actually asked to reserve two hours of my time to talk about OR/M. I'm not sure if I should pity him or not...
P.P.S: I'll make the slides / code available after the lecture.
I'm listening to DNR #198 at the moment, and Carl compares Ruby to VB6 and says that the main difference is that now we have tests.
I don't agree with this statement.
You can't extend VB6 in any meaningful way. In a dynamic language, mixins are a natural paradigm, and extending classes and objects is a matter of course. Those features give the developer a lot of flexibility to work with. VB6 didn't have any of the real advantages of a dynamic system.
The implementation of this feature (NHibernate's automatic dirty checking) is done by keeping the state of the object as it was loaded from the database inside the session, and comparing that initial data to the current state of the object when flushing. The problem, in Darrel's words, is:
From my perspective there are two major drawbacks to this approach. First, considering the data you are manipulating is probably consuming the majority of the applications memory, you have double the memory requirements for the writable objects. I am assuming you only take this hit for objects retrieved as writable, but even so, this just seems unreasonable especially considering the fact that many people use Hibernate on a server that is a shared resource!
The second problem is that when doing a flush, it is necessary to do a comparison on each and every entity property. This is necessary even if no changes have been made.
Let me tackle the second problem first. Yes, Flush()ing when you have a large number of entities in the session is going to perform poorly. That is why NHibernate provides you with ways to tell it that it shouldn't make those checks (calling Evict() on the stuff that you know wasn't changed, etc.). In practice, you only need to do this in rare cases. Loading a large amount of objects, modifying a few and saving them is a use case that doesn't come up that often, in my experience, and when it does, it is simple to optimize. If you really wish, you can provide a way (through an interceptor) to tell NHibernate which entities are dirty or not, and save it the effort.
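A minimal sketch of the Evict() approach mentioned above; the Order entity, the "loaded for display only" list, and the open session variable are assumptions made up for illustration.

// Tell NHibernate to forget the instances we know we did not touch, so they
// are not dirty-checked (or saved) when the session flushes.
foreach (Order order in ordersLoadedForDisplayOnly)
{
    session.Evict(order);
}
session.Flush();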
The first problem that Darrel has is with the memory consumption. I will start right off by saying that I have been a devoted user of NHibernate* for over two years now, and I'm building big, complex systems using it. Memory consumption was never an issue with NHibernate. Optimizing NHibernate's performance is almost solely focused on reducing the number of queries and getting NHibernate to fetch the data in the most efficient way.
The one time I heard about an issue with memory consumption and NHibernate, it wasn't because of the entity cache, but because the developers had abused NHibernate's query translation cache by generating hundreds of thousands of queries from users' input.
The basic assumption that I contest is that the entities are going to be the majority of the application's memory. This is not the case in nearly any application that I know of. Let us say that we have an entity that is 1Kb in size (this is a fairly large entity, by the way). Let us say that we have 250 of those in the current request (again, this is a very big number). So we have 250Kb for the entities, and another 250Kb for the loaded state. 500Kb is a lot, you say?
Let us try finding other stuff (per request) that has a similar size. The first thing that I can think of, of course, is ViewState. I am sure that I am not the only one who has had a "WTF is ViewState doing that it is 80% of the page size, including pictures!?" moment. And, of course, you pay for ViewState (memory-wise) twice: once for the serialized data, which remains in the request's variables collection until the request ends, and once for the de-serialized data.
The page itself and its controls collection is another very big object, and I am sure that you can see where I am going with this. So, even for a case with a lot of very big entities, I would bet that the other factors in the request would overshadow the amount of memory consumed by NHibernate by a large percentage.
But, and this is important, since when has memory been a problem for scaling an application? (Leaving aside doing silly stuff like putting a 400Mb dataset per user in the session). I/O is a far bigger factor in scaling applications, especially database I/O.
To conclude, I don't see the memory consumption of NHibernate's automatic dirty checking as a problematic issue. Quite the reverse, actually. You have no idea how powerful this is until you realize that your business logic can do its stuff (and you can test it) completely separated from the persistence mechanism, and NHibernate will simply pick up the changes and persist them for you. The first time I did it, I felt like I was doing magic.
Oh, and just to point out, there isn't really any other way to handle this, except by doing exactly that. When I spoke with Luca Bolognese about DLinq, he mentioned that it uses basically the same approach to solve this issue.
* Mainly because whenever I stray from the Path I get hit hard on the head by the ADO.Net gods of vengeance and boring code.
This is a completely non technical post, by the way.
In the last two days I have been busy moving my current project to NHibernate 1.2 and dropping all references to NHibernate.Generics in favor of NHibernate 1.2's native generic collections (finding, along the way, some horrible business logic inside the Add/Remove delegates that was quite tricky to reproduce reliably).
This project is a "mostly read" system, and it feels significantly faster after the upgrade. I don't have any numbers, and it may just be wishful thinking, but I see things responding more quickly.
I wanted to give someone a message, so I wrote this:
GuineaPig tester = (GuineaPig)you;
What was I trying to say*?
Sidenote: do you find these quotes interesting? I seem to be doing quite a few of them lately.
* Only after I sent the mail did I realize that it may be insulting to the person receiving it.