Yes, this is another challenge that I ran into which I consider well suited for an interview.
- It is short
- It doesn't require specific knowledge
- There are a lot of ways of solving it
- I can give the developer access to Google and the test is still valid
The test itself is very simple:
- Detect if another instance of the application is running on the network which is registered to the same user
- It doesn't have to be hack proof, and it doesn't have to be 100% complete. The purpose is to stop casual copying, not serious hackers.
- A great example of the feature in action is R# detecting that two users are using the same license at the same time.
Oh, and for real world scenarios, use a licensing framework instead of rolling your own.
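For illustration, here is a minimal sketch of one possible approach (Python for brevity, though any stack works; the port, the message format, and names like `user_id` are invented for this sketch): each instance broadcasts an announcement over UDP and flags a duplicate when it hears the same license id coming from another machine.

```python
# Hypothetical sketch: every running instance periodically broadcasts
# "app-instance:<user-id>" over UDP, e.g. via
#   sock.sendto(make_announcement(user_id), ("<broadcast>", ANNOUNCE_PORT))
# and listens on ANNOUNCE_PORT for the same announcement from elsewhere.

ANNOUNCE_PORT = 17500  # arbitrary port chosen for this sketch

def make_announcement(user_id: str) -> bytes:
    return f"app-instance:{user_id}".encode()

def is_duplicate(payload: bytes, user_id: str, sender_ip: str, local_ips: set) -> bool:
    # Another copy of the app is a duplicate when it announces the same
    # user id from an address that isn't one of ours.
    return payload == make_announcement(user_id) and sender_ip not in local_ips

print(is_duplicate(b"app-instance:alice", "alice", "10.0.0.7", {"10.0.0.5"}))  # True
```

UDP broadcast is only one option; multicast, a shared database row, or a lock file on a network share would all pass the test just as well, which is part of what makes it a good interview question.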
Recently there has been a lot of discussion about how we can make development easier. It usually starts with someone stating "X is too hard, we must make X approachable to the masses".
My response to that was:
You get what you pay for, deal with it.
It was considered rude, which wasn't my intention, but that is beside the point.
One of the things that I just can't understand is the assumption that development is unskilled labor. That you can just get a bunch of apes from the zoo and crack the whip until the project is done.
Sorry, it just doesn't work like this.
Development requires a lot of skill, it requires quite a lot of knowledge and at least some measure of affinity. It takes time and effort to be a good developer. You won't be a good developer if you seek the "X in 24 Hours" or "Y in 21 days". Those things only barely scratch the surface, and that is not going to help at all for real problems.
And yes, a lot of the people who call themselves developers should put down their keyboards and go home.
I don't think that we need to apologize if we are working at a high level that makes it hard for beginners. Coming back to the idea that this is not something that you can just pick up.
And yes, experience matters. And no, one year repeated fifteen times does not count.
Teach Yourself Programming in Ten Years is a good read, which I recommend. But the underlying premise is that there is quite a lot of theory and practical experience underlying what we are doing on a day-to-day basis. If you don't get that, you are out of the game.
I refuse to be ashamed of requiring people to understand advanced concepts. That is what their job is all about.
This is something that has been making the rounds lately. Just about any developer that has seen it has been greatly amused.
Check out the video.
This is so good, I intend to make this into a part of several of my presentations.
Adi has written a post that I am not in agreement with: there are some things a good developer is NOT required to know.
[Quoting Jeff Atwood and Peter Norvig] Know how long it takes your computer to execute an instruction, fetch a word from memory (with and without a cache miss), read consecutive words from disk, and seek to a new location on disk
I did learn these things, but after reading Jeff's post I have been trying to remember a project I worked on during the past 10 years which required this type of knowledge - and I got nothing.
Well, I used knowledge about reading from & seeking on disk in the last two weeks. It is extremely important when you are thinking about high-performance, large I/O. Fetching words from memory, and the implications of ensuring good locality, have huge effects on the kind of performance you are going to get. As a simple example, a large part of the reason that thread switching is slow is that the CPU cache mostly misses when you switch threads.
You should know those things, and you should know how to apply them. No, they don't come often, and I can think of a lot more things that I would like a developer to know (OO, to start with) more than those low level details, but those are important to know.
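The sequential-versus-seek point is easy to demonstrate yourself. Here is a rough sketch (Python just for brevity; the absolute numbers depend entirely on your hardware and on whether the OS cache is warm, so treat it as an experiment, not a benchmark):

```python
# Compare reading a file front to back (consecutive blocks) against
# seeking to random offsets before every read. On spinning disks the
# difference is dramatic; on SSDs and warm caches it shrinks a lot.
import os
import random
import tempfile
import time

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MB of data
    path = f.name

CHUNK = 4096
with open(path, "rb") as f:
    t0 = time.perf_counter()
    sequential = 0
    while chunk := f.read(CHUNK):          # consecutive reads
        sequential += len(chunk)
    t_seq = time.perf_counter() - t0

    offsets = [random.randrange(0, sequential - CHUNK) for _ in range(1024)]
    t0 = time.perf_counter()
    for off in offsets:                    # a seek before every read
        f.seek(off)
        f.read(CHUNK)
    t_rand = time.perf_counter() - t0

os.remove(path)
print(f"sequential: {t_seq:.4f}s, random: {t_rand:.4f}s")
```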
How old were you when you first started programming?
Depending on how you define programming, I remember playing with Logo and that annoying turtle at 8 or 9, using a machine that was old for the time, and using floppies that were about 30 cm square (never seen them before or since).
How did you get started in programming?
I was bored, and programming was a way to do stuff with the computer that didn't require being online. At the time, being online cost a fortune, so I really had to limit myself; programming helped. Thinking back, I burned a lot of hours trying to figure it all out.
What was your first language?
Probably BASIC or Pascal, if you want to discount Logo. I remember being very frustrated with programming, because I hit a ceiling in my understanding of programming, and I wasn't able to get past it. Throughout high school, I was simply unable to understand dynamic memory allocation. After high school I took a course in C & C++, and then it was: "Oh, of course, it makes a lot of sense".
What was the first real program you wrote?
Hm, good question. The first that I would define as an Application, rather than just a bunch of code, was an online forum, then still called a BBS (although it ran over HTTP). I had a lot of fun building that, but thinking back, it was scary.
What languages have you used since you started programming?
Another complex question. What is used?
Written applications in: C#, obviously; VB 3, 4 and 6; C and C++; PHP; Perl.
By that standard, I never used VB.Net, but I wouldn't say that I don't know it.
What was your first professional programming gig?
Writing a website for a printing company. The pay was lousy, but the job was a dream. I was getting paid to play around with the computer. Heaven!
If you knew then what you know now, would you have started programming?
Yes, no question about that.
If there is one thing you learned along the way that you would tell new developers, what would it be?
If you didn't get it to fail, you haven't done anything. The only way to learn is to fail, and after it has failed, fix it.
Oh, and read the bloody error message.
What's the most fun you've ever had programming?
Writing Rhino Mocks, I would say. That was a purely intellectual exercise at the time, but it was a lot of fun coming up with the design and making it work. I remember capturing two of my soldiers and having them do a design review. Neither of them was a programmer, or a technical person by any means, but it was very useful. I was teased about that for years :-)
Casey had this to say:
I have actually seen organisations where (in one case actually explicitly expressed, and in many where it wasn't spoken out loud) software delivered roughly meeting the requirements on how the UI worked was considered delivered. The work to make it work (usually way more work than the initial delivery) was considered 'bug fixing' and therefore was billable additionally by the IT department or outsourcer.
Which reminded me of a joke about consultants, no relation to anyone I know, etc...
One day the manager calls the consultant to talk about the time sheet report...
Manager: You charged us on Wednesday for 19 hours, but you were here for only about 9 hours on Wednesday.
Consultant: Well, of course. Look, it is very detailed. I was here from 9:00 to 18:00, right?
Consultant: And because we left without a good solution, I kept thinking about it in the car, and when I walked the dog. You see, it is the entries for 18:00 - 19:30 and 20:00 - 20:45. From 19:30 - 20:00 I had dinner, I didn't charge you for that.
Manager: Nice of you. And the other 8 hours? 22:00 - 06:00 ?
Consultant: Well, when I walked the dog, I finally had a vision; everything came together in a moment of brilliance, and I could see the solution in my head. All I had to do was connect some little pieces and it would work.
Manager: Oh, so you did an all nighter?
Consultant: Ha? Of course not. I went home thinking about the idea, and then I went to bed and slept on it for 8 hours.
The above is not my modus operandi, nor am I willing to work with those who do. This is relevant because I am not going to consider practices built in those kinds of shops as important to most of the discussion about software development.
All that aside, how the hell do you get the client to agree to pay for bug fixes? All my contracts include a 6-month guarantee for bug fixing, and most of the time they also include SLAs that say "drop whatever and get there", which is annoying as hell when this happens*. This means that I can't bill someone for bug fixes (change requests are another matter, but those are for another time**), which is a great incentive to not have bugs.
* One time I was called, and after being on the phone for about an hour, came to the client, sat for 5 minutes, sent a death threat to the DBA, and left. No space on the DB hard disk, argh! They backed it up to the same HD and never cleaned it up.
** "Oh, you wanted it to also work? That wasn't in the original spec..." doesn't really fly in the real world
While there is value in the item on the right, I value the item on the left more.
This is in response to a comment by Jdn. I started to write a comment in reply, and then I reconsidered; this is much more important. A bit of background: Karthik has commented that "Unfortunately too often many software managers fall into the trap of thinking that developers are "plug and play" in a project and assume they can be added/removed as needed." and proceeded with some discussion on why this is and how it can be avoided.
I responded to that by saying that I wouldn't really wish to work with or for such a place, to be precise, here is what I said:
I would assert that any place that treats their employee in such a fashion is not a place that I would like to work for or with.
When I was in the army, the _ultimate_ place for plug & play mentality, there was a significant emphasis on making soldiers happy, and a true understanding of what it was to have a good soldier serving with you. Those are rare, and people fight over them.
To suggest that you can replace one person with another, even given that they have the same training, is ludicrous.
From personal experience, when I was the Executive Officer of the prison, the prison Commander shamelessly stole my best man while I was away at a course, causing quite a problem for me (unfortunately not something that you can just plug & play). That hurt, and it took about six months to get someone to do the job right, and even then, the guy wasn't on the same level. (And yes, this had nothing to do with computers, programming, or the like.)
Now, to Jdn's comment:
In a perverse way, I can see, from the perspective of a business, why having good/great developers, who bring in advanced programming techniques, can be a business risk.
[...snip...] you have to view all employees as being replaceable, because the good/great ones will always have better opportunities (even if they are not actively looking), and turnover for whatever reason is the norm not the exception.
Suppose you are a business with an established software 'inventory', and suppose it isn't the greatest in the world. But it gets the job done, more or less. Suppose an Ayende-level developer comes in and wants to change things. We already know he is a risk because he says things like:
"not a place that I would like to work for or with."
If you view me as replaceable, I will certainly have an incentive to move somewhere where I wouldn't be just another code monkey. Bad code bothers me; I try to fix that, but that is rarely a reason to change a workplace. I like challenges. And there are few things more interesting than a colleague's face after a masterfully done shift+delete combination.
What I meant with that is that I wouldn't want to work for a place that thought of me and my co-workers as cogs in a machine, to be purchased by the dozen and treated as expendable.
You know what the most effective way to get good people is? Treating them well, appreciating their work, and making them happy. If people like what they are doing, and they like where they are doing it, there would need to be a serious incentive for them to move away. A good manager will ensure that they are getting good people, and they will ensure that they keep them. That is their job.
Mediocre code that can be maintained by a wider pool of developers is in a certain respect more valuable to a business than having great code that can only be maintained by a significantly smaller subset of developers.
At a greater cost over the lifetime of the project. If you want to speak in numbers the MBAs will understand: you are going to have a far higher TCO because you refuse to make the initial investment.
To quote Mark Miller, you can get more done, faster, if you have a good initial architecture and an overall better approach to software.
Jdn concludes with a good approach:
I'm offering services for clients. I can't disrupt their business because I don't think their code is pretty enough.
What I can do better, going forward, is learn to make the incremental changes that gets them on their way to prettier code. My attitude is *not* "well, I can't do anything so I won't even try."
But at the end of the day, I have to do what is best for the *client*. If that means typed datasets (picking on them, but include anything you personally cringe over), then I can partial class and override to make them better, but typed datasets it will be.
I would probably be more radical about the way that I would go about it, but the general approach is very similar, especially when you have an existing code base or architecture in place.
My feeling about designers, etc. is that they are most certainly not inherently bad; they can be extremely valuable to get me someplace quickly. I do think that a lot of the designers, etc. in use today are bad, because they produce unmaintainable code. Hell, the WCF proxy generator is a good example of one: it doesn't recognize the common binding settings, always producing a custom binding. It is much easier from the developer's point of view, much harder from the user's point of view.
My objection is to dumbed-down tools, not to the existence of the tools. I don't think that it is a good business decision to make use of those for anything but a demo, and I believe that I can, with the use of a good framework and maintainable practices, get faster velocity than anyone using the wizards for most kinds of applications. This includes the forms-over-data approach, by the way. <% EditForm(Table: Customers) %> is faster than dragging a table to the page.
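To make the forms-over-data point concrete, here is a toy sketch of what such an `EditForm` helper might expand to. The function and schema here are invented purely for illustration; the `<% EditForm(Table: Customers) %>` quoted above is MonoRail-style pseudo-markup, not this code:

```python
# A toy "forms over data" generator: produce an HTML edit form straight
# from a table name and its column list, instead of dragging controls
# onto a page one by one.
def edit_form(table: str, columns: list[str]) -> str:
    fields = "\n".join(
        f'  <label>{c}: <input name="{c}" /></label>' for c in columns
    )
    return (
        f'<form action="/{table.lower()}/save" method="post">\n'
        f"{fields}\n"
        '  <input type="submit" />\n'
        "</form>"
    )

html = edit_form("Customers", ["Name", "Email"])
print(html)
```

The point is not this particular HTML; it is that a one-line call covers what would otherwise be a page's worth of drag-and-drop, and it stays consistent across every form in the application.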
A *large* number of applications have a clear set of requirements, and they won't be extended in the future. You need to handle the requirements that the business users require right now.
If an application takes less than a man-month to write, you are going to need to maintain it, simply because it is not sustainable to keep adding features to such an application without changing things. Doing it in a way that helps maintenance is valuable, even if the application is not going to be significantly extended later on. Fixing bugs, working with the code, etc. are all much easier when you write with maintainability in mind. And it is not as if maintainability is some painful tax that you have suddenly acquired; it is simply what happens when you write good code.
And it is *unarguable* that it is easier to use, say, the ObjectDataSource, to accomplish this than to do MVP/MPC. There is *no* argument against this. None.
Ha? I would certainly argue that. Just try doing anything remotely interesting with the ODS and you will see my point. The ODS is just a way to call a method to get a data source, and it has such wonderful methods that pass me an IDictionary of parameters and expect me to do something with it. That may be good for tabular data, but I don't have that.
It is unarguably easier to use a wizard or designer to design the vast majority of applications that are used by businesses around the world.
Again, I would certainly argue that. Because you ignore the part that happens after you generate the grid or the form, where you now need to do all the tweaks that the client wants (debt rows should be yellow, big debt rows should be red, late returns are marked with an icon, give me row-level security and column-level filtering, etc.).
The code that is produced by using designers, wizards, etc. is more easily maintained by people who aren't alpha-geeks.
Here is something that I can't really understand. If alpha geeks can't maintain this code, why would others be able to do so? Unmaintainable is just that. And I have worked on creative solutions to a lot of technical problems that stemmed from the fact that I wanted to do it the RAD way, and have come to regret it.
Jeff Atwood is talking about why background compilation is part of a culture of code monkeys and that we should accept it and get on with the program:
One of the problems with the army of monkeys approach that everyone seems to ignore is that treating someone like a monkey will get you monkey-like responses. We have a problem of escalating complexity in software, and trying to solve it by segregating the problems to the "Smart Dudes" and the "Monkeys" is not really helping. We are not writing Hello World applications any more.
I once had the chance to try to port a monkey's code from one language to another, and I couldn't make sense of it. You can imagine at what stage I had to throw up my hands and seek that monkey out to have it explain to me what the hell was going on in the code. It was 2-month-old code, and the monkey couldn't do it. The magic numbers in the code were what the monkey was told to write; the logic constructs in the code were what the monkey was told to write. The bugs were the monkey's own fault, I assume, but it may very well be the case that the monkey was instructed to put them in as well.
Trying to get a good product out of an army of monkeys requires constant policing, lest they do something stupid. Software is such a complex beast that having a monkey anywhere in the process puts the project at large at serious risk. Some of the stuff that I have seen:
- Using a public static bool g_IsUserAuthenticated;
- Subscribing each page to a global & static event handler.
- String concatenation for query building (using the safeForSQL() method, of course)
- Web applications that only work for a single user at a time
- For more references, check the Daily WTF
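The query-building item in that list deserves a concrete illustration, since it is the classic SQL injection hole. A small sketch with sqlite3 (the table and the hostile input are invented for the example):

```python
# Concatenated vs. parameterized queries: the attacker-controlled string
# becomes part of the SQL in the first case, and stays plain data in the
# second.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "x' OR '1'='1"

# Concatenation: the attacker's quotes rewrite the WHERE clause.
rows_concat = conn.execute(
    "SELECT name FROM users WHERE name = '" + hostile + "'"
).fetchall()

# Parameterized: the driver passes the value out of band; no injection.
rows_param = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()

print(rows_concat)  # [('alice',)] - every row leaks
print(rows_param)   # [] - no user is actually named that
```

No `safeForSQL()` escaping helper ever ends this arms race; placeholders do, because the value never touches the SQL text at all.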
Each of those shares a single trait: a single stupid thing that had a drastic effect on the entire application.
In the case of the static event handler, it took about two days to see the effects properly, at which point the application crashed with OOM errors. That was fun to find out.
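The mechanics of that leak are worth spelling out: a bound handler keeps its subscriber alive for as long as the static event exists. Here is a Python approximation of the pattern (the class names are invented; the original was .NET pages subscribing to a static event):

```python
# Every "page" subscribes a bound method to a static (module-level) event,
# and nothing ever unsubscribes, so pages can never be garbage collected.
import gc

class StaticEvent:
    handlers = []  # shared across the whole application

    @classmethod
    def subscribe(cls, handler):
        cls.handlers.append(handler)

class Page:
    def __init__(self):
        # A bound method holds a strong reference back to self.
        StaticEvent.subscribe(self.on_event)

    def on_event(self):
        pass

for _ in range(1000):
    Page()  # each page goes "out of scope"... or so it seems

gc.collect()
alive = sum(1 for obj in gc.get_objects() if isinstance(obj, Page))
print(alive)  # 1000 - every page is still reachable via the static event
```

Under per-request web traffic, each request adds another undisposable page, which is exactly how you arrive at OOM two days later.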
Armies of monkeys simply don't scale. A monkey can't handle complexity well, so you end up dumbing down the environment to the level of the monkey; therefore, you are reducing your ability to make any sort of change, and maintainability is a nightmare.
So, Jeff, I do agree with you that there is this cult of monkeys; I do not agree that we should accept it.
Jeremy Miller is bringing up a pain point of mine here. I see quite a lot of people who treat developers as data entry guys for the Architect's perfect design. It gets to the point where people argue with me (vehemently) that this or that feature should absolutely be removed, because it is too complex for Darl, their developer stereotype.
Darl is a nice guy; he got into the business a few months to a year ago, although I have seen Darls with quite a few years of experience under their belts. He may have a CS degree, but that doesn't mean he can pass the FizzBuzz test. Darl's best friend is the designer, and he is completely lost without it. He may be completely lost with the designer, too. Design and architecture are mystic concepts to Darl; they are handed down from above, and are to be followed religiously. Darl doesn't understand the business problems he is working on, nor can he understand the technology he uses. He was told: "Call this method with this parameter, and don't forget to use Try-Catch".
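For reference, the FizzBuzz test mentioned above asks for nothing more than this:

```python
# FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz",
# everything else -> the number itself.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print([fizzbuzz(i) for i in range(1, 16)])
```

That a working programmer can fail this is the whole point of the test.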
Darl should aspire to be a Mort, but he doesn't even know that such a creature exists. The last time Darl opened a technical book or article was while cramming for an exam. He gets all the required information from the team lead or the architect, in the five minutes they have walking back from lunch.
It doesn't take much to move away from the Darl archetype. But it seems to me that this is something that a lot of businesses do not want, or are not willing, to change. On the contrary, they would like to get more of them, and then they get tools that have a "No Code Required" stamp on them, and expect to get results.
I have a junior developer working with me at the moment; my biggest pain point with him? He isn't lazy enough. It is something that I am working on right now, making sure that he will understand that Lazy is Good(TM).
Yesterday I spent two hours pairing with another developer, going over everything needed to set up the current project. This meant that I went over the build process target by target, explaining its use and showing how to deploy; we created a new project and set up everything from the database to the configuration file, so we could get the [ActiveRecord] classes built; and we went over the various database inheritance implementation scenarios and their trade-offs.
The easiest way to work with good developers is to invest in them and help them grow.
As you can probably guess from the title, I don't agree. Rocky makes a good point, but I simply do not agree with his prediction.
I have not a clue about how the SQL Server Tabular Data Stream works, nor do I have any interest in it. That doesn't mean that I need to find a SQL guru to do my databases. Or understand what goes on the bus when I am drawing an image using GDI+. The whole point of abstracting away the underlying layers is to let me focus on doing what I want without getting distracted by the implementation details.
I am one of those that likes to have a good understanding of what is going on under the hood, mainly because I am also one of those that keeps running into problems because of this stuff. Nevertheless, quite a bit of it Just Works. And unlike the medical field, which is what Rocky compares developers to (at least it is not construction again :-)), we can move into new areas relatively safely.
There is a place for specialists, certainly. If my database is running slow, and I can't figure out why, I'll call up a SQL guru to point out where I am being stupid. But that is not something that I would need on a regular basis. I expect developers to know a lot, about a wide variety of subjects, but I don't expect them to be experts in all those fields. They need to have a good understanding of what they are doing in any field they are going to spend a significant amount of time on, and they should definitely have at least one or two areas of expertise where they excel.
I expect to see a lot more work going into building non-leaky abstractions in the future, and I think that we are getting better and better at it. Furthermore, I believe we will see a lot more emphasis on Not Surprising The Developer. I fully expect to be able to get a new framework, read the overall idea, and be productive in a matter of a day or two. If I am not, then the fault is with the framework, period. This means good naming conventions, discoverability and googlability, among other important attributes.
In short, technology scales better than people, so I expect technology to fill the gaps. The alternative that Rocky suggests doesn't hold water, in my opinion. If I need to hire a whole bunch of consultants at $250/hour just to get a BuzzwordTechnology working for my forms-over-data scenario, I'll simply stick with what I have now. BuzzwordTechnology be damned!
Technology doesn't exist for the sake of technology alone; it exists to answer some sort of business need, and if it can't handle that, it won't succeed. Handling that, by definition, means that I can get my money's worth back.
But still start using it now. Another reply to Adi, this time to a new post in which he clarifies what he meant before. The main idea is that Microsoft's products get a big mindshare regardless of their relative qualities. I do not doubt that this is true; a lot of people go for Microsoft because it is Microsoft. Expecting that this will pay off in the future is still the wrong thing to do.
I am one of those that would move to a new technology just because it gives me a tiny bit more, if it preserves everything that I can do with my current technologies. I am now using MSBuild in favor of NAnt, because I can get the build script and the VS project to match (and yes, I know I can call MSBuild from NAnt). I moved to NUnit from MbUnit and back again, for much the same reasons.
But if it doesn't improve, why bother? Adi brings up a couple of examples:
I will refuse to use MS Test until:
- It has performance on par with NUnit/MbUnit - currently around 30% slower, clock time.
- It supports the most basic of test patterns, the Abstract Test Class.
- It can be run as part of the build without jumping through hoops.
There are people that would use MS Test instead of NUnit, I am sure, although you almost never hear about it. They make compromises because they are using Microsoft, compromises that I am not willing to make.
Linq is an extension to the language, not a technology. Assuming that Adi is talking about one of the ORM technologies from Microsoft, I do not think that I can argue that this is the case, only that I do not think that this is something that is done with planning and foresight. Just to point out, no one has answered my Linq challenge yet, and I have no idea if it is even possible.
I started doing C# when I was mostly building Windows applications. The only good story at the time was VB or C++. If you wanted to use Java you could, if you really liked laying out your UI in code and then running it to see what you got. Java was never very strong on the client. C# came into a market that was dying for a replacement, which is why it caught such a big audience in such a short time. If Sun had had its act together in 95-00, it could have made Java the pervasive technology everywhere.
I would really love to compete on a technology level with someone that is buying completely into this approach. I can do better than most of the market by using the best tools for the job. Limiting myself to Microsoft tools is limiting myself to the level that Microsoft believes most programmers should be at.
I once had to resort to runtime code generation in order to sort a grid view. It is complex, yes, but it meant that I had sorting working for all the grids in the application, for the cost of half a day, while the upstream team wrote custom code per grid, per page, because that is how they were told it should work.
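In .NET that would mean something like Reflection.Emit; here is a hedged Python analog of the same idea, generating the sort key from the column name at runtime so that one routine serves every grid in the application (the `Row` class and the column names are invented for illustration):

```python
# Runtime "code generation" for grid sorting: compile a tiny accessor
# function from the column name, then reuse one generic sort routine
# for every grid, instead of writing a comparer per grid per page.
class Row:
    def __init__(self, **fields):
        self.__dict__.update(fields)

def make_key(column: str):
    # Compile "lambda row: row.<column>" from the column name at runtime.
    return eval(compile(f"lambda row: row.{column}", "<generated>", "eval"))

def sort_grid(rows, column, descending=False):
    return sorted(rows, key=make_key(column), reverse=descending)

rows = [Row(name="b", debt=5), Row(name="a", debt=9)]
print([r.name for r in sort_grid(rows, "name")])                    # ['a', 'b']
print([r.debt for r in sort_grid(rows, "debt", descending=True)])   # [9, 5]
```

In production Python you would just use `operator.attrgetter(column)`; the `eval`/`compile` form is kept here only to mirror the runtime-code-generation trick being described.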
Going with MS-related technology blindly is not a good thing at all. It is not a strategy; it is herd thinking. It is the old argument: "No one was ever fired for choosing Microsoft".
Bill is asking how he can improve the work process of his team. The worst thing in his tale (aside from 1,500 lines of code in a set accessor!) is that there isn't somebody who is in charge of the team, at least not someone who is technical. Unless a person is in a position of authority (preferably a well-established one, rather than an ad-hoc one), making significant changes is difficult.
I get a lot of mileage from giving the client a link to the continuous integration machine. They routinely check to see what progress we have made, and the most common request I hear from them is to make the CI build more stable. (The answer to that is to give me a staging machine, but that is another story.)
Doing something like this can prove to the stakeholders that it is possible to deliver something that they desire; therefore, they will back the requirements when demands are made. In these situations, hiring the boss can be an interesting approach: go look for someone that you really admire, and then convince them to come work as the team leader / architect. Other ways include bringing in a coach, or sending the team to training.
All of the above assumes that there is some sort of backing from above. If this is not possible, then support from the team members is crucial. From Bill's post, it doesn't sound like the other members of the team are interested, but this may be because it wasn't presented in the right manner. It is very easy to give advice about people I have never met, but broadly:
- Show Mort how he can be more productive using smarter tools
- Pair with Jade to get stuff done.
- Challenge the Primadonna to a duel at sunrise.
More seriously, since Jade is apparently the most experienced guy in Bill's shop, a good way to start would be to sit with him and try to work on incremental improvements. "For the next month, we are putting in a continuous integration system; the month after that, we start working on our design debt, etc." Define a certain level of quality that you will not go beneath. When two developers are collaborating, it should be easier to guide Mort into the fold. To begin with, the Primadonna can be ignored, but once stuff starts to roll, and the Primadonna breaks it, take his code out. I have reverted changes that broke unit tests in the past, much to the angst of a developer who didn't consider what would happen when I came in to a broken build with no idea what had happened.
Failing all of that, starting to carry a 5Kg hammer and speaking softly always worked for me.
Bill also mentions:
Bill, if you are really serious about that, get another job. To start with, you will need to educate the rest of the team about what is happening. They need to be able to go to your code and understand what is going on. If you aren't talking at the same level, this is a problem. And you are the guy to fix it.
But they still want cutting edge results...
Guilty as charged on that suggestion. I argued that we can get better productivity and higher quality, at the cost of having to train developers that we pick off the street. Adi thinks that I am overlooking the cost here:
I would like to refer to this post for backup. There is a tremendous amount of effort already invested in the tools. This means that I wouldn't have to build it myself in order to get the functionality that I need. Hell, something as simple* as MonoRail's [DataBind] can drastically reduce the amount of code that you need to write and maintain.
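To give a flavor of what a binder saves you, here is a deliberately tiny Python imitation of the idea; the real [DataBind] handles prefixes, nesting, type conversion, and exclusion lists, none of which this sketch attempts (the `Customer` class and field names are invented):

```python
# A toy data binder: copy request parameters onto an object's attributes,
# restricted to an explicit allow-list so hostile extra parameters
# (e.g. "is_admin") are ignored rather than bound.
def data_bind(target, params: dict, allow: set):
    for key, value in params.items():
        if key in allow:
            setattr(target, key, value)
    return target

class Customer:
    name = ""
    email = ""

c = data_bind(
    Customer(),
    {"name": "alice", "email": "a@b.c", "is_admin": "1"},  # raw request params
    allow={"name", "email"},
)
print(c.name, c.email, hasattr(c, "is_admin"))  # alice a@b.c False
```

Multiply those few saved lines by every action on every controller and the "tremendous amount of effort already invested in the tools" argument writes itself.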
At the end, the job has to be done, and insisting on trying to do it via custom code is simply NIH. There are cases where I say that whatever exists out there is not going to work for this scenario, and I roll everything from scratch, but getting to the point where the official policy is "either Microsoft or hand-written stuff, nothing else" - which was an actual statement a client made to me - is a long shot.
About training, I do not think that it is too much to invest three days to a week in giving a new guy a chance to learn all the stuff that you are using. I had to bring three new developers into my projects; none of them had any previous encounters with any of the technologies that we are using. All became productive in a very short time; one of them is extremely annoying in managing to find the weirdest corner cases in NHibernate. They are all good developers, but I never even thought to ask them about what they knew about any of the stuff that we were using.
As an aside, I keep talking about the first steps of interviews, because I stop so many interviews there, but one of the things that I do in an interview is find out what the candidate doesn't know, and have them write something trivial with that technology. For instance, one candidate didn't know GDI, so I asked him to draw a line on the screen.
One of the worst candidates that I interviewed was someone that actually had NHibernate experience; he was one of those that could not reverse a string...
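For the record, reversing a string is a one-liner in most languages, which is exactly why it works as a filter:

```python
# The idiomatic one-liner:
def reverse(s: str) -> str:
    return s[::-1]

# Or, spelled out the way an interviewer might want to see the loop:
def reverse_loop(s: str) -> str:
    out = []
    for ch in s:
        out.insert(0, ch)  # O(n^2) as written; an index walk would be better
    return "".join(out)

print(reverse("NHibernate"))  # etanrebiHN
```

The question isn't testing cleverness; it is testing whether the candidate can turn a trivially stated problem into working code at all.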
* Simple is defined in terms of usage, not implementation; the data binder is certainly not simple.
This is a reply to Eli's IoC and Average Programmers.
Just to clarify:
- Coding to Mort's level will get you bad code, period. I deal with big applications, which means that I have zero use for demo-ware features. Trying to force an application to use them "because the programmer can use the designer to do all the work" is stupid. It creates more work, it adds more complexity, and it results in hard-to-maintain code.
- Not investing in developers is stupid, period. Right now We! are hiring. I get to interview a lot of people, and the bar for getting hired is not knowledge. I can teach knowledge, and I can mentor beginners. It is fully expected that you will have a learning curve. Trying to avoid that by mandating stupid code means that you will not be able to keep good people, and those that you keep will not like what they are doing...
That said, there is such a thing as Too Much Magic. But it isn't at Mort's level.
Update: this looks relevant - Technical Debt
I got a couple of comments on my previous post about code quality and developer quality, where clients demand lower quality code. Basically, "that is the way it is".
I don't argue that this is happening; I had to butt heads against this approach before, and I can sort of understand the arguments from the business side. The problem is that the approach is flawed.
I get to see quite a bit of average code (by my standards, average doesn't mean bad). It is usually straightforward code; sometimes it does more than it needs to, but in general, it is not a problem to understand what is going on. The problem with average code is that it is usually much harder to change.
It is hard to change because the code needs to do a lot more than it would have done had it been using a smarter approach. I find that it is much easier to train developers to use better tools and approaches than to lower the level of the code to the level of the programmers. Most developers would like to learn new stuff, not work on the same rusty codebase.
The end result, from the business perspective, is a higher quality product that was delivered faster, cost less, and whose maintenance costs are significantly lower. Of course, this necessitates investing in the developers, and that comes out of the IT budget, while the cost of projects comes from someone else's budget.
- Is it possible to be passionate without being a zealot?
Hm, I certainly hope so. I am passionate about many subjects, but I think that I am still able to listen to the other side and have a reasonable discussion.
- If someone came up with something better than the .net framework, would you switch?
This is a really tough question. It would have to be something with an order of magnitude improvement over .Net, something like the switch from C++ to C#, in order to make me consider it for most of my activities. Right now, I don't see it coming.
Beyond that, there is a big issue with abandoning knowledge. I know a lot about .Net, from how to do dynamic code generation, to why you are allowed to do double-checked locking, to what is happening when you specify RegexOptions.Compiled. I can read a stack trace and usually pinpoint the problem with ease. I wouldn't be able to do that in another technology for a long time, and that is an important part of what makes me an effective developer: I know my environment. I'm not talking about just the API; I am comfortable editing msbuild / nant files, I have extensive knowledge of keyboard shortcuts, I know how Fusion loads an assembly, and I know how I can interfere with that. I can futz around with the internals of ASP.Net and AppDomain load issues with confidence.
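As a side note on the double-checked locking remark: the pattern is only safe when the runtime's memory model guarantees it, which is exactly the kind of platform-specific knowledge the paragraph above is about. Here is a minimal sketch of the pattern in Java, where safety depends on the field being declared volatile (the `Config` class is hypothetical, purely for illustration; the same shape applies in .Net):

```java
// Double-checked locking: avoid taking the lock on the common
// (already-initialized) path, while still initializing exactly once.
final class Config {
    // volatile is what makes the unlocked first read safe in Java.
    private static volatile Config instance;

    private Config() {}

    static Config getInstance() {
        Config local = instance;
        if (local == null) {               // first check, no lock held
            synchronized (Config.class) {
                local = instance;
                if (local == null) {       // second check, under the lock
                    instance = local = new Config();
                }
            }
        }
        return local;
    }
}
```

The whole point of knowing your environment is knowing *why* this is legal (and since when): get that wrong and you have a lazy-initialization bug that only shows up under load on certain hardware.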
Getting anywhere near this level of confidence is not something that is just going to happen.
There is a reason why I don't label myself as a Java / C++ guy. I know both languages reasonably well, and I have a lot of experience with both.
- How much of your identity is bound to being a .net programmer?
Moo. The question cannot be answered within the given parameters. (I actually have an exception with that message.)
My identity has nothing to do with being a .net programmer. I am a developer who happens to be working on .Net. If I had made a different decision several years ago, you would probably have seen quite a lot of posts here about log4j and Hibernate, praising IntelliJ and writing Groovy DSLs for Spring configurations :-)
This is a great list. Rob manages to score with each and every point. (Check the rest of the posts as well, it is interesting, although I strongly disagree with some of the stuff that he is saying).
What really interested me was point 7, "Building Something That Matters". Rob gives several examples, all of them different. I have a system in production that moves money around; it doesn't really matter to me. Being able to use NQG's queries does.
What excites you as a developer?