Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 1 min | 87 words

This is a story from a recent planning meeting that we had. One of the developers was explaining how he was implementing a certain feature, and he went over a potential problem that we would have. That problem is only a potential one, because it would only show up if we extended the project in a certain direction, which isn't currently planned.

My response for that was: “Let the users submit a bug report for that if it happens”.

It is a cruder form of YAGNI.

time to read 2 min | 277 words

I recently reviewed a code base. Nothing unusual about that, I do this all the time. But this is the first time that I actually had a migraine from reading a codebase.

I talked to a couple of team members about that, and the name of a previous developer came up repeatedly. That developer is no longer on the team, however, and has no input on the way it currently works. Moreover, the code base is pretty small, and the team had had sole ownership of that for months by this time.

Here is a small piece of advice, learned from my army experience: if you own something, you are also responsible for it. Now, there are some special cases where it takes a while to turn the freighter, but after a while, that excuse is no longer valid.

It is your code, why is it so ugly that I have a migraine?!

And no, saying that X did it this way is not a valid option if it has been more than a week since you got the code base. Take a look at the image to the right; what do you think is the first priority of a new captain on that ship?

And if it wasn't your priority, then it is going to be your fault that it is still in this state. You don't get to blame the other guy. You own the code, you are responsible for it, period.

time to read 2 min | 263 words

I explicitly don't want to go over the exact scenario that this is relating to. I want to talk about a general sentiment that I have gotten from several people at Microsoft a few times, which I find annoying.

It can be summed up pretty easily by this quote:

You all know that we work on the Agile process here, right? We get something out (perhaps a little early) and then improve it. Codeplex is for open source and continuous improvement with community feedback.

The context is a response to a critique about an unacceptable level of quality in something Microsoft put out. Again, I do not want to discuss the specifics. I want to discuss the sentiment. I got answers in a similar spirit from several Microsoft people recently, and I find it annoying in the extreme.

Agile doesn’t mean that you start with crap, call it organic fertilizer and try to tell me that it will improve in the future. Quality is supposed to be built in, it is the scope that you grow incrementally, not the product quality.

I actually find the open source comment to be even more annoying. Open source does not mean that you get someone else to do your dirty work. And if you take something and call it open source, it doesn’t mean that you are not going to get called on the carpet for the quality of whatever you released.

Calling it open source does not mean that the community is accountable for its quality.

time to read 3 min | 510 words

At DevTeach, we had a panel, which Kathleen Dollard has covered in depth, in which we talked about what the bare minimum aspects of an Agile project would be. The first thing that was brought up was testing.

That is quite predictable, and I objected to it. One of the things that bothers me about much of the discussion in the agile space is the hard focus on tests, often to the exclusion of much else.

My most successful (commercial) project was done without tests, and it is a huge success (ongoing now, by the way). A previous project had tests, quite a few of them, and I consider it a big failure (over time, over budget, overly complex code base, a lot of the logic in the UI, etc).

Update: I wanted to make clear the distinction between Agile and TDD. I consider the project without tests to be a fully agile project. The project with tests was a heavy waterfall project.

I think that I can safely say that it would be hard to accuse me of not getting testing, or not getting TDD. Not that I don't expect the accusations anyway, but I am going to try to preempt them.

I want to make it explicit and understood. What I am railing against isn't testing. I think that tests are very valuable, but I think that some people are focusing on them too much. For myself, I have a single metric for creating successful software:

Ship it, often.

There are many reasons for that, from the political and monetary ones to feedback, scope and tracer bullets.

Tests are a great tool to aid you in shipping often, but they aren't the only one. Composite architectures and JFHCI are two other ways that allow us to create a stable software platform that we can develop on.

Tests are a tool, and their use should be evaluated against the usual metrics before applying them in a project. There are many reasons not to use tests, but most of them boil down to: "They add friction to the process".

Testing UI, for example, is a common place where it is just not worth the time and effort. Another scenario is a team that is not familiar with testing; introducing testing at that point would hinder my topmost priority, shipping.

Code quality, flexibility and the ability to change are other things that are often attributed to tests. They certainly help, but they are by no means the only (or even the best) way to approach that.

And finally, just to give an example, Rhino Igloo was developed without tests, using the F5 testing style. I applied the same metric: trying to test what it does would have been painful, therefore, I made the decision not to write tests for it. I don't really like that code base, but that is because it is strongly tied to the Web Forms platform, not because it is hard to change or extend; it isn't.

Okay, I am done, feel free to flame.

time to read 1 min | 172 words

Regardless of the actual project, I usually ask the following questions.

  • Scalability Requirements - How many users are we expecting?
    Expressed as users per day. As part of that, however, we also need to consider spikes in traffic, and whether we need to handle them.
  • Distribution Requirements - How many data centers are we going to run on? How many machines?
    Numbers I want to hear: 1, a few, lots.
  • Security Requirements
    • Authorization Requirements - Role based? Data driven? Dynamic? Rule based?
    • Sensitive data - Do we store any? If so, how secure do we need to make it?
  • Physical Deployment Layout - DMZ? Inside firewall? Different components in different zones?
  • Regulatory Requirements - Are we required to meet some regulation? If so, what are the requirements in the regulation?
  • Monitoring - How often? To whom?

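The "users per day" number in the first bullet only becomes actionable once it is translated into request rates. A back-of-envelope sketch (in Python for brevity; the per-user request count and the peak factor are illustrative assumptions you would have to ask about, not numbers from any real project):

```python
def capacity_estimate(users_per_day, requests_per_user=10, peak_factor=10):
    """Translate 'users per day' into average and peak requests/second.

    requests_per_user and peak_factor are assumptions that need to be
    confirmed with the customer, just like the questions above.
    """
    requests_per_day = users_per_day * requests_per_user
    avg_rps = requests_per_day / 86_400  # seconds in a day
    return avg_rps, avg_rps * peak_factor

# 864,000 users/day at 10 requests each is only ~100 req/sec on average,
# but a 10x spike means you are really designing for ~1,000 req/sec.
```

The point of the sketch is the same as the point of the questions: the average is rarely the interesting number; the spike is.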
It is not so much the answers that I am looking at; it is the discussion that posing these questions generates that is the really interesting part.

time to read 4 min | 680 words

The concept of reducing the cost of change is one of the core values of agile practitioners. In essence, it boils down to being able to make changes when we want them. Practices such as TDD and iterations enable us to actually make changes without attaching a high price tag to them. Some tools make change much easier than others. Using NHibernate, I can evolve my data model (and notice that I am explicitly talking about data models, not domain models; this is using NHibernate just for DTO mapping) much more rapidly than if I am using stored procedures or code gen. If I have a real domain model, then that is even easier, in most cases.

But it is neither the tools nor the practices that actually enable change. They are not even significantly responsible for reducing the cost of change. Beyond anything else, it is the will of the team to accept the pain of making the change and actually do it.

I recently had a meeting in which I presented a demo of my current project. During the meeting, we hashed together what the application is doing, and in what way it is supposed to work. By that afternoon, I was able to get it to work using the new model. It involved breaking the entire application into pieces and restructuring it from the fragments. That wasn't pleasant, and it took me half a day of just trying, but it was done.

When I demoed it to the client that same afternoon, he was quite pleased. I am not sure that I managed to convey the actual reason that I was able to effect such a change in the application. It doesn't have anything to do with technology; it has to do with a mindset. I would have been able to do the same if the application was written using ASP Classic with stored procedures. Not as easily, maybe, but within roughly the same time frame.

That mindset, at least for me, starts from the first line of code. I treat each piece of the project as utterly disposable. Since I don't really care how each individual piece works, I am able to roughly sketch a fair amount of the application very rapidly, and then focus on each of the items in isolation, and replace it with a much better implementation. I think that I have stated before that I tend to rewrite most of my application core at least two or three times before I am happy with it.

When you have disposable pieces, it is no big deal if you mess up and need to start over, because the whole project is structured in a way that allows you to do so. Going back to my current project as an example, the algorithm used for the core part of the system is crap. I thought it up on a coffee break, and it is enough to demonstrate what the software is supposed to be doing. I don't really care, because the moment that I do need the real algorithm, I can drop it in (I would need to change the implementation of a single method).
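
The "drop in the real algorithm later" trick works because the crude version hides behind a single seam. A minimal sketch of the shape (in Python for brevity; the names and the word-overlap heuristic are invented for illustration, not from the actual project):

```python
class MatchScorer:
    """Placeholder core algorithm - good enough to demo the system."""

    def score(self, a, b):
        # Crude coffee-break heuristic: count the words the two
        # strings share. It only has to demonstrate the feature.
        return len(set(a.split()) & set(b.split()))


def find_best_match(query, candidates, scorer):
    # Callers depend only on scorer.score(); swapping in the real
    # algorithm later means replacing MatchScorer, nothing here.
    return max(candidates, key=lambda c: scorer.score(query, c))
```

Because every caller goes through `score()`, replacing the heuristic is a one-class change, which is exactly what makes the piece disposable.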

But it is not just preparing ahead. It is also just plain willingness to do the work. About a year or two ago we (the Castle team) wanted to document all the public API of Castle Active Record. If I recall correctly, I did it by asking the compiler to break the build if we didn't have an appropriate XML comment on all our public types and members. There were some 800 build errors, and the only way to fix them was to go and document all of them. It took several days, but then it was done.

I don't think it was pleasant by any stretch of the imagination, but by trying we were able to make it happen.

So, to conclude, the best way I know to reduce the cost of change is to actually accept change. After that, the reduction will happen on its own.

That Agile Thing

time to read 2 min | 367 words

Agile introduction is an interesting problem, one that I have learned to avoid. I don't feel comfortable standing up to a business owner and saying (paraphrased, of course), "if you do it my way, your life will be better". At least, not about development methodologies; I have no problem saying that about techniques, design or tools. The reason that I don't feel comfortable saying it is that there are too many issues surrounding agile introduction to talk confidently about the benefits.

I almost never mention agile to the customer at all. What I do, however, is insist on a regularly scheduled demo, usually every week or two. Oh, and I ask the customer what they want to see in the next demo. Given that there is a very short amount of time between demos, I can usually get a very concise set of features for the next demo.

Having demos every week or two really strengthens confidence in the project. So from the point of view of the customer, we have regular demos and they get to choose what they will see in the next demo; that is all.

From the point of view of the team, having to have a demo every week or two makes a lot of difference in the way we have to work. As a simple example, I can't demo a UML diagram, so we can't have a six month design phase. Having to accept new requirements for each demo means that we need to enable ourselves to make changes very rapidly, so we turn to practices such as TDD, Continuous Integration, etc. It also means that the design of the application tends to be much more lightweight than it would be otherwise.

In other words, everything flows from the single initial requirement, having a demo every week or two.

Once you have that, you have free rein (more or less) to implement agile practices, as they are needed, in order to get the demo up and running. You get customer involvement from the feature selection for the next demo, and you get customer buy-in from having the demos at frequent intervals, always showing additional value.

time to read 2 min | 295 words

I am sharing this story because it is funny, in a sad way, and because I don't post enough bad things about myself. I also want to make it clear that this story is completely and utterly my own fault, for not explaining in full to the other guy what we were doing.

A few days ago I was pairing with a new guy on some code that we had to write to generate Word documents. We ran into a piece of code that had fairly complex requirements (string parsing & XPath, argh!).

I thought that this would be a good way to get the point of unit testing across, since the other guy didn't get all the edge cases when I explained them (about 7, IIRC). So we sat down and wrote a test, and we made it pass, and we wrote another test, and made it pass, etc. It was fun, it made it very clear what we were trying to achieve, etc. This post is not about the value of unit tests.
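
The rhythm described above, one failing test per edge case, then make it pass, looks roughly like this (a Python sketch; `split_field_path` and its edge cases are invented stand-ins, not the actual Word/XPath code we wrote):

```python
import unittest


def split_field_path(text):
    """Invented stand-in for the parsing logic we test-drove:
    split a 'Customer.Address.City' style path into its segments."""
    if not text:
        return []
    return [part for part in text.split(".") if part]


class FieldPathTests(unittest.TestCase):
    # One test per edge case; each was written first, watched fail,
    # then the implementation was extended until it passed.

    def test_simple_path(self):
        self.assertEqual(split_field_path("Customer.Name"), ["Customer", "Name"])

    def test_empty_string_yields_no_segments(self):
        self.assertEqual(split_field_path(""), [])

    def test_trailing_dot_is_ignored(self):
        self.assertEqual(split_field_path("Customer."), ["Customer"])
```

Each test pins down one of the edge cases that didn't survive a verbal explanation, which was the whole point of the exercise.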

So we were done with the feature, and I moved on to the next guy, trying to understand how he could make javascript do that (I still don't know, but the result is very pretty). About two hours afterward, I updated my working copy with the latest source, and tried to find the test that we had written. It wasn't there.

Hm...

I asked him where the test was, and his answer was: "I deleted it, we were finished with that functionality, after all." *

* Nitpicker corner: I went and hit myself on the head for being stupid, he had no way of knowing, because I didn't explain nearly enough. I did afterward, and then I wrote the tests again.

time to read 6 min | 1098 words

Jeremy Miller is talking about agile adoption, and he raised the issue of fixed bids in agile projects. Fixed money, fixed time, fixed scope projects are the ones that are easiest for the bean counters to work with. You pay X$, you wait Y months, and you get Z features. If the other side fails to hold up their end of the bargain, you start reducing X$.

Easiest for the bean counters doesn't mean that it is easiest for the software project itself. The problem is that no RFP that I have seen comes close to detailing all that the customer wants, and the behind the scenes political struggles by other departments to get more & more onto the RFP (hey, it comes out of the ZYX department's budget, let us get our critical feature U in there) can produce direct contradictions in the underlying requirements.

Here are a couple recollections from some recent projects:

  • On one memorable occasion, I was working on a project for 6 months before the customer discovered that there was a critical failure in the spec and that we needed to make some heavy duty modifications to all the entities in the system (where heavy duty means that the code couldn't compile for days on end, while we basically restructured the core entity model for the new requirements).
  • In a more recent project, not only could no one explain a requirement, no one could even recall who requested the feature, or why.

To summarize, fixed time, scope & money projects aren't really working. A favorite question of agile speakers is "who has been on a project whose requirements have not changed?".

I was, but it was a two day one :-)

So, since change is a given, how can you approach such a project if you want to work in an agile fashion? As Udi points out, there is a tendency in the industry to bid at cost or at a loss, and generate revenue through change requests. Leaving aside the morality of this behavior, it also leads to another of the common questions of agile speakers: "how many of you delivered a system according to what the customer asked for, but not what he wanted?"

Since customers have already caught on that this is the operating method du jour, that has become problematic. The RFP is as broad as possible, and the spec is as narrow as possible; the spec is not something that is meant to provide a guide for the developers, but rather a tool that is meant to be used when arguing whether a particular request is within the project scope or not.

So far, this has nothing to do with Agile or not. The entire premise of the waterfall methodology is capturing requirements in concrete so they won't change, except that never works.

Now, I purposefully avoid this part of the business, so take this with a ton of salt. There are several options that can be used, of which the best is complete acceptance of agile by the customer, so they pay you per iteration. A more reasonable alternative is to work with a fixed money / time frame, which is far easier for the customer to arrange in a corporate environment, but be flexible on the delivered features. This tends to push unnecessary features to the end of the project, where they are already seen as unnecessary.

But, the question goes, how do you win such a project?

Well, you need to estimate the work from the RFP as you usually would, and submit a proposal based on that estimate. It is likely that you will have a higher proposal than the low-balling guys, and if the only thing on the table was money, that would be a problem. The key, as far as I see it, is to find not the bean counter that is signing the check, who usually couldn't care less about the project itself, but the real customer, the one that wants the actual work done.

If you can get there, you can explain why your proposal is higher (quality, no back stabbing), why you would end up with a better result (early deliverables, immediate feedback, etc.) and how it directly benefits the customer to go with this approach. Some people would rather not take a risk, which is sad, but understandable; those who want to get things done can usually get the point across.

Then you do things as usual, letting the customer dictate what will get implemented and when. At the end of the project, it is the customer that decided what would be there or not, so not only are they in a better position to understand what is going on, they are directly responsible for it. From my experience, they usually get "change requests" (features not on the RFP) for "free" (trading them for unnecessary features, basically, and within reason, of course) during this time, since they get to decide what we will work on.

The bad part is that sometimes political issues can force you to do stupid things (implement something you know no one will use), but hey, that is what the customer wants.

Assuming that at the end of a project someone finds a missing feature and demands it, the customer can explain why this feature was cut, and what the client got as a result. I find this much better than trying to explain to a client why authenticating against their mainframe is not a good idea, and authenticating against Active Directory is probably a better approach.

The counter argument is what happens if your customer has a party with change requests all the way to the end of the project, at which point they turn around and demand everything that they ignored so far. That is the point when you politely point this out to the responsible party, and let it go. Producing a bill for those change requests is also an effective measure to stop that kind of approach.

But frankly, most people wouldn't behave like that, and it becomes clear fairly early in the project what type of person you are dealing with. Then you plan accordingly. Frankly, I wouldn't want to work for a dishonest customer.

We used this approach in my last project, to great success. This also has to do with a really good customer (I really want more of those), but the basic principle is sound. And we are going to use it more in future projects.

time to read 3 min | 548 words

Jdn is making an excellent point in this post:

Okay, so, TDD-like design, ORM solution, using MVP.  Oh, and talk to the users, preferably before you begin coding.

One problem (well, it's really more than one).  I know for a fact that I am going to be handing this application off to other people.  I will not be maintaining it.  I know the people who I will be handing it off to, so I know their skill sets, I know generally how they like to code.

None of them have ever used ORM.

None of them do unit testing.  One knows what they are and for whatever reason hates them.  The others just don't know.

None of them have ever used MVP/MVC, and I doubt any but one has even heard of it.

All of them are intelligent, so could grasp all the concepts readily, and become proficient with them over time.  If they are given time by their bosses, or do the work overtime, or whatever.

There is a 'standard' architecture in place that they have worked with for quite some time.  I personally think it blows, and frankly, so do most of them, but it is familiar, and applications can be passed between developers as they use a common style.

There are several things going on in this situation. The two most important ones are that the currently used practice produces bad code, which is also (luckily) widely recognized as such, and that the people who work there are open minded and intelligent.

Before I get to the main point, I want to relate something about my current project. If you wish to maintain it, you need to have a good understanding of OR/M, IoC and MVC. Without those, you can't really do much with the application. That said, good use of IoC means that it is mostly transparent, and abusing the language gives you natural syntax like FindAll( Where.User.Name == "Ayende") for the (simple) OR/M, and MVC isn't hard to learn.
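
For the curious, the FindAll( Where.User.Name == "Ayende") trick rests on two things: property access that accumulates a path, and an equality operator overloaded to return a criterion object rather than a boolean. A rough sketch of the mechanics (in Python for brevity; the real implementation lives in the C#/Castle world and differs in detail, and all names here are illustrative):

```python
class Criterion:
    """A captured comparison, e.g. path 'User.Name' equals 'Ayende'."""

    def __init__(self, path, value):
        self.path = path
        self.value = value


class PropertyPath:
    """Each attribute access extends the path; '==' is hijacked to
    build a Criterion instead of comparing anything."""

    def __init__(self, path=""):
        self._path = path

    def __getattr__(self, name):
        # Where.User.Name -> PropertyPath with path "User.Name"
        return PropertyPath(f"{self._path}.{name}" if self._path else name)

    def __eq__(self, value):
        return Criterion(self._path, value)


Where = PropertyPath()


def find_all(criterion):
    # A real OR/M would translate the criterion into SQL/HQL; here we
    # just show what the overloaded operator hands it.
    return f"entities where {criterion.path} = {criterion.value!r}"
```

The payoff is that the query reads like the domain language while staying statically analyzable by the OR/M layer, which is the "abusing the language" part.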

Back to Jdn's post, let us consider his point for a moment. Building the application using TDD, IoC, OR/M, etc. would create a maintainable application, but it wouldn't be maintainable by someone who doesn't know all that. Building an application using proven bad practices ensures that anyone can hack at it, but also that it has a much higher cost to maintain and extend.

I am okay with that, because my view is that having the developers learn a better way to build software is much less costly than continuing to produce software that is hard to maintain. In simple terms, if you need to invest a week in your developers, you will get your investment back several times over when they produce better code, easier to maintain and extend and with fewer bugs.

Doing it the old way seems defeatist to me (although, in Jdn's case, he seems to be leaving his current employer, which is something that I am ignoring in this analysis). It is the old "we have always done it this way" approach. Sure, you can use a mule to plow a field; it works. But a tractor would do a much better job, even though it requires knowing how to drive first.
