The Tale of the Lazy Architect

time to read 6 min | 1171 words

So, there is an interesting debate in the comments of this post, and I thought that it might be a good time to talk again about the way I approach building and architecting systems.

Let me tell you a story about a system I worked on. It started as a Big Project for a Big Company, and pretty much snowballed, getting more and more features and requirements as time went by. I started out as the architect and team lead for this project, and I still consider it one of the best projects that I have worked on, the one I hold up to compare others against.

Not that it didn’t have its problems. For one, I wasn’t able to put my foot down hard enough, and we used SSIS on this project. After that project, however, I made a simple decision: I am never going to touch SSIS again. You can’t pay me enough to do it (well, you can pay me to do migrations away from SSIS).

Anyway, I am sidetracking. I was on the project for 9 months, until its 1.0 release. At that point, we were over-delivering on the spec, on schedule, and actually under budget. The codebase was around 130,000 lines of code and covered a huge amount of functionality. I was then moved to a horrible, nasty project that ended with me quitting after the first deliverable. The team lead was changed and a few new people were added; in the next 6 months, the code size doubled, velocity remained more or less fixed (and high), and the team released to production on schedule with very few issues.

I lost touch with the team for a while, but when I reconnected with them, they had switched the entire team again, this time bringing it fully in house. They were still working on it, and a code review that I did revealed no significant deterioration in the codebase. In fact, I was able to go in and look at pieces that were written years after I left the project, and follow the logic and behavior as if no time had passed and the team had never changed.

Oh, and just to make things even more interesting, there were no tests for the entire thing. Not a single one. I did mention that we did frequent releases and had a low number of bugs, right? Yes, I am painting a rosy picture, but I did say that I consider this the best project that I was on, now didn’t I?

The question arises: how did this happen? The thing responsible for this, more than anything else, was the overall architecture of the system.

Broadly, it looks like this:

[Image: a high-level diagram of the system’s architecture]

Except that this image is not to scale. Here is one that should give you a better idea of the scales involved:

[Image: the same diagram drawn to scale, with the application infrastructure as a tiny red piece at the bottom]

That tiny red piece there at the bottom? That is the application infrastructure. Usually, we are talking about very few classes. In NH Prof’s case, for example, there are fewer than five classes that compose the infrastructure for the main functionality (a listener, a bus, an event processor, and probably another one or two, if you care).
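To make “very few classes” concrete, here is a minimal sketch of a bus-and-listener pair, written in Java rather than the C# the product is actually built in. None of these names or details come from NH Prof itself; this is just the general shape:

```java
// A toy event bus: dispatches each published event to whichever
// listeners registered for that event's type. Illustrative only.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public final class Bus {
    private final Map<Class<?>, List<Consumer<Object>>> listeners = new HashMap<>();

    // Register a listener for a given event type.
    public <T> void subscribe(Class<T> eventType, Consumer<T> listener) {
        listeners.computeIfAbsent(eventType, k -> new ArrayList<>())
                 .add(e -> listener.accept(eventType.cast(e)));
    }

    // Dispatch an event to every listener registered for its exact type.
    public void publish(Object event) {
        for (Consumer<Object> sub : listeners.getOrDefault(event.getClass(), List.of())) {
            sub.accept(event);
        }
    }
}
```

The point is not this particular implementation, it is the size: a handful of classes like this can carry the whole application, because everything else plugs into them.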

The entire idea was based around something like this:

[Image: a diagram of the structure that the application was built around]

We provided a structure for the way the application was built. The infrastructure was aware of this structure, enforced it, and used it. The end result was that we had a lot of “boxes”, for lack of a better word, where we could drop functionality and it would just be picked up. For example, adding a new page with all-new functionality usually consisted of just a few things that had to be changed (there is a small sketch of the idea right after the list):

  • Create the physical page & markup
  • Create the controller for the page
  • Optional: Create associated page-specific markup
  • Optional: Create associated page-specific JSON web service
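Here is a hedged sketch of the “boxes” idea, again in Java. All the names (PageController, ControllerRegistry, OrdersController) are invented, and the original was a .NET web application that picked these pieces up by convention; the explicit registration here just keeps the sketch short:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// The "box": any page controller has the same shape.
interface PageController {
    String handle(Map<String, String> request);
}

// The infrastructure knows about the boxes and routes requests to them.
final class ControllerRegistry {
    private final Map<String, Supplier<PageController>> byPage = new HashMap<>();

    // Adding a page is one more registration; nothing else changes.
    void register(String pageName, Supplier<PageController> factory) {
        byPage.put(pageName, factory);
    }

    // Route a request to the controller for that page.
    String dispatch(String pageName, Map<String, String> request) {
        Supplier<PageController> factory = byPage.get(pageName);
        if (factory == null) {
            throw new IllegalArgumentException("No controller for page: " + pageName);
        }
        return factory.get().handle(request);
    }
}

// A new feature page: create the class, drop it in, done.
final class OrdersController implements PageController {
    @Override
    public String handle(Map<String, String> request) {
        return "orders for customer " + request.get("customerId");
    }
}
```

Because the infrastructure owns the routing, adding a page never means touching an existing one; you add the new controller and its page, and you are done.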

If new functionality was required in the application core itself, it was usually segregated into one of a few functional areas (external integration, business services, notifications, data), and we had checklists for those as well. It was all wired up in such a way that the steps to get something working were (again, a sketch follows the list):

  • Create new class
  • Start using the new class (no new allowed; expose it as a ctor parameter)
  • Done
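Here is a toy sketch of that workflow, assuming a reflection-based container in the spirit of the IoC containers common in .NET at the time; TaxCalculator and InvoiceService are made-up examples, not classes from the project:

```java
import java.lang.reflect.Constructor;

// A toy container: resolves a class by recursively instantiating
// its constructor parameters by type. Illustrative only.
final class Container {
    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        try {
            Constructor<?> ctor = type.getConstructors()[0];
            Class<?>[] paramTypes = ctor.getParameterTypes();
            Object[] args = new Object[paramTypes.length];
            for (int i = 0; i < paramTypes.length; i++) {
                args[i] = resolve(paramTypes[i]); // wire dependencies by type
            }
            return (T) ctor.newInstance(args);
        } catch (Exception e) {
            throw new RuntimeException("Cannot resolve " + type, e);
        }
    }
}

// Step 1: create the new class.
class TaxCalculator {
    public TaxCalculator() {}
    double taxFor(double amount) { return amount * 0.17; } // illustrative rate
}

// Step 2: start using it, as a ctor parameter, never via `new`.
class InvoiceService {
    private final TaxCalculator taxes;
    public InvoiceService(TaxCalculator taxes) { this.taxes = taxes; }
    double total(double amount) { return amount + taxes.taxFor(amount); }
}

// Step 3: done, the container wires it up:
// InvoiceService service = new Container().resolve(InvoiceService.class);
```

The “Done” step is the whole point: since nothing calls new InvoiceService(...) directly, no existing code changes when a class gains a new dependency; the container rewires everything.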

The end result was that within each feature box, if you weren’t aware of the underlying structure, it would look like a mess. There was no organization at the folder (or namespace) level, because that wasn’t how we thought about those things. What we had was a feature-based organization, and a feature usually spanned several layers of the application and dealt with radically different parts.

Note: Today, I would probably discard this notion of layering and go with a slightly different organization pattern for the code, but at the time, I had a strict divide between the different parts of the application.

Anyway, because we worked with it that way, we approached source code organization at the feature level. And a feature usually spanned all parts of the application and had to be dealt with at all layers. So take the notion of tracking down a feature and what composed it: we usually started from the outer shell (UI) and moved inward, by simply following the references. A key part of that was the ability to utilize R#’s capabilities for source code browsing.

I’ll talk a bit more about this in a future post, because it is not really important for this one.

What is important is that the system had two very distinct attributes:

  • You rarely had to change code. Most of the time, you added code.
  • There was a well-defined structure to the application’s features.

The end result was that we produced a lot of code, all to more or less the same pattern, and all of it isolated from the other code at the same level.

It worked, through several years of development, several personnel changes, and at least two complete team changes. I just checked, and the team has added new features since the last time I visited the site.