Flatten your architecture: Simplicity as a core value
Originally posted at 2/17/2011
In a recent codebase, I had to go through the following steps to understand how a piece of data in the database got to the screen:
- Visit Presenter needs to show the most recent visit
- It calls VisitationService
- It calls PatientsService
- It calls PatientDataProvider
- It calls Repository<Patient>
- It uses NHibernate
- It calls VisitDataProvider
- It calls Repository<Visit>
- It uses NHibernate
All of that just to grab some data. But you won't really grasp why this is bad until you realize that you need to change something in the way you load stuff from the database.
A common example (where I usually come in) is when you have a performance problem and need to optimize the way you access the database.
The problem with this type of architecture is that it looks good. You have good separation, there are usually tests for it, and it matches every rule in the SOLID rule book. Except that it is horrible to actually try to make changes in such a system. Oh, you can easily replace the way you handle patients, for example, because that has an interface and you can switch that.
But the problem that I usually run into in those projects is that the things that I want to change aren't along the axis of expected change, and the architecture is usually working directly against my ability to make a meaningful modification.
Guys, we aren't talking about rocket science here, we are talking about loading some crap from the database. And for the most part, the way I like to see it is:
- Visit Presenter needs to show the most recent visit
- It uses NHibernate
Basically, we want to make it so that reading from the database has as few frills as possible, because it is taking too much effort otherwise.
Writing is usually when we have to apply things like validation, business logic, rules and behaviors. Put that in a service and run with that, but for reads? Reads should be simple, and close to where they are needed, otherwise you are opening yourself to a world of trouble.
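To make that concrete, here is a minimal sketch of the flattened read path. The entity and property names are hypothetical, and it assumes an NHibernate ISession is handed to the presenter:

```csharp
using NHibernate;
using NHibernate.Criterion;

public class VisitPresenter
{
    private readonly ISession session;

    public VisitPresenter(ISession session)
    {
        this.session = session;
    }

    public Visit GetMostRecentVisit(int patientId)
    {
        // One query, right where it is needed -- no service,
        // no provider, no repository in between.
        return session.CreateCriteria<Visit>()
            .Add(Restrictions.Eq("Patient.Id", patientId))
            .AddOrder(Order.Desc("VisitDate"))
            .SetMaxResults(1)
            .UniqueResult<Visit>();
    }
}
```

The query lives next to the screen that needs it, so optimizing it later means touching exactly one method.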
Oh, I just realized that I am describing something quite similar to the CQRS model, although I think that I got to it from a different angle.
Comments
I'm not really convinced that shortening the call stack from 4 to 2 makes a great difference and turns a crappy app into a perfect engineering example. What if someone wanted to apply security to all read operations to control what records are shown to the user?
BTW have you seen a call stack in any java web application?
Rafal,
This isn't about the call stack, it is about actually being able to make any sort of meaningful changes
Maybe, but you should give a better example of what you can improve by using NH directly and bypassing some layers. Because it's rather obvious that modifications are easier if the code is less complicated. You are talking about some unknown application and some changes you've made, but show nothing meaningful, and your general opinion can be easily refuted by giving examples to the contrary.
I understand your point, but I find your solution being the other extreme. I agree with Rafal on the security example.
Ayende,
I agree with your postulation. That architecture is nothing but developer self-aggrandizement. Beats me why any sensible and pragmatic developer would go through all those useless layers upon layers of class hierarchy only to end up calling some select statements. This is extreme over-engineering, meant only to inflate some insecure developer's ego, and if you ask me, quite unethical.
Oh yeah! Let's place all SELECTs under the buttons :> Finally! After all those years of ranting about poorly written code spreading similar logic in multiple places - we CAN almost PLACE SELECT under THE BUTTON!
As the Americans say: Goooood job! :P
Fine, but at least in your CQRS-ish approach (minus binding to the actual entities for reading, which is questionable), toss a thin facade over NH for reading. I mean, do you really want to write paging and sorting code over and over again? And do you really want to reference NH in your UI project? There is a middle ground here....
Rob,
Paging?
Sorting?
SetMaxResults
AddOrder
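For reference, a sketch of what that looks like against NHibernate's criteria API directly (the page variables and property name are illustrative):

```csharp
// Paging and sorting without any wrapper layer.
var patients = session.CreateCriteria<Patient>()
    .AddOrder(Order.Asc("LastName"))
    .SetFirstResult(pageNumber * pageSize)
    .SetMaxResults(pageSize)
    .List<Patient>();
```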
Ayende,
Abstraction is important, in every layer.
Should I implement everything myself, since I access the database manually on my own?
Your advice is fine as a compromise, or for RAD, but in a large scale application it just doesn't cut it.
Moti
1) I have never seen such an effort succeed without a major re-write. So it doesn't really matter. In addition, YAGNI.
2) I have built large scale systems, SUCCESSFUL large scale systems using this approach.
Cross cutting concerns can be implemented in several ways, usually via some sort of AOP (like the MVC filters)
I'm largely with Oren on this front, especially for reads - if you want to implement security, then you can do that by wrapping up the call using the russian doll approach (as the codebetter guys would call it).
For my read layer with NH or whatever I'm using (Raven these days) and a bit of EF, I tend to have a single interface for doing queries. It often looks something like this (with FubuMVC, this is my action):
SomethingQueryHandler
{
}
I will damn well use the underlying API (if it is good enough) to get the data in whatever way makes sense here, with NH and Raven and EF they all have good APIs (LINQ for Raven and EF and QueryOver for NH) - why would I go to all the effort of creating a wrapper around those APIs for the sake of "swappability".
If I decide to swap stuff out later, then it'll be less effort to just replace the contents of all my QueryHandlers than to try to bastardise a pointless wrapper to keep consistent behaviour across different stores.
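A hedged sketch of what such a query handler might look like (the shape and all the names are assumptions, not Rob's actual code; it uses NHibernate's LINQ provider):

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class RecentVisitsQueryHandler
{
    private readonly ISession session;

    public RecentVisitsQueryHandler(ISession session)
    {
        this.session = session;
    }

    public IList<VisitSummary> Execute(RecentVisitsQuery query)
    {
        // Query the store directly; if the store changes later,
        // only the body of this one method needs rewriting.
        return session.Query<Visit>()
            .Where(v => v.Date >= query.Since)
            .OrderByDescending(v => v.Date)
            .Select(v => new VisitSummary { Id = v.Id, Date = v.Date })
            .ToList();
    }
}
```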
Rob Ashton has the right idea. Abstracting away your persistence model and data access activities behind a facade is in general a good design principle (also single responsibility, etc.).
However you should NOT be creating a general purpose facade, aka a wrapper around your data access API. Unless maybe that API sucks, like if you're using straight ADO.NET and sprocs/views.
The facade should have an API written in terms of the business model and/or application requirements and use cases. The implementation of which will use the data access API of your choice directly. There is no need to further abstract things.
Most queries are typically unique anyway. There are seldom two places using the exact same query. Factoring them out into a service makes no sense.
AOP has its own problems. I remember a post you did once about using the concept of a command object for db interactions. I quite like that idea.
And this is the reason I love rails for fetching crap from the database. Simplicity!
I could not agree with this post more!
especially because the product I'm working on is like:
It calls PatientsService
It calls PatientDataProvider via IPatientDataProvider
It calls Repository<Patient> via IRepository<Patient>
It uses Entity Framework
one small change means 6 classes to change... ;'(
I think a middle-ground between the original example and using NH from the UI is a good idea, personally. Having at least just one layer between the UI and NH (whether it's a service or a repository), assuming it does not grant fancy querying features, allows you to change the NH mapping w/o breaking the layers above--just adapt the service or repository to the NH mapping changes until the integration tests pass.
For queries that inherently have to be dynamic, I say use the O/R Mapper in a fairly direct fashion. For queries that can be parametrized more easily, wrap them in a method in, say, a Repository.
One thing to understand about repository interfaces and porting from NH to another O/R Mapper, etc. is that there is no guarantee that you will be able to implement the same interface with the same semantics. Even worse is using a unit-of-work interface directly from the application, then trying to port to something else. You have to choose between the features and performance of the O/R mapper you are using vs. portability to arbitrary persistence strategies.
OK, then - Oren or someone else in the same camp. Post a small snippet of code from something like a MVC controller where you are querying data from the read side. I want to see if I am understanding exactly what you are suggesting.
Mr_Simple is again handing out degrees today to those who have listened to him over the years. To those who haven't, you will.
To be more clear, I guess I am asking if you are suggesting using the ISession directly in the controller? Just inject the ISession via an IoC container and go at it?
Rob :: github.com/.../ImageController.cs
I've removed the common interface these days too, and do it as part of the API - but it's an example of a thin read layer.
Would love to see an example using MVC as unfortunately my project has this same horrible repository structure leading to select n+1 issues or numerous overloads for the same query.
Would you inject the session directly into the controller and use that directly, or would you have some other encapsulation?
That is, with FubuMVC I don't need those pointless actions all doing the same thing, and instead wire up the query handlers by convention to a front-controller.
That means I don't need that silly IViewFactory interface, and can instead locate query methods by the fact that they sit in a certain namespace, take in one input and give out one output :-)
@Rob Ashton
OK, thanks that's what I was looking for. So you have ViewRepository, cool. Now could you briefly explain how/where the creation of the queries occurs? I know that is a Raven sample, but in a NH sense where would one construct the query? In other words, take the input from the view and create a query off of the ISession.
Thanks.
This is what CQRS is about.
The logic is easier to implement on update than on read.
And this way, those who read don't need to know the logic. Keep it simple.
For Actions that display data, use ISession, or whatever you want.
For Actions that update data, use a Repository, etc.
I think this is becoming known as a "thin read layer" in the CQRS circles.
I'm with Oren and Jason here. All this ceremony to pass through 5-10 layers, just to do something basic, kills productivity. Something has to be done.
I understand the hesitation to move off the established tiers, but pragmatics have to come into it, and perhaps with very comprehensive unit testing of this thin read layer, you can have your cake and eat it.
@Rob the query happens entirely in the action responsible for that
github.com/.../ImageBrowseViewFactory.cs
In ASP.NET MVC This looks like
Controller Action
In FubuMVC it looks like
Query Handler
If I was to do a new project in ASP.NET MVC I'd probably do
Controller Action
But have single action controllers, and in fact they'd essentially just be query handlers (at the read side of things)
Who gives a crap if the ORM changes? If you are doing it right then all you are passing around at the higher levels is POCO objects or DTOs. Even having to reference NHibernate in the main project isn't a huge deal; while avoidable, avoiding it in and of itself achieves nothing.
"Services" without a single responsibility that are kept around for "just-in-case" scenarios lead to some of the worst implementations I have seen and as Ayende has pointed out, AOP is more flexible for extensibility and keeps things from getting overcomplicated.
Note: Those last two obviate the need for any hand-rolled framework code until the need for such stuff arises (at which point it's easy enough to refactor).
In Fubu, I added a "behaviour" to validate the queries, and a behaviour to "secure" the queries at the query level, that was sufficient for that project.
Oren,
While I agree on the general idea, that layers that don't add value should be removed, I think separation of concerns IS a useful thing, both for general maintainability and testing.
To take your example, PatientsService, PatientDataProvider and perhaps Repository might be redundant.
But having the transaction, business and data logic in the Presenter? Seriously?
In my current architecture, the presenter has one or more services.
Services provide business transactions on one or more repositories.
Repository is a thin abstraction (just one interface/generic class) on top of NH/EF, more for unit testing purposes than for technology hiding.
It's all in the details.
It really depends on the type of application you are building. In those applications with little business logic, few releases, with skilled and detailed resources, this approach will work. With larger, more complex applications with lots of business logic, many releases, with less skilled resources, more rigor is required of the architecture. As with most architecture question the appropriate answer is... it depends.
Hi Ayende,
I totally agree with you, TOTALLY!!
My only question about what you are talking about: to do that simplification, do I need to have a read-model and a write-model? I think yes... thinking about preventing changes to the model when using NH directly in the Presenter...
What d'you think?
In our projects, our views and forms are usually really, really... really complicated.
Writing everything in the controller would be an absolute nightmare to maintain.
We came up with this pretty basic pattern:
Core
  Model
    DomainEntity1.cs
    DomainEntity2.cs
    <etc.>
Controllers
  <area>
    <name>Controller.cs (invokes services and binds ViewData/FormData models)
    Model (DTOs)
      <name>ViewData.cs
      <name>MoreViewData1.cs
      <name>MoreViewData2.cs
      <name>FormData.cs
      <name>MoreFormData1.cs
      <name>MoreFormData2.cs
Services
  Finders (ISession/query)
    <whatever>Finder.cs
    <whatever>AnotherFinder.cs
  <name>ViewDataService.cs (translates finder results to ViewData)
  <name>FormDataService.cs (FormData is passed into a method)
  <any>
It works so efficiently (especially with ReSharper templates) that we usually follow this pattern even for simple implementations.
As everyone knows: a simple requirement usually turns into something a lot more complicated (thank you, product owners...).
We have learned it is absolutely worth the time to break it down this way. It makes changes so much easier to manage.
Unless I misinterpreted the article. :)
GAH ... lost my formatting. Well, hopefully you get the idea.
This complete mess is why I moved to Rails. I wanted to get some work done.
Been there, done that, still learning. I just thought I'd add: something that helps me avoid all those layers is relying more on integration/component tests rather than unit tests.
Just another example of BUFD. Now that real-world context is being applied to the application, it's showing brittleness for change.
Spot on with this call out.
Porting from one DAL technology to another might be a difficult thing, but you don't have to make it even harder than it should be...
Gurus once said you shouldn't spread SQL everywhere in your application.
I used to be a developer on an application that did just that (with ADO.NET). It didn't take long before the same query was required in a few screens, and it was of course repeated in multiple screens, not even copy-pasted... the same, sometimes complex, queries were written over and over again. Try to change something in how the application accesses the db now...
Even if they'd found out that the same query they need exists, this approach didn't define the right common place to put it. And what about the times you need the same complex query but with a little twist? Using criteria you can easily create it in a private method of a DAO and use it in two methods, or wrap it with a query object, but directly using NHibernate from the UI just feels too wrong to me.
Maybe you are a super programmer and that doesn't happen to you, but sometimes you just gotta protect your application from your programmers.
If VisitationService (second layer) will use nhibernate directly, you will be happy, i will be happy and everybody will just be happy :).
I think ideally our team and ourselves are disciplined enough that we can together evolve the code in a proper direction. Dirty, hairy, duplicate code is gradually refactored into patterns. In this scenario it would seem you really could start off with the simplest thing that could work and grow out what you need. You would be continually managing the technical debt.
Unfortunately what I run into are developers on a team who don't care or are not as disciplined. If they run across "bad" code they either don't notice or add to it and complain later. Or even better they keep saying YAGNI and just keep duct taping code in to get it done. Also situations where ridiculous promises are made over and over by sales or managers and it's the developers and code that suffer (I'm sure the users too). Technical debt is just something you complain about until they decide to rewrite.
So the idea is that rather UI -> Service -> Repository -> NH
you do writing as UI -> Service -> NH and reading as UI -> NH?
So, does the UI talk directly to ISession for reads, or is there some layer in between?
I think I get where you are coming from. The idea is that a Service / Repository will have a simple, ORM-neutral, interface - thus restricting what you can do with it. If you need to use any of the more advanced NH specific features, these will not be available via Service / Repository and you will need to redesign both to allow this.
Is that the gist of the idea?
All of this is very funny. If you did start your project 2 or 3 years ago with the ideas and solutions from this and other popular blogs you are now in the situation where the same blogs tell you, that you are doing things the complete wrong way. Well, let's see where it goes in the future -)
I lost my faith in all this and simply go the way that meets my needs...
@Ayende
1) I have never seen such an effort succeed without a major re-write. So it doesn't really matter. In addition, YAGNI.
And as much as you think it cannot succeed without a major re-write, in some cases it's probably not possible to rewrite. I currently work on a project with two self-rolled ORMs, with Services/Repositories properly interfaced, that I'm currently replacing with NH; it's great that I can write a new Repo, change next to nothing else, and everything works fine.
In your example that's impossible.
@Ayende
so you reduced a well-designed, loosely-coupled system to something that, whilst arguably simpler, is considerably more likely to require more code changes because of tight coupling. Not to mention increased QA.
So where does it stop? Why don't you just have all your code in main( ) and be done with it?
I suspect procedural programmers said the same thing when OO came about and complained about the overhead of the v-table.
Simplicity is only good for the short term, not for the long term.
I would punch up "undo" if I were you.
I've been waiting to read this post since I noticed it queued a few days ago and it's not as "controversial" as I thought it would be...
I don't think this is really CQRS, at least not my understanding of it. One thing in the demo posted above is the lack of separate command and query data stores. If you don't decouple your UI model from your object model, you're asking for trouble when you need to refactor. From a DDD standpoint, this is a glaring issue.
Another issue it brings up (especially with NHibernate) is that now you are blurring the lines of a session. Changes made to objects in the UI could be accidentally flushed, and you end up detaching/reattaching objects and messing around with StaleObjectExceptions. (We've had this exact problem on our project.)
IMO, the complete decoupling of View Model from Domain model is a must. CQRS avoids this by having the query store be essentially in DTO format.
StanK,
No, the UI directly talks to ISession, there is no need for additional abstraction
". CQRS avoids this by having the query store be essentially in DTO format."
That's not really true, an implementation when applying CQRS might do that but there is nothing to say you can't apply CQRS and still have your queries going against the single data store.
It's not about how the data is persisted or structured, it's about having different logical models for reading and writing, and how that gives us the ability (when we need it) to store our data differently for those different purposes.
IMO anyway.
Yeah, it depends on whether you do 'strict' CQRS (which is just about how you structure code) or take on the 'typical' architectural implementation of it.
Rob, you beat me to it. Yes, separate read / write stores are not mandatory in CQRS. That said, most CQRS aficionados probably lean that way. I'm personally waiting for Oren to jump on the CQRS bandwagon just to see what he comes up with. Maybe a CobraQRS or something with RavenDB as the read store and Rhino Queues or RSB syncing the write and read stores.
The base principles of CQRS are very useful to most projects we encounter on a daily basis. However, the full implementation that most blogs seem to focus on lately include separate data stores for the read/write model, event sourcing, messages, esb, and on. The biggest issue with this new "hot topic" as it's portrayed in the blog world is that the full implementation has a great benefit for only the great minority of projects that most devs work on. People are pushing it like it's some sort of default architecture. I wish they would stop. But I digress....
"It's not about how the data is persisted or structured, it's about having different logical models for reading and writing, and how that gives us the ability (when we need it) to store our data differently for those different purposes."
I agree. What I said about having a separate physical store isn't really correct. I do think that what you said about having different logical models IS necessary. I think that follows my statement about decoupling your View model from the Domain model. How can you go straight to Hibernate from your UI for reads and not have that coupled with your Domain model? The only way I see is to have another mapping layer between domain and data store, which is just moving the additional layer somewhere else.
IMO at the very least, you need a facade that translates from your Domain model to the View Model if you are going to have only one physical representation of the domain model.
You don't translate between the domain model and the view model. You just have a different model.
I've found that it's generally cleaner to encapsulate queries into their own Finder classes. One class per query, and this allows you to easily find where to make changes, add ResultTransformers, re-use the query elsewhere, etc., all without cluttering up the Controller class or anything else.
Ayende, do you find that using the ISession directly within the controller, even when you have a complicated query with involved transformations adds significant clutter? I can see it working fine for simple scenarios, but when you get into large, variable queries that transform large data graphs it seems to me like it would become unwieldy...
@Tyler
A bit off topic but I think that's where the "read model" from CQRS can come in handy. And not necessarily a different data store, at a basic level you can map to DB views or something similar. The light-switch moment for me was realizing the folly of trying to wrangle with NH to create the best query possible off of my domain. Why not just query exactly what you want, how you want (even raw sql), and leave the domain for writes and complex logic?
@Rob
The projects that I work on tend to be very, very large (hundreds of different views, thousands of tables, etc.). It's not an overly-complicated system, but the domain is just that big. We do denormalize data and use some views where it makes our lives easier (or drastically improves read performance), but we found that to create flattened views/tables for every screen or use case within the DB added a LOT of friction and overhead to the development process. Additionally, our apps tend to be very write-heavy (not the norm, I know), so denormalizing the data at the DB level using triggers or indexed views was a big hit on performance.
Encapsulating the queries within classes of their own allows us to use Views, HQL, LINQ, or Criteria where appropriate, and since the finder will always return the same flattened DTO it's easy to change out the method within the class. Even using raw SQL or views needs result transformation, though, so I would still want that encapsulated.
I don't think it's over-abstraction. It's simply "Form/Controller needs data, Form/Controller calls Finder, Finder uses ISession directly and manages the querying and result transformation in whatever manner is appropriate, Form/Controller is kept very simple and knows only about the 'read' model". We can also then apply cross-cutting concerns to the finders (caching, logging, etc.).
(sorry for the long reply)
@Tyler
Thanks for the response. That doesn't sound like over-abstraction at all. As your post demonstrates, it depends on the app/domain, right?
Curious...what does a finder class look like? What responsibility does it have?
@Rob:
Well, knowing the formatting probably won't come through:
We have an interface IFinder<EntityT, ArgsT> with one method:
EntityT Find(ArgsT args);
We also extend this to:
ICollectionFinder<EntityT, ArgsT> : IFinder<ICollection<EntityT>, ArgsT>
This allows us to create finders like:
UsersFinder : ICollectionFinder<SomeUserDTO, UsersFinderArgs>
ICollection<SomeUserDTO> Find(UsersFinderArgs args) {...}
where UsersFinderArgs would be a class containing all of the search parameters needed by the query, and possibly any desired paging info (pagesize, page number, etc.).
We have a NullArgs singleton class for finders that don't need any arguments.
In the Find method itself we query the db using ISession (passed in via IoC), usually using HQL or LINQ, sometimes raw SQL. We then generally use a ResultTransformer (I have one called DelegateResultTransformer that allows you to make the transformation inline instead of implementing a separate class) and return the DTOs. I use Resharper templates to make adding a new finder painless.
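One possible shape for the DelegateResultTransformer Tyler mentions (his actual implementation may differ; IResultTransformer is NHibernate's hook for turning raw query tuples into DTOs):

```csharp
using System;
using System.Collections;
using NHibernate.Transform;

public class DelegateResultTransformer : IResultTransformer
{
    private readonly Func<object[], string[], object> transform;

    public DelegateResultTransformer(Func<object[], string[], object> transform)
    {
        this.transform = transform;
    }

    // Called once per result row with the raw column values and aliases.
    public object TransformTuple(object[] tuple, string[] aliases)
    {
        return transform(tuple, aliases);
    }

    // No list-level reshaping needed for this use.
    public IList TransformList(IList collection)
    {
        return collection;
    }
}
```

This lets a finder supply the tuple-to-DTO mapping inline as a lambda instead of writing a separate transformer class per query.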
@Tyler
Interesting, thanks.
Tyler and I work together. He came up with all that stuff. It is awesome to work with and makes development so much easier.
For being so great, I bought him some half-and-half for his coffee and let him fix my bugs this morning. :P
I use a class called DAO for simple CRUD operations. This class directly uses an ISession and is called by Controllers and occasionally the Presentation directly. If I need to do anything more complicated, I use an ISession directly in a Query Object. If I end up needing to change my persistence layer (doubt it), I can change the DAO to handle the basic CRUD operations. In all likelihood the complex queries would need to be re-written anyhow when using a different persistence layer.
Just my two cents
@Tyler
"we found that to create flattened views/tables for every screen or use case within the DB added a LOT of friction and overhead to the development process"
Could you perhaps explain what the friction is? Is it related to how you use Resharper Templates?
Creating flattened views/tables obviously requires more writing of code (it isn't something you can automate, not that I know of) but from a maintenance perspective, what I've been finding is that having specific custom view models that don't go through the 'write' domain model offsets that cost.
I don't doubt your assertion, I'm just interested in getting other perspectives on how they view (pun intended) it.
Thanks.
As a TDD developer, for me, as long as one can write good unit-testable code (at most 10-15 lines, and I am pretty sure one cannot write good code without following SOLID, or at least the S), there is no point in creating extra abstraction layers.
In the real world there are two types of developers - the proper developers and the hackers. While the hackers just get things done and then regret later how they did it because it's not scalable and it wasn't properly planned to grow, the proper developers plan ahead and do things properly - and then still later regret how they did it when they find it's over complicated and convoluted, doesn't scale and is not future proof.
I am slightly confused about this. I understand the points Ayende is making, but unless I'm missing something it seems like he is advocating going back to the old WebForms way of just doing the data access code in the event handler (or controller, in MVC's case)? So having the controller itself set up the NHibernate query?
I can't see how that's a good thing, I'm sorry. I understand your point about the other example being needlessly complex and YAGNI and all of that, but I can't imagine going back to the "dark ages" and just lumping all the code and stuff into the UI. I could maybe see a compromise and having a repository class or something to handle the actual NHibernate querying, and just expose that to the Controller e.g.
var latestVisitation = visitationRepository.GetLatest();
and skip out on all the service layers (I admit I'm one of those developers that would have said the first example is a proper software design, properly abstracting things away and exposing only what is needed).
I'm interested to see what others think, because I would argue vehemently that the method proposed is a step back, even for something as simple as querying some crap from the DB; with the more complicated architecture you have more flexibility to change things, or very fine-grained control over exactly what is returned (for instance, the Repository exposes IQueryable and the Service exposes an actual List). I do agree that it can be very complex where it's not needed (in most situations do you really have to safeguard against another developer adding an item to what is meant to be a read-only list? Wouldn't it be easier to just say "Hey Bob, this page exposes a read-only list" and trust Bob not to go and try to add things? I could certainly see adding a bunch of layers as guard clauses if you were writing a public API that others would be using, but for internal classes used by one application?)
Erich Gamma, coauthor of Design Patterns, answers these questions. In the following excerpt, Gamma is interviewed by Bill Venners.
Venners: The GoF book [Design Patterns] says, “The key to maximizing reuse lies in anticipating new requirements and changes to existing requirements, and in designing your systems so they can evolve accordingly. To design a system so that it’s robust to such changes, you must consider how the system might need to change over its lifetime. A design that doesn’t take change into account risks major redesign in the future.” That seems contradictory to the XP philosophy.
Gamma: It contradicts absolutely with XP. It says you should think ahead. You should speculate. You should speculate about flexibility. Well yes, I matured too and XP reminds us that it is expensive to speculate about flexibility, so I probably wouldn’t write this exactly this way anymore. To add flexibility, you really have to be able to justify it by a requirement. If you don’t have a requirement up front, then I wouldn’t put a hook for flexibility in my system up front. But I don’t think XP and [design] patterns are conflicting. It’s how you use patterns. The XP guys have patterns in their toolbox, it’s just that they refactor to the patterns once they need the flexibility. Whereas we said in the book ten years ago, no, you can also anticipate. You start your design and you use them there up-front. In your up-front design you use patterns, and the XP guys don’t do that.
Venners: So what do the XP guys do first, if they don’t use patterns? They just write the code?
Gamma: They write a test.
Venners: Yes, they code up the test. And then when they implement it, they just implement the code to make the test work. Then when they look back, they refactor, and maybe implement a pattern?
Gamma: Or when there’s a new requirement. I really like flexibility that’s requirement driven. That’s also what we do in Eclipse. When it comes to exposing more API, we do that on demand. We expose API gradually. When clients tell us, “Oh, I had to use or duplicate all these internal classes. I really don’t want to do that,” when we see the need, then we say, OK, we’ll make the investment of publishing this as an API, make it a commitment. So I really think about it in smaller steps, we do not want to commit to an API before its time.
@jdn in response to "Could you perhaps explain what the friction is?"
We work on very large applications (far-reaching financial apps). These apps tend to have hundreds, if not thousands, of complex views (MVC Views, not DB Views). Writing views in the DB for a good chunk of those added a lot of time to the development cycle, and it cluttered the database. We saw no benefit to writing DB Views over writing queries directly in our codebase. It keeps us in one environment, we can use named queries, LINQ, HQL, Criteria, or raw SQL when appropriate, and we don't have a DBA griping at us for our use of thousands of views. It was also much easier to make changes to the model that might affect multiple queries; just search and replace for the HQL/SQL ones and you get free refactoring support with LINQ.
We DO use some flattened tables to query against for performance reasons, but our apps are pretty write-heavy, so we can't go overboard with denormalizing data. The overhead to keep those tables in sync with the real, normalized data would soon grow too large, so we limit that practice to REALLY involved queries that span many tables and thus can't take advantage of our built-in indexes.
I've been doing this for about 6 months now. In my last two projects (ASP.NET MVC 2 & 3), I've been injecting my ISession into my controller ctor. My business rules are on my domain models. (As opposed to anemic models or simple DTOs.) If I need to write a "service" (as in an MVC action that returns XML or JSON), I use AutoMapper to flatten objects.
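The AutoMapper flattening the commenter mentions could look like this sketch (2011-era static `Mapper` API; `Visit`, `Patient`, and `VisitDto` are invented names — `PatientName` gets filled from `Visit.Patient.Name` by AutoMapper's convention-based flattening):

```csharp
using System;
using AutoMapper;

public class Patient { public string Name { get; set; } }
public class Visit
{
    public Patient Patient { get; set; }
    public DateTime Date { get; set; }
}

// Flat DTO for a JSON-returning action. AutoMapper's flattening
// convention maps PatientName from Visit.Patient.Name automatically.
public class VisitDto
{
    public string PatientName { get; set; }
    public DateTime Date { get; set; }
}

public static class MappingConfig
{
    public static void Register()
    {
        Mapper.CreateMap<Visit, VisitDto>();
    }
}

// Usage in an action:
//   var dto = Mapper.Map<VisitDto>(visit);
//   return Json(dto, JsonRequestBehavior.AllowGet);
```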
In my estimate, 90% of queries in an application are really simple (get by ID, get where Active = 1 and Customer ID = XXX). When they start getting complicated, I can easily create a spec and do session.Query&lt;T&gt;().Where(spec.Predicate). LINQ Skip(), Take(), Fetch(), and FetchMany() make it easy to eagerly load entities, and that tends to differ from action to action, depending on what exactly is displayed on each page. A simple GetCustomerByID() method doesn't work because sometimes I need that customer to load with invoice info eagerly fetched, sometimes I need contacts eagerly fetched, and sometimes I need something else eagerly fetched, so the service-tier "reuse" doesn't happen because each action ends up doing its own thing.
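The pattern the commenter describes might look like the following sketch (entity and spec names are invented; assumes NHibernate's LINQ provider in `NHibernate.Linq`):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using NHibernate;
using NHibernate.Linq; // Query<T>() and Fetch() extension methods

// Minimal entities for illustration only.
public class Invoice { public virtual int Id { get; set; } }
public class Contact { public virtual int Id { get; set; } }
public class Customer
{
    public virtual int Id { get; set; }
    public virtual bool Active { get; set; }
    public virtual IList<Invoice> Invoices { get; set; }
    public virtual IList<Contact> Contacts { get; set; }
}

// Hypothetical specification object wrapping a reusable predicate.
public class ActiveCustomerSpec
{
    public Expression<Func<Customer, bool>> Predicate
    {
        get { return c => c.Active; }
    }
}

public class CustomerQueries
{
    private readonly ISession session;
    public CustomerQueries(ISession session) { this.session = session; }

    // The session.Query<T>().Where(spec.Predicate) pattern from the comment.
    public IList<Customer> ActiveCustomers(ActiveCustomerSpec spec)
    {
        return session.Query<Customer>().Where(spec.Predicate).ToList();
    }

    // Each action eager-loads only what its page displays, which is why
    // a single "reusable" GetCustomerByID never quite fits.
    public Customer GetWithInvoices(int id)
    {
        return session.Query<Customer>()
                      .Fetch(c => c.Invoices)
                      .Single(c => c.Id == id);
    }

    public Customer GetWithContacts(int id)
    {
        return session.Query<Customer>()
                      .Fetch(c => c.Contacts)
                      .Single(c => c.Id == id);
    }
}
```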
My unit tests go against a real database, and I use SchemaExport to blow away the test database and rebuild it when I run tests. I've been bitten too many times by queries that work in memory but then don't translate the exact same way to SQL. So integration tests that hit a real DB are critical. My architecture is far simpler. It is way easier to make changes. It's probably less enterprise-y; it's definitely more agile.
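A sketch of that test-base setup, assuming NHibernate's SchemaExport tool (in `NHibernate.Tool.hbm2ddl`) and a `hibernate.cfg.xml` pointing at a throwaway test database:

```csharp
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

// Rebuild the schema from the mappings before the tests run, so every
// query is exercised against a real database rather than an in-memory fake.
public class DatabaseTestBase
{
    protected ISessionFactory SessionFactory;

    public void SetUp()
    {
        var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml

        // Drop and recreate: don't script to stdout, do execute against
        // the database, don't stop after the drop.
        new SchemaExport(cfg).Execute(false, true, false);

        SessionFactory = cfg.BuildSessionFactory();
    }
}
```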
And yes, I started coding .NET this way because this is much closer to how Rails does it. Rails taught me that you can still build an incredibly high-quality application with fewer layers, anonymous objects, dynamics, etc. and still sleep well at night.
It's still N-tier. It's just a much smaller value for N.
I tend to agree with you in many circumstances when the application is actually small scale or the backend is not going to be reused for integration scenarios and the like.
For anything of decent size with true SOA and broader reach, write once, use anywhere type of backend, I am sure you agree that simplifying the architecture the way you describe it is near impossible and highly undesirable.
Have a nice day!
-f
Excellent - agree entirely with the original article! Thanks for a refreshing view.
This sounds like a hybrid of the Naked Objects pattern and CQRS. It works. I've used it successfully. It requires more discipline than piling on layers of abstraction, which is the key point. Most people lack discipline.
If you have low-quality developers (who vastly outnumber the competent ones), or are one yourself, it will not work though.
Also, lots of misquoted stuff on SoC. SoC is about clean abstraction (read SICP, which predates all the design patterns guff). Your commands are already abstractions over the domain model, and your queries are already abstracted by the ORM (provided you deal with aggregate roots properly, etc.). You do not need layers of facades and transfer objects to protect everything.
Regarding extension and maintainability: refactoring late and massive extension are costly anyway and will affect scalability. Worry about it when you can afford to (if you can't, your business model sucks).
Duh
You mean to say you can actually load crap from the database through just Service/DataProvider/Repository/NHibernate!
I always thought you needed a service bus and a boatload of WCF config and data contracts and view models sitting behind a facade on top of a service locator that wraps all dependency injection containers.
So glad I do Django nowadays :)
But using NHibernate directly may not be feasible if IT ops says no database access from the DMZ...
Ajai
This is all good if:
- You are a one-man shop
- You work on a small-scale project
- You know NHibernate inside and out

This is not good if:
- You work in a team where people have different skill sets; you don't want a front-end developer writing arbitrary (crappy) queries
- DBAs don't allow direct access from the DMZ
- You work on large projects
I could not agree more with this post.
After studying/using numerous lasagna-code frameworks, and even building one myself, I think it is time to go back to basics (check github.com/.../BeerEventModule.cs for a straightforward example).
Most of my customers do not care whether their project is using some/the right kind of architecture or not, they just want a working product at a reasonable cost.
So I usually start out with some kind of basic MVC/MVVM model, accessing the data directly from the controller.
Since most queries are specific to a certain action in the controller anyway, there is no real value in abstracting them away in a repository. In fact, abstracting them away requires the dev to interpret yet another extra layer, and adds complexity/maintenance costs without extra added value.
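Querying straight from the controller, as described, might look like this sketch (MVC 3-era code; the injected ISession and the `Visit`/`Patient` entities are assumptions, echoing the example from the original post):

```csharp
using System;
using System.Linq;
using System.Web.Mvc;
using NHibernate;
using NHibernate.Linq;

// Minimal entities for illustration only.
public class Patient { public virtual int Id { get; set; } }
public class Visit
{
    public virtual int Id { get; set; }
    public virtual Patient Patient { get; set; }
    public virtual DateTime Date { get; set; }
}

public class VisitsController : Controller
{
    private readonly ISession session;
    public VisitsController(ISession session) { this.session = session; }

    // The query lives next to the only action that uses it; no service,
    // provider, or repository layers in between.
    public ActionResult Latest(int patientId)
    {
        var visit = session.Query<Visit>()
                           .Where(v => v.Patient.Id == patientId)
                           .OrderByDescending(v => v.Date)
                           .FirstOrDefault();
        return View(visit);
    }
}
```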
I am a big fan of YAGNI, and applying it to everything:
- I usually start on a project using duct-tape dev; only when the manual testing starts slowing me down, or the app is getting too complex, do I start using ATDD/BDD, post-implementing the initial spec tests as well.
- If my controllers start getting too complex, I start abstracting functionality away into a domain service, using TDD if necessary.
- If this starts getting too complex, I switch everything to proper DDD, using the CQRS principles as a guideline (usually no event sourcing and such, just the basic principles).
** Slightly off-topic rant **:
In the end it is all about being pragmatic, and IMO in a lot of cases the software development industry is valuing form over function.
A lot of the TDD fanatics seem to dislike the duct-tape first approach, but IMO their reasoning is a bit flawed:
if you use TDD to guide your design, why don't you also write tests to verify whether your application is fast enough?
Their usual answer is: because it usually is fast enough, in the rare case the speed might be an issue, we implement a test for it.
Well, if your design is pretty straightforward, then why would one need to drive it using tests ?
Being an architecture junkie myself, and having made the mistake of overarchitecting things numerous times, I now call for simplicity.
IMHO Getting to know the different forms of architecture or development methodologies is the easy part; knowing when to use them - and even more important: when not - is the hard part.