Complex EDM
Hammett posted "Is EDM the unlearned EJB lesson?"; he is not impressed with the amount of complexity that EDM requires. I reviewed EDM a while ago and came away unimpressed, so I guess I'll have to look again to see what they have done in the meantime.
Here are some interesting quotes from Hammett's post.
This is something that Hibernate (but not yet NHibernate) actually supports, using the <join table="employees"/> syntax. You can get some of the details here. I remembered seeing something about that in the Hibernate documentation, and indeed, I found this:
I can certainly agree with the sentiment above.
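For readers unfamiliar with the syntax mentioned above, a Hibernate `<join>` mapping generally takes this shape; this is a hedged sketch, and the class, table, and property names here are hypothetical, not from the post:

```xml
<class name="Employee" table="employees">
  <id name="id" column="employee_id">
    <generator class="native"/>
  </id>
  <property name="name"/>
  <!-- <join> maps additional properties of the same class
       to a second table, keyed back to the primary table -->
  <join table="employee_details">
    <key column="employee_id"/>
    <property name="biography"/>
  </join>
</class>
```

The point is that the split across two tables stays in the mapping file; the `Employee` class itself is unaware of it.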
The only cases in which I wanted to do something like this were when there was something really bad in the data model in the first place. I should mention that this is also extremely easy to build using the Delegate pattern, at any rate. Scratch that one as an important feature.
Oh, yeah! On that we are in full agreement. I am so pleased that Microsoft released .NET 3.0 without releasing all the tools for it. It means that the technology had to be usable without the crutch of the tools.
Comments
You sometimes might want (or actually be forced) to do vertical partitioning when you exceed the maximum possible row size of the database (which, e.g., is about 8k on SQL Server 2000, afaik). This is an example where you most likely don't want the table split to leak into your object model, since the "conceptual justification" is actually a workaround for a limitation of the DBMS, and workarounds should always be as encapsulated as possible.
That is a valid point, but even then, just splitting the table arbitrarily is bad practice; you need to give some thought to how you do it. I would also argue that if your rows are that big, you are probably not going to want to load them all at once, but rather partition them again for the business layer.
Data often outlives the application that created it, which means that the justification for a particular data design often no longer exists; and with it goes the mistaken assertion that your objects should be at least as granular as the database.
If you are using or creating data in a way that doesn't need that granularity, why expose it?
EDM: I'm not impressed either
A prominent example that comes to mind, which I had problems with in the past, was a table storing the EXIF info of an image in a metadata table. It is highly unlikely that the EXIF info will actually be >8k, but it theoretically could be if you add up all the maximum field sizes, and then your only option is to split it arbitrarily.
Database normalization applied to the data structures, since the database outlives the applications built around it, increases the gap between the logical and the conceptual model. I hardly come across 1:1 mappings in the enterprise field these days. The principle that "an essential element of good object model design is that the object model be at least as granular as the relational model" is an interesting one to explore further.