time to read 3 min | 436 words

Frankly, I am quite amazed that I even need to write this post. Source Control is such a basic part of the development process that I didn't want to believe there could be anything to say about it.

In the previous release of Entity Framework, there were deal breaker issues in the source control story. In short, it didn't work. I installed Visual Studio 2008 SP1 Beta to check how the new bits behave.

Let us start with the simplest scenario: defining a model with two classes and committing it to source control:

[image]

Now, let us say that Jane wants to add a Total property to the Order class:

[image]


At the same time, Joe is adding a Name property to the Customer class:

[image]

Jane commits first, and then Joe attempts to commit. Here is the result:

[image]

This is bad.

In fact, I would say that calling this unacceptable is not going far enough.

I made two unrelated modifications in the model, something that happens... all the time, and it puked on me.

This is not an acceptable Source Control story.

Now, to be fair, I dug a little deeper and tried to find out what caused the conflict. Here is what I found in the Model1.edmx file:

[image]

The conflict is in the visualization section of the file. This is a huge improvement over the previous version, but it is still broken. Leaving aside the fact that I have no idea why the visualization information is in the same file as the actual model, I shouldn't get merge conflicts on this trivial a change.
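To give you an idea of what is going on, a conflict in the designer section looks roughly like this (the element names come from the edmx format; the coordinates and revision markers are illustrative):

    <edmx:Designer>
      <edmx:Diagrams>
        <Diagram Name="Model1">
    <<<<<<< .mine
          <EntityTypeShape EntityType="Model1.Customer" PointX="0.75" PointY="0.75" Height="1.787" />
          <EntityTypeShape EntityType="Model1.Order" PointX="3" PointY="0.75" Height="1.592" />
    =======
          <EntityTypeShape EntityType="Model1.Customer" PointX="0.75" PointY="0.75" Height="1.592" />
          <EntityTypeShape EntityType="Model1.Order" PointX="3" PointY="0.75" Height="1.787" />
    >>>>>>> .r42
        </Diagram>
      </edmx:Diagrams>
    </edmx:Designer>

Both edits changed the shape sizes in the shared diagram section, on adjacent lines, so the merge conflicts even though the model changes themselves were unrelated.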

I didn't examine the generated code closely, but there were conflicts there as well, which might cause additional issues. The code could be regenerated, I assume.

In the end, I was able to get to the model that I wanted, after manually fixing the merge conflicts:

[image]

In summary, this is a huge improvement over the previous version, but it still falls far short of the minimum expected bar.

Please note that I have tested the most trivial scenario that I could; I have no idea what the behavior would be when dealing with more advanced scenarios.

time to read 1 min | 129 words

After the pain of VS 2005 SP1 (which killed my machine, as a matter of fact), I decided to install the SP1 beta for VS2008 on a clean VM. That VM is a simple install of the OS + VS 2008, that is all.

Here is the result of installing VS 2008 SP1 Beta. I have no idea what happened; at one point it was installing, now it is rolling back.

[image]

I suppose I could try to figure out what is going on by hunting through the logs and trying the usual cargo cult approaches.

But I have a simpler solution: just avoid the whole thing. Is it too much to ask that the installer simply work?

time to read 4 min | 771 words

I was taking part in a session in the MVP Summit today, and I came out of it absolutely shocked and bitterly disappointed with the product that was under discussion. I am not sure if I can talk about that or not, so we will skip the name and the purpose. I have several issues with the product itself and its vision, but that is beside the point that I am trying to make now.

What really bothered me is the utter ignorance of a critical requirement by Microsoft, who is supposed to know what they are doing with software development. That requirement is source control.

  • Source control is not a feature
  • Source control is a mandatory requirement

The main issue is that the product uses XML files as its serialization format. Those files are not meant for human consumption; they should be used only through a tool. The major problem here is that no one took source control into consideration when designing those XML files, so they are unmergeable.

Let me give you a simple scenario:

  • Developer A makes a change using the tool, let us say that he is modifying an attribute on an object.
  • Developer B makes a change using the tool, let us say that he is modifying a different attribute on a different object.

The result?

Whoever tries to commit last will get an error: the file was already updated by someone else. Usually in such situations you simply merge the two versions together and don't worry about it.

The problem is that this XML file is implemented in such a way that each time you save it, a whole bunch of stuff gets moved around, all sorts of unrelated things change, etc. In short, even a very minor change causes a significant change in the underlying XML.

You can see this in products that are shipping today, like SSIS, WF, DSL Toolkit, etc.

The problem is that when you try to merge, you have too many unrelated changes, which completely defeat the purpose of merging.
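For contrast, being merge-friendly mostly comes down to serializing deterministically, so that a no-op save produces an identical file and a diff contains only the actual edits. A minimal sketch of the idea (my illustration, obviously not the product's code), assuming element order carries no meaning in the format:

    using System.Linq;
    using System.Xml.Linq;

    public static class StableXml
    {
        // Normalize the document before saving: sort attributes and child
        // elements by name, recursively, so unrelated edits stay on
        // unrelated lines.
        public static XElement Normalize(XElement element)
        {
            return new XElement(element.Name,
                element.Attributes().OrderBy(a => a.Name.ToString()),
                element.Elements()
                       .Select(Normalize)
                       .OrderBy(e => e.Name.ToString())
                       .ThenBy(e => (string)e.Attribute("Name")),
                element.HasElements ? null : (object)element.Value);
        }
    }

Nothing fancy is required; the tool just has to stop shuffling the file around on every save.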

This, in turn, means that you lose the ability to work in a team environment. This product is supposed to be aimed at big companies, but it can't support a team of more than one! To make things worse, when I brought up this issue, the answer was something along the lines of: "Yes, we know about this issue, but you can avoid it using exclusive checkouts."

At that point, I am not really sure what to say. Merges happen not just when two developers modify the same file; merges also happen when you have branches. As a simple scenario, I have a development branch and a production branch. Fixing a bug in the production branch requires touching this XML file. But if I made any change to it on the development branch, you can't merge that. What happens if I use a feature branch? Or version branches?

Not considering the implications of something as basic as source control is amateurish in the extreme. Repeating the same mistake, over and over again, across multiple products, despite customer feedback on how awful this is and how much it hurts the developers who are going to use it, shows contempt for those developers, and is a sign of an even more serious issue: the team isn't dogfooding the product. Not in any real capacity. Otherwise, they would have noticed the issue much sooner in the lifetime of the product, with enough time to actually fix it.

As it was, I was told that there is nothing to be done for the v1 release, which puts the fix (at best) two years or so away. For something that is a complete deal breaker for any serious development.

I have run into merge issues with SSIS that forced us to drop days of work and recreate everything from scratch, costing us something in the order of two weeks. I know of people who had the same issue with WF, and from my experiments, the DSL Toolkit has the exact same issue. The SSIS issues were initially reported in 2005, but are not going to be fixed for 2008 (or so I heard from public sources), which puts the nearest fix for something as basic as getting source control right at 2-3 years away.

The same goes for the product that I am talking about here. I am not really getting it; does Microsoft think that source control isn't an important issue? They keep repeating this critically serious mistake!

For me, this is unprofessional behavior of the first degree.

Deal breaker, buh-bye.

time to read 1 min | 96 words

Hopefully I'll get the same quick "you are an idiot, this is how it is done" that I got the first time I posted about it. Here is my current issue: attempting to open SQL CE from multiple threads has locked it out. This has been the state of the system for ~6 hours.

I don't mind the locks; I do mind the fact that there seems to be no way to specify a timeout for them, so the app just sits there, waiting, waiting, waiting.

[image]
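For reference, the access pattern that got me into this state was roughly the following (a sketch, not the actual application code; the table name is made up):

    using System.Data.SqlServerCe;
    using System.Threading;

    class Repro
    {
        static void Main()
        {
            // Several threads sharing the same SQL CE database file, each
            // with its own connection.
            for (int i = 0; i < 4; i++)
            {
                new Thread(() =>
                {
                    using (var connection = new SqlCeConnection("Data Source=app.sdf"))
                    {
                        connection.Open();
                        using (var command = connection.CreateCommand())
                        {
                            command.CommandText = "UPDATE Items SET Touched = GETDATE()";
                            // With a write lock held elsewhere, this call is
                            // where everything just sits and waits.
                            command.ExecuteNonQuery();
                        }
                    }
                }).Start();
            }
        }
    }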

Unity Annoyances

time to read 1 min | 109 words

This has nothing to do with the code itself, and everything to do with how it is managed. Take a look here:

[image]

The weekly drop is an MSI. I don't like MSIs. They taint my system, put things in places I don't want, and in general annoy me.

Okay, so let us just grab the source directly, right? That is what I tend to do anyway.

Oh, I forgot, there isn't any source repository available.

[image]

A zip file is the least I would expect.

MsBuild vs. NAnt

time to read 2 min | 205 words

A long while ago I moved the Rhino Tools repository to using MSBuild. I am not quite sure what the reason for that was. I think it was related to the ability to build it without any external dependencies. That is not a real consideration anymore, but I kept that up because it didn't cause any pain.

Now, it does.

More specifically, I can't build the Rhino Tools project on a machine that only has 3.5 on it. The reason is that I have a hard-coded path to the 2.0 framework, which worked well enough until I tried to work on a 3.5 machine, at which point all hell broke loose.
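The offending bit is the kind of thing sketched below (illustrative; the actual property name and path in the build file differ):

    <!-- A hard-coded framework path in the build file. Swapping the version
         for "whatever framework this machine actually uses" is exactly what
         I could not find a way to express. -->
    <PropertyGroup>
      <FrameworkPath>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727</FrameworkPath>
    </PropertyGroup>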

Well, that is an easy fix, right? Just go and change the hard-coded 2.0 to the current framework version, right?

Except that MSBuild doesn't have any way to do that.

I am surprised, to say the least. And yes, I know that I can extend it to support this. I am not particularly sure that I want to, however. At this point, it would be easier to just move it to NAnt, I think. It would also align Rhino Tools with the tooling all its dependencies are using.

Reviewing Unity

time to read 10 min | 1847 words

I am sitting at the airport at the moment, having to burn some time before I can get on a flight, and I decided to make use of this time to review the Unity container from the P&P group.

A few things before I start. Unity is a CTP at the moment, it is not done, and that is something that should be taken into account. I have several design opinions that conflict with the decisions that were made for Unity. I am a contributor to Windsor. Overall, I am biased. Please take that into account.

I am also going to put it through some of the paces that I put Windsor through. Just to see how it goes. Again, I am comparing a more than four-year-old container to a very new one, another thing to keep in mind. I do not foresee a situation in which I will use Unity over Windsor, but I do hope that this post will be helpful in future improvements to Unity.

A lot of this feedback is based on my experience and may contain comments that can be considered personal preferences.

Let us start with the things that I like:

  • This piece of code is very nice, and it is useful in many scenarios where the instance is created by an external source (Page instances, Forms, Web Services, and many more). At the moment, this is something that Windsor doesn't have (I need to find a few free hours on a machine that has a development environment):
    myContainer.BuildUp<MyRealObject>(myObject);
  • I am ambivalent as far as the Method Injection goes, in the abstract, I say cool, nice feature. The practical side says that it is mostly not required. Nevertheless, let us put it here.
  • The XML configuration is very clean and easy to follow. At least the sample ones that I have seen. I am not sure if it is possible to put it in an external file (not in app.config), but that is probably my only reservation with that.
  • The ability to configure instances from the configuration is cute. Especially if you add the converter support that you have there.  It doesn't seem, at least from a brief glance, that you can build complex objects there, though. That is not a bad thing, however :-)
  • It was a really good decision, to decouple the container and its configuration.

And now to the things that I don't like:

  • The documentation (and a feature which attempts to resolve a type by default, even if it is not registered) directs you toward having concrete class dependencies. I strongly dislike this; I think that except for rare cases, an interface should be used. And I generally dislike this feature.
  • For that matter, the ability to resolve a type that wasn't registered in the container strikes me as problematic. There were some long discussions on that in the ALT.Net mailing list, and I can see the other side's point of view. Yes, the container should enable this feature, but not in the common scenario. Preferably, add a method that makes it explicit what you are doing.
  • According to the documentation, having several constructors will result in Unity picking one of them at random and supplying its dependencies. I find the greediest-possible-ctor approach much more predictable, and a more natural model.
  • [DependencyConstructor] bothers me. This is the container invading the application in an unseemly manner. It bothers me more that it is not optional if you have more than a single ctor.
  • Much worse, from the point of view of container ignorance, are the facilities for property injection. Here you have to specify [Dependency] on all the properties that you want injected. Again, this is the container reaching into my code and messing with it in unseemly ways. My code should have nothing to do with the container whatsoever. The container should work by the shape of my object, not the other way around.
  • container.Get<ISomethingMadeUp>() will return a null reference if it can't create the type. I think that this is the wrong design decision. It should definitely throw here. It means that instead of getting a nice exception, "ISomethingMadeUp is not registered", I get "Object reference not set to an instance of an object". While the ability to try resolving from the container is useful, it should be in a separate method; TryGet would be my preference for that (see the sketch after this list).
  • Introspection capabilities are missing. That is, there is no way (that I could easily see, and I looked for one) to ask if a given type or a given extension is registered in the container. This is critical for many advanced scenarios.
  • Something that I consider to be a severe bug, but from the code looks like a design decision, is ignoring missing dependencies when you resolve a component. That is, assume that you have a component Foo with a constructor dependency on ILogger. If you resolve Foo and the container cannot find an ILogger implementation, it will pass null to the constructor. Read that one again. Instead of getting "Cannot resolve Foo because ILogger cannot be constructed" you are going to get "Object reference not set to an instance of an object".
    This is a problem, period.
  • Yes, you can use [Dependency(NotPresentBehavior)] to change that, but that is again, an invasive approach. Not a solution.
  • There is no protection from cycles in the dependency graph: A depends on B, which depends on C, which depends on A. Again, you need to get a good exception here, explaining the problem. What you get is a stack overflow exception, and bye-bye to the program.
  • No support for generic components. That is, registering IFoo<> and getting IFoo<string>.
  • At a certain point, calling a method on the container will raise an event that will be handled by a strategy. It seems like a very odd thing to me. The case in point is setting a component to be a singleton; I would have grabbed the singleton strategy directly and called that, instead of going through the event.
  • Lifetime support. Right now Unity supports two levels, singleton and transient. That is good, and you can probably extend that with policies. The problem is that those ideas are hard-coded into the container interface. Offhand, you need at least local (per-thread or per-request) models as well. And I think that exposing singleton at the container interface is a mistake.
  • I haven't seen how you can control the dependencies yet, which is a very important concern. That is, when I am building an EmailSender, I want it to use port 435 and the file logger, instead of the email logger that everything else uses. This is a key feature, but reading the forums, it is currently a problematic one.
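To make the error-handling complaints concrete, here is the API shape I would have preferred (a sketch with hypothetical names, not Unity's actual interface):

    public interface IContainer
    {
        // Throws a descriptive exception ("ISomethingMadeUp is not
        // registered") instead of returning null.
        T Get<T>();

        // The explicit opt-in for callers that can handle absence.
        bool TryGet<T>(out T instance);
    }

The same principle applies when resolving a component's dependencies: a missing ILogger should fail the resolution of Foo with a message naming both types, not surface later as a NullReferenceException.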

Object Builder 2 Observations:

  • I like the use of dynamic methods to create instances, but the implementation (at least at first glance) seems awfully complicated. The main objection that I have here is that the control flow of generating this method is separated over a large number of objects and interfaces. The advantage that this gives you is fine-grained control over the process, but considering the task involved, this is a highly cohesive process, and you can't really just plug your own stuff into it without properly understanding everything that goes on there.
  • In the same vein, the use of policies and strategies for everything obscures the intent of the code. The DynamicMethodConstructorStrategy, for example, needs to get the generation context; the existing parameter is used for that, which seems like a bad abuse of the given API. I would create an independent component with an API that makes this dependency explicit. If you wanted the ability to replace that, have it registered in the container by default, then replace it before you resolve anything.
  • In the above case specifically, we have the existing parameter that sometimes holds the existing object, and sometimes holds the current context. That is not a friendly API.

I wanted to try Unity's extensibility model, but I think that I have put enough time into it already. What I have learned so far makes it pretty much pointless.

I wanted to build a StartableExtension, which automatically starts up components that implement the IStartable interface. On the surface, it is very simple: register for the TypeRegister event, get the type, and Start() it. The problem is that you need to wait until all the type's dependencies are available. For example, I may register IHealthChecker before I register ILogger, which it needs. There is no way in Unity at the moment to enable this scenario. A sketch of what I was after follows.
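This is roughly the behavior I wanted, written against a hypothetical container API (Unity's extension points do not expose enough to implement it):

    using System;
    using System.Collections.Generic;

    public interface IStartable
    {
        void Start();
    }

    // Hypothetical container surface: a registration event plus enough
    // introspection to ask whether a type's dependencies can be satisfied.
    public interface IContainerWithEvents
    {
        event Action<Type> TypeRegistered;
        bool CanResolveDependenciesOf(Type type);
        object Resolve(Type type);
    }

    public class StartableExtension
    {
        private readonly List<Type> pending = new List<Type>();
        private readonly IContainerWithEvents container;

        public StartableExtension(IContainerWithEvents container)
        {
            this.container = container;
            container.TypeRegistered += OnTypeRegistered;
        }

        private void OnTypeRegistered(Type type)
        {
            if (typeof(IStartable).IsAssignableFrom(type))
                pending.Add(type);

            // Re-check everything that is waiting; this registration may
            // have supplied a missing dependency (ILogger for
            // IHealthChecker, say).
            foreach (Type waiting in pending.ToArray())
            {
                if (container.CanResolveDependenciesOf(waiting))
                {
                    ((IStartable)container.Resolve(waiting)).Start();
                    pending.Remove(waiting);
                }
            }
        }
    }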

Something else that I would like to see is Unity's version of Environment Validation. Here it is likely to be possible to do this, but it exposes a problem with Unity's dependency model. You can't register arguments for a component; you can only do that globally, and that is only if you are willing to perform invasive operations on your components.

Overall, I think that it is an okay solution. You can see some Windsor influence in the API and in the configuration, which makes me happy.

It has too many red lines for me to be able to use it (even if Windsor weren't around), however. Some of the things that I have outlined here (error handling, for example) can be fixed easily enough; I am much more concerned with the design decisions that were made. Specifically, not throwing when a dependency is not found, both in the specific case of directly calling Get() and in the general case of resolving a dependency for a type that is currently being resolved.

The invasiveness of the container is one of the top issues that I would have fixed. That is one of the major reasons that CAB got lambasted.

I remember having to work with Object Builder circa 2005-2006, and it was a major pain. Unity looks significantly better than that, as do the bits of Object Builder 2 that I have examined. I still object to the strategies/policies design, however; it seems like an overly generic solution. And methods like AddNew offend me for some reason. Add(new Foo()) isn't hard.

And, to save myself a lot of trouble later on, this is not an attack on Unity, this is simply me going over what is there and expressing my professional opinion on an early version of a project.

time to read 4 min | 686 words

Phil Haack is talking about why the MS MVC team changed IHttpContext to HttpContextBase. I follow the argument, but at some point, I just lost it. This, in particular, had me scratching my head in confusion:

Adding this method doesn’t break older clients. Newer clients who might need to call this method can recompile and now call this new method if they wish. This is where we get the versioning benefits.

How on earth would adding a new method to an interface break an existing client? How on earth does adding a new method to an interface require a recompilation of client code? Now, to be clear, when I am talking about changing the interface I am talking solely about adding new methods (or overloads), not about changing a method signature or removing a method. Those are hard breaking changes no matter what approach you take. But adding a method to an interface? That is mostly harmless.

The only thing that requires recompilation is implementing this interface, not using it. It is possible that Phil is using the term "clients" in a far wider sense than I would, but that is not the general use of the term as I see it.

Prefer abstract classes to interfaces, use internal by default, and make nothing virtual by default are three of the main issues that I have with the Framework Design Guidelines. I have major issues with them because they are actively harming the users of the framework. Sure, it might make my work as a framework developer harder, but guess what, that is my job as a framework developer: to give the best experience that I can to my users. [Framework Design Guidelines rant over]

Now, I do believe that I have some little experience in shipping large frameworks and versioning them through multiple releases and over a long period of time. I also believe that some of those frameworks are significantly larger and more complex than what MS MVC is going to be. (Hint: if MS MVC ever seems, even to an illiterate drunken sailor on a rainy day, to approach NHibernate's complexity, it is time to hit the drawing board again.)

And those frameworks are using interfaces to do pretty much everything. And I cannot recall off hand any breaking change that resulted from that. In some cases, where the interface is an extension point into the framework, we have gone with an interface plus a base class with default functionality. If you use the base class, you are guaranteed no breaking changes. The reason for keeping the interface is that it gives the user more choice; you aren't limiting the options the user has when it comes to using it (by taking up the single inheritance slot, for example).
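A minimal sketch of that pattern (hypothetical names, not any particular framework's API):

    // The extension point is an interface, so implementers keep their
    // single base class slot free for their own hierarchy...
    public interface IRequestFilter
    {
        bool ShouldProcess(string url);
    }

    // ...and a base class supplies default behavior. Users who inherit from
    // it are insulated from versioning: when a later release adds a member
    // to the interface, the base class implements it with a sensible
    // default, and derived code keeps compiling.
    public abstract class RequestFilterBase : IRequestFilter
    {
        public virtual bool ShouldProcess(string url)
        {
            return true;
        }
    }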

Now, if we analyze the expected usage of IHttpContext for a moment, who is going to be affected by changing this interface? Only implementers. And who is going to implement IHttpContext? I can think of only two scenarios: hand-rolled test fakes, and extending the HTTP context in some interesting ways, perhaps by adding proxies or decorators to it.

In the first case, that is not something that I would worry about. The second is far rarer, but also the much more interesting case; those are generally not done by hand, though (I wouldn't want to type out all the methods of IHttpContext, that is for sure). Even if it were, I still have no issue with it. New framework version, add a method. It is not a significant change. A significant change would require me to rework large swathes of my application.

Now, why do I care for that?

The reason is very simple. It is a pain for me, personally, when I end up running into those warts. It is annoying, frustrating, and aggravating. I like to be happy, because otherwise I am not happy, so I try to minimize my exposure to the aforementioned warts. Hopefully, I can make them go away entirely. And not just by pulling the blanket over my head.

time to read 1 min | 121 words

Okay, I am impressed. Really impressed. I installed a non-web-edition of Win 2008, and now everything works as it should.

I really like what Microsoft did here. Both in terms of the UI experience that you get, the guidance that they keep popping up (which is not annoying), the speed, and overall feeling.

What really made me happy was this. Now I can actually drill down and see what is going on so easily.

[image]

Something that certainly cheered me up was the file-copy test:

[image]

It started instantly, and it looks like we get information that is actually useful. Unlike... that other OS.
