time to read 1 min | 186 words

Probably the best part of being at DevTeach was meeting and talking with a lot of people. That was also the biggest frustration. The moment that there were more than three of us, the discussion split into several interesting avenues, leaving me really wishing that I could take part in at least three conversations at once.

I can (usually) take part in two discussions at once, but it is confusing (both for me and for the participants in the discussions), and there is some prioritization going on to decide whom I am most interested in listening to right now. Part of the reason that I prefer email over IM/Voice for communication in many cases is that it allows me to handle concurrent discussions better.

That said, there is nothing better for communicating ideas than a face-to-face meeting where you can easily interact with the person(s) you are talking to. A lot of the discussions that we had at DevTeach would be simply impossible to take to a written form and keep the same tone (which is what made them interesting in the first place).

time to read 3 min | 531 words

In the comments to my OR/M Smackdown post, Adam Tybor noted:

Don't we all know that performance is the thing you tweak last?

To which Ted Neward has replied:

Well, if you don't think about perf until the very end of the project, you usually find yourself having to either just shrug your shoulders and say, "Well, faster hardware will make it run fast", or backtrack and refactor significant chunks of the application in order to cut out round trips from the system as a whole.

Which reminds me of a conversation that I had with Udi Dahan recently, which we concluded with this great quote from him:

In order to design performant domain models, you need to know the kinds of data volumes you’re dealing with. It affects both internals and the API of the model – when can you assume cascade, and when not. It’s important to make these kinds of things explicit in the Domain Model’s API.

All of which brings me to the following conclusion: performance tuning in the microseconds is a waste of time until you have profiling data in place, but that doesn't mean that you shouldn't think about performance from the get go. You can code your way out of a lot of things, but an architecture that is inherently slow (for instance, chatty on the network) is going to be very hard to modify later on.

Udi had an example of a Customer that has millions of orders; in this case, the performance consideration has a direct effect on the domain model (yes, I know about filters). From a design perspective, it basically means that the entity contains too much data and needs to be broken up. From a performance perspective, it means making it explicit that a potentially very costly call is being made (and, obviously, filtering it to just what is needed).
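To make that explicitness concrete, here is a minimal sketch (Customer, Order, and IOrderRepository are hypothetical names, not Udi's actual model) of a domain API that refuses to hide the expensive call:

    using System.Collections.Generic;

    // Hypothetical sketch: with millions of orders per customer, the entity
    // does not expose an Orders collection that could be loaded by accident.
    public class Customer
    {
        public int Id;
        public string Name;
    }

    public class Order
    {
        public int Id;
        public decimal Total;
    }

    public interface IOrderRepository
    {
        // The API makes the costly, filtered call explicit:
        // you ask for a page of orders, never for "all of them".
        IList<Order> GetRecentOrdersFor(Customer customer, int pageSize);
        int CountOrdersFor(Customer customer);
    }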

A good rule of thumb for performance is that you should consider an order of magnitude increase in the number of users/transactions before you need to consider a significant change in the architecture of the application.

That is absolutely not to say that you should consider everything in advance; I had my greatest performance success by simply batching a few thousand remote calls into a single one. But architecture matters, and it should be considered in advance and built accordingly. (And no, it doesn't necessitate a Big Architecture Up Front either, although where I would need to scale very high I would spend a while thinking about the way I am going to build the app in advance, probably with some IT/DBA/Network guys as well, to get a good overview.)
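To illustrate that batching win, here is a minimal sketch (the IPriceService interface and all the names are hypothetical) of collapsing thousands of round trips into one:

    using System.Collections.Generic;

    // Hypothetical sketch: the same total, computed chattily vs. batched.
    public interface IPriceService
    {
        decimal GetPrice(int productId);                       // one round trip per call
        IDictionary<int, decimal> GetPrices(int[] productIds); // one round trip total
    }

    public static class PricingClient
    {
        // Chatty: N remote calls for N products.
        public static decimal TotalChatty(IPriceService service, int[] productIds)
        {
            decimal total = 0;
            foreach (int id in productIds)
                total += service.GetPrice(id);
            return total;
        }

        // Batched: a single remote call carrying all the ids.
        public static decimal TotalBatched(IPriceService service, int[] productIds)
        {
            decimal total = 0;
            foreach (decimal price in service.GetPrices(productIds).Values)
                total += price;
            return total;
        }
    }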

Oh, nevertheless, premature optimization...

time to read 3 min | 576 words

Ted Neward responded to the comments in my post about the OR/M Smackdown.

A few of the comments to my post had an unpleasant tinge to them with regard to Ted, and after listening (just today!) to the podcast, I have to say that I disagree with them.

I think that the debate took an unforeseen approach (which may have made it less exciting to listen to) in that it didn't rehash all the old OR/M vs. SP debates*, but instead focused on stuff that (at least to me) is not really discussed much. As a result, much of the discussion focused on points of failure for each approach in most applications.

I think that such a discussion cannot really be settled to one side or the other, especially since it looks like the major difference between Ted and me is where we would each draw the line on when to move from one approach to the other. There is a big difference there, but it is a matter of value judgment, more than anything.

As someone who is deeply involved with OR/M, it was interesting to hear about the approaches from db4o and the other second-generation OODBs, even though I still have my doubts about such systems. You can check here for some of the details. I would be interested in learning how versioning, refactoring, deployment, scaling, optimization, etc. apply to an OODB project, but I am already slicing my time into minutes; I don't really have the time to do more. (If you are interested in hiring my company for a project that uses an OODB system, taking into account that I have 0 experience with them, I would be delighted to hear about it :-) ).

When I listened to the podcast today, I kept thinking, "Oh, I wish that I had said XYZ", and then I listened to myself saying something to that effect. Overall, I am very pleased with the "smackdown", although it may have been less of a smackdown than anticipated.

A few things about what I have to term "logistics", because I can find no better word for it. I don't like how I sound when I speak English; I think much more clearly than I can speak, and I am sorry for all those whom I subjected to my English. Both Ted and I interrupted each other when we had something that had to be said, but Ted sounds so much smoother when he does it... I can understand (although I disagree) why it was thought that he tried to hijack the conversation.

As Ted mentioned, barely an hour before that, I was reminded that gesturing with the hand that holds the mike is not conducive to good sound quality. I listened to that advice, switched hands, and immediately started gesturing with the other hand. I probably need more experience there as well.

To conclude, I had a great time participating in the debate, and I would like to thank Carl & Richard for giving us the chance to do it.

P.S.: And for those who wanted solid truths, I have the consultant's answer for you: It Depends.

P.P.S.: I have seen the object models that Ted talks about; they make my 31,438-table DB look almost (but not quite) ordinary.

* Sorry Ken, ain't going to bite this one again.

time to read 1 min | 78 words

It only took solving something I had been dreading to deal with, using a much easier approach than I believed possible. The problem right now is convincing myself that I don't really want to finish this feature right now. I can't believe how thoroughly success can turn around a day that was shaping up to be rotten.

Anyway, I am off for home now.

I need help preventing the hero syndrome, anyone?

time to read 2 min | 226 words

It looks like I still have a lot to learn about NHibernate. Today I took a foray into criteria projections, which are new and exciting to me, but what excites me right now is this little puppy. It is not that the query is terribly complicated; it is that this is traversing six tables as easily as it can be. From the Multi Query reference, you can probably understand why I am excited: I need to get ~6 result sets for this to work, and it is looking like this is going to be very easy to do in a performant way.

[Screenshot: the query in question]

You have to realize, I am at the point when I am running passing tests in the debugger, just to check out how cool this is. In the time that it took me to write this post, I have implemented the bare bones of what is most probably the hardest part of the system. This particular functionality requires data from 30(!) tables. The final query is over 5Kb in size, and goes on for 3 pages. (I am talking about the final query, not the one above.)
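For those who haven't seen it, here is a minimal sketch of the MultiQuery usage I am talking about (the Book entity is a stand-in; the real queries are far bigger):

    using System.Collections;
    using NHibernate;

    public static class MultiQueryExample
    {
        // A sketch, assuming a mapped Book entity: several result sets,
        // one round trip to the database.
        public static void Run(ISession session)
        {
            IMultiQuery multiQuery = session.CreateMultiQuery()
                .Add(session.CreateQuery("from Book"))
                .Add(session.CreateQuery("select count(*) from Book"));

            // List() executes all the queries in a single database call;
            // each element of the outer list is one query's result set.
            IList results = multiQuery.List();
            IList books = (IList)results[0];
            long bookCount = (long)((IList)results[1])[0];
        }
    }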

And to forestall the obvious questions: yes, I use this color scheme sometimes, it amuses me.

time to read 19 min | 3754 words

I am teaching beginners right now, and we started out from:

int result;
using (SqlConnection connection = new SqlConnection(Settings.Default.Database))
{
    connection.Open();
    using (SqlTransaction tx = connection.BeginTransaction())
    {
        using (SqlCommand command = connection.CreateCommand())
        {
            // The command must be explicitly enlisted in the transaction,
            // or ExecuteScalar will throw.
            command.Transaction = tx;
            command.CommandText = "SELECT COUNT(*) FROM Books;";
            result = (int)command.ExecuteScalar();
        }
        tx.Commit();
    }
}

But they said that they already knew this stuff and that I was boring, so we moved to this:

return With.Transaction<int>(delegate(SqlCommand command)
{
    command.CommandType = System.Data.CommandType.Text;
    command.CommandText = "SELECT COUNT(*) FROM Books;";
    return (int)command.ExecuteScalar();
});

Where the implementation is:

 

public delegate void Proc(SqlCommand command);

public static class With
{
    public static void Transaction(Proc exec)
    {
        using (SqlConnection connection = new SqlConnection(Settings.Default.Database))
        {
            connection.Open();
            SqlTransaction tx = connection.BeginTransaction();
            try
            {
                using (SqlCommand command = connection.CreateCommand())
                {
                    command.Transaction = tx;
                    exec(command);
                }
                tx.Commit();
            }
            catch
            {
                // On any failure, roll back and let the exception propagate.
                tx.Rollback();
                throw;
            }
            finally
            {
                tx.Dispose();
            }
        }
    }
}
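The usage shown earlier calls a generic With.Transaction<int> overload that isn't in the snippet above. Here is a minimal sketch of what it could look like (the Func<T> delegate is my naming assumption, not from the original code), reusing the void overload:

    // A sketch of the generic overload the earlier snippet assumes; it
    // would sit inside the same With class. Func<T> is an assumed name.
    public delegate T Func<T>(SqlCommand command);

    public static T Transaction<T>(Func<T> exec)
    {
        T result = default(T);
        Transaction(delegate(SqlCommand command)
        {
            result = exec(command); // capture the value via the closure
        });
        return result;
    }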

They didn't think it was boring any longer :-)

That made for some interesting diagrams, just to show the flow of the code. The piece of code above covers a lot of topics; the next step is to introduce Unit of Work (a sketch of the idea follows).
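For reference, a minimal sketch of the shape a Unit of Work might take (the interface and names here are illustrative, not the actual lesson material):

    using System;

    // Illustrative only: a Unit of Work tracks changes and commits them
    // together, instead of issuing a command per operation.
    public interface IUnitOfWork : IDisposable
    {
        void RegisterNew(object entity);      // schedule an INSERT
        void RegisterDirty(object entity);    // schedule an UPDATE
        void RegisterDeleted(object entity);  // schedule a DELETE
        void Commit();                        // flush all the work in one transaction
    }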

They are beginners, so they need to understand the basics well before they can do anything with frameworks, but I just find it so frustrating to work on the naked CLR. It is like a construction worker who arrives at a building site with only an anvil and a hammer, and then needs to build all the tools before he can start working.
