time to read 2 min | 245 words

Close to a year ago, I posted about RavenMQ for the first time, discussing the ideas behind it and what we intended to do with it. We even set up a private beta group that started testing it, but we never had enough resources to make this more than an interesting project.

What I wanted was to make this a product, but that requires a lot more effort. Recently I started looking at SignalR as a way to cut down the amount of work that would be required to build RavenMQ. And the more that I looked at SignalR, the more uncomfortable I felt. SignalR is a pretty good fit for the sort of things that I was thinking about for RavenMQ.

More than that, the longer I dug into it, the more I liked it. The problem is that I feel that with SignalR in existence, the main reason for RavenMQ is no longer there. Oh sure, we could build it, we could even base it on SignalR for wire compatibility, but I don’t feel that we can generate enough value to create a viable product. At least, not one that I would feel comfortable charging for.

Given that, I am going to discontinue development on RavenMQ. If you were part of the RavenMQ beta group, you are strongly encouraged to take a look at SignalR and check whether you can use it instead.

time to read 6 min | 1099 words

After reading this post from Scott Hanselman, I decided that I really want to take a look at SignalR. This is part two of my review.

I cloned the repository and started the build, which ran cleanly, so the next step was to take a look at the code. I didn’t notice any tests in the build, and I am really interested in knowing how SignalR is tested. Testing websites is notoriously hard, if only because you need to spin up IIS/WebDev to do so.

[image: the build output, with no tests in sight]

No tests… as I said, I can understand why, but it is worrying, because it means that if I wanted to use SignalR, I would have to come up with my own testing strategy; I had hoped that there would be something out of the box.

With RavenDB, we paid special attention to making it as easy as possible to run in a unit test context:

using(var documentStore = new EmbeddableDocumentStore { RunInMemory = true }.Initialize())
{
    // run tests here
}

For the sake of doing something different, I decided to start by reading the JS scripts. It would be interesting to look at things from that angle.

The first interesting thing that I ran into was this:

[image: code snippet from jquery.signalR.js]

That is some funky syntax. I actually had to go to the docs to figure it out, and I got sidetracked by the comma operator, but it seems to be a typical three-variable initialization, although I’ll admit that the initializer just showing up there screwed me up for a while. The rest of the code in jquery.signalR.js is pretty straightforward, using web sockets or long polling. Let us move on to hubs.js.

The next step for me was to debug through this, so I got the SignalR-Chat sample app and tried it. It worked perfectly; I just couldn’t figure out why it was working.

My main frustration was actually with this piece of code:

[image: client-side code from the Chat sample]

SignalR doesn’t have any chat property there, and I couldn’t see anything that would define it on the fly. I was expecting this to fail, but it worked! That was quite annoying, so I set out to figure out what was going on.

Looking at the network, it was making a call to http://chatapp.apphb.com/signalr/hubs, and that has this:

[image: the script returned from /signalr/hubs]

Where did this come from?! And how come this thing got a response?

That really annoyed me; I could see no wiring whatsoever for /signalr/hubs in the Chat application. I checked the web.config, checked for .ashx files, checked the global.asax. Nothing.

Finally I went back to the SignalR code and figured out that it was using a PreApplicationStart hook to inject an http module, which hooks this up for you. Now I had a better understanding of what was going on, but I also needed to figure out where the chat references came from.

That is where the high level API comes in. A SignalR Hub is going to process itself and generate the appropriate proxy on the client. Thinking about this, it is really nice; it was just surprising to get there, and getting lost in web.config along the way didn’t help my state of mind.
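
To make the mechanism concrete, here is a minimal sketch of how such a hook can work, as I understand it (my own reconstruction, with made-up type names, not SignalR’s actual code):

using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

// ASP.NET runs this method before the application starts, so the module
// gets registered without a single line in web.config
[assembly: PreApplicationStartMethod(typeof(Sample.Bootstrapper), "Start")]

namespace Sample
{
    public static class Bootstrapper
    {
        public static void Start()
        {
            DynamicModuleUtility.RegisterModule(typeof(HubsModule));
        }
    }

    public class HubsModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.BeginRequest += (sender, args) =>
            {
                var app = (HttpApplication)sender;
                if (app.Request.Path.EndsWith("/signalr/hubs") == false)
                    return;

                // Serve a generated script instead of anything on disk
                app.Response.ContentType = "application/x-javascript";
                app.Response.Write("/* generated client proxies go here */");
                app.CompleteRequest();
            };
        }

        public void Dispose()
        {
        }
    }
}

Just referencing the assembly is enough to get the endpoint wired up, which explains why there was nothing to find in web.config.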

Okay, now that I understand how the client side works, let us go and actually look at what is going on over the network. I am testing this using Chrome 13.0.782.220 against the Chat sample.

The first interesting thing is:

POST http://chatapp.apphb.com/signalr/negotiate HTTP/1.1
Host: chatapp.apphb.com
Connection: keep-alive
Referer: http://chatapp.apphb.com/
Content-Length: 0
Origin: http://chatapp.apphb.com
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.220 Safari/535.1
Accept: */*
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: userid=26c2b4a1-f5d1-439c-8ad5-f598b7bd0644

The piece of interest here is the cookie: it indicates that we can identify a user for a long period, which is good.

HTTP/1.1 200 OK
Server: nginx/1.0.2
Date: Tue, 06 Sep 2011 07:15:49 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Cache-Control: private
Content-Length: 68

{"Url":"/signalr","ClientId":"53d681cf-08d9-4afe-8357-7918f64e7a60"}

Note that the userid and client id are different. Maybe a new one is generated on every negotiation? Yep, that seems to be the case. I can see why, and why you would also want to have it persisted. Not really interesting now.

I dug into the actual implementation, and it is using long polling. In order to do that, SignalR uses a delayed messages system that I find extremely interesting, mostly because it is very similar to some of the core concepts of RavenMQ. At any rate, the code seems pretty solid, and I have gone through most of it.
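
For reference, the basic long polling loop on the client side looks roughly like this (a sketch with made-up names, not SignalR’s actual code):

while (true)
{
    using (var client = new WebClient())
    {
        // The server holds this request open until there is a message newer
        // than lastMessageId (or a timeout expires), then responds
        var payload = client.DownloadString(url + "?messageId=" + lastMessageId);

        // Process the messages and remember how far we got, so the next
        // poll picks up exactly where this one left off
        lastMessageId = ProcessMessages(payload);
    }
}

The delayed messages system is, as far as I can tell, the server side of that contract: messages are kept around for a short window so that a client that reconnects between polls doesn’t lose anything.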

It is still somewhat in a state of flux, with things like API naming and conventions needing to settle down, but so far, I think that I like what I am seeing.

One thing that bugs me about it is that I think there are actually two different things happening at the same time. There is SignalR, which is all about persistent connections, and then there are the Hubs, which are a much higher level API. I would like to see SignalR.Hubs as a separate assembly, if only to make sure that the core concepts can be used on their own, without the Hubs.

time to read 2 min | 302 words

After reading this post from Scott Hanselman, I decided that I really want to take a look at SignalR.

There aren’t any docs beyond a few code snippets, so I created a new project, downloaded the NuGet package, and started exploring the API in the Object Browser. My intent was to write a small sample application to see how it works. But I ran into a roadblock very rapidly:

[image: a method in Object Browser returning dynamic]

Huh?!

Why on earth would it return dynamic for the list of clients? That seems like a highly suspicious design decision, and then I looked at this:

[image: another dynamic signature in the API]

Hm… combine that with code snippets like this (from Scott’s post):

[image: code snippet from Scott’s post]

If it were someone else, I would put that receive call down to someone who doesn’t know or care about the design guidelines and just didn’t name the method right. But I assume that Scott does, so it is probably something else.

What seems to be happening here is that the Hub is indeed a very high level API; it allows you to directly invoke the JS API on the client. This is still unconfirmed, but it seems reasonable from the code I have seen so far. I intend to go deeper into the SignalR codebase and look for further details, so expect a few posts on the topic.
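
If that is right, a Hub would look something like this on the server (my guess, based on Scott’s snippets; not verified against the source yet):

public class Chat : Hub
{
    public void Send(string message)
    {
        // Clients is dynamic; there is no addMessage method on the server at all.
        // The call is captured and dispatched to the chat.addMessage callback
        // that each connected client registered in JS.
        Clients.addMessage(message);
    }
}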

time to read 9 min | 1739 words

Wow! The last time I had a post that roused that sort of reaction, I was talking about politics and religion. Let us see if I can clarify my thinking and also answer many of the people who commented on the previous post.

One of the core values that I look for when hiring new people is their passion for the profession. Put simply, I think about what I do as a hobby that I actually get paid for, and I am looking for people who have the same mentality as I do. There are many ways to actually check for that. We recently made an offer to a candidate simply because her eyes lit up when she spoke about how she built games to teach kids Math.

There were a lot of fairly common issues with my approach.

Indignation – how dare you expect me to put my own personal time into something that looks like work. That also came with a lot of explanation about family, kids and references to me being a slave driver bastard.

Put simply, if you can’t be bothered to improve your own skills, I don’t want you.

Hibernating Rhinos absolutely believes that you need to invest in your developers, and I strongly believe that we are doing that. It starts with basic policies like: if you want a tech book, we’ll buy it for you. Sending developers to courses and conferences, sitting down with people and making sure that they are thinking about their career properly. And a whole lot of other things besides.

Personally, my goal is to keep everyone who works for us right now for at least 5 years, and hopefully longer. And I think that the best way of doing that is to ensure that developers are appreciated, have a place to grow professionally, and get to deal with fun, complex and interesting problems for significant parts of their time at work. This does not remove any obligations on your part to maintain your own skills.

In the same sense that I would expect a football player to be playing on a regular basis even in the off season, in the same sense that I wouldn’t visit a doctor who doesn’t spend time getting updated on what is changing in his field, in the same sense that I wouldn’t trust a book critic who doesn’t read for fun – I don’t believe that you can abdicate your own responsibility for keeping yourself aware of what is actually going on out there.

And I am sorry, I don’t care if you read a blog or two or ten. If you want to actually learn new stuff in development, you have to sit down and write some code. Anything that isn’t code isn’t really meaningful in our profession. And it is far too easy to talk the talk without being able to walk the walk. My company has absolutely no intention of doing anything with Node.js in the future, but I still wrote some code in Node.js, just to see what it feels like to actually do that. I still spend time writing code that is never going to reach production or be in any way useful, just to see if I can do something.

If you are a developer, your output is code, and that is what prospective employers will look for. From my perspective, it is like hiring a photographer without looking at any of their pictures. Like getting a cook without tasting anything that he made.

And yes, it is your professional responsibility to make sure that you are hirable. That means that you keep your skills up to date and that you have something to show to someone that will give them some idea about what you are doing.

Time – I can’t find any.

There are 168 hours in a week; if you can’t put 4 – 6 hours a week into honing your own skills, trying things, just exploring… well, that probably indicates something about your priorities. I would like to hire people who think about what they do as a hobby. I usually work 8 – 10 hour days, 5 – 6 days a week. I am married, we’ve got two dogs, and I usually read / watch TV for at least 1 – 3 hours a day.

I have been at the Work Work Work Work All The Time Work Work Work And Some More Work parade. I got the T Shirt, I got the scary Burn Out Episode. I fully understand that working all the time is a Bad Idea. It is a Bad Idea for you, and it is a Bad Idea for the company. This isn’t what I am talking about here.

Think about the children – I have kids, I can’t spend any time out of work doing this.

That one showed up a lot, actually. I am thinking about the children. I think it is absolutely irresponsible for someone with kids not to make damn sure that he is hirable. I am not talking about spending 8 hours at the office, 8 hours doing pet projects and 1.5 minutes with your children (while you have some code compiling). And updating your skills and maintaining a portfolio of projects is certainly part of that.

I read professionally, but I don’t code – this is a variation on all of the other excuses, basically. Here is a direct quote: “I often find that well written blog entry/article will provide more education that can be picked up in a few minutes reading than several hours coding. And I can do that in my lunch break.”

That is nice, I also like to read a lot of Science Fiction, but I am a terrible writer. If you don’t actually practice, you aren’t any good. Sure, reading will teach you the high level concepts, but it doesn’t teach you how to apply them. You can read about WCF all day long, but it doesn’t teach you how to handle binding errors. Actually doing things will teach you that. You need actual practice to become good at something. In theory, there is no difference between reality and theory, and all of that.

I legally can’t – You signed a contract that said that you can’t do any pet projects, or that all of your work is owned by the company.

I sure do hope that you are well compensated for that, because it is going to make it harder for you to get hired.

You have a life – therefore you can’t spend time on pet projects.

So do I, I managed. If you can’t, I probably don’t want you.

Wrong thing to do – I shouldn’t want someone who is on Stack Overflow all the time, or who will spend work time on pet projects.

This is usually from someone who thinks that the only thing that I care about is lines of code committed. If someone is on Stack Overflow a lot, or reading a lot of blogs, or writing a lot of blogs, that is awesome, as long as they manage to complete their tasks in a reasonable amount of time. I routinely write blog posts during work. It helps me think, it clarifies my thinking, and it usually gets me a lot of feedback on my ideas. That is a net benefit for me and for the company.

Some people hit a problem and may spin on that for hours, and VS will be the active window for all of that time. That isn’t a good thing!  Others will go and answer totally unrelated questions on Stack Overflow while they are thinking about the problem, then come back to VS and resolve the problem in 2 minutes. As long as they manage to do the work, I don’t really care. In fact, having them on Stack Overflow probably means that questions about our products will be answered faster.

As for working on personal projects during work: the only thing that you need to do is somehow tie it to actual work. For example, that pet project may be converted into a sample application for our products. Or it can be a library that we will use, or any of a number of other options that keep things interesting.

You should note as well that I am speaking here about our requirements from a candidate, not about what I consider to be our responsibilities toward our employees; I’ll talk about those in another post in more detail.

Then there was this guy, who actively offended me.

The author is a selfish ego maniac who only cares about himself. As an employer, you can choose to consume a resource (employee), get all you can out of it, then discard it. Doing so adds to the evil and bad in the world.

This is spoken like someone who never actually had to recruit someone, or actually pay to recruit someone. It costs a freaking boatload of money and takes a freaking huge amount of time to actually find someone that you want to hire. Treating employees as disposable resources is about as stupid as you can get, because we aren’t talking about getting someone that can flip burgers at minimum wage here.

We are talking about a 3 – 6 month training period just to get to the point where you can get good results out of a new guy. We are talking about 1 – 3 months of actually looking for the right person before that. I consider employees to be a valuable resource, something that I actively need to encourage, protect and grow. Absolutely the last thing that I want is to have a chain of disposable same-as-the-last-one developers in my company.

I have kicked people out of the office with instructions to go home and rest, because I would like to have them available tomorrow and the next day and month and year. Doing anything else is short sighted, morally repugnant and stupid.

time to read 3 min | 439 words

I recently got a question about the cost of try/catch, and whether it was prohibitive enough to make you want to avoid using it.

That caused some head scratching on my part, until I got the following reply:

But, I’m still confused about the try/catch block not generating an overhead on the server.

Are you sure about it?

I learned that the try block pre-executes the code, and that’s why it causes a processing overhead.

Take a look here: http://msdn.microsoft.com/en-us/library/ms973839.aspx#dotnetperftips_topic2

Maybe there is something that I don’t know? It is always possible, so I went and checked and found this piece:

Finding and designing away exception-heavy code can result in a decent perf win. Bear in mind that this has nothing to do with try/catch blocks: you only incur the cost when the actual exception is thrown. You can use as many try/catch blocks as you want. Using exceptions gratuitously is where you lose performance. For example, you should stay away from things like using exceptions for control flow.

Note that the emphasis is in the original. There is no cost to try/catch; the only cost is when an exception is actually thrown, and that is regardless of whether there is a try/catch around it or not.

Here is the proof:

var startNew = Stopwatch.StartNew();
var mightBePi = Enumerable.Range(0, 100000000).Aggregate(0d, (tot, next) => tot + Math.Pow(-1d, next)/(2*next + 1)*4);
Console.WriteLine(startNew.ElapsedMilliseconds);

Which results in: 6015 ms of execution.

Wrapping the code in a try/catch resulted in:

var startNew = Stopwatch.StartNew();
double mightBePi = Double.NaN;
try
{
    mightBePi = Enumerable.Range(0, 100000000).Aggregate(0d, (tot, next) => tot + Math.Pow(-1d, next)/(2*next + 1)*4);
}
catch (Exception e)
{
    Console.WriteLine(e);
}
Console.WriteLine(startNew.ElapsedMilliseconds);

And that ran in 5999 ms.

Please note that the perf difference is pretty much meaningless (only a 0.26% difference) and is well within the range of deviations for test runs.
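
Where you do pay is when exceptions are actually thrown, especially when they are used for control flow. Here is a quick sketch of my own (numbers will obviously vary) that compares the two approaches:

var sw = Stopwatch.StartNew();
var failures = 0;
for (var i = 0; i < 100000; i++)
{
    try
    {
        int.Parse("not a number"); // throws FormatException on every iteration
    }
    catch (FormatException)
    {
        failures++;
    }
}
Console.WriteLine("With exceptions: " + sw.ElapsedMilliseconds + " ms");

sw.Restart();
failures = 0;
for (var i = 0; i < 100000; i++)
{
    int value;
    if (int.TryParse("not a number", out value) == false) // same check, no exception
        failures++;
}
Console.WriteLine("With TryParse: " + sw.ElapsedMilliseconds + " ms");

The exceptions version is slower by orders of magnitude, which is exactly the point of the quote above: don’t use exceptions for control flow, but don’t worry about the try/catch blocks themselves.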

time to read 3 min | 556 words

I am busy hiring people now, and it got me thinking a lot about the sort of things that I want from my developers. In particular, I was inundated with CVs, and I used the following standard reply to help me narrow things down.

Thank you for your CV. Do you have any projects that you wrote that I can review? Have you done any OSS work that I can look at?

The replies are fairly interesting. In particular, I had a somewhat unpleasant exchange with one respondent. In reply to my question, I got:

My employer doesn’t allow any sharing of code. I can find some old projects that I did a while ago and send them to you, I guess.

Obviously, I don’t want to read any code that belongs to someone without that someone’s explicit authorization. Someone sending me their current company’s code is about as bad manners as someone setting up an invite for a job interview on their work calendar (the latter actually happened today).

After getting the projects and looking them over a bit, I replied that I don’t think this would be the appropriate position for this respondent. I got the following reply:

Wait a minute…

Can I know why? I took the trouble to send you stuff that I have done, maybe not the highest quality and caliber, but what I could send right now. You didn’t even interview me.

How exactly did you reach the unequivocal conclusion that I am not a good fit for this job?

My response to that was:

Put simply, we are looking for a .NET developer and one of the most important things that we look for is passion. In general, we have found that people that care and are interested in what they are doing tend to do other stuff rather than just their work assignments.

In other words, they have their own pet projects, it can be a personal site, a project for a friend, or just some code written to get familiar with some technology.

When you tell me that your only projects outside of work are 5+ years old, that is a bad indication for us.

There is more, but it gets into the details and isn’t really relevant for this discussion.

Let me try to preempt the nitpickers. Not having pet projects doesn’t mean that you are a bad developer, nor vice versa.

But I don’t really care about experience, and assuming that you already know the syntax and have some basic knowledge of the framework, we can use you. But the one thing that I learned you can’t give people is passion for the field. And that is critical. Not only because it means that they are either already good or going to be good (it is pretty hard to be passionate about something that you suck at), but because it means that they care.

And if they care, it means two very important things:

  • The culture of the company is about caring for doing the appropriate thing.
  • The end result is going to be as awesome as we can get.

Now, if you’ll excuse me, I am going to check out SignalR, because I don’t feel like doing any more RavenDB work today.

time to read 18 min | 3538 words

If you thought that map/reduce was complex, wait until we introduce the newest feature in RavenDB:

Multi Maps / Reduce Indexes

Okay, to be frank, they aren’t complex at all, they are actually quite simple, when you sit down to think about them. Again, I have to credit Frank Schwieterman, who came up with the idea.

Wait! Let us backtrack a bit and try to explain what the actual problem is that we are trying to solve. The problem with Map/Reduce is that you can only gather information from a single set of documents. Let us look at the following documents as an example:

{// users/ayende 
   "Name": "Ayende Rahien" 
} 

{ // posts/1234 
  "Title": "Why RavenDB?", 
  "Author": "users/ayende" 
} 
{ // posts/1235 
  "Title": "It is awesome!", 
  "Author": "users/ayende" 
} 

We want to get a list of users with the count of posts that they made. That is trivially easy, as shown in the following map/reduce index:

from post in docs.Posts
select new { post.Author, Count = 1 }

from result in results
group result by result.Author into g
select new
{
   Author = g.Key,
   Count = g.Sum(x=>x.Count)
}

The output of this index would be something like this:

{ Author: "users/ayende", Count: 2 }

And we can load it efficiently using Includes:

session.Query<AuthorPostStats>("Posts/ByUser/Count")
     .Include(x=>x.Author)
     .ToList();

This will load all the users statistics, and also load all of the associated users, in a single query to the database. So far, fairly simple and routine.

The problem begins when we want to be able to query this index using the user’s name. As you can deduce from the documents shown previously, the user name isn’t available on the post document, which means that we can’t index it. That, in turn, means that we can’t search it.

We are left with several bad options:

  • De-normalize the User’s Name property to the Post document, solely for indexing purposes.
  • Don’t implement this feature.
  • Write the following scary query:
from doc in docs.WhereEntityIs("Users","Posts") 
let user = doc.IfEntityIs("Users") 
let post = doc.IfEntityIs("Posts") 
select new 
{ 
  Count = user == null ? 1 : 0, 
  Author = user.Name, 
  UserId = user.Id ?? post.Author 
} 

from result in results 
group result by result.UserId into g 
select new 
{ 
   Count = g.Sum(x=>x.Count), 
   Author = g.FirstNotNull(x=>x.Author), 
   UserId = g.Key 
} 

This is actually pretty straightforward, when you sit down and think about it. But there is a whole lot of ceremony involved, and it is actually more than a bit hard to figure out what is going on in more complex scenarios.

This is where Frank’s suggestion came in:

…if I were try to support linq-based indexes that can map multiple types, it might look like:

public class OverallOpinion : AbstractIndexCreationTask<?>
{
   public OverallOpinion()
   {
       Map<Foo>(docs => from doc in docs select new { Id = doc.Id, LastUpdated = doc.LastUpdated }
       Map<OpinionOfFoo>(docs => from doc in docs select new { Id = Doc.DocId, Rating = doc.Rating, Count = 1}
       Reduce = docs => from doc in docs
                        group doc by doc.Id into g
                        select new {
                           Id = g.Key,
                           LastUpdated = g.Values.Where(f => f.LastUpdated != null).FirstOrDefault(),
                           Rating = g.Values.Rating.Sum(),
                           Count = g.Values.Count.Sum()
                        }
   }
}

It seems like some clever code could combine the different map expressions into one.

This is part of a longer discussion, but basically, it got me thinking about how we can implement multi maps, and I came up with the following:

// Map from posts
from post in docs.Posts
select new { UserId = post.Author, Author = (string)null, Count = 1 }

// Map from users
from user in docs.Users
select new { UserId = user.Id, Author = user.Name, Count = 0 }

// Reduce takes results from both maps
from result in results
group result by result.UserId into g
select new
{
   Count = g.Sum(x=>x.Count),
   Author = g.Select(x=>x.Author).Where(x=>x!=null).First(),
   UserId = g.Key
}

The only thing to understand now is that we have multiple map functions, getting data from multiple sources. We can then take those sources and reduce them together. The only requirement that we have is that the output of all of the map functions be identical (and obviously, match the output of the reduce function). Then we can just treat this information as a normal map/reduce index, which means that all of the usual RavenDB features kick in. Let us see what this actually means, using code. We have the following classes:

public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class Post
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string AuthorId { get; set; }
}

public class UserPostingStats
{
    public string UserName { get; set; }
    public string UserId { get; set; }
    public int PostCount { get; set; }
}

And we have the following index:

public class PostCountsByUser_WithName : AbstractMultiMapIndexCreationTask<UserPostingStats>
{
    public PostCountsByUser_WithName()
    {
        AddMap<User>(users => from user in users
                              select new
                              {
                                  UserId = user.Id,
                                  UserName = user.Name,
                                  PostCount = 0
                              });

        AddMap<Post>(posts => from post in posts
                              select new
                              {
                                  UserId = post.AuthorId,
                                  UserName = (string)null,
                                  PostCount = 1
                              });

        Reduce = results => from result in results
                            group result by result.UserId
                            into g
                            select new
                            {
                                UserId = g.Key,
                                UserName = g.Select(x => x.UserName).Where(x => x != null).First(),
                                PostCount = g.Sum(x => x.PostCount)
                            };

        Index(x=>x.UserName, FieldIndexing.Analyzed);
    }
}

As you can see, we are getting the values from two different collections. We need to make sure that they actually use the same output, which is what causes the null casting for posts and the filtering that we need to do in the reduce.

But that is it! It is ridiculously easy compared to the previous alternative. Moreover, it follows quite naturally from both the exposed API and the internal implementation inside RavenDB. It took me roughly half a day to make it work, and some of that was dedicated to lunch :-). In truth, most of that time was actually just handling the error conditions nicely, but… anyway, you get the point.

Even more interesting is the fact that for all intents and purposes, what we have done here is a join between two different collections. We were never able to really resolve the problems associated with joins before; update notifications were always too complex to figure out, but going the route of multi map makes things easy.

Just for fun, you might have noticed that we marked the UserName property as analyzed, which means that we can issue full text queries against it. Let us assume that we want to provide users with the following UI:

[image: UI for searching users by name]

It is now just a matter of writing the following code:

using (var session = store.OpenSession())
{
    var ups= session.Query<UserPostingStats, PostCountsByUser_WithName>()
        .Where(x => x.UserName.StartsWith("rah"))
        .ToList();

    Assert.Equal(1, ups.Count);

    Assert.Equal(5, ups[0].PostCount);
    Assert.Equal("Ayende Rahien", ups[0].UserName);
}

So you can do a cheap full text search over joins quite easily. For that matter, joins are cheap now, because they are computed in the background and queried directly from the pre-computed index.

Okay, enough blogging for now, going to implement all the proper error handling and then push an awesome new build.

Oh, and a final thought: Multi Map was shown in this post only in the context of Multi Map/Reduce, but we also support using multi map on its own. This is quite useful if you want to enable search over a large number of entities that reside in different collections. I’ll just drop a bit of code here to show how it works:

public class CatsAndDogs : AbstractMultiMapIndexCreationTask
{
    public CatsAndDogs()
    {
        AddMap<Cat>(cats => from cat in cats
                         select new {cat.Name});

        AddMap<Dog>(dogs => from dog in dogs
                         select new { dog.Name });
    }
}

[Fact]
public void CanQueryUsingMultiMap()
{
    using (var store = NewDocumentStore())
    {
        new CatsAndDogs().Execute(store);

        using(var documentSession = store.OpenSession())
        {
            documentSession.Store(new Cat{Name = "Tom"});
            documentSession.Store(new Dog{Name = "Oscar"});
            documentSession.SaveChanges();
        }

        using(var session = store.OpenSession())
        {
            var haveNames = session.Query<IHaveName, CatsAndDogs>()
                .Customize(x => x.WaitForNonStaleResults(TimeSpan.FromMinutes(5)))
                .OrderBy(x => x.Name)
                .ToList();

            Assert.Equal(2, haveNames.Count);
            Assert.IsType<Dog>(haveNames[0]);
            Assert.IsType<Cat>(haveNames[1]);
        }
    }
}

All together, a great day’s work.

time to read 1 min | 93 words

We now have the first webinar posted on YouTube. All details here:

http://blog.hibernatingrhinos.com/8193/ravendb-webinar-1-now-available-on-youtube

Unfortunately YouTube doesn't allow us to upload videos longer than 15 minutes, so we have to split all webinars and talks before uploading them. That is annoying for you as our users, and takes us a lot of time in pre-processing. You can help us both by participating - commenting and rating videos. The more positive activity we have, the faster we can get that limitation removed. We would appreciate your help with this.

See you in future webinars.

time to read 2 min | 348 words

All of a sudden, my code started getting a lot of TaskCanceledExceptions. It took me a while to figure out what was going on. We can imagine that the code looked like this:

var unwrap = Task.Factory.StartNew(() =>
{
    if (DateTime.Now.Month % 2 != 0)
        return null; // no work to do, so no inner task

    return Task.Factory.StartNew(() => Console.WriteLine("Test"));
}).Unwrap();

unwrap.Wait(); // throws TaskCanceledException, because Unwrap got a null task

The key here is that when Unwrap gets a null task, it will throw a TaskCanceledException, which was utterly confusing to me. It makes sense, because if the task is null there isn’t anything that the Unwrap method can do about it. Although I do wish it would throw something like ArgumentNullException with a better error message.

The correct way to write this code is to have:

var unwrap = Task.Factory.StartNew(() =>
{
    if (DateTime.Now.Month % 2 != 0)
    {
        var taskCompletionSource = new TaskCompletionSource<object>();
        taskCompletionSource.SetResult(null);
        return taskCompletionSource.Task;
    }

    return Task.Factory.StartNew(() => Console.WriteLine("Test"));
}).Unwrap();

unwrap.Wait();

Although I do wish that there were an easier way to create a completed task.
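
A small helper can hide the ceremony (a sketch; later framework versions actually added Task.FromResult, which does exactly this):

public static class CompletedTask
{
    public static Task<T> FromResult<T>(T result)
    {
        var taskCompletionSource = new TaskCompletionSource<T>();
        taskCompletionSource.SetResult(result); // the task starts out already completed
        return taskCompletionSource.Task;
    }
}

With that, the branch above becomes a one liner: return CompletedTask.FromResult<object>(null);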

time to read 1 min | 124 words

Wow! The RavenDB Webinar has been a great success, and a wonderful trial run. I know that a lot of people haven’t been able to get in, and I apologize for that; we absolutely did not expect to have so many people show up.

The session was recorded, and we will upload it soon so everyone can watch it.

As part of experimenting with the format, and since we want to give everyone another chance, we will do another webinar tomorrow; you can register here: https://www2.gotomeeting.com/register/398501658

Unlike the one we just had, we will have this one as a Q & A where we open the phone lines and start chatting with you about RavenDB, demoing things on the fly.

Should be fun…
