Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

Posts: 7,499
Comments: 51,069
time to read 2 min | 373 words

You might have noticed that over the last day, we had duplicated posts. When I first saw that, I was shocked; there was absolutely no reason for this to have happened, and it seemed that somehow the index or the database had been corrupted.

We started investigating, and we were able to reproduce this locally using the database export. That was a great relief, because it meant that at least we could debug it. Once we did, we found out what the problem was. And basically, there was no problem. It was a configuration error (actually, two of them) that caused it.

We accidentally enabled versioning for the blog’s database. You can read more about RavenDB’s versioning bundle if you want, but basically, it keeps copies of modified documents so you have an audit trail. So far, so good, and no one noticed anything. But then we removed versioning from the blog’s database, since after all, we didn’t need it there in the first place.

The problem is that in the meantime, the versioning bundle had created all of those additional documents. While it was in operation, it would hide those documents (since they were there for historical purposes only), but once we removed it, all those historical documents showed up. The reason for the duplicate posts was that we really did have duplicate documents; they were just historical revisions of the same post.

Why did we have so many? Whenever someone comments, we update the CommentCount field on the post, so most posts had as many historical copies as they had comments.
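To give a sense of what was going on under the covers, the versioning bundle is driven by configuration documents stored in the database itself. A rough sketch of what such a setup looks like (the document name and property names here are from memory and may not match every RavenDB version exactly):

```json
// Stored as a document named Raven/Versioning/DefaultConfiguration
{
    "Exclude": false,
    "MaxRevisions": 5
}
```

With something like that in place, every update to a post also writes out a revision document (with an id along the lines of posts/123/revisions/1), which is exactly the kind of historical copy that surfaced as a “duplicate” once the bundle was removed.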

Fixed now, and I apologize for the trouble.

As a general point of interest, bundles allow great flexibility in the database, but they are designed to live with the database. Removing and adding bundles to a database on the fly is not something that is going to just work, as we have just re-learned.

I apologize for the problem; it was a simple misconfiguration error that caused all the historical records to show up when they didn’t need to. Nothing to see here, move along :-)

time to read 1 min | 156 words

One of the fun parts about RavenDB is that it will optimize itself for you, depending on how you use your data.

With this blog, I decided when going live with RavenDB that I would not follow the best practice of creating static indexes for everything, but would let it figure things out on its own.

Today, I got curious and decided to check up on that:


What you see is pretty interesting.

  • The first three indexes were automatically created by RavenDB in response to queries made on the database.
  • The Raven/* indexes are created by RavenDB itself, for the Raven Studio.
  • The MapReduce indexes are for statistics on the blog, and are the only two that were actually created by the application explicitly.
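The way those first three indexes came about can be sketched with a plain dynamic query from the client API (a minimal sketch; `store` and the `Post` class are assumptions here, not code from this blog):

```csharp
using (var session = store.OpenSession())
{
    // No index is specified, so RavenDB treats this as a dynamic query.
    // The query optimizer creates (and over time promotes) a matching
    // auto index, with a name along the lines of Auto/Posts/ByTags.
    var posts = session.Query<Post>()
        .Where(p => p.Tags.Any(t => t == "RavenDB"))
        .ToList();
}
```

Run enough queries with a given shape, and the server ends up with exactly the kind of automatically created indexes shown in the first bullet, without the application ever defining them.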
time to read 4 min | 630 words

The following piece of code is responsible for the CSS generation for this blog:

public class CssController : Controller
{
     public ActionResult Merge(string[] files)
     {
          var builder = new StringBuilder();
          foreach (var file in files)
          {
              var pathAllowed = Server.MapPath(Url.Content("~/Content/css"));
              var normalizeFile = Server.MapPath(Url.Content(Path.Combine("~/Content/css", file)));
              // Security check: refuse anything that resolves outside the CSS directory
              if (normalizeFile.StartsWith(pathAllowed) == false)
                  return HttpNotFound("Path not allowed");
              if (System.IO.File.Exists(normalizeFile))
                  builder.AppendLine(System.IO.File.ReadAllText(normalizeFile));
          }

          // Let the output cache vary the cached response by the files parameter
          Response.Cache.VaryByParams["files"] = true;

          var css = dotless.Core.Less.Parse(builder.ToString(), new DotlessConfiguration());

          return Content(css, "text/css");
     }
}
There are a lot of things going on in a very short amount of code. The CSS for this blog is defined as:

<link rel="stylesheet" type="text/css" href="/blog/css?files=ResetCss.css&files=custom/ayende.settings.less.css&files=base.less.css&files=custom/ayende.less.css">

This means that while we have multiple CSS files that make maintaining things easier, we only make a single request to the server to get all of them.

Next, and importantly, we have a security check that ensures that only files from the appropriate path can be served. If a requested file isn’t in the CSS directory, it won’t be returned.

Then we have the code related to caching, which basically means that we rely on the ASP.Net output cache to do everything for us. The really nice thing is that unless the files change, we will not be executing this code again; rather, the response will be served directly from the cache, without any computation on our part.
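The caching piece is standard ASP.Net MVC output caching. One plausible way to wire it up (the attribute values here are illustrative, not this blog’s actual settings) is:

```csharp
// Cache the merged CSS for an hour, keeping a separate cache entry
// per distinct files query string (matching the VaryByParams call
// made inside the action itself).
[OutputCache(Duration = 3600, VaryByParam = "files")]
public ActionResult Merge(string[] files)
{
    // ... merge the files, parse with dotless, return the CSS ...
}
```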

All in all, I am more than happy with this code.

time to read 2 min | 297 words

One of the things that people kept bugging me about is that I am not building applications any longer. Another annoyance was my current setup for this blog.

Basically, my own usage pattern is quite different from most people’s. I tend to post to the blog in batches, usually of five or ten posts in a short amount of time. That means that future posts are pretty important to me, and that scheduling posts is something I care deeply about. I managed to hack Subtext to do my bidding in that regard, but it was never perfect, and re-scheduling posts was always a pain in the ass.

Then there is RavenDB, and my desire to run on it…

What this means, in short, is that we now have the Raccoon Blog, which is a blog application suited specifically for my own needs, running on MVC 3 and with a RavenDB backend.

By the time you read this, it will already be running our company blog, and it is scheduled to run ayende.com/blog this week.

What is Raccoon Blog?

It is a blog application running on top of a RavenDB store and tailored specifically for our needs.

  • Strong scheduling features
  • Strong re-scheduling features
  • Support for multiple authors in a single blog
  • Single blog per site (no multi blog support)
  • Recaptcha support
  • Markdown support for comments (you’ll be able to post code!)
  • Easy section support (for custom sidebar content)
  • Smart tagging support

And just for fun:

  • Fully HTML 5 compliant
time to read 1 min | 72 words

Originally posted at 5/5/2011

The last time that I updated ayende.com was in 2009, and I don’t see a lot of value in keeping it there. I am considering doing something drastic about that, and simply moving the blog to ayende.com.

Do you have anything there that you really care about?

Just to be clear, the blog will still be here, we are talking about the site available at ayende.com, not ayende.com/blog.

time to read 1 min | 114 words

As you know, I use future posting quite heavily, which is awesome, as long as I keep to the schedule. Unfortunately, when you have posts scheduled two or three weeks in advance, it is actually quite common to need to post something more immediately.

And that is just a pain. I just added smart re-scheduling to my fork of Subtext. Basically, it is very simple: if I post now, I want the post now. If I post it in the future, move everything else one day ahead. If I post with no date, put it as the last item in the queue.
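Those three rules are simple enough to sketch in a few lines (a hypothetical helper, not Subtext’s actual code; `queue` holds the publish dates of the already-scheduled posts, in order):

```csharp
// Decide the publish date for a new post, shifting the queue if needed.
public static DateTime Schedule(List<DateTime> queue, DateTime? requested, DateTime now)
{
    if (requested == null)
    {
        // No date given: append after the last scheduled post.
        return queue.Count == 0 ? now : queue[queue.Count - 1].AddDays(1);
    }

    if (requested.Value <= now)
        return now; // Posting now means now.

    // Future date: push every post at or after that date one day ahead.
    for (int i = 0; i < queue.Count; i++)
    {
        if (queue[i] >= requested.Value)
            queue[i] = queue[i].AddDays(1);
    }
    return requested.Value;
}
```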

This is the test for this feature.

time to read 3 min | 441 words

I am well aware that I am… outside the curve for bloggers. For a long while I handled that by simply publishing posts as soon as I wrote them, but that turned out to be quite a burden for some readers, and pieces that I think deserved more attention were skipped, because they simply drowned in the noise of so many blog posts.

I am much happier with the future posting concept. It makes things more predictable, both for me and for the readers. The problem happens when you push this to its logical conclusion. At the time of this writing, I have a month of scheduled posts ahead of me, and this is the third or fourth blog post that I have written in the last 24 hours.

In essence, I created a convoy for my own blog. At some point, if this trend continues, it will be a problem. But I kinda like the fact that I can relax for a month and the blog will function on auto pilot. There is also the nice benefit that by the time a post is published, I have forgotten what it said (I use the write & forget method), so I need to read the post again, which helps a lot.

But there are some disadvantages to this as well. My current system will simply schedule a post on the day after the last scheduled post. This works great if the posts are not time sensitive. But what actually happens is that there are a lot of scenarios in which I want to set the date of a post to the near future. I still try to keep to one post a day, so that means I need to shuffle the rest of the items in the queue. This is especially troubling when you consider that I usually write a series of posts that interconnect into a full story.

So I can’t just take one of them and bump it to the end; I might have to rearrange the entire timeline. And there is no support for that, so I have to go and manually update the timing for everything else.

It is pretty clear why this feature is missing; it is an outlier. But it probably means that I am going to fork Subtext and add those things. The real problem is that I would really like to avoid doing any UI work there, so I need to think about a system that would let me do that without any UI work on my part.

time to read 2 min | 263 words

I got a request in email to add something like Disqus to my blog, which would allow a richer platform for the commenting that goes on here. I think that the request and my reply are interesting enough to warrant this blog post.

My comment system is the default Subtext one, but there are several advantages to the way it works. You can read the full explanation in Joel on Software’s post about the matter, but basically, threading encourages people to go off on tangents, while a single thread of conversation makes it significantly easier to have only one conversation.

There is another reason, which is personally important to me, which is that I want to "own" the comments. Not own in terms of copyright, but own in terms of having control of the data itself. Having the comments (a hugely important part of the blog) being managed by a 3rd party which might shut down and take all the comments with it is not acceptable.

That is probably a false fear, but it is something that I take into consideration. The reasoning about the type of interaction going on in the comments is a lot more important. There is also something else to consider: if a post gets too hot (generating too many comments), I am either going to close comments on it or open a new post with a summary of what went on in the previous post’s comment thread, so there are some checks & balances that keep a comment thread from growing too large.

time to read 1 min | 82 words

Recently, all my Technorati feeds have started to give me stuff like this:


It looks like someone managed to crack the way that Technorati searches feeds, and I am getting what amounts to spammed search results. If this continues, it looks like I’ll just have to give up on it completely.

Any good alternatives?

