time to read 1 min | 162 words

Next on the schedule of problems: we had an issue, again, with everything working fine remotely and not working when running locally. This time it didn’t take as long to figure out what was going on. The root of the problem was “Account failed to log on (0xc000006d)”, and the explanation for that is here:

http://code-journey.com/2009/account-failed-to-log-on-0xc000006d-unable-to-load-website-from-local-server/

The basic idea is that you can’t authenticate against the local server using anything except the local server name.

And it still didn’t work.

The server is Windows 2008 R2, and the documentation said that I had to set a registry key at: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0

That did not work, but what did work was setting the DisableLoopbackCheck at: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
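For completeness, here is roughly what that change amounts to in code. This is a sketch using the Win32 registry API, not anything RavenDB-specific; it needs to run elevated, and note that it disables the loopback check machine-wide, which is exactly why it is the not-recommended option:

using Microsoft.Win32;

public static class LoopbackFix
{
    public static void DisableLoopbackCheck()
    {
        // The fix that worked for me:
        // HKLM\SYSTEM\CurrentControlSet\Control\Lsa, DisableLoopbackCheck = 1 (DWORD)
        using (var lsa = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Control\Lsa", writable: true))
        {
            lsa.SetValue("DisableLoopbackCheck", 1, RegistryValueKind.DWord);
        }
        // The documented (recommended) alternative is to whitelist specific
        // host names under Lsa\MSV1_0 (the BackConnectionHostNames multi-string
        // value, if memory serves), but that is the part that did not work for me.
    }
}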

No idea why, or whether there is a problem in the documentation for Windows 2008 R2, but this is how I got it working on my end. I would like it more if I had been able to use the recommended solution, rather than the one that is explicitly not recommended.

Any ideas?

time to read 2 min | 302 words

The following is quite annoying. I am trying to make an authenticated request to a web server. In this instance, we are talking about communication from one IIS application to another IIS application (the second application is RavenDB hosted inside IIS).
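The request itself is nothing fancy; it looks roughly like the following (a sketch, with a placeholder URL, not the actual code):

using System;
using System.Net;

public static class AuthenticatedRequest
{
    public static void Make()
    {
        // plain HTTP with Windows auth: pass along whatever identity
        // the current process happens to be running under
        var request = (HttpWebRequest)WebRequest.Create("http://remote-server/raven/stats"); // placeholder URL
        request.Credentials = CredentialCache.DefaultCredentials;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // when this fails, it fails inside GetResponse(),
            // during the authentication handshake
            Console.WriteLine(response.StatusCode);
        }
    }
}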

The part that drives me CRAZY is that I am getting this error when I try to make the authenticated request, but using the exact same code and credentials from another machine, I can make this work just fine.

[image: the error]

It took me a while to figure out the most important part: when running locally, I was running under my own account; when running remotely, the application ran under the IIS account. The really annoying part was that even when running on IIS locally, it still worked locally and failed remotely.

It took me even longer to figure out that the local IIS was configured to run under my own account as well.

Once that discovery was made, I was able to figure out what was wrong and implement a workaround (run the remote IIS site under a custom user name). I know that the actual problem is something relating to permissions on the certificate store, but I have no idea how, or, for that matter, which certificate.

This is plain old HTTP auth; no SSL, client certs, etc. I am assuming that this is raised because we are using Windows Auth, but I am not sure what is going on here with regards to which certificate I should grant to which user, or even how to do that.

Any ideas?

Concurrent Max

time to read 2 min | 286 words

Can you think of a better way to implement this code?

private Guid lastEtag; // note: C# does not allow a volatile Guid field, so reads outside the lock may be stale
private readonly object lastEtagLocker = new object();
internal void UpdateLastWrittenEtag(Guid? etag)
{
    if (etag == null)
        return;

    var newEtag = etag.Value.ToByteArray();

    // not the most recent etag
    if (Buffers.Compare(lastEtag.ToByteArray(), newEtag) >= 0)
    {
        return;
    }

    lock (lastEtagLocker)
    {
        // not the most recent etag
        if (Buffers.Compare(lastEtag.ToByteArray(), newEtag) >= 0)
        {
            return;
        }

        lastEtag = etag.Value;
    }
}

We have multiple threads calling this function, and we need to ensure that the lastEtag value is always the maximum value. This has the potential to be called often, so I want to make sure that I choose the best way to do this. Ideas?
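One direction that comes to mind is to skip the lock entirely and use a compare-and-swap loop. This is just a sketch, not RavenDB’s code: it assumes the same Buffers.Compare helper, and it boxes the Guid in a small immutable holder so that Interlocked.CompareExchange can swap the reference atomically:

using System;
using System.Threading;

public class EtagHolder
{
    // immutable, so the reference itself is the only thing that ever changes
    public readonly Guid Value;
    public EtagHolder(Guid value) { Value = value; }
}

public class LastEtagTracker
{
    private EtagHolder lastEtag = new EtagHolder(Guid.Empty);

    internal void UpdateLastWrittenEtag(Guid? etag)
    {
        if (etag == null)
            return;

        var newEtag = etag.Value.ToByteArray();
        while (true)
        {
            var current = lastEtag;

            // someone already recorded an equal or more recent etag
            if (Buffers.Compare(current.Value.ToByteArray(), newEtag) >= 0)
                return;

            var updated = new EtagHolder(etag.Value);
            if (Interlocked.CompareExchange(ref lastEtag, updated, current) == current)
                return; // we won the race

            // lost the race: loop and compare against the newer value
        }
    }
}

The trade-off is an allocation per successful update, but no thread ever blocks, and a stale write can never overwrite a newer etag, because the comparison is redone on every retry.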

time to read 4 min | 630 words

The following piece of code is responsible for the CSS generation for this blog:

public class CssController : Controller
{
    public ActionResult Merge(string[] files)
    {
        var builder = new StringBuilder();
        foreach (var file in files)
        {
            var pathAllowed = Server.MapPath(Url.Content("~/Content/css"));
            var normalizeFile = Server.MapPath(Url.Content(Path.Combine("~/Content/css", file)));
            if (normalizeFile.StartsWith(pathAllowed) == false)
            {
                return HttpNotFound("Path not allowed");
            }
            if (System.IO.File.Exists(normalizeFile))
            {
                Response.AddFileDependency(normalizeFile);
                builder.AppendLine(System.IO.File.ReadAllText(normalizeFile));
            }
        }

        Response.Cache.VaryByParams["files"] = true;
        Response.Cache.SetLastModifiedFromFileDependencies();
        Response.Cache.SetETagFromFileDependencies();
        Response.Cache.SetCacheability(HttpCacheability.Public);

        var css = dotless.Core.Less.Parse(builder.ToString(), new DotlessConfiguration());

        return Content(css, "text/css");
    }
}

There are a lot of things going on in a very short amount of code. The CSS for this blog is defined as:

<link rel="stylesheet" type="text/css" href="/blog/css?files=ResetCss.css&files=custom/ayende.settings.less.css&files=base.less.css&files=custom/ayende.less.css">

This means that while we have multiple CSS files that make maintaining things easier, we only make a single request to the server to get all of them.

Next, and importantly, we have a security check that ensures that only files from the appropriate path can be served. If a file isn’t in the CSS directory, it won’t be returned.

Then we have a lot of code related to caching, which basically means that we rely on the ASP.Net output cache to do everything for us. The really nice thing is that unless the files change, we will not be executing this code again; rather, it will be served directly from the cache, without any computation on our part.

All in all, I am more than happy with this code.

time to read 3 min | 522 words

This is a story from the time (all of a week or two ago) when we were building RaccoonBlog. The issue of managing CSS came up. We use the same code base for multiple sites, and mostly just switch around the CSS values for colors, etc.

It looks something like this:

[image: the CSS setup]

We are using the less syntax to get a more reasonable development experience with CSS. We started with the Chirpy extension, which provides build-time CSS generation from less files inside Visual Studio.

Well, I say started, because when I heard that we were using an extension I sort of freaked out a bit. I don’t like extensions, and I especially don’t like extensions that I now have to install on every machine that I use to work on RaccoonBlog. The answer to that was that it takes 2 minutes or less, and is completely managed within the VS AddIn UI, so it is really painless.

More than that, I was upset that it ruined my F5 scenario. That is, I could no longer make a change to the CSS file, hit F5 in the browser, and see the changes. The answer to that was that I could still hit F5, I would just have to do it inside of Visual Studio, which is where I usually edit CSS anyway.

I didn’t like the answers very much, and we went with a code-based solution that preserved all of the usual niceties of plain old CSS files (more on that in my next post).

The problem with tooling over code is that you need to have the tooling with you, and that is often an unacceptable burden. Consider the case of me wanting to modify the way the site looks on production. Leave aside for a bit the wisdom of making modifications on production; taking away the ability to do so is a Bad Thing, because if you ever need to do so, you will need to do so Very Much.

Tooling is there to support and help you, but if tooling impacts the way you work, it had better not have any negative implications. For example, if one member of the team is using the NHibernate Profiler, no one else on the team is impacted; they can go on and develop the application without any issue. But when using a tool like Chirpy, if you don’t have Chirpy installed, you can’t really work with the application’s CSS, and that isn’t something that I was willing to tolerate. Other options were raised as well: “we can do this in the build, you don’t need to do this via Chirpy”, but that just raised the same problem of breaking the Modify-CSS-Hit-F5-In-Browser cycle of work.

The code to solve this problem was short, to the point and really interesting, and it preserved all the usual CSS properties without creating dependencies on tools which might not be there when we need them.

time to read 2 min | 365 words

One of the things that Fitzchak is working on right now is the RavenDB in Practice series. During the series, he ran into an issue with RavenDB failing if the configured port is already taken. Since by default we use 8080, it wasn’t very surprising that on his machine, the 8080 port was already taken. Because he ran into this problem, he set out to document how to fix this issue if you run into it.

My approach was different; I don’t like documenting things. More to the point, documenting a problem is a last-ditch effort, something that you do only if you have no other choice. It is usually much easier to figure out a way to solve the problem. In the case of RavenDB and the busy port, we start out at port 8080, and if it is busy, we find another open port that we can use. That means that the initial experience is much easier, since you can immediately make use of the database without dealing with how to change the port it is listening on.

Nitpicker corner: this feature kicks in only if you have explicitly stated that you wanted it in the configuration. This is part of the default config for RavenDB, and is meant for the initial process only. It is not recommended for production use.
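To give a sense of how little code this takes, here is a minimal sketch of the idea (this is not RavenDB’s actual implementation; the class and method names are made up for illustration):

using System;
using System.Net;
using System.Net.Sockets;

public static class PortFinder
{
    public static int FindAvailablePort(int preferredPort)
    {
        // probe upwards from the preferred port until we find one we can bind
        for (var port = preferredPort; port < preferredPort + 100; port++)
        {
            var listener = new TcpListener(IPAddress.Loopback, port);
            try
            {
                listener.Start(); // throws if the port is already taken
                return port;
            }
            catch (SocketException)
            {
                // busy, try the next one
            }
            finally
            {
                listener.Stop(); // release the probe socket either way
            }
        }
        throw new InvalidOperationException("No available port found");
    }
}

The real thing would bind immediately instead of probing and releasing (otherwise another process could grab the port in between), but it shows why the code side of this is cheap.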

Why is it so much better in code than in the documentation? Surely it takes a lot less time to explain how to modify an App.config value than to write the code for auto-detecting an open port and binding to it… and you need to document the behavior anyway…

The answer is that it saves on support and increases the success rate. The savings on support are pretty obvious: if it works Out Of The Box, everyone is happy. But what is this Success Rate that I am talking about?

Unless you are someone like Oracle, you have a very narrow window of time in which a user is willing to try whatever it is that you are providing. And you had better make that window of time a Success Story, rather than a “Reading the Known Issues” document.

time to read 1 min | 184 words

We have been working on the profilers (NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, LLBLGen Profiler and Hibernate Profiler) for close to three years now. We have always been running as 1.x, so we haven’t had a major release (although we release continually; we currently have close to 900 drops of the 1.x version).

The question now becomes, what is going to happen in the next version of the profiler?

  • Production Profiling, the ability to set up your application so that you can connect to your production application and see what is going on right now.
  • Error Analysis, the ability to provide you with additional insight and common solutions to recurring problems.
  • Global Query Analysis, the ability to take all of your queries, look at their query plans, and show you potential issues.

Those are the big ones; we have a few others, and a surprise in store.

What would you want to see in the next version of the profiler?
