Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 1 min | 91 words

I am doing the NHibernate Course right now, and we got to the stage where everyone is tired and needs some comic relief, at which point I decided to show them the true meaning of an SLA.

image

If you commit this without telling anyone, you get to have so much fun when the computer suddenly starts screaming at those horribly inefficient pages.

time to read 3 min | 447 words

In my previous post, I discussed the server side implementation of lazy requests / Multi GET: the ability to submit several requests to the server in a single round trip. RavenDB has always supported the ability to perform multiple write operations in a single batch, and now we have the reverse functionality: the ability to make several reads at once. (The natural progression, the ability to make several read/write operations in a single batch, will not be supported.)

As it turned out, it was actually pretty hard to do, and it required us to do some pretty deep refactoring of the way we were executing our requests, but in the end it is here and it is working. Here are a few examples of how it looks from the client API point of view:

var lazyUser = session.Advanced.Lazily.Load<User>("users/ayende");
var lazyPosts = session.Query<Post>().Take(30).Lazily();

And up until now, there is actually nothing being sent to the server. The results of those two calls are Lazy<User> and Lazy<IEnumerable<Post>>, respectively.

The reason that we are using Lazy<T> in this manner is that we want to make it very explicit when you are actually evaluating the lazy request. All the lazy requests will be sent to the server in a single round trip the first time that any of the lazy instances is evaluated, or you can force this to happen by calling:

session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();

In order to increase the speed even more, on the server side, we are actually going to process all of those queries in parallel. So you get several factors that would speed up your application:

  • Only a single remote call is made.
  • All of the queries are actually handled in parallel on the server side.

And the best part is that you usually don’t really have to think about it. If you use the Lazy API, it just works.
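To make the evaluation semantics concrete, here is a minimal sketch (session setup elided; this assumes the client API shown above) of where the single round trip actually happens:

```csharp
// Nothing has been sent to the server yet; both requests are only registered.
var lazyUser  = session.Advanced.Lazily.Load<User>("users/ayende");
var lazyPosts = session.Query<Post>().Take(30).Lazily();

// Touching either Value sends ALL pending lazy requests in one round trip.
var user  = lazyUser.Value;   // single HTTP call, both results come back
var posts = lazyPosts.Value;  // already materialized, no extra call
```

The point is that the batching is invisible at the call site; only the position of the first `.Value` access matters.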

time to read 3 min | 557 words

One of the annoyances of HTTP is that it is not really possible to make complex queries easily. To be rather more exact, you can make a complex query fairly easily, but at some point you’ll reach the URI length limit, and worse, there is no easy way to make multiple queries in a single round trip.

I have been thinking about this a lot lately, because it is a stumbling block for a feature that is near and dear to my heart, the Future Queries feature that is so useful when using NHibernate.

The problem was that I couldn’t think of a good way of doing this. Well, I could think of how to do this quite easily, to be truthful. I just couldn’t think of a good way to make this work nicely with the other features of RavenDB.

In particular, it was hard to figure out how to deal with caching. One of the really nice things about RavenDB’s RESTful nature is that caching is about as easy as it can be. But since we need to tunnel requests through another medium for this to work, I couldn’t figure out how to make it work in a nice fashion. And then I remembered that REST doesn’t actually have anything to do with HTTP itself; you can do REST on top of any transport protocol.

Let us look at how requests are handled in RavenDB over the wire:

GET http://localhost:8080/docs/bobs_address

HTTP/1.1 200 OK

{
  "FirstName": "Bob",
  "LastName": "Smith",
  "Address": "5 Elm St."
}

GET http://localhost:8080/docs/users/ayende

HTTP/1.1 404 Not Found

As you can see, we have 2 request / reply calls.

What we did in order to make RavenDB support multiple requests in a single round trip is to build on top of this exact nature using:

POST http://localhost:8080/multi_get

[
   { "Url": "http://localhost:8080/docs/bobs_address", "Headers": {} },
   { "Url": "http://localhost:8080/docs/users/ayende", "Headers": {} }
]

HTTP/1.1 200 OK

[
  { "Status": 200, "Result": { "FirstName": "Bob", "LastName": "Smith", "Address": "5 Elm St." }},
  { "Status": 404, "Result": null }
]

Using this approach, we can handle multiple requests in a single round trip.
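This framing also preserves the caching story discussed earlier: since each entry in the batch carries its own Headers dictionary, a client can attach the usual conditional headers per inner request, and the server can answer each entry independently. A hypothetical exchange (the per-entry 304 behavior is an assumption about the server, not something shown above):

```
POST http://localhost:8080/multi_get

[
  { "Url": "http://localhost:8080/docs/bobs_address",
    "Headers": { "If-None-Match": "\"etag-from-the-cached-copy\"" } }
]

HTTP/1.1 200 OK

[
  { "Status": 304, "Result": null }
]
```

The outer POST always succeeds; the interesting status codes live inside the body, one per inner request.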

You might not be surprised to learn that it was actually very easy to do, we just needed to add an endpoint and have a way of executing the request pipeline internally. All very easy.

The really hard part was with the client, but I’ll touch on that in my next post.

time to read 2 min | 286 words

After spending so much time talking about how important Warrants are, it is actually surprising to see the UI for a warrant:

image

It is pretty simple, because from a data entry perspective, there isn’t really much to it. It is the effects of the Warrants that make it such an interesting concept. One thing to note here is that the date we care about for the Warrant isn’t the date it was issued, but the date the guy was actually arrested; that is where the clock starts ticking.

Adding a Warrant is also not that complex from a data entry perspective:

image

image

image

As you can see, thus far it is pretty simple. But when you click the Finish button, the complexity starts.

We need to check that the Warrant is valid (issued by someone with the authority to do so), and then calculate the new duration for the Inmate.

And that is enough for today, I just wrote ~10 posts on Macto in the last 24 hours, it is time to do something else.

time to read 12 min | 2372 words

In RavenDB, we had this piece of code:

        internal T[] LoadInternal<T>(string[] ids, string[] includes)
        {
            if(ids.Length == 0)
                return new T[0];

            IncrementRequestCount();
            Debug.WriteLine(string.Format("Bulk loading ids [{0}] from {1}", string.Join(", ", ids), StoreIdentifier));
            MultiLoadResult multiLoadResult;
            JsonDocument[] includeResults;
            JsonDocument[] results;
#if !SILVERLIGHT
            var sp = Stopwatch.StartNew();
#else
            var startTime = DateTime.Now;
#endif
            bool firstRequest = true;
            do
            {
                IDisposable disposable = null;
                if (firstRequest == false) // if this is a repeated request, we mustn't use the cached result, but have to re-query the server
                    disposable = DatabaseCommands.DisableAllCaching();
                using (disposable)
                    multiLoadResult = DatabaseCommands.Get(ids, includes);

                firstRequest = false;
                includeResults = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Includes).ToArray();
                results = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Results).ToArray();
            } while (
                AllowNonAuthoritiveInformation == false &&
                results.Any(x => x.NonAuthoritiveInformation ?? false) &&
#if !SILVERLIGHT
                sp.Elapsed < NonAuthoritiveInformationTimeout
#else 
                (DateTime.Now - startTime) < NonAuthoritiveInformationTimeout
#endif
                );

            foreach (var include in includeResults)
            {
                TrackEntity<object>(include);
            }

            return results
                .Select(TrackEntity<T>)
                .ToArray();
        }

And we needed to take this same piece of code and execute it in:

  • Async fashion
  • As part of a batch of queries (sending multiple requests to RavenDB in a single HTTP call).

Everything else is the same, but in each case the line that actually makes the call to the server (the DatabaseCommands.Get(ids, includes) call) is completely different.

I chose to address this by doing a Method Object refactoring. I created a new class, moved all the local variables to fields, and moved each part of the method to its own method. I also explicitly gave up control over execution, deferring that to whoever is calling us. We ended up with this:

    public class MultiLoadOperation
    {
        private static readonly Logger log = LogManager.GetCurrentClassLogger();

        private readonly InMemoryDocumentSessionOperations sessionOperations;
        private readonly Func<IDisposable> disableAllCaching;
        private string[] ids;
        private string[] includes;
        bool firstRequest = true;
        IDisposable disposable = null;
        JsonDocument[] results;
        JsonDocument[] includeResults;
                
#if !SILVERLIGHT
        private Stopwatch sp;
#else
        private    DateTime startTime;
#endif

        public MultiLoadOperation(InMemoryDocumentSessionOperations sessionOperations, 
            Func<IDisposable> disableAllCaching,
            string[] ids, string[] includes)
        {
            this.sessionOperations = sessionOperations;
            this.disableAllCaching = disableAllCaching;
            this.ids = ids;
            this.includes = includes;
        
            sessionOperations.IncrementRequestCount();
            log.Debug("Bulk loading ids [{0}] from {1}", string.Join(", ", ids), sessionOperations.StoreIdentifier);

#if !SILVERLIGHT
            sp = Stopwatch.StartNew();
#else
            startTime = DateTime.Now;
#endif
        }

        public IDisposable EnterMultiLoadContext()
        {
            if (firstRequest == false) // if this is a repeated request, we mustn't use the cached result, but have to re-query the server
                disposable = disableAllCaching();
            return disposable;
        }

        public bool SetResult(MultiLoadResult multiLoadResult)
        {
            firstRequest = false;
            includeResults = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Includes).ToArray();
            results = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Results).ToArray();

            return    sessionOperations.AllowNonAuthoritiveInformation == false &&
                    results.Any(x => x.NonAuthoritiveInformation ?? false) &&
#if !SILVERLIGHT
                    sp.Elapsed < sessionOperations.NonAuthoritiveInformationTimeout
#else 
                    (DateTime.Now - startTime) < sessionOperations.NonAuthoritiveInformationTimeout
#endif
                ;
        }

        public T[] Complete<T>()
        {
            foreach (var include in includeResults)
            {
                sessionOperations.TrackEntity<object>(include);
            }

            return results
                .Select(sessionOperations.TrackEntity<T>)
                .ToArray();
        }
    }

Note that this class doesn’t contain two very important things:

  • The actual call to the database, we gave up control on that.
  • The execution order for the methods, we don’t control that either.

That was ugly, and I decided that since I have to write another implementation as well, I might as well do the right thing and have a shared implementation. The key was to extract everything away except for the call to get the actual value. So I did just that, and we got a new class, that does all of the functionality above, except control where the actual call to the server is made and how.

Now, for the sync version, we have this code:

internal T[] LoadInternal<T>(string[] ids, string[] includes)
{
    if(ids.Length == 0)
        return new T[0];

    var multiLoadOperation = new MultiLoadOperation(this, DatabaseCommands.DisableAllCaching, ids, includes);
    MultiLoadResult multiLoadResult;
    do
    {
        using(multiLoadOperation.EnterMultiLoadContext())
        {
            multiLoadResult = DatabaseCommands.Get(ids, includes);
        }
    } while (multiLoadOperation.SetResult(multiLoadResult));

    return multiLoadOperation.Complete<T>();
}

This isn’t the most trivial of methods, I’ll admit, but it is ever so much better than the alternative, especially since now the async version looks like:

/// <summary>
/// Begins the async multi load operation
/// </summary>
public Task<T[]> LoadAsyncInternal<T>(string[] ids, string[] includes)
{
    var multiLoadOperation = new MultiLoadOperation(this,AsyncDatabaseCommands.DisableAllCaching, ids, includes);
    return LoadAsyncInternal<T>(ids, includes, multiLoadOperation);
}

private Task<T[]> LoadAsyncInternal<T>(string[] ids, string[] includes, MultiLoadOperation multiLoadOperation)
{
    using (multiLoadOperation.EnterMultiLoadContext())
    {
        return AsyncDatabaseCommands.MultiGetAsync(ids, includes)
            .ContinueWith(t =>
            {
                if (multiLoadOperation.SetResult(t.Result) == false)
                    return Task.Factory.StartNew(() => multiLoadOperation.Complete<T>());
                return LoadAsyncInternal<T>(ids, includes, multiLoadOperation);
            })
            .Unwrap();
    }
}

Again, it isn’t trivial, but at least the core stuff, the actual logic that isn’t related to how we execute the code, is shared.

time to read 2 min | 347 words

Well, so far we have looked at the main screen and the Counting screen, it is time we introduce ourselves to the Inmate screen as well:

image

As you can see, there isn’t a lot going on here. We have the Inmate’s Id, name and location (including noting where he is located, if he is known to be outside the prison). We don’t usually care for detailed information in the normal course of things (the Inmate’s national id, for example). That information is important, but usually not relevant, so we relegate it to a separate screen.

The dates for incarceration and scheduled release are also important, but they aren’t available for editing, they are there only for information purposes.

The note is there to make sure that highly important information (such as whether the Inmate is suicidal / a flight risk) is clearly available. The same is true for the Cliff Notes version of the Record.

It is there not for casual use, but to ensure that pertinent information is available. Most importantly, note the last line there, which means that if this Inmate is about to be released, we have to notify someone and get their approval for that. Well, approval is a strong word; we notify them that they need to give us a new Warrant for the Inmate, but we can delay releasing him (say, until midnight the day he is to be released) while waiting for that Warrant.

Warrants are important, and you can see the ones that this guy has listed here. The last Warrant is the important one; the others are shown for completeness’ sake and to ensure that we have continuous Warrants.

There are also several actions that we can do to an Inmate. We can Transfer him to another prison, Release him or Add a Warrant. Each of those is a complex process of its own, and I’ll discuss them later on.

time to read 3 min | 408 words

When you get this sort of an email, you almost always know that this is going to be bad:

image

Let us start with: Which product? What license key? What order? What do you expect me to do about it?

At least he is polite.

image

image

Hm, I wonder what is going on in here…

This error can occur because of a trial that has expired or a subscription that has not been renewed.

image

image

He attached a Trial license to this email.

image

image

It is like a Greek tragedy, you know that at some point this is going to arrive at the scene.

image

I mean, we explicitly added the notion of subscriptions to handle just such cases, of people who want to use the profiler just for a few days and don’t want to pay the full version price. And you can cancel that at any time, incurring no additional charges.

image

Sigh…

time to read 2 min | 263 words

I might have mentioned before that Counting is somewhat important in prison. That is about as accurate as saying that you would somewhat prefer to keep breathing. Counting is the heartbeat of the prison, the thing that the entire operation revolves around.

Here we can see the counting screen:

image

You can see that we have an Open Count here, that is a count that is still in progress. Some cell blocks have reported their counts, and some are still in the process of making the count.

A Count is Closed when we know where all the Inmates are (when the two numbers line up properly). You probably noticed that there are two ways to report a count; the first is for Inmates who are outside the prison (court, hospital, etc). Because they are outside the prison, we track them by name. For internal counts, we don’t really care about names, just the numbers.

There is another type of Count, a Named Count, which is a process that happens very rarely, but is usually used to reconcile what is in the computer with what is actually in the cells.

It is important to understand the “Officer in charge” field: basically, it is the guy who has the legal responsibility and is signing off on those numbers. In other words, if there is something wrong, that guy is going to take a hard fall.

time to read 5 min | 964 words

People always love to start with CRUD, but it is almost never that simple. In this post, we will review the process required to accept an Inmate into the prison.

That process is composed of the following parts:

  1. Identification
    • Who is the guy?
    • Id number
    • Names
    • Photo
    • If we can’t identify him / he refuses to identify himself, we still need to be able to accept him.
  2. Lawful chain of incarceration
    • Go over all the documents for his arrest
    • Ensure that they are all in order
    • Ensure that they are continuous and valid
    • Check if there are any urgent things to do with him. For example, he may need to be at court today or the next day.
  3. Medical exam
    • Is it okay to hold the guy in prison?
    • If he is not healthy, can we take care of him in prison?
    • Does he require hospitalization?
    • Does he require medicine / treatment?
    • Are there any medical considerations regarding where to put him?
  4. Intelligence
    • Interviewing the guy
    • Report interesting details that are known about him
  5. Acceptability
    • Does he fit the level of Inmates we can accept? We usually don’t put murderers in minimum security prisons, for example.
    • Does he have any medical reason to reject him?
    • Are there any problems with the incarceration documents?
    • Is there any intelligence warning about the guy?
  6. Placement
    • Decide where to put the Inmate
    • What type of an Inmate is he? (Just arrested, sentenced, sentenced for a long period, etc)
    • What is he in prison for?
    • What kind is he? (You want to avoid Inmate infighting, it creates paperwork, so you avoid putting them in conflict if possible)
    • Where is there room available?

Another important aspect to remember is that while we are allowed to reject invalid input (for example, we are allowed to say that the id number has to consist of only numeric characters), we are not allowed to reject input that is wrong.

What do I mean by that? Let us say that we have an Inmate at the door, and he doesn’t have his incarceration paperwork in order (well, not him, whoever brought him in, but you get the point). That means that legally, we can’t hold him. But Macto isn’t where things are actually happening; it is merely a support system that tracks what is going on in the real world. And the prison commander can decide to accept the guy anyway (say, because the paperwork is en route), and we have to allow for that. If we try to stop people from doing this, it is going to be worked around, and we don’t want that. The system is allowed, even encouraged, to warn the users when they are doing something wrong, but it cannot block it.

The first part, Identification, is actually pretty easy, all told. This is a fairly simple data entry process. We’ll want to do some checks on the data, such as that the id number is valid, or to check the id against the name, etc. But we basically have to have some level of trust in the documents that we have. You usually don’t have an arrest warrant for “tall guy in brown shirt”. If we find any problems there, we can Flag the Dossier as having a potentially fraudulent name. This is also the stage where we want to check if the Inmate is a returning visitor, and bring the old Dossier back to life.

The second part is more complex, because there are many different types of Warrants, each with their own implications. An Arrest Warrant is valid for 24 hours, a Remand Warrant is good until sentencing, etc. We need to input all of those Warrants and ensure that they are consistent, valid and continuous. If there is a problem with that, we need to Flag the Dossier, but we can’t reject it. We will discuss this in more detail in the next post.
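As a rough illustration of the continuity check (entirely hypothetical types and names; Macto’s actual model isn’t shown in this post), “continuous” here means that each Warrant starts no later than the previous one ends:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Warrant
{
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

public static class WarrantChain
{
    // Warrants ordered by start date are continuous if there is no gap
    // between one ending and the next beginning.
    public static bool AreContinuous(IEnumerable<Warrant> warrants)
    {
        Warrant previous = null;
        foreach (var warrant in warrants.OrderBy(w => w.Start))
        {
            if (previous != null && warrant.Start > previous.End)
                return false; // a gap breaks the lawful chain of incarceration
            previous = warrant;
        }
        return true;
    }
}
```

Remember, though, that per the rule above, a failed check can only Flag the Dossier, never reject it.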

The third part is basically external to Macto, we may need to provide the Inmate ID, for correlation purposes, but nothing beyond that. We do need to get approval from the doctor that the Inmate is in an OK condition to be held in the prison. That does get recorded in Macto.

The fourth part is, again, external to us. Usually any information is classified and wouldn’t appear in Macto. We may get some intelligence brief about the guy, but usually we won’t.

The fifth part is important; this is where we actually take legal and physical ownership of the Inmate. Up until that point, we had him in our hands, but we weren’t responsible for him. Accepting the Inmate is a simple matter if everything is good, but if the Dossier was Flagged, we might need approval from the officer in charge. Accepting an Inmate means that he is added to the prison’s Roster.

The sixth part is pretty much “where do I have a spare bed”, after which he is added to the Roster of the cell block he is now in care of.

It is important to note that Placement always happens. Even if immediately after Accepting an Inmate you rushed him to the hospital, that Inmate still has to be assigned to a cell block, because that assignment means that the cell block commander is in charge of him. That way we avoid potential mishaps where an Inmate is assigned to no one and doesn’t get Counted.

Okay, I think that this is enough for now. In the next post, we will discuss what exactly goes on in the second part; it is a pretty complex piece, and it deserves its own post.

time to read 13 min | 2572 words

While working on the RavenDB server is usually a lot of fun, there is a part of the RavenDB client that I absolutely abhor. Every time that I need to touch the part of the client API that talks to the server, it is a pain. Why is that?

Let us take the simplest example, loading a document by id. On the wire, it looks like this:

GET /docs/users/ayende

It can’t be simpler than that, except that internally in RavenDB, we have three implementations of it:

  • Standard sync impl
  • Standard async impl
  • Silverlight async impl

The problem is that each of those uses a different API, and while we created a shared abstraction for async / sync, at least, it is a hard task to make sure that they all stay in sync with one another when we need to make a modification.

For example, let us look at the three implementations of Get:

public JsonDocument DirectGet(string serverUrl, string key)
{
    var metadata = new RavenJObject();
    AddTransactionInformation(metadata);
    var request = jsonRequestFactory.CreateHttpJsonRequest(this, serverUrl + "/docs/" + key, "GET", metadata, credentials, convention);
    request.AddOperationHeaders(OperationsHeaders);
    try
    {
        var requestString = request.ReadResponseString();
        RavenJObject meta = null;
        RavenJObject jsonData = null;
        try
        {
            jsonData = RavenJObject.Parse(requestString);
            meta = request.ResponseHeaders.FilterHeaders(isServerDocument: false);
        }
        catch (JsonReaderException jre)
        {
            var headers = "";
            foreach (string header in request.ResponseHeaders)
            {
                headers = headers + string.Format("\n\r{0}:{1}", header, request.ResponseHeaders[header]);
            }
            throw new JsonReaderException("Invalid Json Response: \n\rHeaders:\n\r" + headers + "\n\rBody:" + requestString, jre);
        }
        return new JsonDocument
        {
            DataAsJson = jsonData,
            NonAuthoritiveInformation = request.ResponseStatusCode == HttpStatusCode.NonAuthoritativeInformation,
            Key = key,
            Etag = new Guid(request.ResponseHeaders["ETag"]),
            LastModified = DateTime.ParseExact(request.ResponseHeaders["Last-Modified"], "r", CultureInfo.InvariantCulture).ToLocalTime(),
            Metadata = meta
        };
    }
    catch (WebException e)
    {
        var httpWebResponse = e.Response as HttpWebResponse;
        if (httpWebResponse == null)
            throw;
        if (httpWebResponse.StatusCode == HttpStatusCode.NotFound)
            return null;
        if (httpWebResponse.StatusCode == HttpStatusCode.Conflict)
        {
            var conflicts = new StreamReader(httpWebResponse.GetResponseStreamWithHttpDecompression());
            var conflictsDoc = RavenJObject.Load(new JsonTextReader(conflicts));
            var conflictIds = conflictsDoc.Value<RavenJArray>("Conflicts").Select(x => x.Value<string>()).ToArray();

            throw new ConflictException("Conflict detected on " + key +
                                        ", conflict must be resolved before the document will be accessible")
            {
                ConflictedVersionIds = conflictIds
            };
        }
        throw;
    }
}
 

This is the sync API, of course, next we will look at the same method, for the full .NET framework TPL:

public Task<JsonDocument> GetAsync(string key)
{
    EnsureIsNotNullOrEmpty(key, "key");

    var metadata = new RavenJObject();
    AddTransactionInformation(metadata);
    var request = jsonRequestFactory.CreateHttpJsonRequest(this, url + "/docs/" + key, "GET", metadata, credentials, convention);

    return Task.Factory.FromAsync<string>(request.BeginReadResponseString, request.EndReadResponseString, null)
        .ContinueWith(task =>
        {
            try
            {
                var responseString = task.Result;
                return new JsonDocument
                {
                    DataAsJson = RavenJObject.Parse(responseString),
                    NonAuthoritiveInformation = request.ResponseStatusCode == HttpStatusCode.NonAuthoritativeInformation,
                    Key = key,
                    LastModified = DateTime.ParseExact(request.ResponseHeaders["Last-Modified"], "r", CultureInfo.InvariantCulture).ToLocalTime(),
                    Etag = new Guid(request.ResponseHeaders["ETag"]),
                    Metadata = request.ResponseHeaders.FilterHeaders(isServerDocument: false)
                };
            }
            catch (WebException e)
            {
                var httpWebResponse = e.Response as HttpWebResponse;
                if (httpWebResponse == null)
                    throw;
                if (httpWebResponse.StatusCode == HttpStatusCode.NotFound)
                    return null;
                if (httpWebResponse.StatusCode == HttpStatusCode.Conflict)
                {
                    var conflicts = new StreamReader(httpWebResponse.GetResponseStreamWithHttpDecompression());
                    var conflictsDoc = RavenJObject.Load(new JsonTextReader(conflicts));
                    var conflictIds = conflictsDoc.Value<RavenJArray>("Conflicts").Select(x => x.Value<string>()).ToArray();

                    throw new ConflictException("Conflict detected on " + key +
                                                ", conflict must be resolved before the document will be accessible")
                    {
                        ConflictedVersionIds = conflictIds
                    };
                }
                throw;
            }
        });
}

And here is the Silverlight version:

public Task<JsonDocument> GetAsync(string key)
{
    EnsureIsNotNullOrEmpty(key, "key");

    key = key.Replace("\\",@"/"); //NOTE: the presence of \ causes the SL networking stack to barf, even though the Uri seemingly makes this translation itself

    var request = url.Docs(key)
        .ToJsonRequest(this, credentials, convention);

    return request
        .ReadResponseStringAsync()
        .ContinueWith(task =>
        {
            try
            {
                var responseString = task.Result;
                return new JsonDocument
                {
                    DataAsJson = RavenJObject.Parse(responseString),
                    NonAuthoritiveInformation = request.ResponseStatusCode == HttpStatusCode.NonAuthoritativeInformation,
                    Key = key,
                    LastModified = DateTime.ParseExact(request.ResponseHeaders["Last-Modified"].First(), "r", CultureInfo.InvariantCulture).ToLocalTime(),
                    Etag = new Guid(request.ResponseHeaders["ETag"].First()),
                    Metadata = request.ResponseHeaders.FilterHeaders(isServerDocument: false)
                };
            }
            catch (AggregateException e)
            {
                var webException = e.ExtractSingleInnerException() as WebException;
                if (webException != null)
                {
                    if (HandleWebExceptionForGetAsync(key, webException))
                        return null;
                }
                throw;
            }
            catch (WebException e)
            {
                if (HandleWebExceptionForGetAsync(key, e))
                    return null;
                throw;
            }
        });
}

Did I mention it is annoying?

All of those methods are doing the exact same thing, but I have to maintain 3 versions of them. I thought about dropping the sync version (it is easy to do sync on top of async), which would mean that I would have only 2 implementations, but I don’t like it. The error handling and debugging support for async is still way below what you can get for sync code.
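For what it’s worth, the “sync on top of async” option mentioned above would look roughly like this (a sketch, not RavenDB’s actual code; the method names mirror the ones shown earlier):

```csharp
// Hypothetical wrapper: the sync Get becomes a thin shim over GetAsync.
// This removes one of the three implementations, at the cost of the
// weaker async error handling / debugging experience noted above.
public JsonDocument Get(string key)
{
    try
    {
        return GetAsync(key).Result; // blocks until the async call completes
    }
    catch (AggregateException e)
    {
        // Unwrap so callers keep seeing WebException / ConflictException
        // directly, the way the sync implementation exposed them.
        throw e.ExtractSingleInnerException();
    }
}
```

The unwrapping matters: Task.Result surfaces failures as AggregateException, so without it every caller’s error handling would have to change.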

I don’t know if there is a solution for this, I just know that this is a big pain point for me.
