A different view on creation
The following code shows several increasingly complex ways to create a shared instance of an object:
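(The original snippet isn't reproduced here; the sketch below is my reconstruction of the five variants, with the field names and exact shapes assumed rather than taken from the post. Each line would live inside something like a GetRequestExecuter(string url) method.)

    // Assumed fields:
    //   ConcurrentDictionary<string, RequestExecuter>       _requestExecuters
    //   ConcurrentDictionary<string, Lazy<RequestExecuter>> _lazyExecuters

    // 1. Eager construction: a new RequestExecuter is allocated on every call,
    //    even when the dictionary already holds one for this url.
    var exec1 = _requestExecuters.GetOrAdd(url, new RequestExecuter(url));

    // 2. Factory delegate that captures the url parameter: a closure and a new
    //    delegate instance are allocated on every call.
    var exec2 = _requestExecuters.GetOrAdd(url, _ => new RequestExecuter(url));

    // 3. Non-capturing factory delegate: the compiler can cache the delegate,
    //    so no allocation happens on the lookup path.
    var exec3 = _requestExecuters.GetOrAdd(url, u => new RequestExecuter(u));

    // 4. Lazy wrapper: the stored Lazy guarantees a single RequestExecuter,
    //    but a new Lazy<RequestExecuter> is still allocated on every call.
    var exec4 = _lazyExecuters
        .GetOrAdd(url, new Lazy<RequestExecuter>(() => new RequestExecuter(url)))
        .Value;

    // 5. Lazy wrapper plus a non-capturing (cacheable) factory delegate:
    //    no allocations at all on the happy path.
    var exec5 = _lazyExecuters
        .GetOrAdd(url, u => new Lazy<RequestExecuter>(() => new RequestExecuter(u)))
        .Value;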
What are the differences? The _requestExecuters field is a concurrent dictionary, and the major difference between the variants is the kind of allocations that happen on each call.
In the first scenario, we create a new RequestExecuter each time we call this line. We'll still end up using only a single instance, of course, but we create (and discard) a new instance on every call.
In the second scenario, we are passing a delegate, so we'll only create the RequestExecuter once. Or so it seems. The problem is that under concurrent load, it is possible that two RequestExecuter instances will be created, only one of which will be used. If the RequestExecuter holds any unmanaged resources, that can cause a leak. Another issue is that the delegate uses the method parameter, which forces the compiler to capture it and allocate a new delegate instance per call.
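As an aside (my illustration, not from the post), when the stored object owns unmanaged resources, one common mitigation is to dispose whichever instance loses the GetOrAdd race, assuming RequestExecuter implements IDisposable:

    var candidate = new RequestExecuter(url);
    var actual = _requestExecuters.GetOrAdd(url, candidate);
    if (!ReferenceEquals(actual, candidate))
        candidate.Dispose(); // we lost the race; clean up the unused instance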
The third scenario is the same as the second one, but we aren’t capturing the parameter, so the compiler will not need to create a new delegate instance per call.
The fourth one uses a lazy value. This way, we avoid the race in creating the RequestExecuter, but we still create a new Lazy instance per call.
And the fifth one uses a lazy instance and a cached delegate, so there are no extra allocations there. There is still a race to create the Lazy instance, but that should happen rarely, and the Lazy isn't holding any expensive resources.
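Putting the fifth variant together, a minimal sketch (again mine, with the member names assumed) might look like this:

    private static readonly Func<string, Lazy<RequestExecuter>> CreateLazyExecuter =
        url => new Lazy<RequestExecuter>(() => new RequestExecuter(url));

    private readonly ConcurrentDictionary<string, Lazy<RequestExecuter>> _requestExecuters =
        new ConcurrentDictionary<string, Lazy<RequestExecuter>>();

    public RequestExecuter GetRequestExecuter(string url)
    {
        // Under contention two threads may each allocate a cheap Lazy, but
        // GetOrAdd publishes only one of them, and both threads read .Value
        // from the published instance, so only one RequestExecuter is built.
        return _requestExecuters.GetOrAdd(url, CreateLazyExecuter).Value;
    }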
Comments
That almost suggests there's space for a specialized class to take the place of the ConcurrentDictionary<TKey, Lazy<TValue>>, perhaps? Once you pay the cost of both thread safety mechanisms, I start getting curious how the performance of a simple lock compares.

Joseph, Note that this is a cache, so most of the time you are only paying the cost of a dictionary lookup and an already initialized Lazy. The cost is pretty trivial on the happy path, and there are no allocations.
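For illustration only (not part of the original thread), one possible shape for such a specialized wrapper, with a fast path that avoids any allocation once the value is cached:

    public sealed class LazyConcurrentDictionary<TKey, TValue>
    {
        private readonly ConcurrentDictionary<TKey, Lazy<TValue>> _inner =
            new ConcurrentDictionary<TKey, Lazy<TValue>>();

        public TValue GetOrAdd(TKey key, Func<TKey, TValue> valueFactory)
        {
            // Fast path: no allocations when the key is already present.
            if (_inner.TryGetValue(key, out var existing))
                return existing.Value;

            // Slow path: wrap the factory in a Lazy so a race on the same key
            // still constructs the expensive value only once.
            var lazy = _inner.GetOrAdd(key,
                k => new Lazy<TValue>(() => valueFactory(k)));
            return lazy.Value;
        }
    }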
The downside of some of the latter approaches is that Lazy, by default, caches exceptions. So if the first call to create the RequestExecuter throws an exception, subsequent uses of it will also throw that exception.
Do you have any tips for handling that? We have solved it with a custom Lazy implementation - but keen to hear if you have a better idea.
Matthew, Yes, that is an issue. In this case, we don't have a failing ctor scenario, so that is fine, but it is something to be aware of.
Matthew, Oren, Lazy<T> has a ctor accepting LazyThreadSafetyMode which can help.
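A small sketch of that option (my example, not from the thread): constructing the Lazy with LazyThreadSafetyMode.PublicationOnly means a failed factory call is not cached and will simply be retried on the next access, at the price that several threads may run the factory concurrently, with only one result being published.

    var lazy = new Lazy<RequestExecuter>(
        () => new RequestExecuter(url),
        LazyThreadSafetyMode.PublicationOnly);

Note that instances created by losing threads are discarded without being disposed, so this fits best when construction is cheap and side-effect free.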