Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

What is the cost of try/catch

time to read 3 min | 439 words

I recently got a question about the cost of try/catch, and whether it was prohibitive enough to make you want to avoid using it.

That caused some head scratching on my part, until I got the following reply:

But, I’m still confused about the try/catch block not generating an overhead on the server.

Are you sure about it?

I learned that the try block pre-executes the code, and that’s why it causes a processing overhead.

Take a look here: http://msdn.microsoft.com/en-us/library/ms973839.aspx#dotnetperftips_topic2

Maybe there is something that I don’t know? It is always possible, so I went and checked and found this piece:

Finding and designing away exception-heavy code can result in a decent perf win. Bear in mind that this has nothing to do with try/catch blocks: you only incur the cost when the actual exception is thrown. You can use as many try/catch blocks as you want. Using exceptions gratuitously is where you lose performance. For example, you should stay away from things like using exceptions for control flow.

Note that the emphasis is in the original. There is no cost to try/catch; the only cost is when an exception is actually thrown, and that cost is the same regardless of whether there is a try/catch around it or not.

Here is the proof:

var startNew = Stopwatch.StartNew();
var mightBePi = Enumerable.Range(0, 100000000).Aggregate(0d, (tot, next) => tot + Math.Pow(-1d, next)/(2*next + 1)*4);
Console.WriteLine("{0} ms", startNew.ElapsedMilliseconds);

Which results in: 6015 ms of execution.

Wrapping the code in a try/catch resulted in:

var startNew = Stopwatch.StartNew();
double mightBePi = Double.NaN;
try
{
    mightBePi = Enumerable.Range(0, 100000000).Aggregate(0d, (tot, next) => tot + Math.Pow(-1d, next)/(2*next + 1)*4);
}
catch (Exception e)
{
    Console.WriteLine(e);
}
Console.WriteLine("{0} ms", startNew.ElapsedMilliseconds);

And that ran in 5999 ms.

Please note that the perf difference is pretty much meaningless (only a 0.26% difference) and is well within the normal deviation between test runs.


Ken Egozi

One might say that your try-catch wrapper is unfair as it only wraps the whole thing once, instead of wrapping each step of the iteration.

Of course it does not change anything.

On my machine, the first code runs in 8046ms, while wrapping the lambda in try-catch yields an 8141ms run time.

(running Snow Leopard on my 2.3GHz MBP, using Mono 2.10.5 csharp REPL)


I guess a lot of the performance cost in throwing exceptions is in generating the stack trace?

Boris Yankov

It has always been the case that THROWING exceptions is expensive. Wrapping code with try/catch blocks isn't.

And expensive is quite a relative term. I am pretty sure one can catch tens or hundreds of thousands of exception per second.

The rule is to use try/catch for exceptions - uncommon execution paths. It will not be wise to use it instead of if/else blocks.
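Boris's "tens or hundreds of thousands per second" ballpark is easy to check. Here is a minimal sketch (the class and method names are mine, not from the post) that times nothing but throw-and-catch cycles:

```csharp
using System;
using System.Diagnostics;

static class ThrowBench
{
    // Throw and catch `count` exceptions, returning elapsed milliseconds.
    public static long TimeThrows(int count)
    {
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            try
            {
                throw new InvalidOperationException("expected");
            }
            catch (InvalidOperationException)
            {
                // Swallow: we only care about the cost of throw + catch.
            }
        }
        return watch.ElapsedMilliseconds;
    }

    static void Main()
    {
        long ms = TimeThrows(100_000);
        Console.WriteLine("100,000 throw/catch cycles took {0} ms", ms);
    }
}
```

Divide the count by the elapsed time to get a throws-per-second figure for your own machine; the absolute number depends heavily on runtime version and whether a debugger is attached.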


Interestingly, I see a pretty consistent 5-10% increase in execution time when wrapping each step of the iteration in a try-catch block (using the average of 100 runs of 1000000 iterations each).

There might be something else going on under the hood. Will have a look at the IL.

Fabian Wetzel

@Niklas: have you tried a Release build without attaching the debugger?

David Fauber

I avoid excessive try/catch mainly because of an irrational fear of indentation anyway, but it's interesting seeing that there's relatively little overhead. It's always been my understanding that generating the stack trace was the biggest perf hit, but I guess I always figured there'd be some inherent minor hit associated with the try/catch blocks as well.

        decimal aux = 0;
        Stopwatch watch = Stopwatch.StartNew();
        for (var j = 0; j < 10000000; j++)
            try { aux += (decimal)Math.Sqrt(Math.PI * j * j / 10 + Math.E * Math.E); }
            catch (Exception e) { Console.WriteLine(e); }
        Console.WriteLine("   With try/catch {0}", watch.Elapsed);
        watch = Stopwatch.StartNew();
        aux = 0;
        for (var j = 0; j < 10000000; j++)
            aux += (decimal)Math.Sqrt(Math.PI * j * j / 10 + Math.E * Math.E);
        Console.WriteLine("Without try/catch {0}", watch.Elapsed);

"Maybe there is something that I don’t know? It is always possible..." - lol


This reminds me of how you need to peek messages from msmq.

try { var mq = new MessageQueue(..) }
catch (MessageQueueException e)
{
    if (e.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
    {
        // EMPTY QUEUE ?!?!?!?!
    }
}

horrible, horrible


edit: of course with the mq.Peek(timeout) call.


By massaging away various compiler optimizations, I'm looking at the following snippets of code in ILSpy:

Without try-catch:

double num = double.NaN;
this.noop(num);
num = this.Iteration(total, next);
this.noop(num);
return num;

With try-catch:

double num = double.NaN;
this.noop(num);
try
{
    num = this.Iteration(total, next);
}
catch (Exception arg210)
{
    Exception value = arg210;
    Console.WriteLine(value);
}
this.noop(num);
return num;

The code without try-catch consistently runs ~7% faster than the other. I might be fudging up the test though, so here's the code: http://pastebin.com/AKeMZNFx


My understanding is that while there's certainly an associated JIT time cost, at actual runtime there is no additional cost because it's not until an exception is thrown that an attempt is made to determine whether a handler was in place for the faulting frame. This topic has been pretty well treated in the past by folks like Chris Brumme (http://blogs.msdn.com/b/cbrumme/archive/2003/10/01/51524.aspx).


And Matt Pietrek (http://msdn.microsoft.com/en-us/magazine/cc301714.aspx).


Rule of thumb: Exceptions should be exceptional.


@Johannes: Rule of thumb: Don't base your opinions on ambiguous and legacy terminology.


@Johannes: Rule of of thumb: Don't make me laugh when I'm eating :)

Igor Ostrovsky

Clearing up a few things:

  • A try-catch does have an overhead to set up, but of course the overhead of a single try-catch will be drowned out by a billion operations. Ayende's test performs some billion operations, but only contains one try-catch.

  • A try-catch is cheap, but not free. For example:

[MethodImpl(MethodImplOptions.NoInlining)]
static int F(int a)
{
    // try {
    return a * 3 + 1;
    // } catch(Exception e) { return 0; }
}

When I call F() in a loop on X86 JIT (release, no debugger), the test takes 230ms as is, and 630ms if I uncomment the try-catch.

Also, if you don't believe the perf numbers, you can look at the disassembly of the F() method body in the two cases and see the extra instructions that will execute due to the try-catch.

  • An additional cost of a try-catch is that it can interfere with compiler optimizations. If I remove the MethodImpl attribute from my example, the test takes 37ms without the try-catch, but still 630 ms with the try-catch. The compiler won't inline a method if it contains a try-catch.

  • So, do you care? Rarely. Try-catch is still cheap - e.g., throwing an exception is far, far more expensive. But, if you have a try-catch in the innermost loop of a program, removing the try-catch may result in non-trivial perf benefits.


Ayende, thanks for the post. After the "whether overhead or not" question, the point of our conversation was: is it "nice" to validate business rules by throwing exceptions? I still don't see it clearly, but I'd rather return an ActionResult (or something like it) than throw exceptions when restrictions are not fulfilled.

Steve Py

Business logic should not be determining itself based on exceptions; use return values for that. I will use assert-style validations with exceptions to guard against "worse" exceptions such as NullReferenceException. (I've had a play with CodeContracts and frankly I hate them:) These are guards against implementation errors, not validations. For business validations (i.e. valid parameters for a given action) I will create a validator and have it return a meaningful response before deciding whether to continue.

Johannes is sort of bang on the money in a hilarious way. Exceptions should be the exception. Generally there are only a few things an exception handler should ever be required to do (by no means a complete list, but ones I follow): 1) Log an error. 2) Roll back a transaction. 3) Transform an exception into a more meaningful form, such as cases where the user will be presented with exception details (i.e. "Customer could not be saved", the inner exception being the SQLException or what-have-you). This incurs the full cost of exception handling. 4) Ignore an error. (Very, very, very rare case.)

1 & 2 sit as high in the code as I can get away with, 3 & 4 tend to dive a bit deeper. (3 used with caution, 4 used only when absolutely necessary.)
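The return-value-over-exceptions approach Steve describes can be as simple as preferring the Try* pattern the BCL already offers. A small sketch contrasting the two styles (the wrapper method names are mine):

```csharp
using System;

static class Validation
{
    // Exception-driven parse: every invalid input pays the full throw cost.
    public static int ParseOrZeroViaException(string s)
    {
        try { return int.Parse(s); }
        catch (FormatException) { return 0; }
    }

    // Return-value-driven parse: invalid input is an ordinary code path.
    public static int ParseOrZeroViaTryParse(string s)
    {
        return int.TryParse(s, out int value) ? value : 0;
    }

    static void Main()
    {
        Console.WriteLine(ParseOrZeroViaException("oops"));
        Console.WriteLine(ParseOrZeroViaTryParse("42"));
    }
}
```

Both methods return the same results; the difference only shows up when invalid input is common, which is exactly the "alternate business flow" case where exceptions are the wrong tool.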


Hi Ayende, great post, really piqued my interest! I did some tests myself and found that a try-catch block itself does seem to add a tiny bit of overhead but the big hit comes from throwing exceptions and the depth of the stacktrace seems to increase the performance cost linearly. http://LNK.by/fkag
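The linear-with-stack-depth observation above can be reproduced with a rough sketch like this (class and method names are mine; exact numbers will vary by machine and runtime):

```csharp
using System;
using System.Diagnostics;

static class StackDepthBench
{
    // Recurse `depth` frames deep, then throw.
    static void ThrowAtDepth(int depth)
    {
        if (depth <= 0) throw new InvalidOperationException();
        ThrowAtDepth(depth - 1);
    }

    // Time `count` throws from the given stack depth, in milliseconds.
    public static long TimeThrows(int depth, int count)
    {
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            try { ThrowAtDepth(depth); }
            catch (InvalidOperationException) { }
        }
        return watch.ElapsedMilliseconds;
    }

    static void Main()
    {
        // If the cost really scales with stack depth, the second line
        // should come out noticeably larger than the first.
        Console.WriteLine("depth 1:   {0} ms", TimeThrows(1, 10_000));
        Console.WriteLine("depth 200: {0} ms", TimeThrows(200, 10_000));
    }
}
```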


Egh, makes me think of Lucene.Net's god-awful exception-driven query parser.

Harry Steinhilber

Here are the results I got:

No Try: 10386ms
Try inside the loop: 10257ms
Try outside the loop: 10330ms

So I am not seeing any of the overhead others are claiming to get. (This was done using an x86 release build without the debugger attached, but I got similar results with the debugger and in debug builds.)


maybe I should have put a smiley in there, so that everybody gets the irony :P

Sony Mathew

Good to know, never gave it much thought before. I used to recommend using Checked Exceptions for alternate business flows; yet another reason why that was not a good thing. Nowadays, I only recommend RuntimeExceptions, and only for exceptional flows (rather than alternate flows). This was for Java, by the way - C# doesn't have Checked Exceptions, I believe.


