Production postmortem: The case of the man in the middle
One of the most frustrating things when dealing with production issues is when the problem is not in your product, but elsewhere. This post in particular is dedicated to the hard work done by many anti-virus products to make our lives harder.
Let us take a look at the following quote, taken from the ESET NOD32 Anti Virus knowledge base (emphasis mine):
By default, your ESET product automatically detects programs that are used as web browsers and email clients, and adds them to the list of programs that the internal proxy scans. This can cause loss of internet connectivity or other undesired results with applications that use network features but are not web browsers/email clients.
Yes, it can. In fact, it very often does.
Previously, we looked at a similar issue, with an anti-virus product slowing down I/O enough to slowly kill us. But in this case, the issue is a lot more subtle.
Because it is doing content filtering, the anti-virus tends to put a much higher overhead on system resources, which means that as far as the user is concerned, RavenDB is slow. We actually developed features specifically to handle this scenario. The traffic watch mode will tell you how much time you spend on the server side, and we have added a feature that makes RavenDB account for the internal work each query is doing, so we can tell where the actual cost is.
You can enable that by issuing:
GET databases/Northwind/debug/enable-query-timing
And once that is set up, you can get a good idea of what is costly in the query, as far as RavenDB is concerned. Here is an example of a very slow query:
You can see that the issue is that we are issuing a very wide range query, so most of the time is spent inside Lucene. Other examples might be ridiculously complex queries, which result in high parsing time (we have seen queries in the hundreds-of-KB range). Or loading a lot of big documents, or… you get the drift. If the server thinks that a query is fast, but the overall time is slow, we know to blame the network.
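The attribution logic described above can be sketched as follows. This is a minimal illustration, not the actual RavenDB implementation: it assumes you have the server-reported query duration (from the timing feature enabled above) and the wall-clock time the client observed, and decides where to point the finger.

```python
# Sketch of the diagnostic reasoning described above: compare the time the
# server reports spending on a query against the total wall-clock time the
# client observed. The function name and threshold are illustrative
# assumptions, not part of RavenDB's API.

def attribute_latency(server_ms: float, total_ms: float, threshold: float = 0.5) -> str:
    """Blame the server if most of the elapsed time was spent server-side;
    otherwise blame the network, or whatever sits between client and server
    (such as an AV content filter)."""
    if total_ms <= 0:
        raise ValueError("total_ms must be positive")
    server_share = server_ms / total_ms
    return "server" if server_share >= threshold else "network/middleman"

# Server says the query took 40 ms, but the client waited 900 ms:
# most of the time is unaccounted for, so suspect the middleman.
print(attribute_latency(40, 900))

# Server accounts for nearly all of the elapsed time: the query itself is slow.
print(attribute_latency(850, 900))
```

The interesting point is that neither number alone is useful; it is the gap between the two that exposes a content-filtering proxy sitting in the middle.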
But an even more insidious issue is that this would drop requests, consistently and randomly (and yes, I know those are contradictory; it was consistently dropping requests in a random pattern that seemed explicitly designed to thwart figuring out what was going on). This led to things breaking, and to escalated support calls. “RavenDB is broken” leads to a lot of headaches, and a burning desire to hit something when you figure out that not only isn’t it your fault, but the underlying cause is actively trying to prevent you from figuring it out (I assume this is to deal with viruses that try to shut it off). All of which led to some really complex fact-finding sessions.
What makes it even more annoying is that the issue turned out to be a bug in the AV product in question: under some scenarios it failed to respect keep-alive sessions for authenticated requests. Absolutely not fun!
More posts in "Production postmortem" series:
- (12 Dec 2023) The Spawn of Denial of Service
- (24 Jul 2023) The dog ate my request
- (03 Jul 2023) ENOMEM when trying to free memory
- (27 Jan 2023) The server ate all my memory
- (23 Jan 2023) The big server that couldn’t handle the load
- (16 Jan 2023) The heisenbug server
- (03 Oct 2022) Do you trust this server?
- (15 Sep 2022) The missed indexing reference
- (05 Aug 2022) The allocating query
- (22 Jul 2022) Efficiency all the way to Out of Memory error
- (18 Jul 2022) Broken networks and compressed streams
- (13 Jul 2022) Your math is wrong, recursion doesn’t work this way
- (12 Jul 2022) The data corruption in the node.js stack
- (11 Jul 2022) Out of memory on a clear sky
- (29 Apr 2022) Deduplicating replication speed
- (25 Apr 2022) The network latency and the I/O spikes
- (22 Apr 2022) The encrypted database that was too big to replicate
- (20 Apr 2022) Misleading security and other production snafus
- (03 Jan 2022) An error on the first act will lead to data corruption on the second act…
- (13 Dec 2021) The memory leak that only happened on Linux
- (17 Sep 2021) The Guinness record for page faults & high CPU
- (07 Jan 2021) The file system limitation
- (23 Mar 2020) high CPU when there is little work to be done
- (21 Feb 2020) The self signed certificate that couldn’t
- (31 Jan 2020) The slow slowdown of large systems
- (07 Jun 2019) Printer out of paper and the RavenDB hang
- (18 Feb 2019) This data corruption bug requires 3 simultaneous race conditions
- (25 Dec 2018) Handled errors and the curse of recursive error handling
- (23 Nov 2018) The ARM is killing me
- (22 Feb 2018) The unavailable Linux server
- (06 Dec 2017) data corruption, a view from INSIDE the sausage
- (01 Dec 2017) The random high CPU
- (07 Aug 2017) 30% boost with a single line change
- (04 Aug 2017) The case of 99.99% percentile
- (02 Aug 2017) The lightly loaded trashing server
- (23 Aug 2016) The insidious cost of managed memory
- (05 Feb 2016) A null reference in our abstraction
- (27 Jan 2016) The Razor Suicide
- (13 Nov 2015) The case of the “it is slow on that machine (only)”
- (21 Oct 2015) The case of the slow index rebuild
- (22 Sep 2015) The case of the Unicode Poo
- (03 Sep 2015) The industry at large
- (01 Sep 2015) The case of the lying configuration file
- (31 Aug 2015) The case of the memory eater and high load
- (14 Aug 2015) The case of the man in the middle
- (05 Aug 2015) Reading the errors
- (29 Jul 2015) The evil licensing code
- (23 Jul 2015) The case of the native memory leak
- (16 Jul 2015) The case of the intransigent new database
- (13 Jul 2015) The case of the hung over server
- (09 Jul 2015) The case of the infected cluster
Comments
I'm confused by the anti-virus related posts. Why does someone run anti-virus software on server infrastructure? In the *NIX world, what gets installed on servers is generally heavily regulated via package and configuration managers.
Maybe you could add a "RavenPing" feature where you fire 1M nop requests to the server and measure timing profile and loss. Or, send one ping every second forever and leave it running for a day. This might be very handy.
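A probe like the one suggested above could be sketched as follows. Collecting the samples (one small request per second, recording the round-trip time, or a failure when the request is dropped) is assumed; the part shown here just turns the raw samples into a loss rate and a latency profile. All names are illustrative, not an actual RavenDB feature.

```python
import statistics

def summarize(samples):
    """Summarize ping samples into a timing/loss profile.

    samples: list of round-trip times in milliseconds, with None for a
    dropped request.
    """
    ok = sorted(s for s in samples if s is not None)
    loss_pct = 100.0 * (len(samples) - len(ok)) / len(samples)
    return {
        "loss_pct": loss_pct,
        "median_ms": statistics.median(ok) if ok else None,
        # Index of the ~99th percentile sample, clamped to the last element.
        "p99_ms": ok[min(len(ok) - 1, int(0.99 * len(ok)))] if ok else None,
    }

# A run where a middleman drops one request in five and adds occasional jitter:
print(summarize([12, 14, None, 13, 250, 12, 11, None, 13, 12]))
```

A profile with a healthy median but a high loss rate and a heavy tail is exactly the "consistently random" drop pattern described in the post.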
Maybe you can even send suspicious content to make the AV bite. Include some dirty words or fake credit card numbers.
Hi Oren, thanks for the deep dive, looking forward to (some day) upgrading so that I can leverage these timing features.
Remember that not all anti-virus products are created equal, and that all require tuning. You run anti-virus on server infrastructure so that when your public tier, or the workstation of some user with elevated privileges is compromised, you've got some protection on your critical infrastructure. Malicious software doesn't ask politely, it exploits un-patched (possibly previously unknown) vulnerabilities to install itself, and even if it takes days for the definitions to get caught up, it's great to have AV come along and remove it. In the Windows world, what gets installed on servers is generally heavily regulated by fine grained access control, group policy objects, and automated software deployment tools controlled by policies and templates. It's not an accident that my servers run AV, and in fact, I find it rather alarming that someone wouldn't. Microsoft best practices tell you to run it on Domain Controllers, Exchange servers and SQL servers; with the caveat that you tune it so that it doesn't interfere with the applications the server is hosting.
How did you find it this time? With ETW or by shutting down the AV solution?
Orbitz, Some of those were actually encountered on developers' machines, where they are testing things out. But I usually think about it like this: http://imgs.xkcd.com/comics/voting_machines.png
This is a blog post (admittedly from an AV company) that discusses this topic: https://blogs.sophos.com/2013/12/09/do-you-need-antivirus-on-linux-servers/
And from Microsoft: https://support.microsoft.com/en-us/kb/309422
A lot of that depends on the position of the server, whether it is externally accessible, etc.
Honestly, a lot of the time that is done to CYA.
See a great example of the details why here: http://serverfault.com/questions/643099/run-antivirus-software-on-linux-dns-servers-does-it-make-sense?answertab=votes#tab-top
Mark, RavenPing would work just fine; it would be a small request and not trigger additional work. It is when you send big requests/responses that you see those issues.
Robert, I don't have a problem with running AV on servers. I have big problems with running AV on servers, not excluding the database, then opening critical support calls for performance / stability issues. That happens all too often, I'm afraid.
Just to give you some idea, here is the MS recommended practice for running AV on SQL server machine: https://support.microsoft.com/en-us/kb/309422
That is ten pages of text to explain it. Conscientious admins would follow that, but a lot of the time, this is ignored. And then there are problems.
Alois, The first thing we do when we find an AV solution is to ask to shut it down, then see if the problem goes away :-)
@oren: the GET request to enable the option was a typo, right? I mean, it is a POST request, correct?
njy, No, this is a GET. We want to make it easy to enable that via the browser directly.
I highly suggest to avoid changing a system's state with a GET request. I mean, standard compliance, security (CSRF just to name one) and some other reasons http://stackoverflow.com/questions/705782/why-shouldnt-data-be-modified-on-an-http-get-request
njy, That isn't changing the system state. It modifies no behavior. It simply causes us to track additional data for a period of time. Note that pretty much everything in your link refers to standard web apps, which RavenDB isn't.
@Oren: A couple of points:
- you said "It causes us to track additional data for a period of time", and that means flipping a switch, which is a state change, even if a "lite" one;
- you said "We want to make it easy to enable that via the browser directly", and that means someone using a browser, which is basically web-app behaviour. I mean, if a user can open a browser and do a GET on a URL, that can be done via a hidden iframe or an img's src or something else;
- one of the points was standards compliance, which seems pretty important;
- I personally have never seen a command for a webapp/webservice/webwhatever launched via a GET request.
Cheers.
Next idea: Make Raven be able to create a "system summary" consisting of number of CPUs, RAM etc. and presence of AV. Ask customers to send in that string with every support call. Maybe you can even make the raven server interact with the major AV vendors and check their configuration at runtime to make sure raven is excluded. Otherwise, make the dashboard issue a warning.
Mark, We actually do that, we have a Gather Debug Info which collects all major data points we need. AV is a problem, because there is no standard way to detect it (especially on server OS).
Njy, If you have a user that can use a hidden frame to go to RavenDB, you have other issues.
We have a whole set of commands that are used for information gathering which are often invoked from production machines, where the ability to invoke REST commands (except via browser GET) is limited. Yes, this is stupid. Yes, there is command line, but we found that it creates a big barrier to solving problems, rather than just sending URLs over the browser.
And again, RavenDB is not exposed to users in any way. This isn't a web app.
I am with njy on this one (about GET request). I was equally surprised. If the request has side effects like "cause us to track additional data", it means it IS changing some state. GET requests should not do that. Think of a search engine accidentally finding a link to your GET request and visiting it. Normal search engines will never do POSTs, but GETs are fair game.
Ivan, A search engine shouldn't have access to ravendb. It is a db, not a web app that is exposed to the world