I just merged a Pull Request from one of our guys. This is a pretty important piece of code, so it went through two rounds of code reviews before it actually hit the PR stage.
That was the point where the full test suite runs (it takes over an hour, so we run a limited subset frequently and leave the rest for later), and we got three failing tests. The funny thing was that none of them were functional tests; they were our code quality gateway tests.
The RavenDB team has grown quite a lot, and we are hiring again, and it is easy to lose knowledge of the “unimportant” things. Stuff that doesn’t bite you in the ass right now, but will most assuredly bite you in the ass (hard) at a later point in time.
In this case, an exception class was introduced, but it didn’t have the proper serialization support. Now, we don’t actually use AppDomains all that often (at all, really), but our users sometimes do, and getting a “your exception could not be serialized” error makes for an annoying debugging session. Another couple of failing tests related to missing asserts on new configuration values. We want to make sure that a new configuration value is actually connected, parsed, and working, and we actually have a test that would fail if you aren’t checking all the configuration properties.
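The missing piece, for reference, is the standard .NET serializable exception pattern. A minimal sketch of it (the exception name here is made up for illustration) looks something like this:

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class WidgetOverflowException : Exception
{
    public WidgetOverflowException()
    {
    }

    public WidgetOverflowException(string message)
        : base(message)
    {
    }

    public WidgetOverflowException(string message, Exception inner)
        : base(message, inner)
    {
    }

    // Without this constructor, the exception cannot cross an AppDomain
    // boundary, and the user gets "your exception could not be serialized"
    // instead of the actual error.
    protected WidgetOverflowException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
    }
}
```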
There are quite a few such tests, from making sure that we don’t have any async void methods to ensuring that the tests don’t leak connections or resources (which could harm other tests and complicate root cause analysis).
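The async void check, for example, can be done with a simple reflection scan. This is just a sketch of the idea (the real test in the suite may look different): the compiler marks async methods with AsyncStateMachineAttribute, which makes them easy to find.

```csharp
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;
using Xunit;

public class CodeQualityGateway
{
    [Fact]
    public void NoAsyncVoidMethods()
    {
        // In real use this would point at the product assembly,
        // not the test assembly.
        var assembly = typeof(CodeQualityGateway).Assembly;

        // Find every compiler-generated async method that returns void.
        var offenders =
            (from type in assembly.GetTypes()
             from method in type.GetMethods(
                 BindingFlags.Public | BindingFlags.NonPublic |
                 BindingFlags.Instance | BindingFlags.Static |
                 BindingFlags.DeclaredOnly)
             where method.IsDefined(typeof(AsyncStateMachineAttribute)) &&
                   method.ReturnType == typeof(void)
             select type.FullName + "." + method.Name)
            .ToList();

        Assert.Empty(offenders);
    }
}
```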
We have also started to make use of code analyzers, for everything from keeping complex log statements inside conditionals (to save allocations) to validating that all async calls use ConfigureAwait(false).
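To illustrate both rules (the Log class below is just a stand-in for a real logging abstraction, there to keep the example self-contained):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Stand-in logger, purely for illustration.
public static class Log
{
    public static bool IsDebugEnabled { get; set; }

    public static void Debug(string message) => Console.WriteLine(message);
}

public class DocumentLoader
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<string> LoadAsync(string url)
    {
        // Rule 1: guard expensive log statements with a level check, so the
        // interpolated string is never allocated when debug logging is off.
        if (Log.IsDebugEnabled)
            Log.Debug($"Loading document from {url} at {DateTime.UtcNow:O}");

        // Rule 2: library code should not capture the synchronization
        // context, so every await uses ConfigureAwait(false).
        return await Client.GetStringAsync(url).ConfigureAwait(false);
    }
}
```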
Those aren’t big things, but getting them right, all the time, gives us a really cool product and a maintainable codebase.