Pitfalls
Mike Scott has pointed out a bug in this code, relating to the use of DateTime.Now vs. DateTime.UtcNow, which can cause inaccurate wait durations during daylight change time.
That made me think for a bit. There are things that just naturally make me highly suspicious, because they are a common source of bugs.
Using a WCF service in a using block, creating a session factory, a foreach loop with database calls, a select without a limit clause.
What are the things that cause a flashing neon sign to go through your head when you see them in code? The stuff that you know will have to be re-read twice to make sure it is bug free.
Comments
A very deadly one that I find regularly:
LINQ's .Single() is very prone to failure if it doesn't find anything. I tend to suggest people use .SingleOrDefault() and explicitly handle the null case.
Single() is /supposed/ to fail if it doesn't find anything. The idea is the underlying schema (whether explicit in your DB or implicit in whatever else is behind your LINQ) should guarantee that, assuming your data is not corrupt, Single() will succeed. Therefore when Single() fails, it's disastrous and can't be easily handled. Use SingleOrDefault() where there is no such guarantee. Both methods have legitimate uses.
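A minimal sketch of the difference the two comments above are discussing (the array and predicate here are made up for illustration):

```csharp
using System;
using System.Linq;

class SingleDemo
{
    static void Main()
    {
        var ids = new[] { 1, 2, 3 };

        // Single() throws when no element (or more than one) matches --
        // appropriate when the schema guarantees exactly one match.
        try
        {
            var missing = ids.Single(x => x == 42);
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("Single() threw: no matching element");
        }

        // SingleOrDefault() returns default(T) instead, so the caller
        // must handle the "not found" case explicitly.
        var maybe = ids.SingleOrDefault(x => x == 42);
        Console.WriteLine(maybe == 0 ? "not found" : maybe.ToString());
    }
}
```

Both behaviors are by design; the bug is using `Single()` where "zero matches" is a legitimate state rather than data corruption.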
It depends on the project, but there are general things that "cause a flashing neon", like:
String comparison using == operator when case is irrelevant, instead of String.Compare(str1, str2, true)
Complex dynamic SQL statements for searching "Google-like" results, like "WHERE col1 LIKE '%x%' OR col2 LIKE '%y%' " etc
Incorrect usage of data structures, for example using a List<> where O(1) lookup is required (a Dictionary<,> would be the right fit), etc.
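A quick sketch of the first item in the list above, case-insensitive string comparison (the `StringComparison` overload is shown as well, since it is the more explicit modern option):

```csharp
using System;

class CompareDemo
{
    static void Main()
    {
        string a = "HELLO", b = "hello";

        // Bug-prone when case is irrelevant: == is always case-sensitive.
        Console.WriteLine(a == b);                                          // False

        // Case-insensitive comparison as the comment suggests:
        Console.WriteLine(string.Compare(a, b, true) == 0);                 // True

        // Often preferable: state the comparison rules explicitly.
        Console.WriteLine(a.Equals(b, StringComparison.OrdinalIgnoreCase)); // True
    }
}
```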
@Chris
If you don't want it to fail you probably want to be using .First() or .FirstOrDefault()
I don't ensure code is bug free by simply reading it. Code without tests is code which sets off alarm bells.
First and FirstOrDefault have different intentions from Single and SingleOrDefault.
Lots of stuff...
Any code that creates a thread, starts a timer, uses a lock or accesses shared state.
Classes that implement IDisposable or store disposable resources in fields.
Classes that can exist in a partially initialized state (eg. have a separate Initialize method).
Finalizers.
Recursive deletion of directories. (Did the code handle symlinks? Were the paths correctly constructed?)
Retry loops and compensatory actions.
Anything "clever."
Not using frameworks like xVal and duplicating validation logic in both the model and the views.
Well, DateTime.Now has its own problems: you can't rely on it for measuring micro-benchmarks, as it's inaccurate, and you need to use Stopwatch instead.
Other than that, I think the biggest problems actually have to do with how most architecture is centred around proposed Microsoft technology, which has proved to be inferior time after time, e.g.:
DataSets/DataTables (as opposed to ORM)
ASP.NET webforms (as opposed to MVC)
Encourages building coupled (spaghetti-style) / stateful / non-testable websites.
SOAP 1.x / WS-* (as opposed to REST + POX / JSON)
Operation / ServiceContracts
ServiceContracts encourages the use of chatty RPC-style service calls instead of SOA document-centric-style ones. This is a problem because whenever you cross a process-boundary you want to reduce the chatter as much as possible (Note: My opensource web-services stack effectively encourages building SOA-style services: http://code.google.com/p/servicestack).
No Distributed Hash Table (starting now with Velocity)
Basically, SQL Server is positioned as the hammer for all your persistence needs, which effectively limits your performance and scalability unless you've implemented your own sharding solution from the start.
Strong emphasis on web services and poor MQ technology (and no pub/sub).
Web services are not always the best technology for communication between services: they have no reliability, high coupling between endpoints, poor support for long-running transactions, etc. Certain tasks, like offline and processor-intensive work, loosely coupled and extensible services, long-running tasks, and high payloads (well, not with MSMQ anyway), are just better handled with MQ technology, and MSMQ is very lacking in this area.
Unfortunately, the ease of use and productivity of Microsoft products can lead projects down the wrong architectural path if they're relying solely on Microsoft technology (and documentation) for their application's architecture.
Downcasting
"str == String.Empty" instead of "String.IsNullOrEmpty(str)"
Calling "str.Substring()" method without checking the string length.
Implementing Dispose() method without the Dispose(bool) virtual method in non-sealed classes.
Creating finalizers in classes without unmanaged code/expensive resources.
Using "lock(typeof(MyClass))" instead of having a separate syncRoot object can cause deadlocks.
Returning IEnumerable<T> instead of IQueryable<T> when working with ORMs.
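The Dispose(bool) item in the list above refers to the standard dispose pattern for non-sealed classes; here is a minimal sketch (the class name is hypothetical):

```csharp
using System;

// A non-sealed class following the standard dispose pattern:
// Dispose() delegates to a protected virtual Dispose(bool) so that
// subclasses can extend cleanup without breaking the base class.
public class ResourceHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no finalizer work needed after explicit dispose
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here (if any)
        disposed = true;
    }
}
```

A subclass overrides `Dispose(bool)`, does its own cleanup, and then calls `base.Dispose(disposing)`.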
I will be writing a series of blog posts about this soon. I'm moving my blog, so I'll come back and post when I do. Here's a hint of things to come:
I second Neil's downcasting comment. Here's what I'll add:
1) Enums - if I see enums, I immediately start looking for the switch statement and/or if-elseif-else statements.
2) Switches/if-elses based on types - self-evident
3) Inheritance structures longer than, say, three deep
4) Extremely dumb objects, e.g. data-holders - DTOs are fine, but when you have "business objects" that have no functions because anything that is done to them is stored in some static class, you're going to have trouble with that. This points straight to anemia of the domain model.
5) Close to Jeff Brown's number 3, when a separate class has to validate that a certain class is currently in a valid state - For example, I recently saw code where a person had a class that had a generic List in it. This class was like number 4 above, and as such, when another class had to use it, the other class had to first check to see if that list was null, and if it was, instantiate it! That just about made me vomit on my keyboard.
6) Parameter lists longer than two - 99% of the functions you write probably don't need more than two parameters passed in. If more than that are necessary, I think that points to a lack of OO thinking in the design.
I don't know if this is a pitfall, but it always makes me pause and pay closer attention to what I am looking at.
if (Call.ToSomethingByName("Name") == "foo") {
    if (Call.ToSomethingByName("Name") != "bar") {
        // ...
    } else {
        var someotherthing = Call.ToSomethingByName("Name");
    }
}
Just seeing this kind of construct tells me the person who wrote it was not thinking about anything other than "make it work". My current codebase is littered with gems like this. I try to refactor them into a single call to the by-name method whenever I find them.
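The refactoring described above, as a small sketch; `Call.ToSomethingByName` is a hypothetical stand-in for the real lookup:

```csharp
using System;

static class Call
{
    // Hypothetical stand-in for the (possibly expensive) by-name lookup.
    public static string ToSomethingByName(string key) => "foo";
}

class RefactorDemo
{
    static void Main()
    {
        // Hoist the repeated lookup into a single local, then branch on it:
        // one call instead of three, and the branches can no longer see
        // different values if the underlying data changes mid-way.
        var name = Call.ToSomethingByName("Name");
        if (name == "foo" && name != "bar")
        {
            Console.WriteLine("single lookup, same logic");
        }
    }
}
```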
@NotMyself,
I think that's the definition of spaghetti code which is certainly a huge pitfall.
-doing math with dates
-representing too much data with strings (e.g. comma-separated lists of things, including dates or numbers)
-any interaction with legacy databases
Parsing DateTime/decimal without using CultureInfo.
Checking for empty strings using == "" instead of String.IsNullOrEmpty
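Both points above in one small sketch; the German culture is just an example of a comma-decimal culture:

```csharp
using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        // "1,5" means one-and-a-half in German; without an explicit culture,
        // Parse uses the current thread's culture and may fail or silently
        // produce a different number on another machine.
        var german = CultureInfo.GetCultureInfo("de-DE");
        Console.WriteLine(decimal.Parse("1,5", german) == 1.5m);                    // True
        Console.WriteLine(decimal.Parse("1.5", CultureInfo.InvariantCulture) == 1.5m); // True

        // Empty-string check: the framework helper also covers null,
        // which (str == "") does not.
        Console.WriteLine(string.IsNullOrEmpty(null)); // True
    }
}
```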
@Dmitry - I'm glad you mentioned checking the length of a string before calling Substring(). Run time errors due to poor substring calls were such a common occurrence in a project I was working on that I finally broke down and created an extension method on the String class that will always succeed and doesn't care about starting offset or length. Now, whenever I see Substring() used at all in this codebase without a comment about why an exception due to offset/length is legit, it's a code smell.
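An extension like the one described above might look like this; the name `SafeSubstring` and the clamping behavior are assumptions, not the commenter's actual code:

```csharp
using System;

static class StringExtensions
{
    // Hypothetical "always succeeds" substring: clamps the offset and
    // length to the bounds of the string instead of throwing.
    public static string SafeSubstring(this string s, int start, int length)
    {
        if (string.IsNullOrEmpty(s) || start >= s.Length || length <= 0)
            return string.Empty;
        if (start < 0) start = 0;
        return s.Substring(start, Math.Min(length, s.Length - start));
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine("hello".SafeSubstring(3, 10)); // "lo"
        Console.WriteLine("hello".SafeSubstring(9, 2));  // ""
    }
}
```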
The use of an object that implements IDisposable, but not properly calling Dispose or make use of "using". It is (or should be) an obvious bug, but happens a lot for example with ResponseStreams and RequestStreams of WebRequests, because you often can come away without Dispose()-ing. And, according to Eccl 8:11, "When the sentence for a crime is not quickly carried out, the hearts of the people are filled with schemes to do wrong."
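The fix for the pitfall above is the using block, which guarantees Dispose() runs even when an exception escapes; a MemoryStream stands in here for a WebRequest response stream:

```csharp
using System;
using System.IO;

class UsingDemo
{
    static void Main()
    {
        // Wrap anything IDisposable in a using block so Dispose() runs
        // deterministically, even if an exception is thrown mid-way.
        var stream = new MemoryStream();
        using (var writer = new StreamWriter(stream))
        {
            writer.Write("hello");
        } // writer.Dispose() flushes and closes the underlying stream

        // MemoryStream.ToArray still works after disposal:
        Console.WriteLine(stream.ToArray().Length); // 5
    }
}
```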
I don't think dumb objects are pitfalls; if anything they are just overly safe. They don't try to do anything, knowing that any functionality they provided would be contextual, and saying that static classes exist to assist them is just trying to make them sound terrible; dumb data objects certainly never imply static helpers.
I think one of the biggest challenges for a developer is choosing when to embed functionality vs. keeping it stood off.
When anyone I work with explains a solution to a problem and it takes more than a few sentences. If that happens, I'll then go look at the code expecting the worst.
Anytime I see a #region within a method, that's always scary, because it means the method is probably huge and over-engineered.
Overuse of exceptions
Lots of duplicate cut&paste style code with minor differences
Assigning a constructor argument to a static variable
Having a single static database connection in ASP.NET
Rolling your own locks, such as using an entry in ASP.NET's cache to indicate you are doing something, e.g. Cache["rebalancingTables"].
lock (new object())
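`lock (new object())` locks a freshly allocated object each time, so no two threads ever contend on it and it synchronizes nothing. A minimal sketch of the usual fix, a private shared lock object:

```csharp
using System;
using System.Threading;

class Counter
{
    // One shared lock object per instance. Never lock on a new object,
    // on 'this', or on a Type: the first synchronizes nothing, the
    // others risk deadlocks with unrelated code locking the same thing.
    private readonly object syncRoot = new object();
    private int count;

    public void Increment()
    {
        lock (syncRoot) { count++; }
    }

    public int Count { get { lock (syncRoot) return count; } }
}

class Demo
{
    static void Main()
    {
        var c = new Counter();
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() => { for (int j = 0; j < 1000; j++) c.Increment(); });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine(c.Count); // 4000
    }
}
```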
Serious code style issues:
commented code
overuse of abbreviations
spelling errors (even in comments)
All are signs of rushed-out code, lack of attention to detail, and poor maintainability.
1) Too many if statements that try to handle business concerns. Usually it means that the class knows too much, tries to handle too much, and violates SRP.
2) Using WCF - It is never as simple as just creating a client proxy and calling it like a local object. It abstracts away too many default values and by design ignores the fallacies of distributed computing, and in order to deal with this you have to resort to downcasting to ICommunicationObject to handle the faulted state, etc.
One more thing - classes whose names are prefixed with the word "Manager" - usually a sign of a god class.
I meant suffixed...
@Rik Hemsley
Agree entirely about tests. Unfortunately our environment is "business driven design" which basically means anything which doesn't appear on the screen isn't being productive (that includes test cases). Mitigating risk happens after the thing collapses. I shit you not.
Regarding .Single() - if it weren't for the above I'd agree, but it doesn't promote awareness that "this might blow up".
(Yes i am losing my hair rapidly)
Any usage of DateTime for measuring a real-time duration is suspect; even with the modified code using DateTime.UtcNow, breakage is still possible if the system clock is changed.
Stopwatch is the way to go when measuring any duration: besides using a monotonic clock that is insensitive to any clock/timezone change, it allows more concise code:
private class OpenState : CircuitBreakerState
{
}
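The snippet above appears truncated; a minimal standalone sketch of the Stopwatch approach being recommended:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class StopwatchDemo
{
    static void Main()
    {
        // Stopwatch is monotonic: changes to the system clock or timezone
        // cannot make the measured duration jump or go backwards.
        var sw = Stopwatch.StartNew();
        Thread.Sleep(50); // the work being timed
        sw.Stop();

        Console.WriteLine(sw.ElapsedMilliseconds >= 40); // True
    }
}
```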
The overuse of string concatenation vs. StringBuilder.Append, especially in a loop.
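The reason concatenation in a loop hurts is that strings are immutable, so each `+=` allocates and copies a new string (quadratic in total). A small sketch of the StringBuilder alternative:

```csharp
using System;
using System.Text;

class ConcatDemo
{
    static void Main()
    {
        // StringBuilder appends into a growable internal buffer,
        // avoiding one new string allocation per iteration.
        var sb = new StringBuilder();
        for (int i = 0; i < 5; i++)
            sb.Append(i).Append(',');

        Console.WriteLine(sb.ToString()); // "0,1,2,3,4,"
    }
}
```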
Methods that are more than 20 lines long.
Classes more than 100 lines long.
Untested code.
Non-infrastructure code that reaches into an IOC container. There are legit uses, but it always makes me stop and think for several minutes about whether I'm doing something wrong.
Comments that describe what the code does. It makes me worry about the readability of the code itself, because apparently whoever wrote it didn't think it was self-explanatory at the time.
I get a little antsy when I see inheritance from List<T> (or other collection classes/interfaces). I much prefer that the inheriting class implement its own set of methods that are specific to its usage. Too often I see inheritors that want to control how they are used, but then go and inherit from List<T> and don't realize the big backdoor.
Of course, when they truly want to write a list class with additional semantics, that's fine, but often that's not the case.
//open the database
database.Open();
//create a command
var command = database.CreateCommand("GetCustomers");
//execute the command
command.Execute();
agggghhhhhhhhhhhhh!
The recent fad for using the so-called "safe" type cast with the "as" keyword instead of using a regular type cast, e.g...
var foo = bar as Foo;
...
foo.SomeMethod();
instead of:
var foo = (Foo) bar;
...
foo.SomeMethod();
I'm seeing it more and more in sample code both from Microsoft and from bloggers. I think the people doing this believe that it's "safer" because "as" won't throw an exception.
The problem, of course, is that it's not safe at all when used without a null check, and so it replaces a fail fast with a fail slow, where your code will blow up some time later with a useless and confusing null reference exception instead of a helpful invalid cast exception at the point of failure.
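The fail-fast vs. fail-slow difference described above, in a small sketch (the types are arbitrary examples):

```csharp
using System;

class CastDemo
{
    static void Main()
    {
        object bar = "not a Uri";

        // "as" silently yields null on a failed cast; the failure is
        // deferred to whenever foo is first dereferenced, far from the bug.
        var foo = bar as Uri;
        Console.WriteLine(foo == null); // True

        // A direct cast fails fast, at the point of the mistake,
        // with an exception that names the offending types.
        try
        {
            var foo2 = (Uri)bar;
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("InvalidCastException at the cast site");
        }
    }
}
```

`as` remains the right tool when a null result is expected and checked immediately; the pitfall is using it as a drop-in replacement for a cast.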