When jumping into an ongoing software project, you usually have no idea what you’re getting into. Will the code be a daydream… or a night terror? I use Microsoft’s Code Metrics Power Tool to quickly analyze .NET projects I take over so I have an idea of what’s coming. It’s easy to do, and here’s how to try it yourself!
What is the Visual Studio Code Metrics Power Tool?
Microsoft has a little-known command-line utility called the Code Metrics Power Tool. It’s built into Visual Studio 2012, and you can download extensions to integrate it into previous versions of Visual Studio. It’s basically a one-click analysis in Visual Studio. It gives you total metrics for the solution, and you can drill down by project, namespace, class, and even function to help identify problem areas. It calculates several different metrics:
- Maintainability Index – The relative ease of overall code maintenance
- Cyclomatic Complexity – The structural complexity of the code based on control flow
- Depth of Inheritance – The depth of the class hierarchy used
- Class Coupling – The number of classes referenced and the level of class interdependency
- Lines Of Code – The approximate number of lines in the code, based on the MSIL (Microsoft intermediate language) translation
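If you’d rather skip the IDE integration, the tool itself is a console executable (Metrics.exe). A minimal invocation looks like the sketch below; the assembly path and output filename are placeholders, not names from a real project:

```
Metrics.exe /f:MyApp\bin\Release\MyApp.dll /o:MetricsResults.xml
```

The /f: switch points at the compiled assembly to analyze, and /o: names the XML results file, which you can then sort by Maintainability Index to find trouble spots.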
How do I interpret the metrics?
When evaluating code, I look at one metric only: Maintainability Index. MI is a normalized version of a well-known formula created at Carnegie Mellon. In part, it uses Halstead complexity measures, which are software metrics used to estimate the overall difficulty, effort, and time required to maintain code. MI is a composite formula of Halstead volume, cyclomatic complexity, and lines of code, and Microsoft’s Code Metrics Power Tool normalizes it to a value from 0 to 100, where 100 is the most maintainable code.
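To make the composite concrete, here’s a sketch of the normalized formula as Microsoft has documented it. The function name is mine, and the inputs are hand-fed approximations — in practice the Power Tool derives Halstead volume, cyclomatic complexity, and line counts from the compiled MSIL, so don’t expect hand calculations to match the tool exactly:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, lines_of_code):
    """Sketch of the normalized Maintainability Index (0-100 scale).

    Raw MI = 171 - 5.2*ln(Halstead Volume)
                 - 0.23*(Cyclomatic Complexity)
                 - 16.2*ln(Lines of Code),
    which Microsoft clamps and rescales to 0-100.
    """
    raw = (171
           - 5.2 * math.log(halstead_volume)
           - 0.23 * cyclomatic_complexity
           - 16.2 * math.log(lines_of_code))
    return max(0, raw * 100 / 171)

# A small, simple function lands comfortably in the upper half of the scale:
print(round(maintainability_index(100, 2, 10)))  # → 64
```

Note how heavily the logarithms dampen the inputs: doubling the length of a function costs far fewer points than doubling its branching.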
The Code Metrics Power Tool uses a stoplight rating system, with predefined ranges for green, yellow, and red. I’m not sure why they picked the ranges they did… Way too much code ends up green. If I see yellow or red, I consider running away from the project if a rewrite isn’t allowed.
Since I don’t like the provided scale, I use my own:
| MI | Grade | What it means |
|---|---|---|
| 90–100 | A | This code was written by a true pro. The programmer took care when architecting it, and prides himself or herself on doing things the “right” way. It has been well maintained and follows best practices. |
| 80–89 | B | This code is good. It’s well designed, but not too strict. It may have some specific modules that are badly coded, but overall it has a very good design and architecture. |
| 70–79 | C | This code is average. Parts are repetitive and need refactoring, but it’s likely to have a workable foundation. It’s simple enough that you could maintain and fix it fairly easily. |
| 60–69 | D | This code is below average. The programmer probably didn’t have a solid foundation in programming theory. It’s full of unstructured, verbose, and tangled spaghetti code. |
| 20–59 | F | This code is just plain bad. It’s probably D code that has been horribly maintained over a long time with band-aid fixes. It probably has old functions that are no longer called, mixed in with live ones. |
| 0–19 | W/F | (W/F = Withdrawal with Failure) This code is beyond repair. A fix to one issue probably causes 10 new problems. The code is in desperate need of a complete rewrite. Little to nothing is salvageable. |
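The scale above boils down to a simple threshold lookup. Here’s a sketch (the function name is mine; the boundaries are straight from the table):

```python
def grade(mi):
    """Map a Maintainability Index (0-100) to my letter-grade scale."""
    thresholds = [
        (90, "A"),    # true-pro code
        (80, "B"),    # good, workable design
        (70, "C"),    # average, needs some refactoring
        (60, "D"),    # spaghetti
        (20, "F"),    # plain bad
    ]
    for floor, letter in thresholds:
        if mi >= floor:
            return letter
    return "W/F"      # beyond repair

print(grade(85))  # → B
```

Feed it the solution-level MI the Power Tool reports and you get a one-glance verdict before you ever open the source.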
My preference is to actually work in B code. I find that A code locks me into someone else’s design paradigm. If it turns out I’m not a fan of the architecture pattern, I’m stuck with it. With B code I can work it up to B+ effortlessly, or continue to my own preferred flavor of A. With some planning and decent effort, code can generally be moved up two letter grades (other than W/F). Any more than two letter grades, and your time is better spent doing a rewrite.