I have always taken for granted that code with a higher cyclomatic complexity number (CCN) is more bug-ridden than code with a lower CCN. This has been taught and assumed for many years, and apparently the old adage about assumptions (ass-u-me) holds true once again.
I was doing some quick research to find a free tool I could use to measure the CCN of my project as part of its continuous integration environment. This is a best practice of many agile practitioners, and good software engineering to boot. According to this article, however, CCN isn't as useful as it was assumed to be.
What the survey did show, however, is that code complexity does not correlate directly to defect probability. Enerjy measured complexity via the cyclomatic complexity number (CCN), also known as the McCabe metric, which counts the number of paths through a given chunk of code. Even though CCN has limitations (for example, every case statement is treated as equal to a new if-statement), it's relied on as a solid gauge. What Enerjy found was that routines with CCNs of 1 through 25 did not follow the expected result that greater CCN correlates to greater probability of defects. Rather, it found that for CCNs of 1 through 11, the higher the CCN the lower the bug probability. It was not until CCN reached 25 that defect probability rose sufficiently to equal that of routines with a CCN of 1.
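For readers who haven't computed the metric by hand: CCN is just the number of decision points plus one. Here is a minimal, hypothetical sketch (not the tool from the article) that approximates McCabe's CCN for Python source using the standard-library `ast` module. It also illustrates the limitation noted above, since every branch is counted equally regardless of how hard it is to reason about:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's CCN: decision points + 1.

    A rough sketch; real tools also handle match/case, ternaries
    in more contexts, and per-function rather than per-module counts.
    """
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Each branch or loop adds one path through the code.
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.IfExp, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # Each extra `and`/`or` operand adds a short-circuit path.
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # if + elif -> CCN of 3
```

Notice that replacing the `elif` chain with a lookup table would lower the CCN without necessarily making the code any less buggy, which is exactly the kind of blind spot the study seems to expose.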
There is no correlation between CCN and bug counts for code with a CCN of 1 through 25! For software engineering wonks and process-improvement gurus, this must come as a bit of a shock. So I ask the readers: is the measurement of CCN a sacred cow that should be slaughtered with the rest of those asinine bovines as we move to a more agile and lean software development process? Or is it still a useful tool that should be kept, with its results interpreted pragmatically?
4 comments:
Interesting, but... I still believe we should value simple code even though the research does not prove it reduces bug count...
Complex code will bite you later, once your original team is gone and the support team is holding the fort...
I'm not shocked by this news, though I'm excited there was a study to prove it. I tend to agree with Sylvain as well - there may not be a correlation with the number of bugs when you deploy, but the time to fix the bugs that do come up is probably longer, as you spend more time trying to make sense of complex code. Maybe they need a follow-up study on this :)
Perhaps on a one-to-one basis, the bugs will take somewhat more time to fix. But is it more than the time it takes to keep refactoring the code to hit some arbitrary CCN?
Sure, simple code should be preferred, but do you invest upfront to refactor code and make it simpler, or simply let it be? After all, there may never be a bug in that particular section of code.
I would not refactor code only on the basis of simplifying it. Hopefully this is caught at code review time.
When it is not broken...