Tuesday, December 18, 2012

Don’t take the Technical Debt Metaphor too far

Because “technical debt” has the word “debt” in it, many people have decided that it makes sense to think about and work with technical debt in monetary terms, and to treat it as a real financial cost. This is supposed to make it easier for technical people to explain technical debt to the business, and easier to make a business case for paying debt off.

Putting technical debt into financial terms also allows consultants and vendors to try to scare business executives into buying their tools or their help – like Gartner calculating that worldwide “IT debt” costs will exceed $1.5 trillion in a couple more years, or CAST Software’s assessment that the average enterprise is carrying millions of dollars of technical debt.

Businesses understand debt. Businesses make a decision to take on debt, and they track it, account for it and manage it. The business always knows how much debt it has, why it took it on, and when it needs to pay it off. Businesses don’t accidentally take on debt – debt doesn't just show up on the books one day.

We don't know when we're taking technical debt on

But developers accidentally take on debt all of the time – what Martin Fowler calls “inadvertent debt”, the result of inexperience and misunderstandings, everything from “What’s Layering?” to “Now we know how we should have done it” when looking back at the design a year or two later.

"The point is that while you're programming, you are learning. It's often the case that it can take a year of programming on a project before you understand what the best design approach should have been."
Taking on this kind of debt is inevitable – and you’ll never know when you’re taking it on or how much, because you don’t know what you don’t know.

Even when developers take on debt consciously, they don’t understand the costs at the time – the principal or the interest. Most teams don’t record when they make a trade-off in design or a shortcut in coding or test automation, never mind try to put a value on paying off their choice.

We don’t understand (or often even see) technical debt costs until long after we've taken them on: when you’re dealing with quality and stability problems; when you're estimating a change and realize that you made mistakes in the past, or took shortcuts that you didn't recognize at the time, or shortcuts that you did know about but that turned out to be much more expensive than you expected; when you finally understand that you chose the wrong architecture or the wrong technical platform; or when you've just run a static analysis tool like CAST or SONAR which tells you that you have thousands of dollars of technical debt in your code base that you didn't know about until now.

Now try to explain to a business executive that you just realized, or just remembered, that you have put the company into debt for tens or hundreds of thousands of dollars. Businesses don’t and can’t run this way.

We don't know how much technical debt is really costing us

By expressing everything in financial terms, we’re also pretending that technical debt costs are all hard costs to the business and that we actually know how much the principal and interest costs are: we’re $100,000 in debt and the interest rate is 3% per year. Assigning a monetary value to technical debt costs gives them a false sense of precision and accuracy.
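
To see why, here is a minimal sketch (in Java, with every input invented for illustration rather than measured) of roughly the kind of arithmetic that sits behind tool-generated debt figures: count the findings, assume an average remediation effort, multiply by an hourly rate, then pick an “interest” factor for the ongoing drag.

    // A minimal sketch of the arithmetic behind tool-style technical debt figures.
    // Every input below is an invented assumption, not a measured fact.
    public class DebtEstimate {
        public static void main(String[] args) {
            int findings = 2500;            // issues reported by a static analysis tool
            double hoursPerFinding = 0.5;   // assumed average remediation effort
            double hourlyRate = 80.0;       // assumed blended developer cost per hour

            double principal = findings * hoursPerFinding * hourlyRate;  // $100,000
            double interestRate = 0.03;     // assumed annual cost of living with the debt
            double annualInterest = principal * interestRate;            // $3,000 per year

            System.out.printf("Principal: $%,.2f  Annual interest: $%,.2f%n",
                    principal, annualInterest);
        }
    }

Nudge any one of those assumed inputs and the “debt” swings by tens of thousands of dollars, which is exactly the false precision problem.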

Let's be honest. There aren't clear and consistent criteria for costing technical debt or modelling technical debt repayment – we don’t even have a definition of technical debt that we can all agree on. Two people can come up with different technical debt assessments for the same system, because what I think technical debt is and what you think technical debt is aren't the same. And just because a tool says that the technical debt in a code base costs $100,000 doesn't make the number true.

Any principal and interest that you calculate (or that some tool calculates for you) are made-up numbers, and the business will know this when you try to defend them – which you are going to have to do if you want to talk in financial terms with someone who does finance for a living. You’re going to be on shaky ground at best – at worst, they’ll understand that you’re not talking about real business debt and wonder what you’re trying to pull off.

The other problem that I see is “debt fatigue”. Everyone is overwhelmed by the global government debt crisis and the real estate debt crisis and the consumer debt crisis and the fiscal cliff and whatever comes next. Your business may be already fighting its own problems with managing its financial debt. Technical debt is one more argument about debt that nobody is looking forward to hearing.

We don’t need to talk about debt with the business

We don’t use the term “technical debt” with the business, or try to explain it in financial debt terms. If we need to rewrite code because it is unstable, we treat this like any other problem that needs to be solved – we cost it out, explain the risks, and prioritize this work with everything else. If we need to rewrite or restructure code in order to make upcoming changes easier, cheaper and less risky, we explain this as part of the work that needs to be done, and justify the costs. If we need to replace or upgrade a platform technology because we are getting poor support from the supplier, we consider this a business risk that needs to be understood and managed. And if code should be refactored or tests filled in, we don’t explain it, we just do it as part of day-to-day engineering work.

We’re dealing with technical debt in terms that the business understands without using a phony financial model. We’re not pretending that we’re carrying off-balance sheet debt that the company needs to rely on technologists to understand and manage. We’re leaving debt valuation and payment amortization arguments to the experts in finance and accounting where they belong, and focusing on solving problems in software, which is where we belong.

Friday, December 14, 2012

SANS Application Security Survey

Frank Kim and I helped out with an industry-wide survey on application security practices. The results of the survey and our analysis can be found in the SANS Analyst Program Reading Room here.

Tuesday, December 11, 2012

Are bugs part of technical debt?

Everybody is talking about technical debt today: developers, testers, consultants, managers - even executives.

But the more that people talk about technical debt, the fuzzier the idea gets, and the more watered down the meaning of “technical debt” becomes.

Philippe Kruchten is trying to solve this by suggesting a narrower definition of what technical debt is and what it isn't. He breaks out all of the work on a system into work that is visible or invisible, and that has positive or negative value:

                  Visible         Invisible
Positive Value    New Features    Architecture and Structure
Negative Value    Defects         Technical Debt

In this model, defects that have been found but haven't been fixed yet aren't technical debt - they are just part of the work that everyone can see and that has to be prioritized with the rest of the backlog.

But what about defects that you've found, but that you've decided that you’re not going to fix – bugs that you think you can get away without fixing (or that the people before you thought they could get away without fixing)?

These bugs are technical debt – because you’re pretending that the bugs are invisible. You’re betting that you won’t have to take on the cost of fixing these problems, in the short term at least, just like other kinds of technical debt: copy-and-paste code, conscious short-cuts in design, compounding complexity, automated tests that should have been written but weren't, refactoring that should have been done but wasn't, code that you wrote that you wish you hadn't because you didn't know the language well enough back then, and anything else that you did or didn't do in the past that has a chance of slowing down your work and making it harder in the future.

The same rule applies to results from static analysis bug finding tools like Findbugs and Klocwork and Fortify which point out coding problems that could be real bugs and security vulnerabilities. After you filter out the false positives and motherhood, and the code that works but really should be cleaned up, you’re left with code that is wrong or broken – code that should be fixed.
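
To make that concrete, here is a small, hypothetical Java example of the kind of finding these tools report as a defect rather than a style issue: a string comparison that tests object identity instead of equality, and a lookup result that is dereferenced without a null check.

    import java.util.List;
    import java.util.Map;

    // Hypothetical examples of the kinds of real defects (not style nits)
    // that static analysis bug finders typically report.
    public class OrderChecks {

        // Defect: == compares object identity in Java, so logically equal
        // strings can still return false here. Should be "PRIORITY".equals(type).
        static boolean isPriority(String type) {
            return type == "PRIORITY";
        }

        // Defect: Map.get() returns null for an unknown key, and the result
        // is dereferenced without a check - a possible NullPointerException.
        static int itemCount(Map<String, List<String>> orders, String orderId) {
            List<String> items = orders.get(orderId);
            return items.size();
        }
    }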

Keep in mind that these are problems that haven’t been found yet in testing, or found in production by customers – or at least nobody knows that they've run across these bugs. These are problems that the team knows about, but that aren't visible to anybody else. Until they are fixed or at least added to the backlog of work that will be done soon, they are another part of the debt load that the team has taken on and will have to worry about paying back some day. This is why tools like SONAR include static analysis coding violations when calculating technical debt costs.

I agree that bugs aren't technical debt – unless you’re trying to pretend that the bugs aren't there and that you won’t have to fix them. Then it’s like any other technical debt trade off – you’ll need to see if your bet pays off over time.

Tuesday, December 4, 2012

Rule of 30 – When is a method, class or subsystem too big?

A question that constantly comes up from people who care about writing good code is: what’s the right size for a method or function, or a class, or a package, or any other chunk of code? At some point any piece of code can be too big to understand properly – but how big is too big?

It starts at the method or function level.

In Code Complete, Steve McConnell says that the theoretical best maximum limit for a method or function is the number of lines that can fit on one screen (i.e., that a developer can see at one time). He then goes on to reference studies from the 1980s and 1990s which found that the sweet spot for functions is somewhere between 65 and 200 lines: routines this size are cheaper to develop and have fewer errors per line of code. However, at some point beyond 200 lines you cross into a danger zone where code quality and understandability fall apart: code that can’t be tested and can’t be changed safely. Eventually you end up with what Michael Feathers calls “runaway methods”: routines that are hundreds or even thousands of lines long, constantly being changed, and continuously getting bigger and scarier.

Patrick Duboy looks deeper into this analysis of method length, and points to a more recent study from 2002 which found that code with shorter routines has fewer defects overall – which matches most people’s intuition and experience.

Smaller must be better

Bob Martin takes the idea that “if small is good, then smaller must be better” to an extreme in Clean Code:

The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Functions should not be 100 lines long. Functions should hardly ever be 20 lines long.
Martin admits that “This is not an assertion that I can justify. I can’t produce any references to research that shows that very small functions are better.” So like many other rules or best practices in the software development community, this is a qualitative judgement made by someone based on their personal experience writing code – more of an aesthetic argument – or even an ethical one – than an empirical one. Style over substance.

The same “small is better” guidance applies to classes, packages and subsystems – all of the building blocks of a system. In Code Complete, a study from 1996 found that classes with more routines had more defects. Like functions, according to Clean Code, classes should also be “smaller than small”. Some people recommend that 200 lines is a good limit for a class (not a method), or as few as 50-60 lines (in Ben Nadel’s Object Calisthenics exercise), and that a class should consist of “less than 10” or “not more than 20” methods. The famous C3 project – where Extreme Programming was born – had 12 methods per class on average. And there should be no more than 10 classes per package.

PMD, a static analysis tool that helps to highlight problems in code structure and style, defines some default values for code size limits: 100 lines per method, 1000 lines per class, and 10 methods in a class. Checkstyle, a similar tool, suggests different limits: 50 lines in a method, 1500 lines in a class.

Rule of 30

Looking for guidelines like this led me to the “Rule of 30” in Refactoring in Large Software Projects by Martin Lippert and Stephen Roock:

If an element consists of more than 30 subelements, it is highly probable that there is a serious problem:

a) Methods should not have more than an average of 30 code lines (not counting line spaces and comments).

b) A class should contain an average of less than 30 methods, resulting in up to 900 lines of code.

c) A package shouldn’t contain more than 30 classes, thus comprising up to 27,000 code lines.

d) Subsystems with more than 30 packages should be avoided. Such a subsystem would count up to 900 classes with up to 810,000 lines of code.

e) A system with 30 subsystems would thus possess 27,000 classes and 24.3 million code lines.

What does this look like? Take a biggish system of 1 million NCLOC. Following the Rule of 30, this should break down into roughly (the arithmetic is sketched out after the list):
  • 30,000+ methods
  • 1,000+ classes
  • 30+ packages
  • Hopefully more than 1 subsystem
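
These bullet figures just fall out of dividing 1 million lines by the Rule of 30 limits; here is a quick sketch of that arithmetic (nothing here beyond the numbers already quoted in the rule):

    // Quick arithmetic behind the breakdown above, using the Rule of 30 limits.
    public class RuleOfThirtyMath {
        public static void main(String[] args) {
            int ncloc = 1000000;                       // ~1 million non-comment lines of code

            int methods  = ncloc / 30;                 // ~33,000 methods at ~30 lines each
            int classes  = ncloc / (30 * 30);          // ~1,100 classes at up to ~900 lines each
            int packages = ncloc / (30 * 30 * 30);     // ~37 packages at up to ~27,000 lines each

            System.out.printf("methods: %,d  classes: %,d  packages: %,d%n",
                    methods, classes, packages);
        }
    }
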
How many systems in the real world look like this, or close to this – especially big systems that have been around for a few years?

Are these rules useful? How should you use them?

Using code size as the basis for rules like this is simple: easy to see and understand. Too simple, many people would argue: a better indicator of when code is too big is cyclomatic complexity or some other measure of code quality. But some recent studies show that code size actually is a strong predictor of complexity and quality – that

“complexity metrics are highly correlated with lines of code, and therefore the more complex metrics provide no further information that could not be measured simply with lines of code”.
In "Beyond Lines of Code: Do we Need more Complexity Metrics" in Making Software, the authors go so far as to say that lines of code should be considered always as the "first and only metric" for defect prediction, development and maintenance models.

Recognizing that simple sizing rules are arbitrary, should you use them, and if so how?

I like the idea of rough and easy-to-understand rules of thumb that you can keep in the back of your mind when writing code or looking at code and deciding whether it should be refactored. The real value of a guideline like the Rule of 30 is when you're reviewing code and identifying risks and costs.

But enforcing these rules in a heavy-handed way on every piece of code as it is being written is foolish. You don’t want to stop when you’re about to write the 31st line in a method – it would slow work to a crawl. And forcing everyone to break code up to fit arbitrary size limits will make the code worse, not better – the structure will end up being dominated by short-term decisions.

As Jeff Langer points out in his chapter discussing Kent Beck’s four rules of Simple Design in Clean Code:

“Our goal is to keep our overall system small while we are also keeping our functions and classes small. Remember however that this rule is the lowest priority of the four rules of Simple Design. So, although it’s important to keep class and function count low, it’s more important to have tests, eliminate duplication, and express yourself.”
Sometimes it will take more than 30 lines (or 20 or 5 or whatever the cut-off is) to get a coherent piece of work done. It’s more important to be careful in coming up with the right abstractions and algorithms and to write clean, clear code – if a cut-off guideline on size helps you do that, use it. If it doesn't, then don’t bother.
