SMotW #22: IRR
Security Metric of the Week #22: Internal Rate of Return
IRR is one of several financial metrics in our collection. It measures the projected profitability of an investment, such as a proposed security implementation project. If the IRR exceeds the organization's cost of capital, the project may be worth pursuing (unless funds are limited and competing proposals offer even higher IRRs or greater intangible benefits).
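To make the calculation concrete, here is a minimal Python sketch (the cash flows and the 10% cost of capital are invented purely for illustration): IRR is the discount rate at which the net present value (NPV) of the projected cash flows falls to zero, found here by simple bisection.

```python
def npv(rate, cashflows):
    """Net present value of cashflows; cashflows[0] is the t=0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Find the rate where NPV = 0 by bisection (assumes a sign change on [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # Keep whichever half-interval still brackets the root
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical security proposal: $100k outlay, then five years of net benefits
project = [-100_000, 30_000, 35_000, 35_000, 30_000, 25_000]
rate = irr(project)
print(f"IRR = {rate:.1%}")                        # roughly 17% for these made-up figures
print("Worth a look" if rate > 0.10 else "Pass")  # vs. an assumed 10% cost of capital
```

In practice Finance would use their own models or spreadsheet functions rather than anything like this; the point is simply that IRR falls out of the same projected cash flows used for any other investment appraisal.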
Comparing IRR against other financial metrics is tricky. For starters, we are not accountants, economists or financiers by training, and this stuff is hard! Furthermore, different circumstances and different types of investment call for different metrics ... but arguably the most important factor is that organizations tend to rely on certain financial metrics to assess and monitor most of their projects. Regardless of any technical arguments for or against IRR, if management routinely uses it, there will inevitably be pressure for security project proposals to follow suit.
Being PRAGMATIC about it:
| P  | R  | A  | G  | M  | A  | T  | I  | C  | Score |
|----|----|----|----|----|----|----|----|----|-------|
| 70 | 72 | 25 | 30 | 82 | 50 | 44 | 60 | 88 | 58%   |
Notice the 88% score for Cost: if IRR is going to be required anyway for investment appraisal, the marginal cost of using it as a security metric is almost nil. Finance probably has the requisite models/spreadsheets and expertise to calculate IRR for all proposed projects on an even footing ... but someone still has to provide the input parameters, so it is not totally free.
The low ratings for Accuracy and Genuineness reflect the fact that virtually all investments are inherently uncertain. The metric depends on projections and estimates, which are in turn influenced by the assumptions of whoever provides the raw data. Strong optimists and pessimists are likely to make unrealistic claims about the costs and benefits, and may not even appreciate their own bias (we all secretly believe we know because we are the realists!). 'Calibrating' the people making the projections may help, and this tends to happen naturally with experience - in other words, IRR accuracy probably correlates with years of experience at calculating investment returns.

Another way to improve accuracy is to persuade several competent and interested people to provide the numbers used to calculate IRR (see the sketch below). If their estimates cluster closely around the same values (i.e. low deviation from the mean, low variance), the numbers have more credibility than if they differ wildly: exploring the reasons for those differences (for example, different assumptions or factors) can generate further insight and value from the metric, perhaps suggesting the need to control those factors more closely.
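As a rough illustration of that last point, the following Python sketch (again with entirely hypothetical figures and estimators) computes IRR separately from each person's cash-flow projection and reports the spread.

```python
from statistics import mean, stdev

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection on NPV; assumes NPV changes sign on [lo, hi]."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if npv(lo) * npv(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

# Three hypothetical estimators projecting the same proposal's cash flows
estimates = {
    "optimist":  [-100_000, 40_000, 45_000, 45_000, 40_000],
    "realist":   [-100_000, 30_000, 35_000, 35_000, 30_000],
    "pessimist": [-110_000, 20_000, 25_000, 30_000, 30_000],
}
irrs = {who: irr(cf) for who, cf in estimates.items()}
for who, r in irrs.items():
    print(f"{who:>9}: IRR = {r:.1%}")

# Tight clustering (low standard deviation) lends the numbers credibility;
# a wide spread is a prompt to dig into the differing assumptions.
print(f"mean = {mean(irrs.values()):.1%}, spread (stdev) = {stdev(irrs.values()):.1%}")
```

A wide spread here is not a failure of the metric - it is useful information in its own right, pointing at the assumptions that differ between estimators.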