SMotW #3: Unpatched vulnerabilities

This week's security metric is, at face value, straightforward: "simply count the number of technical/software vulnerabilities that remain unpatched."

In practice, the metric as stated is quite ambiguous. Are we meant to count the number of distinct vulnerabilities for which patches have not yet been applied, the number of systems that remain to be patched, or both? If the organization uses a distributed vulnerability scanning utility that identifies missing patches, the management console may report the metric directly as a raw count, or perhaps as the average number of missing patches per machine.
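To make the ambiguity concrete, here is a minimal sketch (in Python, using a hypothetical scanner export of host/vulnerability pairs) showing how the three readings of the metric can diverge over exactly the same scan data:

    from collections import defaultdict

    # Hypothetical scanner export: one (hostname, vulnerability ID) pair
    # for every missing patch found in the latest scan.
    findings = [
        ("web01", "CVE-2023-0001"),
        ("web01", "CVE-2023-0002"),
        ("web02", "CVE-2023-0001"),
        ("app01", "CVE-2023-0001"),
        ("db01",  "CVE-2023-0001"),
        ("db01",  "CVE-2023-0003"),
    ]

    distinct_vulns = {vuln for _, vuln in findings}    # reading 1
    unpatched_hosts = {host for host, _ in findings}   # reading 2
    per_host = defaultdict(int)
    for host, _ in findings:
        per_host[host] += 1
    avg_missing = sum(per_host.values()) / len(per_host)  # reading 3

    print(f"Distinct unpatched vulnerabilities: {len(distinct_vulns)}")   # 3
    print(f"Systems with missing patches:       {len(unpatched_hosts)}")  # 4
    print(f"Average missing patches per system: {avg_missing:.1f}")       # 1.5

Three plausible readings, three different numbers - which is exactly why the metric needs a tighter specification before anyone reports it to management.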

The Predictability score for this metric would be higher if it also somehow addressed unknown and hence currently unpatchable vulnerabilities, plus nontechnical vulnerabilities (e.g. physical vulnerabilities and vulnerabilities in business processes). Software vulnerabilities are important, but the organization ultimately needs to address risks as a whole, which means considering other kinds of vulnerabilities too, along with threats and impacts.

The metric also disregards vulnerabilities for which no patch is currently available (so-called zero-days or "0-days").  If we have the capability to identify them (meaning some form of penetration testing or fuzzing), the metric could be expanded to cover zero-days as well as patchable vulnerabilities.  The problem here is that practically all software has bugs, some of which are security-relevant zero-days.  The number of zero-days we find is a compound function of the quality of the software design, coding and pre-release testing, plus the quality of the vulnerability assessment/post-release testing.  The latter, in turn, depends on the amount of effort and resources applied to the testing, and on the expertise and tools of the testers.

If very carefully specified and rigorously measured so as to standardize or normalize for these variables, the metric might conceivably have value for tracking or comparing different systems and test teams, but that would take a lot of effort, hence the mediocre Cost rating.  In short, there are so many variables that the more complicated metric would have little value for decision support: it would probably end up as a facile "Oh, that's nice" or "That looks bad" metric.  Time lag is also an issue, since it takes time to identify and characterize a vulnerability, put the corresponding signature into the scanning tools, scan the systems, examine and assess the output, and finally react appropriately to the findings - hence the low Timeliness score.  The overall PRAGMATIC score for the relatively simple metric as stated works out at 68%.  Here's the detail:

P    R    A    G    M    A    T    I    C    Score
80   64   80   70   80   75   25   85   52   68%
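For anyone checking the arithmetic, the overall score is consistent with a simple unweighted mean of the nine criterion ratings, rounded to the nearest whole percentage point. A minimal sketch of that calculation, assuming no weighting is applied:

    # The nine criterion ratings in the order tabulated above
    # (P, R, A, G, M, A, T, I, C), averaged with equal weight.
    ratings = [80, 64, 80, 70, 80, 75, 25, 85, 52]
    overall = sum(ratings) / len(ratings)   # 611 / 9 = 67.9
    print(f"Overall PRAGMATIC score: {overall:.0f}%")   # prints 68%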