Security Metric of the Week #57: Proportion of information assets correctly classified
Patently, this metric relates to the classification of information, an important form of control.
The assumption underlying classification is that the majority of an organization's information is neither critical nor sensitive. It is therefore wasteful to secure all the information to the extent that is appropriate for the small amount that is highly critical or sensitive. Likewise, the basic or baseline controls that are appropriate for most information are unlikely to be sufficient for the more critical or sensitive stuff.
The classification process can be as simple or as complicated as you like, according to the number of classes. Taken to extremes:
- A single classification level such as "Corporate Classified" could be defined, in which case everything would end up being protected to the same extent.
- More likely, certain important items of information would be deemed "Corporate Classified" with the remainder being "Corporate Unclassified", meaning a two-level classification scheme (OK, three if you count the information assets that have yet to be classified!).
- At the opposite end of the scale, the classification could be so granular in detail that many classes contain just a single information asset with a unique set of security controls for that specific asset.
- Classification is essentially a pointless exercise at both extremes. Its value increases in the middle ground where 'a reasonable number' of classes are defined, each containing 'a reasonable number' of information assets. It's up to you to determine what's reasonable!
The driver for classification is also variable. Although we mentioned 'criticality' and 'sensitivity', those are not the only parameters. For example, picture a 3x3x3 Rubik's cube with low-medium-high categories for confidentiality, integrity and availability, or a classification scheme that depends on the value of the information, howsoever defined.
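To make that cube idea concrete, here is a minimal Python sketch of such a three-dimensional scheme. The Level values, field names and the compact C/I/A marking are illustrative assumptions only, not a recommended design.

```python
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass(frozen=True)
class Classification:
    """One cell of the hypothetical 3x3x3 'Rubik's cube' scheme."""
    confidentiality: Level
    integrity: Level
    availability: Level

    @property
    def marking(self) -> str:
        # A compact label such as "C3-I2-A3" for documents and records
        return (f"C{self.confidentiality.value}"
                f"-I{self.integrity.value}"
                f"-A{self.availability.value}")


# Example: a customer database judged highly confidential, with a medium
# integrity requirement and needing high availability.
customer_db = Classification(Level.HIGH, Level.MEDIUM, Level.HIGH)
print(customer_db.marking)  # -> C3-I2-A3
```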
Military and government classification schemes appear quite simple in that they are largely or exclusively concerned with confidentiality (e.g. Secret, Top Secret, Ultra), but there are numerous wrinkles in practice such as subtly different definitions of the classes by different countries, and subsidiary markings identifying who is authorized to access the information.
Corporate classification schemes commonly distinguish personal information, trade secrets, other internal-use information and public information, but again there are numerous variations.
Classifying information involves two key steps:
1. The information is assessed to determine the appropriate class using defined classification criteria.
2. Information security controls deemed appropriate for the particular classification level are applied.
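By way of illustration only, a toy routine covering both steps might look like the sketch below; the class names, criteria and control lists are invented, and a real scheme would of course be driven by the organization's own classification policy.

```python
# Hypothetical two-step classification: (1) assess the asset against defined
# criteria to pick a class, (2) look up the controls appropriate to that class.
# All class names, criteria and controls here are invented for illustration.

CRITERIA = [
    # (predicate over the asset, resulting class) - evaluated in order
    (lambda a: a.get("contains_trade_secrets"), "SECRET"),
    (lambda a: a.get("contains_personal_data"), "CONFIDENTIAL"),
    (lambda a: a.get("internal_only"), "INTERNAL"),
]
DEFAULT_CLASS = "PUBLIC"

CONTROLS = {
    "SECRET": ["encrypt at rest", "named-individual access", "log every access"],
    "CONFIDENTIAL": ["encrypt at rest", "role-based access"],
    "INTERNAL": ["authenticated access"],
    "PUBLIC": ["integrity checks only"],
}


def classify(asset: dict) -> str:
    """Step 1: determine the class using the defined classification criteria."""
    for predicate, cls in CRITERIA:
        if predicate(asset):
            return cls
    return DEFAULT_CLASS


def required_controls(asset: dict) -> list[str]:
    """Step 2: the controls deemed appropriate for that classification level."""
    return CONTROLS[classify(asset)]


payroll = {"name": "payroll database", "contains_personal_data": True}
print(classify(payroll), required_controls(payroll))
# -> CONFIDENTIAL ['encrypt at rest', 'role-based access']
```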
This week's example metric concerns step 1, and is only indicative of step 2 if we assume that a sound process is being followed religiously. Step 2 could be measured independently using a suitable compliance metric.
The illustrative graphic above shows a hypothetical organization systematically assessing and classifying its information assets, measuring and reporting the metric month by month. The graph plots "Proportion of information assets correctly classified" by month. The simple Red-Amber-Green color-coding makes it obvious that things have improved substantially since the start of the initiative, with two step-changes in the levels presumably representing discrete projects or stages that made significant progress.
Actually measuring this metric could be something of a mission if you insist on doing so accurately (more on that point below). First, since you are reporting a proportion, you need to determine the size of the whole: in other words, how many information assets are there to be classified in total? Answering that further requires clarity over what constitutes an information asset. Leaving aside the question of whether the term includes ICT hardware and storage media, or just the information/data content, the unit of analysis is also unclear. For instance, does a customer database containing 1,000 customer records each with 100 fields count as one information asset, or 100, or 1,000, or 100,000, or some other number? The answer is not immediately obvious.
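That unit-of-analysis decision matters to the numbers, as this toy sketch (with an invented asset register) shows: counting whole assets and counting individual records give noticeably different readings from the same data.

```python
# Invented asset register: 'classified' means the asset has been through the
# classification process. The metric's value depends on what we choose to count.
inventory = [
    {"name": "customer DB", "records": 1000, "classified": True},
    {"name": "HR files",    "records": 250,  "classified": False},
    {"name": "price list",  "records": 1,    "classified": True},
]


def proportion_classified(assets, unit="asset"):
    """Proportion classified, counting whole assets or individual records."""
    weight = (lambda a: 1) if unit == "asset" else (lambda a: a["records"])
    total = sum(weight(a) for a in assets)
    done = sum(weight(a) for a in assets if a["classified"])
    return done / total


print(f"per asset:  {proportion_classified(inventory, 'asset'):.0%}")   # 67%
print(f"per record: {proportion_classified(inventory, 'record'):.0%}")  # 80%
```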
In the same vein, the metric explicitly refers to assets being 'correctly' classified, implying that, strictly speaking, someone should check the veracity of the classifications - potentially a huge amount of work and additional cost just for the sake of the metric.
On the other hand, clarity over 'information asset' and 'correctly classified' may have value to the organization's information security beyond the metric.
Anyway, let's pick up on that point about the accuracy requirement for this metric. Since we are reporting a proportion, the absolute numbers are less important than their relative quantities. Rather than accuracy, consistency of the measurement approach is the primary concern. With that in mind, it doesn't particularly matter how we define 'information asset' or 'correctly classified' just so long as the definitions remain the same from month to month. For various other reasons, it may occasionally be necessary to alter the definitions, in which case we should probably re-base prior values in order to maintain consistency of the metric.
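Re-basing can be as simple as the (admittedly crude) sketch below: measure the metric both ways in the month the definition changes, then scale the earlier values by the observed ratio so the series remains comparable. Treat this as one possible approach, not the only way to do it; the figures are invented.

```python
# Crude re-basing sketch: scale historical values by the ratio observed in the
# month when both the old and new definitions were measured side by side.
history = [0.35, 0.41, 0.48, 0.52]          # monthly values, old definition
old_at_switch, new_at_switch = 0.52, 0.60   # both measured in the switch month

factor = new_at_switch / old_at_switch
rebased = [min(round(v * factor, 2), 1.0) for v in history]  # cap at 100%
print(rebased)  # -> [0.4, 0.47, 0.55, 0.6]
```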
Another big advantage of reporting a proportion is that it is possible to select and measure a representative sample of the population - 'representative' being the crucial term. We're not going to discuss sampling methods today, though. If you need more, there are brief notes about sampling in PRAGMATIC Security Metrics, while any decent statistics text covers it in laborious detail.
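Without getting into sampling design, here is a bare-bones sketch of estimating the metric from a simple random sample, with a rough 95% confidence interval from the normal approximation; the population and sample size are invented.

```python
import math
import random


def estimate_proportion(population, sample_size, seed=None):
    """Estimate the proportion classified from a simple random sample,
    with an approximate 95% confidence interval (normal approximation)."""
    rng = random.Random(seed)
    sample = rng.sample(population, sample_size)
    p = sum(1 for asset in sample if asset["classified"]) / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, margin


# Invented population: 10,000 assets of which roughly 70% are classified.
population = [{"classified": i % 10 < 7} for i in range(10_000)]
p, margin = estimate_proportion(population, sample_size=400, seed=1)
print(f"Estimated {p:.0%} +/- {margin:.0%} of assets classified")
```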
The excellent PRAGMATIC ratings indicate this metric is a hit for Acme Enterprises Inc:
| P  | R  | A  | G  | M  | A  | T  | I  | C  | Score |
|----|----|----|----|----|----|----|----|----|-------|
| 75 | 75 | 97 | 85 | 90 | 80 | 80 | 80 | 80 | 82%   |
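For anyone checking the arithmetic, the overall score is simply the mean of the nine criterion ratings, rounded to the nearest whole percent:

```python
# The 82% score is the rounded mean of the nine PRAGMATIC criterion ratings.
ratings = [75, 75, 97, 85, 90, 80, 80, 80, 80]  # P R A G M A T I C
score = round(sum(ratings) / len(ratings))
print(f"{score}%")  # -> 82%
```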
In discussing various candidate metrics, Acme's managers were particularly impressed with this one's Actionability and clarity of Meaning (notwithstanding the notes above - presumably they already had a clear picture in the areas mentioned). Driving up the proportion of information assets correctly classified was seen as a valid and viable goal to improve information security - not so much a goal in itself as a means of achieving a general security improvement for Acme as a whole, on the reasonable assumption that, following classification, security resources would be applied more rationally to implement more appropriate security controls.