
Tuesday 21 March 2023

Using AI/ML to draft policy

This week, I am preparing a new template for the SecAware policy suite covering the information risks and security, privacy, compliance, assurance and governance arrangements for Artificial Intelligence or Machine Learning systems. With so much ground to cover on this complex, disruptive and rapidly-evolving technology, it is quite a challenge to figure out the key policy matters and express them succinctly in a generic form.

Just for kicks, I set out by asking GPT-4 to draft a policy but, to be frank, it was more hindrance than help. The draft was quite narrowly focused, entirely neglecting several relevant aspects that I feel are important - the information risks arising from the use of commercial AI/ML services by workers, for instance, as opposed to AI/ML systems developed in-house.

The controls it espoused were quite vague and limited in scope, but that's not uncommon in policies. It noted the need for accountability, for instance, but clarified neither the reasons nor how to achieve accountability in practice. It was not pragmatic.

Tuesday 29 November 2022

Information risks a-gurgling

There are clearly substantial information risks associated with the redaction of sensitive elements from disclosed reports and other formats, risks that the controls don't necessarily fully mitigate.

Yes, controls are fallible and constrained, leaving residual risks. This is hardly Earth-shattering news to any competent professional or enlightened infidel, and yet others are frequently shocked. 

A new report* from a research team at the University of Illinois specifically concerns failures in the redaction processes and tools applied to PDF documents. The width of text redacted (covered or replaced) by a variable-length black rectangle may give clues as to the original content, while historically a disappointing number of redaction attempts have failed to prevent the original information being recovered simply by removing the cover images or selecting and pasting the underlying text. Doh!
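
To illustrate that last failure mode, here is a hedged Python sketch (the file name is hypothetical and the pypdf library is assumed): where 'redaction' merely draws a black rectangle over the text, the text generally remains in the PDF content stream, and any standard extractor will happily recover it.

    # Minimal sketch: recovering text from a badly-redacted PDF.
    # Requires 'pip install pypdf'; the file name is invented.
    from pypdf import PdfReader

    reader = PdfReader("badly_redacted.pdf")
    for number, page in enumerate(reader.pages, start=1):
        # extract_text() reads the underlying content stream, oblivious
        # to any black rectangles drawn over the original text.
        print(f"--- page {number} ---")
        print(page.extract_text())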

Wednesday 11 May 2022

Data masking and redaction policy

Last evening I completed and published another SecAware infosec policy template addressing ISO/IEC 27002:2022 clause 8.11 "Data masking":

"Data masking should be used in accordance with the organization’s topic-specific policy on access control and other related topic-specific, and business requirements, taking applicable legislation into consideration."

The techniques for masking or redacting highly sensitive information in electronic and physical documents may appear quite straightforward. However, experience tells us the controls are error-prone and fragile: they generally fail insecure, meaning that sensitive information is liable to be disclosed inappropriately. That, in turn, often leads to embarrassing and costly incidents, with the possibility of prosecution and penalties for the organisation at fault, along with reputational damage and brand devaluation.
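
To illustrate how easily masking fails insecure, here is a hedged Python sketch (the pattern and test data are invented): a naive masker handles the obvious case yet silently passes a common variant through untouched, disclosing the very data it was meant to hide.

    import re

    def mask_pan(text: str) -> str:
        # Naive masking: obscure a 16-digit card number, keeping only
        # the last four digits for reference.
        return re.sub(r"\b\d{12}(\d{4})\b", r"XXXX-XXXX-XXXX-\1", text)

    print(mask_pan("Card: 4111111111111111"))     # masked, as intended
    print(mask_pan("Card: 4111 1111 1111 1111"))  # spaced digits slip through, unmasked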

The policy therefore takes a risk-based approach, outlining a range of masking and redaction controls but recommending advice from competent specialists, particularly if the risks are significant.

The $20 policy template is available here.

Being a brand new policy, it hasn't yet had the benefit of the regular reviews and updates that our more mature policies enjoy ... so, if you spot issues or improvement opportunities, please get in touch.

As usual, I have masked/redacted the remainder of the policy for this blog and on SecAware.com by making an image of just the first half page or so - about one eighth of the document by size but closer to one quarter of the policy's information value. So I'm giving you about $5's worth of information, maybe $4 since the extract is just an image rather than an editable document. On that basis, similar partial images of the 80-odd security policy templates offered through SecAware.com are worth around $320 in total. It's an investment, though: a way to demonstrate the breadth, quality, style and utility of our products, and so convince potential buyers like you to invest in them.

Friday 13 March 2020

March 13 - COVID-19 information risk analysis

I'll kick off with a disclaimer: IANAV*. I have a scientific background in microbial genetics but left the field more than 3 decades ago. I have far more experience in information risk management, so what follows is my personal assessment of the information risks ('risks pertaining to information') associated with the Coronavirus pandemic.

Here's my initial draft of a Probability-Impact-Graphic showing what I see as the main information risk aspects right now, today, with a few words of explanation below:



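The graphic itself is an image. For anyone wanting to draw their own, here is a hedged matplotlib sketch of this kind of Probability-Impact Graphic; the risk labels are drawn from the discussion below but the coordinates are purely illustrative guesses, not the original placements.

    import matplotlib.pyplot as plt

    # Purely illustrative coordinates - not the original chart's placements.
    risks = {
        "Panic-buying shortages": (0.8, 0.2),
        "FUD": (0.7, 0.5),
        "Mis/disinformation": (0.6, 0.4),
        "Recession snowball": (0.8, 0.9),
    }

    fig, ax = plt.subplots()
    for label, (probability, impact) in risks.items():
        ax.scatter(probability, impact)
        ax.annotate(label, (probability, impact), xytext=(5, 5),
                    textcoords="offset points")
    ax.set(xlabel="Probability", ylabel="Impact", xlim=(0, 1), ylim=(0, 1),
           title="COVID-19 information risks (illustrative)")
    plt.show()
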
Top left, the reported shortages of toilet rolls, facemasks, hand sanitiser and soap qualify as information incidents because they are the result of panic buying by people over-reacting to initial media coverage of shortages. The impacts are low because most people are just not that daft. 

Fear, Uncertainty and Doubt, however, is largely what drives those panic buyers. To an extent, I blame the media (mostly social media but also the traditional news media, desperate for their next headline) for frenziedly whipping up a storm of information. There are potentially significant personal and social consequences arising from FUD that I'll cover later.

In amongst the frenzied bad news, there are a few good things coming out of this incident. The global scientific, medical and public services communities are quietly sharing information about the virus, infections, symptoms, morbidity, treatments, contributory factors, social responses etc. There is excellent work going on to characterise the virus, understand its morphology and genetics, understand the disease progression, understand the modes of transmission etc. It's a shame this isn't as widely reported as the bad news, but I think I understand why: scientists, generally, are reluctant to publish information they aren't reasonably sure about, and "reasonably sure" means that if a reporter asks for a categorical statement of fact, most scientists will at least hesitate, if not refuse.

An example of this is the face mask issue: good quality face masks are designed to trap small particles, but not as small as viruses. They help by impeding airborne particles and so reducing the spread of airborne viruses, but they do not totally prevent it, hence it would be inaccurate to claim that they do. The way masks are used also affects their effectiveness.

In risk management terms, most controls are the same: they reduce but do not eliminate risk. The problem comes when people naively mistake a scientist's 'not totally effective' for 'ineffective', and then go on to make bad decisions and biased statements. It's much the same issue that leads to a fascinating social phenomenon known as outrage.

Another positive outcome is the flow of resources into scientific and medical research associated with virology, infectious disease, disease reduction, healthcare, public health management etc. In my own infinitesimal way, I'm investing a few brain cycles into this issue and spending a merry hour or three documenting and sharing my thoughts. It's an insignificant contribution but beats doing nothing. Allegedly.

Next comes a group of four risks, all relating to the large volumes of information circulating right now. "Coronavirus Update" is the top search term on Google US at the moment; Reddit's coronavirus channel is replete with content from around the world, streaming forth like a snotty nose. Social media are overflowing with the stuff, and it's the topic of offline conversations everywhere. The information risks include:
  • Large volumes of poor or dubious quality information spreading rapidly like Chinese whispers;
  • Accidental misinformation and bad advice, spread inadvertently by naive if genuinely concerned people who misinterpret things, modify or elaborate on them, and pass them on**;
  • So much information, in fact, that it is crowding out other stuff - not literally (I'm reasonably sure the Internet and assorted media have capacity to spare, although they too must be suffering from people falling sick, believing they have the virus, being scared of interacting with work colleagues or just "pulling a sickie"), but by diverting attention from other matters;
  • Smaller volumes of deliberately misleading information, promising miracle cures and priority access to limited resources, or opinion pieces and fake news promoting some agenda other than simply spreading factual information, exploiting the chaos to further hidden agendas.
And finally for today, there's one information risk that eclipses the others: the snowball effect as good, bad and ugly information about the pandemic spreads, leading people to worry and back off, reducing productivity and consumption, making investors fearful and sparking a stock market dive leading to yet another global recession. Globally, stock markets are inherently prone to overreacting to bad news. It looks, to me, like an example of a positive feedback loop with a curious bias to the negative: we seem to dive headlong into recessions and market peaks tend to be short-lived, whereas the journey back towards normality is a slow clamber. I rate this as a more significant risk because there are clear signs of it already happening (stock markets in freefall) and the impacts of past recessions have been widespread and dramatic (real-world social effects follow from the economics): in risk terms, that's a bad combination. The other information risks I've discussed vary in probability but, in comparison, their impacts are lower.

So, that's my information risk assessment of COVID-19 for now. What do you think? What important, relevant factors have I missed? Is there anything I have materially misreported or misinterpreted? I plan to update this assessment in due course and welcome further inputs and comments if you have anything to say - critical or constructive, I don't mind which. Perhaps next time I'll explore the threats, vulnerabilities and controls, again from the information risk perspective. But for now I have Things To Do, COVID or no COVID.


..............oooOOOooo..............


* I Am Not A Virologist

** I sincerely hope I am helping not harming by publishing this piece ... but it's up to YOU, dear reader, to consider my credentials and motivations as much as my words: read it critically and make of it what you will. And remember, IANAV. I'm also not a sociologist, medic, public policy or economics expert etc. Just an ordinary guy with a brain, a keyboard and an interest in information risk, security, metrics, resilience and all that jazz.

Wednesday 26 February 2020

A good day down the salt mine

The remaining items for the recycled Information Security 101 module are falling rapidly into place. It will be a bumper delivery with fifty (yes, 50) files already in the bag.

One of the regular end-of-month jobs involves matching up the awareness items - the files - with the contents listing and their descriptions in the train-the-trainer guide. Years back I came up with a simple numeric naming scheme to make it easier to get the files in order and link them with the listings. Good thing too: this afternoon I came across one listed item that I've decided to drop from the module, and about three additions that need to be listed and described. There's still a little time left before delivery to change things further and renumber, again, if we need to ... which emphasises the value of these final quality checks before packaging and despatch.
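
Purely as a hypothetical sketch of that end-of-month cross-check (the file names and listing entries are invented, and the real scheme may differ), numeric prefixes make it trivial to compare the files on disk against the contents listing and report mismatches:

    from pathlib import Path

    # Invented example: numbered items from the train-the-trainer guide's
    # contents listing, and the awareness files delivered in the module.
    listed = {"01 Newsletter", "02 Seminar slides", "03 Policy template"}
    on_disk = {path.stem for path in Path("awareness_module").glob("*")}

    # Anything listed but missing, or delivered but unlisted, needs fixing
    # before the module is packaged and despatched.
    print("Listed but missing from disk:", sorted(listed - on_disk))
    print("On disk but not listed:", sorted(on_disk - listed))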

Another part of the quality assurance process is to open and review the content of all the files. This is our last chance to spot speling mishtakes, errror, omissons and half-finished

I've already made a couple of passes through the materials: the first pass often reminds me of things I've brought up in one item that ought to be repeated or reflected in others, so there's a bit of back-and-forth refinement ... but the looming deadline means eventually I have to call a halt to the spit-n-polish phase. It's tough for me to stop when the materials are 'good enough' rather than 'perfect', but I console (or is it delude?) myself by thinking that nobody but me will spot most of what I consider to be the remaining errors, while it's unlikely I will ever find a further tranche of errors due to my inherent blind spots.

So I keep calm and carry on.

In risk terms, I'm consciously making a trade-off. I could carry on checking and refining the content indefinitely, but I'd blow the delivery deadline. Alternatively I could stop right now and deliver the module as-is, but I'd be distraught to discover significant problems later on ... which does happen sometimes when I re-read stuff I have written, checked and published some months or years earlier. Some of the problems that catch my beady eye now are genuine boo-boos that I really should have spotted and corrected at the time. Some are things I would put differently now because I've changed and the infosec world has moved on. Few are genuine factual errors, though to be honest that says more about me repeating the same mistakes than about the perfection of my writing. Evidently I'm only human. I bleed.

Also in risk terms, I appreciate that despite my best efforts there will almost certainly be things wrong with the finished module, but what of the impacts? I'd be distinctly embarrassed to learn of obvious issues, and I might need to correct them at some cost for rework. Some costs are borne by our customers for whom the awareness materials don't quite go to plan, although part of their regular activities on receipt of each new module is to check through and customise the content to suit their organization's specific awareness and training needs, their industry/business situation, their information risks etc. I think we can all live with that. Risk accepted.

Friday 15 November 2019

Risky business

Physical penetration testing is a worthwhile extension to classical IT network pentests, since most technological controls can be negated by physical access to the IT equipment and storage media. In Iowa, an incident in which two professional pentesters were jailed and taken to court illustrates the importance of getting the legalities right for such work.

A badly-drafted pentest contract and 'get out of jail free' authorization letter led to genuine differences of opinion about whether the pentesters were or were not acting with due authority when they broke into a court building and were arrested. 

With the court case pending against the pentesters, little errors and omissions, conflicts and doubts in the contract have taken on far greater significance than either the pentest firm or its client anticipated, even though both parties appreciated the need for a contract. They thought they were doing the right thing by completing the formalities. It turns out maybe they hadn't.

I hope common sense will prevail and all parties will learn the lessons here, as should other pentesters and clients. The contract must be air-tight (which includes, by the way, being certain that the client has the legal authority to authorize the testing as stated), and the pentesters must act entirely within the scope and terms as agreed (if in doubt, stay out!). Communications around the contract, the scope and nature of the work, and the tests themselves are all crucial, and I will just mention the little matter of ethics, trust and competence.

PS  An article about the alleged shortage of pentesters casually mentions:
"The ideal pen tester also exhibits a healthy dose of deviancy. Some people are so bound by the rules of a system that they can’t think beyond it. They can’t fathom the failure modes of a system. Future penetration testers should have a natural inclination toward pushing the boundaries – especially when they are told, in no uncertain terms, not to do so."
Hmm. So pentesters are supposed to go beyond the boundaries in their testing, but remain strictly within the formally contracted scope, terms and conditions. 'Nuff said.

PPS  Charges against the duo were dropped ~4 months after the incident.

Friday 1 February 2019

Security awareness module on mistakes

Security awareness and training programs are primarily concerned with incidents involving deliberate threats such as hackers and malware. In February, we take a look at mistakes, errors, accidents and other situations that inadvertently cause problems with the integrity of information, such as:
  • Typos;
  • Using inaccurate data, often without realizing it;
  • Having to make decisions based on incomplete and/or out-of-date information;
  • Mistakes when designing, developing, using and administering IT systems, including those that create or expose vulnerabilities to further incidents (such as hacks and malware);
  • Misunderstandings, untrustworthiness, unreliability etc. harming the organization’s reputation and its business relationships.
Mistakes are far more numerous than hacks and malware infections but thankfully most are trivial or inconsequential, and many are spotted and corrected before any damage is done. However, serious incidents involving inaccurate or incomplete information do occur occasionally, reminding us (after the fact!) to be more careful about what we are doing.

The awareness and training materials take a more proactive angle, encouraging workers to take more care with information, especially when handling (providing, communicating, processing or using) particularly important business- or safety-critical information – when the information risks are greater.

Learning objectives

The latest security awareness and training module:
  • Introduces the topic, describing the context and relevance of 'mistakes' to information risk and security;
  • Expands on the associated information risks and typical information security controls to cut down on mistakes involving information;
  • Offers straightforward information and pragmatic advice, motivating people to think - and most of all act – so as to reduce the number and severity of mistakes involving information;
  • Fosters a corporate culture of error-intolerance through greater awareness, accountability and a focus on information quality and integrity.
Our subscribers are encouraged to customize the content supplied, adapting both the look-and-feel (the logo, style, formatting etc.) to suit their awareness program’s branding, and the content to fit their information risk, security and business situations. Subscribers are free to incorporate additional content from other sources, or to cut-and-paste selections from the awareness materials into staff newsletters, internal company magazines, management reports etc. making the best possible use of the awareness content supplied.

So what about your learning objectives in relation to mistakes, errors etc.? Does your organization have persistent problems in this area? Is this an issue that deserves greater attention from staff and management, perhaps in one or more departments, sites/business units or teams? Have mistakes with information ever led to significant incidents? What have you actually done to address the risk?

HINT: Don't be surprised if the same methods lead to the same results. "The successful man will profit from his mistakes ... and try again in a different way" [Dale Carnegie]. 

Thursday 31 January 2019

Why so many IT mistakes?


Well, here we are on the brink of another month-end, scrabbling around to finalize and deliver February's awareness module in time for, errr, February.  

This week we've completed the staff and management security awareness and training materials on "Mistakes", leaving just the professional stream to polish-off today ... and I'm having some last-minute fun finding notable IT mistakes to spice-up the professionals' briefings. 

No shortage there!

Being 'notable' implies we don't need to explain the incidents in any detail - a brief reminder will suffice with a few words of wisdom to highlight some relevant aspect of the awareness topic. Link them into a coherent story and the job's a good 'un.

The sheer number of significant IT mistakes constitutes an awareness message in its own right: how come the IT field appears so extraordinarily error-prone? Although we don't intend to explore that question in depth through the awareness materials, our cunning plan is that it should emerge from the content and leave the audience pondering, hopefully chatting about it. Is IT more complex than other fields, making it harder to get right? Are IT pros unusually inept, slapdash and careless? What are the real root causes underlying IT's poor record? Does the blame lie elsewhere? Or is the assertion that IT has a poor record itself false, a mistake?

The point of this ramble is that we've teased out something interesting and thought-provoking, directly relevant to the topic, contentious and hence stimulating. In awareness terms, that's a big win. Our job is nearly done. Just a few short hours to go now before the module is packaged and delivered, and the fun begins for our customers. 

Monday 28 January 2019

Creative technical writing

"On Writing and Reviewing ..." is a fairly lengthy piece written for EDPACS (the EDP Audit, Control, and Security Newsletter) by Endre Bihari. 

Endre discusses the creative process of writing and reviewing articles, academic papers in particular although the same principles apply more widely - security awareness briefings, for example, or training course notes. Articles for industry journals too. Even scripts for webcasts and seminars etc. Perhaps even blogs.

Although Endre's style is verbose and the language quite complex in places, his succinct bullet-point advice to reviewers is more accessible. On the conclusion section, for example, he recommends asking:
  • Are there surprises? Is new material produced?
  • How do the results the writer arrived at tie back to the purpose of the paper?
  • Is there a logical flow from the body of the paper to the conclusion?
  • What are the implications for further study and practice?
  • Are there limitations in the paper the reader might want to investigate? Are they pointed at sufficiently?
  • Does the writing feel “finished” at the end of the conclusion?
  • Is the reader engaged until the end?
  • How does the writer prompt the reader to continue the creative process?
I particularly like the way Endre emphasizes the creative side of communicating effectively. Even formal academic papers can be treated as creative writing. In fact, most would benefit from a more approachable, readable style. 

Interestingly, Endre points out that the author, reviewer and reader are key parties to the communication, with a brief mention of the editor responsible for managing the overall creative process. Good point!

Had I been asked to review Endre's paper, I might have suggested consolidating the bullet points into a checklist, perhaps as an appendix or a distinct version of the paper. Outside academia, the world increasingly operates on Internet time, due largely to the tsunami of information assaulting us all. Some of us want to get straight to the point first, then, if our interest has been piqued, perhaps explore in more detail from there - which suggests the idea of layering the writing: more succinct and direct at first, with successive layers expanding on the depth. [Endre does discuss the abstract (or summary, executive summary, precis, outline or whatever) but I'm talking here about layering the entire article.]

Another suggestion I'd have made is to incorporate diagrams and figures, in other words using graphic images to supplement or replace the words. A key reason is that many of us 'think in pictures': we find it easier to grasp concepts that are literally drawn out for us rather than (just) written about. There is an art to designing and producing good graphics, though, requiring a set of competencies or aptitudes distinct from writing. 

Graphics are especially beneficial for technical documentation, including security awareness materials such as our seminar presentations and accompanying briefing papers. We incorporate a lot of graphics, such as:
  • Screen-shots showing web pages or application screens such as security configuration options;
  • Graphs - pie-charts, bar-charts, line-charts, spider or radar diagrams etc. depending on the nature of the data;
  • Mind-maps separating the topic into key areas, sometimes pointing out key aspects, conceptual links and common factors;
  • Process flow charts;
  • Informational and motivational messages with eye-catching photographic images;
  • Conceptual diagrams, often mistakenly called 'models' [the models are what the diagrams attempt to portray: the diagrams are simply representational];
  • Other diagrams and images, sometimes annotated and often presented carefully to emphasize certain aspects.
Also, by the way, we use buttons, text boxes, colors and various other graphic devices to pep-up our pieces, for example turning plain (= dull!) bullet point lists into structured figures like this slide plucked from next month's management-level security awareness and training seminar on "Mistakes":

So, depending on its intended purpose and audience, a graphical version of Endre's paper might have been better for some readers, supplementing the published version. At least, that's my take on it, as a reviewer and tech author by day. YMMV.

Sunday 27 January 2019

Streaming awareness content

As the materials fall into place for "Mistakes", our next security awareness module, it's interesting to see how the three content streams have diverged:
  • For workers in general, the materials emphasize making efforts to avoid or at least reduce the number of mistakes involving information such as spotting and self-correcting typos and other simple errors.
  • For managers, there are strategic, governance and information risk management aspects to this topic, with policies and metrics etc.
  • For professionals and specialists, error-trapping, error-correction and similar controls are of particular interest.
The 'workers' audience includes the other two, since managers and pros also work (quite hard, usually!), while professional/specialist managers (such as Information Risk and Security Managers) belong to all three audiences. In other words, according to someone's position or role in the organization, there are several potentially relevant aspects to the topic.

That's what we mean by 'streaming'. It's not (just) about delivering content via streaming media: the audiences matter.

Monday 21 January 2019

Computer errors

Whereas "computer error" implies that the computer has made a mistake, that is hardly ever true. In reality, almost always it is us - the humans - who are mistaken:
  • Flaws are fundamental mistakes in the specification and design of systems such as 'the Internet' (a massive, distributed information system with seemingly no end of security and other flaws!). The specifiers and architects are in the frame, plus the people who hired them, directed them and accepted their work. Systems that are insufficiently resilient for their intended purposes are an example: the issue is not that the computers fail to perform, but that they were, in effect, designed to fail through mistakes in the requirements specification;
  • Bugs are coding mistakes, e.g. the Pentium FDIV bug, a flawed divide lookup table baked deep into the chip. Fingers point towards the developers but again various others are implicated;
  • Config and management errors are mistakes in the configuration and management of a system e.g. disabling controls such as antivirus, backups and firewalls, or neglecting to patch systems to fix known issues;
  • Typos are mistakes in the data entered by users including those who program and administer the systems;
  • Further errors are associated with the use of computers, computer data and outputs e.g. misinterpreting reports, inappropriately disclosing, releasing or allowing access to sensitive data, misusing computers that are unsuited for the particular purposes, and failing to control IT changes;
  • 'Deliberate errors' include fraud e.g. submitting duplicate or false invoices, expenses claims, timesheets etc. using accidents, confusion, ineptitude as an excuse. 
Set against that broad backdrop, do computers as such ever make mistakes? Here are some possible examples of true "computer errors":
  • Physical phenomena such as noise on communications links and power supplies frequently cause errors, the vast majority of which are automatically controlled (e.g. detected using Cyclic Redundancy Checks and corrected, typically by retransmission - see the sketch after this list) ... but some slip through due to limitations in the controls. These could also be categorized as physical incidents and inherent limitations of information theory, while limited controls are, again, largely the result of human errors;
  • Just like people, computers are subject to rounding errors, and the mathematical principles that underpin statistics apply equally to computers, calculators and people. Fully half of all computers make more than the median number of errors!;
  • Artificial intelligence systems can be misled by available information. They are almost as vulnerable to learning inappropriate rules and drawing false conclusions as we humans are. It could be argued that these are not even mistakes, however, since there are complex but mechanistic relationships between their inputs and outputs;
  • Computers are almost as vulnerable as us to errors in ill-defined areas such as language and subjectivity in general - but again it could be argued that these aren't even errors. Personally, I think people are wrong to use SMS/TXT shortcuts and homonyms in email, and by implication email systems are wrong in neither expanding nor correcting them for me. I no U may nt accpt tht. 
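
As promised above, here is a minimal sketch of CRC-based error detection using Python's standard zlib.crc32, with an invented message: the check value changes on even a single-bit corruption, so the receiver can detect the error and request a retransmission (the CRC itself corrects nothing). The closing line illustrates the rounding point too.

    import zlib

    message = b"Computers rarely err; humans frequently do."
    check = zlib.crc32(message)  # sender computes and transmits this CRC

    # Simulate a single-bit error on the communications link.
    corrupted = bytearray(message)
    corrupted[10] ^= 0x01  # flip one bit

    # Receiver recomputes the CRC over what arrived and compares.
    print(zlib.crc32(bytes(corrupted)) == check)  # False: error detected

    # And the rounding point, shared by computers, calculators and people:
    print(0.1 + 0.2 == 0.3)  # False, thanks to binary floating point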

Sunday 20 January 2019

Human error stats

Within our next awareness module on "Mistakes", we would quite like to use some headline statistics to emphasize the importance of human error in information security, illustrating and informing.

So what numbers should we use? 

Finding numbers is the easy part - all it takes is a simple Google search. However, it soon becomes apparent that many of the numbers in circulation are worthless. So far, I've seen figures ranging from 30 to 90% for the proportion of incidents caused by human error, and I've little reason to trust those limits!

Not surprisingly, the approach favored by marketers is to pick the most dramatic figure supporting whatever it is they are promoting. Many such figures appear either to have been plucked out of thin air (with little if any detail about the survey methods) or generated by unscientific studies deliberately constructed to support a foregone conclusion. I imagine "What do you want us to prove?" is one of the most important questions some market survey companies ask of their clients.

To make matters worse, there is a further systemic bias towards large numbers. I hinted at this above when I mentioned 'emphasize the importance' using 'headline statistics': headlines sell, hence eye candy is the name of the game. If a survey finds 51% of something, it doesn't take much for that to become "more than half" then "a majority", then "most", then, well, whatever. As these little nuggets of information pass through the Net, the language becomes ever more dramatic and eye-catching at each step. It's a ratchet effect that quite often ends up in "infographics": not only are the numbers themselves dubious but they are deliberately visually overemphasized. Impact trumps fact. 

So long as there is or was once (allegedly) a grain of fact in there, proponents claim to be speaking The Truth which brings up another factor: the credibility of the information sources. Through bitter experience over several years, I am so cynical about one particular highly self-promotional market survey company that I simply distrust and ignore anything they claim: that simple filter (OK prejudice!) knocks out about one third of the statistics in circulation. Tightening my filter (narrowing my blinkers) further to discount other commercial/vendor-sponsored surveyors discounts another third. At a stroke, I've substantially reduced the number of figures under consideration.

Focusing now on the remainder, it takes effort to evaluate the statistics. Comparing and contrasting different studies, for instance, is tricky since they use different methods and samples (usually hard to determine), and often ambiguous wording. "Cyber" and "breach" are common examples. What exactly is "cybersecurity" or a "cyber threat"? You tell me! To some, "breach" implies "privacy breach" or "breach of the defensive controls" or "breach of the defensive perimeter", while to others it implies "incidents with a deliberate cause" ... which would exclude errors.

For example, the Cyber Security Breaches Survey 2018 tells us: 
"It is important to note that the survey specifically covers breaches or attacks, so figures reported here also include cyber security attacks that did not necessarily get past an organisation’s defences (but attempted to do so)."
Some hours after setting out to locate a few credible statistics for awareness purposes, I'm on the point of either giving up on my quest, choosing between several remaining options (perhaps the 'least bad'), lamely offering a range of values (hopefully not as broad as 30 to 90%!) ... or taking a different route to our goal. 

It occurs to me that the situation I'm describing illustrates the very issue of human error quite nicely. I could so easily have gone with that 90% figure, perhaps becoming "almost all" or even "all". I'm not joking: there is a strong case to argue that human failings are the root cause of all our incidents. But to misuse the statistics in that way, without explanation, would have been a mistake.