Thursday 31 January 2019

Why so many IT mistakes?


Well, here we are on the brink of another month-end, scrabbling around to finalize and deliver February's awareness module in time for, errr, February.  

This week we've completed the staff and management security awareness and training materials on "Mistakes", leaving just the professional stream to polish off today ... and I'm having some last-minute fun finding notable IT mistakes to spice up the professionals' briefings.

No shortage there!

Being 'notable' implies we don't need to explain the incidents in any detail - a brief reminder will suffice with a few words of wisdom to highlight some relevant aspect of the awareness topic. Link them into a coherent story and the job's a good 'un.

The sheer number of significant IT mistakes constitutes an awareness message in its own right: how come the IT field appears so extraordinarily error-prone? Although we don't intend to explore that question in depth through the awareness materials, our cunning plan is that it should emerge from the content and leave the audience pondering, hopefully chatting about it. Is IT more complex than other fields, making it harder to get right? Are IT pros unusually inept, slapdash and careless? What are the real root causes underlying IT's poor record? Does the blame lie elsewhere? Or is the assertion that IT has a poor record false, a mistake?

The point of this ramble is that we've teased out something interesting and thought-provoking, directly relevant to the topic, contentious and hence stimulating. In awareness terms, that's a big win. Our job is nearly done. Just a few short hours to go now before the module is packaged and delivered, and the fun begins for our customers. 

Monday 28 January 2019

Creative technical writing

"On Writing and Reviewing ..." is a fairly lengthy piece written for EDPACS (the EDP Audit, Control, and Security Newsletter) by Endre Bihari. 

Endre discusses the creative process of writing and reviewing articles, academic papers in particular, although the same principles apply more widely - security awareness briefings, for example, or training course notes. Articles for industry journals too. Even scripts for webcasts and seminars etc. Perhaps even blogs.

Although Endre's style is verbose and the language quite complex in places, I find his succinct bullet-point advice to reviewers more accessible. On the conclusion section, for example, he recommends asking:
  • Are there surprises? Is new material produced?
  • How do the results the writer arrived at tie back to the purpose of the paper?
  • Is there a logical flow from the body of the paper to the conclusion?
  • What are the implications for further study and practice?
  • Are there limitations in the paper the reader might want to investigate? Are they pointed at sufficiently?
  • Does the writing feel “finished” at the end of the conclusion?
  • Is the reader engaged until the end?
  • How does the writer prompt the reader to continue the creative process?
I particularly like the way Endre emphasizes the creative side of communicating effectively. Even formal academic papers can be treated as creative writing. In fact, most would benefit from a more approachable, readable style. 

Interestingly, Endre points out that the author, reviewer and reader are key parties to the communication, with a brief mention of the editor responsible for managing the overall creative process. Good point!

Had I been asked to review Endre's paper, I might have suggested consolidating the bullet points into a checklist, perhaps as an appendix or a distinct version of his paper. Outside of academia, the world is increasingly operating on Internet time due, largely, to the tsunami of information assaulting us all. Some of us want to get straight to the point first, then, if our interest has been piqued, explore in more detail from there - which suggests the idea of layering the writing: succinct and direct at first, with successive layers expanding the depth. [Endre does discuss the abstract (or summary, executive summary, precis, outline or whatever) but I'm talking here about layering the entire article.]

Another suggestion I'd have made is to incorporate diagrams and figures, in other words using graphic images to supplement or replace the words. A key reason is that many of us 'think in pictures': we find it easier to grasp concepts that are literally drawn out for us rather than (just) written about. There is an art to designing and producing good graphics, though, requiring a set of competencies or aptitudes distinct from writing. 

Graphics are especially beneficial for technical documentation including security awareness materials, such as our seminar presentations and accompanying briefing papers. We incorporate a lot of graphics such as:
  • Screen-shots showing web pages or application screens such as security configuration options;
  • Graphs - pie-charts, bar-charts, line-charts, spider or radar diagrams etc. depending on the nature of the data (see the sketch after this list);
  • Mind-maps separating the topic into key areas, sometimes pointing out key aspects, conceptual links and common factors;
  • Process flow charts;
  • Informational and motivational messages with eye-catching photographic images;
  • Conceptual diagrams, often mistakenly called 'models' [the models are what the diagrams attempt to portray: the diagrams are simply representational];
  • Other diagrams and images, sometimes annotated and often presented carefully to emphasize certain aspects.
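To make the graphs bullet concrete, here's a minimal sketch of how a simple awareness-metrics bar chart might be generated with Python's matplotlib. The topics, scores and filename are entirely hypothetical - this is illustrative, not our actual production template:

```python
# Minimal sketch: turning awareness metrics into a bar chart
# (hypothetical topics and scores - substitute your own data)
import matplotlib.pyplot as plt

topics = ["Phishing", "Passwords", "Mistakes", "Malware"]
quiz_scores = [72, 65, 48, 81]  # e.g. % of staff answering quiz questions correctly

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(topics, quiz_scores, color="steelblue")
ax.set_ylabel("Quiz score (%)")
ax.set_ylim(0, 100)
ax.set_title("Security awareness by topic")
fig.tight_layout()
fig.savefig("awareness_scores.png", dpi=150)
```

A bar chart suits this sort of categorical comparison; a line chart would suit the same metric tracked over time.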
Also, by the way, we use buttons, text boxes, colors and various other graphic devices to pep-up our pieces, for example turning plain (= dull!) bullet point lists into structured figures like this slide plucked from next month's management-level security awareness and training seminar on "Mistakes":

So, depending on its intended purpose and audience, a graphical version of Endre's paper might have been better for some readers, supplementing the published version. At least, that's my take on it, as a reviewer and tech author by day. YMMV.

Sunday 27 January 2019

Streaming awareness content

As the materials fall into place for "Mistakes", our next security awareness module, it's interesting to see how the three content streams have diverged:
  • For workers in general, the materials emphasize making efforts to avoid or at least reduce the number of mistakes involving information such as spotting and self-correcting typos and other simple errors.
  • For managers, there are strategic, governance and information risk management aspects to this topic, with policies and metrics etc.
  • For professionals and specialists, error-trapping, error-correction and similar controls are of particular interest.
The 'workers' audience includes the other two, since managers and pro's also work (quite hard, usually!), while professional/specialist managers (such as Information Risk and Security Managers) belong to all three audiences. In other words, according to someone's position or role in the organization, there are several potentially relevant aspects to the topic.

That's what we mean by 'streaming'. It's not (just) about delivering content via streaming media: the audiences matter.

Friday 25 January 2019

Cyber risks in context

The World Economic Forum's latest Global Risks Report includes the following Probability Impact Graphic (yellow highlighting added):



So "cyber-attacks" are ranked in the the high-risk zone similar to "natural disasters", while "data fraud or theft" and "critical information infrastructure breakdown" are close-by. I find that quite remarkable: according to the survey, people are almost as concerned about information or IT security failures as they are about the increasingly extreme 'weather bombs' and natural disasters precipitated by climate change.   

The report also includes a forward-looking view of changing risks, including this level-headed assessment of the potential impact of quantum computing on present-day cryptography:
"When the huge resources being devoted to quantum research lead to large-scale quantum computing, many of the tools that form the basis of current digital cryptography will be rendered obsolete. Public key algorithms, in particular, will be effortlessly crackable. Quantum also promises new modes of encryption, but by the time new protections have been put in place many secrets may already have been lost to prying criminals, states and competitors. A collapse of cryptography would take with it much of the scaffolding of digital life. These technologies are at the root of online authentication, trust and even personal identity. They keep secrets—from sensitive personal information to confidential corporate and state data—safe. And they keep fundamental services running, from email communication to banking and commerce. If all this breaks down, the disruption and the cost could be massive. As the prospect of quantum code-breaking looms closer, a transition to new alternatives— such as lattice-based and hash-based cryptography—will gather pace. Some may even revert to low-tech solutions, taking sensitive information offline and relying on in-person exchanges. But historical data will be vulnerable too. If I steal your conventionally encrypted data now, I can bide my time until quantum advances help me to access it, regardless of any stronger precautions you subsequently put in place."
I distinctly remember raising this in a bank's risk workshop thirteen years ago. At the time, the risk was considered high impact but low probability: as the technology advances, the probability is increasing while, at the same time, so is the potential impact since we increasingly depend on cryptography. I wonder if the bank did anything about it, or merely dismissed it as 'Just another paranoid consultant's ramblings'?
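As an aside, the 'hash-based cryptography' the report mentions is conceptually quite simple. Here is a toy sketch of a Lamport one-time signature in Python - purely illustrative, emphatically not production crypto, and real post-quantum schemes such as the hash-based SPHINCS+ family are considerably more elaborate:

```python
# Toy Lamport one-time signature - illustrative sketch only, NOT production crypto.
# Security rests solely on the hash function, which is why hash-based schemes
# are thought to resist the quantum attacks that break public-key algorithms.
import hashlib
import secrets

def keygen():
    # 256 pairs of random secrets, one pair per bit of the message hash
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def sign(message: bytes, sk):
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    # Reveal one secret from each pair, selected by the corresponding bit
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(message: bytes, signature, pk) -> bool:
    digest = hashlib.sha256(message).digest()
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(hashlib.sha256(sig).digest() == pk[i][b]
               for i, (sig, b) in enumerate(zip(signature, bits)))

sk, pk = keygen()
sig = sign(b"keep this confidential", sk)
assert verify(b"keep this confidential", sig, pk)
assert not verify(b"tampered message", sig, pk)
```

Each Lamport key pair can safely sign only one message, which hints at why the practical schemes are more involved.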

Wednesday 23 January 2019

Infosec policies rarer than breaches

I'm in shock. While studying a security survey report, my eye was caught by the title page:


Specifically, the last bullet point is shocking: the survey found that less than a third of UK organizations have "a formal cyber security policy or policies". That seems very strange given the preceding two bullet points, firstly that more than a third have suffered "a cyber security breach or attack in the last 12 months" (so they can hardly deny that the risk is genuine), and secondly a majority claim that "cyber security is a high priority for their organisation's senior management" (and yet they don't even bother setting policies??).

Even without those preceding bullets, the third one seems very strange - so strange in fact that I'm left wondering if maybe there was a mistake in the survey report (e.g. a data, analytical or printing error), or in the associated questions (e.g. the questions may have been badly phrased) or in my understanding of the finding as presented. In my limited first-hand experience with rather less than ~2,000 UK organizations, most have information security-related policies in place today ... but perhaps that's exactly the point: they may have 'infosec policies' but not 'cybersec policies' as such. Were the survey questions in this area worded too explicitly or interpreted too precisely? Was 'cyber security' even defined for respondents, or 'policy' for that matter? Or is it that, being an infosec professional, I'm more likely to interact with organizations that have a clue about infosec, hence my sample is biased?

Thankfully, a little digging led me to the excellent technical annex with very useful details about the sampling and survey methods. Aside from some doubt about the way different sizes of organizations were sampled, the approach looks good to me, writing as a former research scientist, latterly an infosec pro - neither a statistician nor surveyor by profession.

Interviewers had access to a glossary defining a few potentially confusing terms, including cyber security:
"Cyber security includes any processes, practices or technologies that organisations have in place to secure their networks, computers, programs or the data they hold from damage, attack or unauthorised access." 
Nice! That's one of the most lucid definitions I've seen, worthy of inclusion in our glossary. It is only concerned with "damage, attack or unauthorised access" to "networks, computers, programs or the data they hold" rather than information risk and security as a whole, but still it is quite wide in scope. It is not just about hacks via the Internet by outsiders, one of several narrow interpretations in circulation. Nor is it purely about technical or technological security controls.

"Breach" was not defined though. Several survey questions used the phrase "breach or attack", implying that a breach is not an attack, so what is it? Your guess is as good as mine, or the interviewers' and the interviewees'!

Overall, the survey was well designed, competently conducted by trustworthy organizations, and hence the results are sound. Shocking, but sound.

I surmise that my shock relates to a mistake on my part. I assumed that most organizations had policies in this area. As to why roughly two thirds of them don't, one can only guess since the survey didn't explore that aspect, at least not directly. Given my patent lack of expertise in this area, I won't even hazard a guess. Maybe you are willing to give it a go?

Monday 21 January 2019

Computer errors

Whereas "computer error" implies that the computer has made a mistake, that is hardly ever true. In reality, almost always it is us - the humans - who are mistaken:
  • Flaws are fundamental mistakes in the specification and design of systems such as 'the Internet' (a massive, distributed information system with seemingly no end of security and other flaws!). The specifiers and architects are in the frame, plus the people who hired them, directed them and accepted their work. Systems that are not sufficiently resilient for their intended purposes are an example of this: the issue is not that the computers fail to perform, but that they were designed to fail due to mistakes in the requirements specification;
  • Bugs are coding mistakes e.g. the Pentium FDIV bug, traced to missing entries in a division lookup table deep within the chip. Fingers point towards the developers but again various others are implicated;
  • Config and management errors are mistakes in the configuration and management of a system e.g. disabling controls such as antivirus, backups and firewalls, or neglecting to patch systems to fix known issues;
  • Typos are mistakes in the data entered by users including those who program and administer the systems;
  • Further errors are associated with the use of computers, computer data and outputs e.g. misinterpreting reports, inappropriately disclosing, releasing or allowing access to sensitive data, misusing computers that are unsuited for the particular purposes, and failing to control IT changes;
  • 'Deliberate errors' include fraud e.g. submitting duplicate or false invoices, expenses claims, timesheets etc. using accidents, confusion, ineptitude as an excuse. 
Set against that broad backdrop, do computers as such ever make mistakes? Here are some possible examples of true "computer errors":
  • Physical phenomena such as noise on communications links and power supplies frequently cause errors, the vast majority of which are automatically controlled against (e.g. detected using Cyclic Redundancy Checks and corrected by retransmission or error-correcting codes - see the sketch after this list) ... but some slip through due to limitations in the controls. These could also be categorized as physical incidents and inherent limitations of information theory, while limited controls are, again, largely the result of human errors;
  • Just like people, computers are subject to rounding errors, and the mathematical principles that underpin statistics apply equally to computers, calculators and people. Fully half of all computers make more than the median number of errors!;
  • Artificial intelligence systems can be misled by available information. They are almost as vulnerable to learning inappropriate rules and drawing false conclusions as we humans are. It could be argued that these are not even mistakes, however, since there are complex but mechanistic relationships between their inputs and outputs;
  • Computers are almost as vulnerable as us to errors in ill-defined areas such as language and subjectivity in general - but again it could be argued that these aren't even errors. Personally, I think people are wrong to use SMS/TXT  shortcuts and homonyms in email, and by implication email systems are wrong in neither expanding nor correcting them for me. I no U may nt accpt tht. 
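To make the first two bullets concrete, here's a small Python sketch showing a checksum catching a flipped bit, and a classic binary floating-point rounding surprise. The message is hypothetical and CRC-32 stands in for whatever error-detection code a real link or chip actually uses:

```python
# Sketch: a checksum detecting a transmission error, and a rounding error.
import binascii

# 1. Error detection: flip one bit in a 'transmitted' message and the
#    CRC-32 no longer matches, so the corruption is detected (correction
#    would then typically be by retransmission).
message = bytearray(b"pay the supplier 100 dollars")
crc_sent = binascii.crc32(message)
message[8] ^= 0x01                           # simulate noise flipping one bit
assert binascii.crc32(message) != crc_sent   # receiver spots the mismatch

# 2. Rounding: 0.1 has no exact binary representation, so the sum
#    accumulates a tiny error - the computer is 'wrong' by design.
assert 0.1 + 0.2 != 0.3
print(f"{0.1 + 0.2:.20f}")  # prints 0.30000000000000004441...
```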

Sunday 20 January 2019

Human error stats

Within our next awareness module on "Mistakes", we would quite like to use some headline statistics to emphasize the importance of human error in information security, illustrating and informing.

So what numbers should we use? 

Finding numbers is the easy part - all it takes is a simple Google search. However, it soon becomes apparent that many of the numbers in circulation are worthless. So far, I've seen figures ranging from 30 to 90% for the proportion of incidents caused by human error, and I've little reason to trust those limits!

Not surprisingly the approach favored by marketers is to pick the most dramatic figure supporting whatever it is they are promoting. Many such figures appear either to have been plucked out of thin air (with little if any detail about the survey methods) or generated by nonscientific studies deliberately constructed to support the foregone conclusion. I imagine "What do you want us to prove?" is one of the most important questions some market survey companies ask of their clients.

To make matters worse, there is a further systemic bias towards large numbers. I hinted at this above when I mentioned 'emphasize the importance' using 'headline statistics': headlines sell, hence eye candy is the name of the game. If a survey finds 51% of something, it doesn't take much for that to become "more than half" then "a majority", then "most", then, well, whatever. As these little nuggets of information pass through the Net, the language becomes ever more dramatic and eye-catching at each step. It's a ratchet effect that quite often ends up in "infographics": not only are the numbers themselves dubious but they are deliberately visually overemphasized. Impact trumps fact. 

So long as there is or was once (allegedly) a grain of fact in there, proponents claim to be speaking The Truth which brings up another factor: the credibility of the information sources. Through bitter experience over several years, I am so cynical about one particular highly self-promotional market survey company that I simply distrust and ignore anything they claim: that simple filter (OK prejudice!) knocks out about one third of the statistics in circulation. Tightening my filter (narrowing my blinkers) further to discount other commercial/vendor-sponsored surveyors discounts another third. At a stroke, I've substantially reduced the number of figures under consideration.

Focusing now on the remainder, it takes effort to evaluate the statistics. Comparing and contrasting different studies, for instance, is tricky since they use different methods and samples (usually hard to determine), and often ambiguous wording. "Cyber" and "breach" are common examples. What exactly is "cybersecurity" or a "cyber threat"? You tell me! To some, "breach" implies "privacy breach" or "breach of the defensive controls" or "breach of the defensive perimeter", while to others it implies "incidents with a deliberate cause" ... which would exclude errors.

For example, the Cyber Security Breaches Survey 2018 tells us:
"It is important to note that the survey specifically covers breaches or attacks, so figures reported here also include cyber security attacks that did not necessarily get past an organisation’s defences (but attempted to do so)."
Some hours after setting out to locate a few credible statistics for awareness purposes, I'm on the point of either giving up on my quest, choosing between several remaining options (perhaps the 'least bad'), lamely offering a range of values (hopefully not as broad as 30 to 90%!) ... or taking a different route to our goal. 

It occurs to me that the situation I'm describing illustrates the very issue of human error quite nicely. I could so easily have gone with that 90% figure, perhaps becoming "almost all" or even "all". I'm not joking: there is a strong case to argue that human failings are the root cause of all our incidents. But to misuse the statistics in that way, without explanation, would have been a mistake.

Tuesday 15 January 2019

Mistaken awareness


Our next security awareness and training module concerns human error. "Mistakes" is its catchy title but what will it actually cover? What is its purpose? Where is it heading? 

[Scratches head, gazes vacantly into the distance]

Scoping any module draws on:
  • The preliminary planning, thinking, research and pre-announcements that led us to give it a title and a few vague words of description on the website;
  • Other modules, especially recent ones that are relevant to, or touched on, this topic with an eye to its being covered in February;
  • Preliminary planning for future topics that we might introduce or mention briefly in this one but need not cover in any depth - not so much a grand master plan covering all the awareness topics as a reasonably coherent overview, the picture-on-the-box showing the whole jigsaw;
  • Customer suggestions and feedback, plus conjecture about aspects or concerns that seem likely to be relevant to our customers given their business situations and industries e.g. compliance drivers;
  • General knowledge and experience in this area, including our understanding of good practices ... which reminds me to check the ISO27k and other standards for guidance and of course Google, an excellent way to dig out potentially helpful advice, current thinking in this area plus news of recent, public incidents involving human error;
  • Shallow and deep thought, day and night-dreaming, doodling, occasional caffeine-fueled bouts of mind-mapping, magic crystals and witchcraft a.k.a. creative thinking.

Scoping the module is not a discrete one-off event; rather, we spiral in on the final scope during the course of researching, designing, developing and finalizing the materials. Astute readers might have noticed this happen before, past modules sometimes changing direction and titles in the course of production. Maybe the planned scope turned out to be too ambitious or for that matter too limiting, too dull and boring for our demanding audiences, or indeed for us. Some topics are more inspiring than others.

So, back to "Mistakes": what will the awareness module cover? What we have roughly in mind at this point is: human error, computer error, bugs and flaws, data-entry errors and GIGO, forced and unforced accidents, errors of commission and omission. Little, medium and massive errors, plus those that change. Errors that are are immediately and painfully obvious to all concerned, plus those that lurk quietly in the shadows, perhaps forever unrecognized as such. Error prevention, detection and correction. Failures of all sizes and kinds, including failures of controls to prevent, mitigate, detect and recover from incidents. Conceptual and practical errors. Strategic, tactical and operational errors, particularly mistaken assumptions, poor judgement and inept decision making (the perils of management foresight given incomplete knowledge and imperfect information). Mistakes by various third parties (customers, suppliers, partners, authorities, regulators, advisers, investors, other stakeholders, journalists, social media wags, the Great Unwashed ...) as well as by management and staff. Cascading effects due to clusters and dependencies, some of which are unappreciated until little mistakes lead to serious incidents.

Hmmm, that's more than enough already, if an unsightly jumble!

Talking of incidents, we've started work on a brand new awareness module on incident detection, due in April, hence we won't delve far into incident management in February, merely titillating our audiences (including you, dear blog reader) with brief tasters of what's to come, sweet little aperitifs to whet the appetite.

Q: is an undetected incident an incident?  

A: yes. The fact that it hasn't (yet) been detected may itself constitute a further incident, especially if it turns out to be serious and late/non-detection makes matters even worse.

Tuesday 8 January 2019

Audit questions (braindump)


"What questions should an auditor ask?" is an FAQ that's tricky to answer since "It depends" is technically correct but completely unhelpful.  

To illustrate my point, here are some typical audit questions or inquiries:
  • What do you do in the area of X
  • Tell me about X
  • Show me the policies and procedures relating to X
  • Show me the documentation arising from or relating to X
  • Show me the X system from the perspectives of a user, manager and administrator
  • Who are the users, managers and admins for X
  • Who else can access or interact or change X
  • Who supports X and how good are they
  • Show me what happens if X
  • What might happen if X
  • What else might cause X
  • Who might benefit or be harmed if X
  • What else might happen, or has ever happened, after X
  • Show me how X works
  • Show me what’s broken with X
  • Show me how to break X
  • What stops X from breaking
  • Explain the controls relating to X
  • What are the most important controls relating to X, and why is that
  • Talk me through your training in X
  • Does X matter
  • In the grand scheme of things, is X important relative to, say, Y and Z
  • Is X an issue for the business, or could it be
  • Could X become an issue for the business if Y
  • Under what circumstances might X be a major problem
  • When might X be most problematic, and why
  • How big is X - how wide, how heavy, how numerous, how often ... 
  • Is X right, in your opinion
  • Is X sufficient and appropriate, in your opinion
  • What else can you tell me about X
  • Talk me through X
  • Pretend I am clueless: how would you explain X
  • What causes X
  • What are the drivers for X
  • What are the objectives and constraints relating to X
  • What are the obligations, requirements and goals for X
  • What should or must X not do
  • What has X achieved to date
  • What could or should X have achieved to date
  • What led to the situation involving X
  • What’s the best/worst thing about X
  • What’s the most/least successful or effective thing within, about or without X
  • Walk or talk me through the information/business risks relating to X
  • What are X’s strengths and weaknesses, opportunities and threats
  • What are the most concerning vulnerabilities in X
  • Who or what might threaten X
  • How many changes have been made in X
  • Why and how is X changed
  • What is the most important thing about X
  • What is the most valuable information in X
  • What is the most voluminous information in X
  • How accurate is X …
  • How complete is X …
  • How up-to-date is X …
    • … and how do you know that (show me)
  • Under exceptional or emergency conditions, what are the workarounds for X
  • Over the past X months/years, how many Ys have happened … how and why
  • If X was compromised in some way, or failed, or didn’t perform as expected etc., what would/might happen
  • Who might benefit from or be harmed by X 
  • What has happened in the past when X failed, or didn’t perform as expected etc.
  • Why hasn’t X been addressed already
  • Why didn’t previous efforts fix X
  • Why does X keep coming up
  • What might be done to improve X
  • What have you personally tried to address X
  • What about your team, department or business unit: what have they done about X
  • If you were the Chief Exec, Managing Director or god, what would you do about X
  • Have there been any incidents caused by or involving X and how serious were they
  • What was done in response – what changed and why
  • Who was involved in the incidents
  • Who knew about the incidents
  • How would we cope without X
  • If X was to be replaced, what would be on your wishlist for the replacement
  • Who designed/built/tested/approved/owns X
  • What is X made of: what are the components, platforms, prerequisites etc.
  • What versions of X are in use
  • Show me the configuration parameters for X
  • Show me the logs, alarms and alerts for X
  • What does X depend on
  • What depends on X
  • If X was preceded by W or followed by Y, what would happen to Z
  • Who told you to do ... and why do you think they did that
  • How could X be done more efficiently/effectively
  • What would be the likely or possible consequences of X
  • What would happen if X wasn’t done at all, or not properly
  • Can I have a read-only account on system X to conduct some enquiries
  • Can I have a full-access account on test system X to do some audit tests
  • Can I see your test plans, cases, data and results
  • Can someone please restore the X backup from last Tuesday 
  • Please retrieve tape X from the store, show me the label and lend me a test system on which I can explore the data content
  • If X was so inclined, how could he/she cause chaos, or benefit from his/her access, or commit fraud/theft, or otherwise exploit things
  • If someone was utterly determined to exploit, compromise or harm X, highly capable and well resourced, what might happen, and how might we prevent them succeeding
  • If someone did exploit X, how might they cover their tracks and hide their shenanigans
  • If X had been exploited, how would we find out about it
  • How can you prove to me that X is working properly
  • Would you say X is top quality or perfect, and if not why not
  • What else is relevant to X
  • What has happened recently in X
  • What else is going on now in X
  • What are you thinking about or planning for the mid to long term in relation to X
  • How could X be linked or integrated with other things
  • Are there any other business processes, links, network connections, data sources etc. relating to X
  • Who else should I contact about X
  • Who else ought to know about the issues with X
  • A moment ago you/someone else told me about X: so what about Y
  • I heard a rumour that Y might be a concern: what can you tell me about Y
  • If you were me, what aspects of X would concern you the most
  • If you were me, what else would you ask, explore or conclude about X
  • What is odd or stands out about X
  • Is X good practice
  • What is it about X that makes you most uncomfortable
  • What is it about this audit that makes you most uncomfortable
  • What is it about me that makes you most uncomfortable
  • What is it about this situation that makes you most uncomfortable
  • What is it about you that makes me most uncomfortable
  • Is there anything else you’d like to say
I could go on all day but that is more than enough already and I really ought to be earning a crust! If I had more time, stronger coffee and thought it would help, I might try sorting and structuring that braindump ... but in many ways it would be better still if you did so, considering and revising the list to suit your purposes if you are planning an audit. 

Alternatively, think about the questions you should avoid or not ask. Are there any difficult areas? What does that tell you?

It's one of those situations where the journey trumps the destination. Developing a set of audit concerns and questions is a creative process. It's fun.

I’m deliberately not specifying “X” because that is the vital context. The best way I know of determining X and the nature of the questions/enquiries arising is risk analysis. The auditor looks at the subject area, considers the possibilities, evaluates the risks and picks out the ones that are of most concern, does the research and fieldwork, examines the findings … and re-evaluates the situation (possibly leading to further investigation – it’s an iterative process, hence all the wiggly arrows and loops on the process diagram). 
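For what it's worth, the prioritization step can be as crude as scoring and ranking. Here's a minimal sketch assuming a plain likelihood-times-impact scheme, with entirely hypothetical audit areas and ratings - real risk analysis is richer and more iterative than this:

```python
# Minimal sketch: ranking candidate audit areas by a crude risk score.
# Hypothetical areas and 1-5 ratings - substitute your own judgements.
candidate_areas = {
    "unpatched servers":           {"likelihood": 4, "impact": 4},
    "stale joiner/leaver process": {"likelihood": 3, "impact": 3},
    "untested backups":            {"likelihood": 2, "impact": 5},
    "weak password policy":        {"likelihood": 4, "impact": 2},
}

def risk_score(ratings: dict) -> int:
    return ratings["likelihood"] * ratings["impact"]

# The highest-scoring areas become "X", the focus for audit fieldwork
for area, ratings in sorted(candidate_areas.items(),
                            key=lambda kv: risk_score(kv[1]),
                            reverse=True):
    print(f"{risk_score(ratings):>2}  {area}")
```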

Auditing is not simply a case of picking up and completing a questionnaire or checklist, although that might be part of the audit preparation. Competent, experienced auditors feed on lists, books, standards and Google as inputs and thought-provokers for the audit work, not definitive or restrictive descriptions of what to do. On top of all that, the stuff they discover often prompts or leads to further enquiries, sometimes revealing additional issues or risks or concerns almost by accident. The real trick to auditing is to go in with eyes, ears and minds wide open – curious, observant, naïve, doubtful (perhaps even cynical) yet willing to consider and maybe be persuaded.