Posts

Showing posts from January, 2019

Why so many IT mistakes?

Well, here we are on the brink of another month-end, scrabbling around to finalize and deliver February's awareness module in time for, errr, February. This week we've completed the staff and management security awareness and training materials on "Mistakes", leaving just the professional stream to polish off today ... and I'm having some last-minute fun finding notable IT mistakes to spice up the professionals' briefings. No shortage there! Being 'notable' implies we don't need to explain the incidents in any detail - a brief reminder will suffice, with a few words of wisdom to highlight some relevant aspect of the awareness topic. Link them into a coherent story and the job's a good 'un. The sheer number of significant IT mistakes constitutes an awareness message in its own right: how come the IT field appears so extraordinarily error-prone? Although we don't intend to explore that question in depth through the awareness materials,

Creative technical writing

" On Writing and Reviewing ... " is a fairly lengthy piece written for EDPACS (the EDP Audit, Control, and Security Newsletter ) by Endre Bihari.  Endre discusses the creative process of writing and reviewing articles, academic papers in particular although the same principles apply more widely - security awareness briefings, for example, or training course notes. Articles for industry journals too. Even scripts for webcasts and seminars etc . Perhaps even blogs. Although Endre's style is verbose and the language quite complex in places, I find his succinct bullet point advice to reviewers more accessible, for example on the conclusion section he recommends: Are there surprises? Is new material produced? How do the results the writer arrived at tie back to the purpose of the paper? Is there a logical flow from the body of the paper to the conclusion? What are the implications for further study and practice? Are there limitations in the paper the reader might want to inves

Streaming awareness content

As the materials fall into place for "Mistakes", our next security awareness module, it's interesting to see how the three content streams have diverged:

- For workers in general, the materials emphasize making efforts to avoid or at least reduce the number of mistakes involving information, such as spotting and self-correcting typos and other simple errors.
- For managers, there are strategic, governance and information risk management aspects to this topic, with policies and metrics etc.
- For professionals and specialists, error-trapping, error-correction and similar controls are of particular interest.

The 'workers' audience includes the other two, since managers and pros also work (quite hard, usually!), while professional/specialist managers (such as Information Risk and Security Managers) belong to all three audiences. In other words, according to someone's position or role in the organization, there are several potentially relevant aspects to the topic
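For the professionals' stream, an error-trapping control can be as simple as validating input before acting on it. Here's a minimal sketch in Python - the function name, field and range check are hypothetical illustrations, not taken from the module itself:

```python
def parse_age(raw: str) -> int:
    """Error-trapping: reject bad input early rather than letting it propagate.

    The field ('age') and the plausibility range are illustrative assumptions.
    """
    try:
        age = int(raw.strip())
    except ValueError:
        # Trap the error at the boundary, with a message that aids correction
        raise ValueError(f"not a whole number: {raw!r}")
    if not 0 <= age <= 130:
        raise ValueError(f"outside plausible range: {age}")
    return age
```

The point is the placement of the control: the mistake is caught where it enters the system, which is far cheaper than unpicking it downstream.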

Cyber risks in context

The World Economic Forum's latest Global Risks Report includes the following Probability Impact Graphic (yellow highlighting added): So "cyber-attacks" are ranked in the high-risk zone similar to "natural disasters", while "data fraud or theft" and "critical information infrastructure breakdown" are close by. I find that quite remarkable: according to the survey, people are almost as concerned about information or IT security failures as they are about the increasingly extreme 'weather bombs' and natural disasters precipitated by climate change. The report also includes a forward-looking view of changing risks, including this level-headed assessment of the potential impact of quantum computing on present-day cryptography: "When the huge resources being devoted to quantum research lead to large-scale quantum computing, many of the tools that form the basis of current digital cryptography will be rendered obsolete. Pub

Infosec policies rarer than breaches

I'm in shock. While studying a security survey report, my eye was caught by the title page. Specifically, the last bullet point is shocking: the survey found that less than a third of UK organizations have "a formal cyber security policy or policies". That seems very strange given the preceding two bullet points: firstly, that more than a third have suffered "a cyber security breach or attack in the last 12 months" (so they can hardly deny that the risk is genuine), and secondly, that a majority claim "cyber security is a high priority for their organisation's senior management" (and yet they don't even bother setting policies??). Even without those preceding bullets, the third one seems very strange - so strange, in fact, that I'm left wondering if maybe there was a mistake in the survey report (e.g. a data, analytical or printing error), or in the associated questions (e.g. the questions may have been badly phrased) or in my understand

Computer errors

Whereas "computer error" implies that the computer has made a mistake, that is hardly ever true. In reality, almost always it is us - the humans - who are mistaken: Flaws are fundamental mistakes in the specification and design of systems such as 'the Internet' (a massive, distributed information system with seemingly no end of security and other flaws!). The specifiers and  architects are in the frame, plus the people who hired them, directed them and accepted their work. Systems that are not sufficiently resilient for their intended purposes are an example of this: the issue is not that the computers fail to perform, but that they were designed to fail due to mistakes in the requirements specification; Bugs are coding mistakes  e.g. the  Pentium FDIV bug  affecting firmware deep within the chip. Fingers point towards the software developers but again various others are implicated;  Config and management errors are mistakes in the configuration and management of a s

Human error stats

Within our next awareness module on "Mistakes", we would quite like to use some headline statistics to emphasize the importance of human error in information security, illustrating and informing. So what numbers should we use? Finding numbers is the easy part - all it takes is a simple Google search. However, it soon becomes apparent that many of the numbers in circulation are worthless. So far, I've seen figures ranging from 30% to 90% for the proportion of incidents caused by human error, and I've little reason to trust those limits! Not surprisingly, the approach favored by marketers is to pick the most dramatic figure supporting whatever it is they are promoting. Many such figures appear either to have been plucked out of thin air (with little if any detail about the survey methods) or generated by non-scientific studies deliberately constructed to support the foregone conclusion. I imagine "What do you want us to prove?" is one of the most important q
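Part of the spread in those figures comes down to counting rules: does "caused by human error" mean human error was the sole cause, or merely a contributing cause? A toy illustration (the incident log and tags below are entirely made up) shows how the same data can honestly yield very different percentages:

```python
# Hypothetical incident log (illustrative only): each incident is tagged
# with its contributing causes.
incidents = [
    {"human"},                     # sole cause: human error
    {"human", "phishing"},         # human error plus a technical vector
    {"malware"},
    {"human", "misconfiguration"},
    {"hardware-fault"},
]

# Strict counting: human error must be the *sole* cause
strict = sum(1 for causes in incidents if causes == {"human"}) / len(incidents)

# Loose counting: human error is *any* contributing cause
loose = sum(1 for causes in incidents if "human" in causes) / len(incidents)

print(f"strict: {strict:.0%}, loose: {loose:.0%}")  # strict: 20%, loose: 60%
```

Same five incidents, a threefold difference in the headline number - which is exactly why survey figures quoted without their counting methodology deserve suspicion.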

Mistaken awareness

Our next security awareness and training module concerns human error. "Mistakes" is its catchy title, but what will it actually cover? What is its purpose? Where is it heading? [Scratches head, gazes vacantly into the distance] Scoping any module draws on:

- The preliminary planning, thinking, research and pre-announcements that led us to give it a title and a few vague words of description on the website;
- Other modules, especially recent ones that are relevant to or touched on this topic with an eye to it being covered in February;
- Preliminary planning for future topics that we might introduce or mention briefly in this one but need not cover in any depth - not so much a grand master plan covering all the awareness topics as a reasonably coherent overview, the picture-on-the-box showing the whole jigsaw;
- Customer suggestions and feedback, plus conjecture about aspects or concerns that seem likely to be relevant to our customers given their business situations and industries e.

Audit questions (braindump)

"What questions should an auditor ask?" is an FAQ that's tricky to answer since "It depends" is technically correct but completely unhelpful.   To illustrate my point, here are some typical audit questions or inquiries: What do you do in the area of X Tell me about X Show me the policies and procedures relating to X Show me the documentation arising from or relating to X Show me the X system from the perspectives of a user, manager and administrator Who are the users, managers and admins for X Who else can access or interact or change X Who supports X and how good are they Show me what happens if X What might happen if X What else might cause X Who might benefit or be harmed if X What else might happen, or has ever happened, after X Show me how X works Show me what’s broken with X Show me how to break X What stops X from breaking Explain the controls relating to X What are the most important controls relating to X, and why is that Talk me through your train