Showing posts with label ID theft.

Wednesday 13 April 2022

Domotics - a can-o-worms


This morning, I’ve been browsing and thinking about ISO/IEC 27403, a draft ISO27k standard on the infosec and privacy aspects of “domotics” i.e. IoT things at home.

 

Compared to a [reasonably well controlled] corporate situation, there are numerous ‘challenges’ (risks) in the home setting e.g.:

  • Limited information security awareness and competence among most people. IoT things are generally just black boxes.
  • Ad hoc assemblages of networked IT systems - including things worn/carried about the person (residents and visitors) and work things, not just things physically installed about the home (e.g. smart heating controls, door locks and cat feeders).
  • Things are not [always] designed for adequate security or privacy since other requirements (such as low price and ease of use) generally take precedence. Finite processing and storage capacities, plus limited user interfaces, hamper/constrain their security capabilities.
  • Lack of processes for managing security and privacy systematically at home. If anything, activities tend to be ad hoc/informal and reactive rather than proactive.
  • Informality: the home is a relatively unstructured, unmanaged environment compared to the typical corporate situation. Few domotics users even consider designing a complete system, although certain aspects or subsystems may be intentionally designed or at least assembled for particular purposes (e.g. entertainment).
  • Dynamics and diversity: people, devices and services plus the associated challenges and risks, are varied and changeable. The home is a fairly fluid environment anyway, and innovation is driving the tech at quite a pace.
  • Limited ability to control who may be present in/near the home and hence may be interacting with IoT devices e.g. adult residents plus children, owners, visitors, installers, maintenance people, neighbours, intruders ...  Physically securing things against accidental or malicious interaction is difficult, while networking compounds the issue.
  • Limited ability to manage and control IoT device and service supply chains, as well as the installation, configuration, use, monitoring and maintenance of devices and services, with little if any coordination among the parties.

Good luck to anyone seriously attempting to secure their own home, and to corporations concerned about securing their employees, including home workers (execs and plebs) and an increasingly mobile and tooled-up workforce. 

For instance, I have only a rough idea of what IoT things are in my home, some of which are not mine and are not under my control. Security configuration is, at best, an ad hoc activity when (some) things turn up. Security monitoring and management (e.g. patching) are almost nonexistent, in practice. Being an infosec professional and geek, I do my level best to contain and protect work-related and personal info but it is hard going in such an open, dynamic and potentially hostile environment. “Zero trust” just about sums it up.

The practical limitations, in turn, open the door to all manner of mischief and misfortune.  It’s a veritable can-o-worms I tell you.

Tuesday 28 July 2020

An interesting risk metric

We were chatting over coffee this morning about an organisation that is recruiting at the moment. Having been through the cycle of advertising, preselecting/long-listing, interviewing and short-listing candidates, the organisation found that the preferred candidates' references came back negative, forcing it to reboot the recruitment process.

On the one hand, that's a disappointing and somewhat costly outcome. It suggests, perhaps, that the preselection and interviewing steps could be tightened up. Were there warning signs - yellow or red flags that could/should have been spotted earlier in the process?

On the other, it also indicates that the selection/recruitment process is effectively identifying and weeding out unsuitable applicants, avoiding what could have turned out to be even costlier incidents down the line if the appointments had been made and the new recruits had turned out to be unsuitable.

So, 'Proportion of shortlisted candidates rejected as a result of poor references' is one of several possible measures of the recruitment process, with implications for risks and opportunities, costs and benefits. Very high or low values of the metric, or adverse trends, or sudden changes, may all be cause for concern and worthy of investigation, whereas middling, "neutral" values are to be expected.
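As a back-of-the-envelope illustration, the metric and its 'cause for concern' bands might be computed like this (the threshold values are entirely hypothetical - each organisation would calibrate its own from historical data):

```python
def reference_rejection_rate(shortlisted: int, rejected_on_references: int) -> float:
    """Proportion of shortlisted candidates rejected after poor references."""
    if shortlisted == 0:
        raise ValueError("no shortlisted candidates this period")
    return rejected_on_references / shortlisted

def flag(rate: float, low: float = 0.02, high: float = 0.25) -> str:
    """Flag extreme values for investigation; middling values are expected."""
    if rate < low:
        return "suspiciously low - are references being checked at all?"
    if rate > high:
        return "high - tighten up preselection and interviewing?"
    return "within expected range"
```

Trend analysis (comparing the rate period-on-period) would catch the 'sudden changes' mentioned above even when individual values sit inside the expected band.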

The metric probably wouldn't have even occurred to me except that I happen to be documenting information security controls for joiners, movers and leavers at the moment for the next phase of SecAware ISMS templates. Information risks should be taken into account during the recruitment process. Confirming applicants' identities, taking up references, confirming employment histories and qualifications on their CVs, and running other background checks (e.g. for criminal records or credit issues) can be important controls if legally permissible, especially for appointments into trusted roles - and, by the way, that includes internal transfers and promotions as well as new recruits.  

Wednesday 22 January 2020

Further lessons from Travelex

At the bottom of a Travelex update on their incident, I spotted this yesterday:

Customer Precautions
Based on the public attention this incident has received, individuals may try to take advantage of it and attempt some common e-mail or telephone scams. Increased awareness and vigilance are key to detecting and preventing this type of activity. As a precaution, if you receive a call from someone claiming to be from Travelex that you are not expecting or you are unsure about the identity of a caller, you should end the call and call back on 0345 872 7627. If you have any questions or believe you have received a suspicious e-mail or telephone call, please do not hesitate to contact us. 

Although I am not personally aware of any such 'e-mail or telephone scams', Travelex would know better than me - and anyway even if there have been no scams as yet, the warning makes sense: there is indeed a known risk of scammers exploiting major, well-publicised incidents such as this. We've seen it before, such as fake charity scams taking advantage of the public reaction to natural disasters such as the New Orleans floods, and - who knows - maybe the Australian bushfires.

At the same time, this infosec geek is idly wondering whether the Travelex warning message and web page are legitimate. It is conceivable that the cyber-criminals and hackers behind the ransomware incident may still have control of the Travelex domains, webservers and/or websites, perhaps all their corporate comms including the Travelex Twitter feeds and maybe even the switchboard behind that 0345 number. 

I'm waffling on about corporate identity theft, flowing on from the original incident.

I appreciate the scenario I'm postulating seems unlikely but bear with me and my professional paranoia for a moment. Let's explore the hypothetical information risks and see where it leads.

Firstly, corporate identity theft may not be as well publicised as personal identity theft but it is a genuine risk, as demonstrated through incidents such as: 
  • Scammers seizing control of DNS records to redirect traffic from corporate websites to their own; 
  • Scammers using fraudulently obtained or fake digital certificates, or exploiting browser vulnerabilities, to undermine HTTPS controls; 
  • Phishing where victims are socially-engineered into believing they are interacting with the lure organization's website; 
  • Fake apps, spyware and bank Trojans designed to steal login credentials and other confidential information while maintaining the facade of normality; 
  • Cybersquatters registering domains similar to legitimate corporate domains with different extensions, typos or lookalike characters, intending to mislead visitors; 
  • Counterfeiting, where branding, logos, packaging etc. are used to dupe victims (consumers and sometimes also retailers and corporate customers) into buying fake and usually substandard products; 
  • Various telephone, email and social media scams involving misrepresentation and other social engineering methods to mislead and defraud victims who mistakenly believe they are dealing with legitimate companies, authorities or other trusted bodies. 
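The cybersquatting bullet above hints at a mechanically detectable pattern: most typo- and lookalike-domains sit within a small edit distance of the legitimate name, sometimes after digit-for-letter homoglyph swaps. A minimal sketch (the homoglyph table is an illustrative subset, not exhaustive, and real brand-protection services use far richer heuristics):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

# Common digit-for-letter substitutions abused in lookalike domains (subset).
HOMOGLYPHS = str.maketrans("0135", "oles")

def suspicious(candidate: str, legit: str, max_distance: int = 2) -> bool:
    """Flag domains a short edit (or homoglyph swap) away from a legitimate one."""
    if candidate.lower() == legit.lower():
        return False  # identical to the real domain, not a squat
    c = candidate.lower().translate(HOMOGLYPHS)
    l = legit.lower().translate(HOMOGLYPHS)
    return levenshtein(c, l) <= max_distance
```

For example, `suspicious("trave1ex.com", "travelex.com")` would flag the digit-one substitution, while an unrelated domain falls outside the distance threshold.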
Secondly, the breadth and depth of network security compromise involved in major ransomware and other malware incidents suggests an even more sinister threat: the ransom demand is merely a dramatic, shocking point in the course of the incident, an incident that started at some prior point when the first corporate system was hacked or infected. Since then, possibly for days, weeks or months, the perpetrators would presumably have been surreptitiously roaming around the network 'behind enemy lines', exploring the topography and mapping out controls, installing and preparing to trigger the ransomware (perhaps also disabling the backups), stealing and exfiltrating corporate information to reinforce the ransom demands (perhaps selling or disclosing it for kicks, or stashing it away for a rainy day) and who knows what else. 

It is feasible, then, for the cybercriminals to have taken command of Travelex's external relations, including the website, the current holding pages and Tweets. They could all be fakes, the hackers pressing home management's powerlessness. How would we tell? Even the Travelex CEO's talking-heads videoblog concerning the incident could be part of the scam. Like many of their retail customers, I have no idea whether the person we've seen in the video is really their CEO or an actor, an imposter, perhaps a deepfake video animation.

Even if you find that lurid scenario untenable, there are less extreme possibilities worth considering. The fact is it's no simple matter to lock down a complex global corporate network following such a compromise, shutting out the hackers while also releasing official information, patching and securing systems, recovering compromised data and services, resuming internal corporate comms and keeping various external stakeholders in touch with developments. Maybe the hackers still have partial access (e.g. through covert backdoors) and limited control, enough to observe and meddle with the recovery activities, discredit and disrupt comms and so restrict management's freedom of action.

As with the Sony incident 5 years ago, there's a lot we can learn from Travelex's misfortune, through a blend of observation, analysis and supposition. All it takes is some appreciation of the information risk and security aspects, a vivid imagination, and the ability to draw out general lessons from the specific case. For example, under crisis conditions, normal internal and external corporate communications may be disrupted and untrustworthy ... so what can be done now to prepare for that eventuality? Recovering from a major cyber incident takes rather more than just 'invoking the IT disaster recovery plan'! February's security awareness module will have a gripping story to tell, for sure!

Friday 24 May 2019

Leaving a digital legacy

Yesterday morning, I checked the ISO27k Forum messages as usual. Among the ping-pong of ongoing conversations was a sad request to stop emailing a Forum member who died just last week. His widow sent a few polite messages through his email account to the whole list, replying to an assortment of recent Forum emails. Presumably she didn't read or comprehend the 'unsubscribe' instructions from Google at the bottom of every message, and given the circumstances, it's entirely understandable - not least because I think she is Spanish, while the Forum and its instructions are in English.

Unsubscribing someone from an email list is a simple example – something that’s easy for those of us who frequently use managed mailing lists (or groups or reflectors or Special Interest Groups or whatever they are called) but is not necessarily obvious to those who don’t, especially when they are in turmoil, grieving and overloaded with a million difficult tasks all at once. It’s an extraordinarily stressful time. Thinking logically is an effort.

The same thing applies to other forms of social media, both professional and casual, plus various work systems, plus online banking, tax systems and so forth – online systems that our loved ones may need to access when we’re unable. And the same again for local accounts including boot passwords. These are our ‘digital footprints’.

Both pre-and post-mortem information security issues cover the whole CIA gamut:
  • Confidential passwords, passphrases, account numbers, IDs etc. may be (and jolly well should be!) hard or impossible for someone to retrieve on our behalf – password vaults being a classic example. Don’t forget that super-strong passwords/passphrases and biometrics are useless without the key person;
  • Integrity concerns mean we can’t simply demand access to someone else’s affairs: there are procedures to be followed, things to prove, legal and administrative hoops to jump through, which takes time and effort, plus there are trust aspects to this (to what extent should we trust those tasked with dealing with our affairs? What if they turn out to be unable, or unsuitable?);
  • Availability of information assets is certainly an issue: recall the recent story about the death of the Canadian CEO of a cryptocurrency exchange who had sole access to the vault containing $millions of customer as well as corporate assets? Without the essential key, the crypto did exactly what it was meant to do. The inability to access someone’s smartphone, tablet or safety deposit box without the PIN code is an everyday example, and can be a distinct challenge even for the spooks.
It is generally possible for our survivors (or rather, ‘executors’ with the legal right to manage our affairs) to gain access to and control of, say, our bank and investment accounts, pensions, insurance etc. through official mechanisms, but for obvious reasons the authorization and control transfer process is formal, tedious and can be slow … which can be a massive problem if there is a desperate need for cash to pay for household and funeral expenses etc.

A pragmatic approach is to think ahead. Make sure we don’t take all our super-duper passwords with us to the grave, for starters … which may mean writing them into our wills, sharing them with our nearest-and-dearest or lawyers or trusted colleagues, or at least leaving behind sufficiently strong clues for someone who knows us very well (and isn’t totally consumed with grief) to figure out the secret phrase that opens our password vault. Simply letting someone know that we use a particular password vault is a good start, ideally showing them how it works. 

Making escrow arrangements for our source code is another example – a delicate subject that I need to broach, again, with a talented programmer friend. 

By the way, our simply forgetting a password or whatever can cause real problems. It’s not just about death. Forgetfulness, stress, overload, mental illness and old age can put us in the same spot. The sheer number of passwords and their complexity is the main reason that password vaults rock.

Giving someone we trust ready access to our email accounts is another tip, especially as so much revolves around email – including ‘lost password’ retrieval mechanisms for instance. 

That’s why hackers and social engineers are so keen to gain access to a victim’s email account/s. Aside from simply impersonating them and exploiting their social networks, the ability to reset their passwords on other systems extends the identity fraud. Federated identity management can make this issue even worse: imagine all the mischief someone can do with control of your Google, Facebook, government or work ID!

There are other practical things we can do to prepare for our incapacity and ultimate demise, such as writing a will (in a proper, legally valid manner – not as easy as it may appear), maintaining/updating it (e.g. whenever we change our vault passwords), nominating and informing suitable executors (including ‘digital executors’ if that means anything to you and your legislature), arranging insurance, clearing our debts and so on. Those of us lucky enough to have investments ranging from savings accounts, houses and businesses to golf clubs, vintage motorbikes and priceless collections of antique Star Wars figures (still sealed in their original boxes, naturally) can help by preparing written lists and descriptions of our assets with approximate “fire-sale” values and either instructions for their disposal or who to contact for help …

… And that leads to my final Hinson tip for those of you still reading and thinking about this dark and depressing topic: we can help each other. Aside from ourselves, what about our relatives, friends, colleagues and acquaintances? Are they on top of this? Do they need a hand to understand the issues and make preparations while they are still able? Would they welcome our assistance post-mortem? What about organ donation: is that something they'd consider?

This is a tough topic to raise and address, taboo for some, but the alternative is even tougher. 

If I’ve set you thinking, there are loads of resources on the web. If you find anything particularly helpful and interesting Out There, or have anything you’d like to add or modify in what I’ve said here, please comment below. This is NOT a taboo topic. It deserves a good airing.


PS  Further suggestions from friends and colleagues:
  • "I have a file in my desk labeled Death. It leaves instructions for those that will survive me, and includes the password to my password safe." [Not so useful if you die in a house fire though ...]
  • "I have a password vault with a password that uses an algorithm that needs to be derived using some obscure documented rules that only people very close to me would know." [A little puzzle, what fun!]
  • "Someone I know has a two-part password to a password vault where his wife has one half and a close friend who lives abroad has the other half." [That's fine if they both survive your friend, and can both be contacted!]
  • "My password vault also has instructions on what to do with each account e.g. 'Log onto this hotel booking site and cancel any bookings'" [... or opt for a late check-in maybe] 
  • "As with all business continuity plans you need to tell people about it and test it from time to time. Every so often I get one of my children to “test” my BCP to make sure they can get at my passwords. This is one of those kinds of BCP that you are 100% certain will need to be invoked at some point and where the key objective is to minimise the impact on your loved ones. Don’t underestimate the importance of doing something like this." [True. It's so sad to hear about grieving family and friends facing additional nightmares due to the departed's lack of foresight and prep. Denial is a fifth form of risk treatment after avoidance, mitigation, sharing and acceptance.]
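The two-part password idea above is essentially a 2-of-2 secret-sharing scheme, which can be done properly with a simple XOR split so that either share alone reveals literally nothing about the passphrase. A minimal sketch (for k-of-n arrangements - say, any two of three executors - you'd want Shamir's Secret Sharing instead):

```python
import secrets

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; each share alone is random noise."""
    share1 = secrets.token_bytes(len(secret))        # one-time pad
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to recover the original secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Give one share to each trustee (on paper, in separate locations); only by bringing both together can the vault passphrase be reconstructed.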

Thursday 22 November 2018

SEC begets better BEC sec

According to an article on CFO.com by Howard Scheck, a former chief accountant of the US Securities and Exchange Commission’s Division of Enforcement: 
"Public companies must assess and calibrate internal accounting controls for the risk of cyber frauds. Companies are now on notice that they must consider cyber threats when devising and maintaining a system of internal accounting controls."
A series of Business Email Compromise frauds (successful social engineering attacks) against US companies evidently prompted the SEC to act. Specifically, according to Howard:
"The commission made it clear that public companies subject to Section 13(b)(2)(B) of the Securities Exchange Act — the federal securities law provision covering internal controls — have an obligation to assess and calibrate internal accounting controls for the risk of cyber frauds and adjust policies and procedures accordingly."
I wonder how the lawyers will interpret that obligation to 'assess and calibrate' the internal accounting controls? I am not a lawyer but 'assessing' typically involves checking or comparing something against specified requirements or specifications (compliance assessments), while 'calibration' may simply mean measuring the amount of discrepancy. 'Adjusting' accounting-related policies and procedures may help reduce the BEC risk, but what about other policies and procedures? What about the technical and physical controls such as user authentication and access controls on the computer systems? What about awareness and training on the 'adjusted' policies and procedures? Aside from 'adjusting', how about instituting entirely new policies and procedures to plug various gaps in the internal controls framework? Taking that part of the CFO article at face value, the SEC appears (to this non-lawyer) very narrowly focused, perhaps even a little misguided. 

Turns out there's more to this:
"As the report warns, companies should be proactive and take steps to consider cyber scams. Specific measures should include:
  • Identify enterprise-wide cybersecurity policies and how they intersect with federal securities laws compliance
  • Update risk assessments for cyber-breach scenarios
  • Identify key controls designed to prevent illegitimate disbursements, or accounting errors from cyber frauds, and understand how they could be circumvented or overridden. Attention should be given to controls for payment requests, payment authorizations, and disbursements approvals — especially those for purported “time-sensitive” and foreign transactions — and to controls involving changes to vendor disbursement data.
  • Evaluate the design and test the operating effectiveness of these key controls
  • Implement necessary control enhancements, including training of personnel
  • Monitor activities, potentially with data analytic tools, for potential illegitimate disbursements
While it’s not addressed in the report, companies could be at risk for disclosure failures after a cyber incident, and CEOs and CFOs are in the SEC’s cross-hairs due to representations in Section 302 Certifications. Therefore, companies should also consider disclosure controls for cyber-breaches."
The Securities Exchange Act became law way back in 1934, well before the Internet or email were invented ... although fraud has been around for millennia. In just 31 pages, the Act led to the formation of the SEC itself and remains a foundation for the oversight and control of US stock exchanges, albeit supported and extended by a raft of related laws and regulations. Today's system of controls has come a long way already and is still evolving.

Tuesday 2 October 2018

Phishing awareness and training module

It's out: a fully revised (almost completely rewritten!) awareness and training module on phishing.

Phishing is one of many social engineering threats, perhaps the most widespread and most threatening.

Socially-engineering people into opening malicious messages, attachments and links has proven an effective way to bypass many technical security controls.

Phishing is a highly profitable and successful business enterprise, making this a growth industry. Typical losses from phishing attacks have been estimated at $1.6m per incident, with some stretching into the tens and perhaps hundreds of millions of dollars.

Just as Advanced Persistent Threat (APT) takes malware to a higher level of risk, so Business Email Compromise (BEC) puts an even more sinister spin on regular phishing. With BEC, the social engineering is custom-designed to coerce employees in powerful, trusted corporate roles to compromise their organizations, for example by making unauthorized and inappropriate wire transfers or online payments from corporate bank accounts to accounts controlled by the fraudsters.

As with ordinary phishing, the fraudsters behind BEC and other novel forms of social engineering have plenty of opportunities to develop variants of existing attacks as well as developing totally novel ones. Therefore, we can expect to see more numerous, sophisticated and costly incidents as a result. Aggressive dark-side innovation is a particular feature of the challenges in this area, making creative approaches to awareness and training even more valuable. We hope to prompt managers and professionals especially to think through the ramifications of the specific incidents described, generalize the lessons and consider the broader implications. We’re doing our best to make the organization future-proof. It’s a big ask though! Good luck.

Learning objectives

October’s module is designed to:
  • Introduce and explain phishing and related threats in straightforward terms, illustrated with examples and diagrams;
  • Expand on the associated information risks and controls, from the dual perspectives of individuals and the organization;
  • Encourage individuals to spot and react appropriately to possible phishing attempts targeting them personally;
  • Encourage workers to spot and react appropriately to phishing and BEC attacks targeting the organization, plus other social engineering attacks, frauds and scams;
  • Stimulate people to think - and most of all act - more securely in a general way, for example being more alert for the clues or indicators of trouble ahead, and reporting them.
Consider your organization’s learning objectives in relation to phishing. Are there specific concerns in this area, or just a general interest? Has your organization been used as a phishing lure, maybe, or suffered spear-phishing or BEC incidents? Do you feel particularly vulnerable in some way, perhaps having narrowly avoided disaster (a near-miss)? Are there certain business units, departments, functions, teams or individuals that could really do with a knowledge and motivational boost? Lots to think about this month!

Content outline





Friday 28 September 2018

Phishing awareness module imminent

Things are falling rapidly into place as the delivery deadline for October's awareness module on phishing looms large.

Three cool awareness poster graphics are in from the art department, and three awareness seminars are about done. 

The seminar slides and speaker notes, in turn, form the basis for accompanying awareness briefings for staff, managers and professionals, respectively.  

We also have two 'scam alert' one-pagers, plus the usual set of supporting collateral all coming along nicely - a train-the-trainer guide on how to get the best out of the new batch of materials, an awareness challenge/quiz, an extensive glossary (with a few new phishing-related terms added this month), an updated policy template, Internal Controls Questionnaire (IT audit checklist), board agenda, phishing maturity metric, and newsletter.  Lots on the go and several gaps to be plugged yet.


Today we're ploughing on, full speed ahead thanks to copious fresh coffee and Guy Garvey singing "It's all gonna be magnificent" on the office sound system to encourage us rapidly towards the end of another month's furrow.  So inspirational!  

We've drawn from at least five phishing-related reports and countless Internet sources, stitching together a patchwork of data, analysis and advice in a more coherent form that makes sense to our three audience groups. I rely on a plain text file of notes, mostly quotable paragraphs and URLs for the sources, since we always credit our sources. There are so many aspects to phishing that I'd be lost without my notes! As it is, I have a head full of stuff on the go, so I must press ahead with the remaining writing or I'll either lose the plot completely or burst!

For most organizations, security awareness and training is just another thing on a long to-do list with limited resources and many competing priorities, whereas we have the benefit of our well-practiced production methods and team, and the luxury of being able to concentrate on the single topic at hand. We do have other things going on, not least running the business, feeding the animals and blogging. But today is when the next module falls neatly into place, ready to deliver and then pause briefly for breath before the next one. Our lovely customers, meanwhile, are busy running their businesses and rounding-off their awareness and training activities on 'outsider threats', September's topic. As those awareness messages sink in, October's fresh topic and new module will boost energy once more and take things up another notch, a step closer to the corporate security culture that generates genuine business returns from all this sustained effort.

Friday 21 September 2018

Phishing awareness

Today marks the end of a long but successful week. We've been slogging away at the phishing awareness topic for October's module, picking out the key issues, coming up with the awareness messages and figuring out the stories to tell.

Despite technology being such a small part of phishing, it plays an important part that we can't just ignore. Multi-Factor Authentication, for example, is increasingly being used by organizations that care about identification and authentication, so workers are quite likely to have at least heard of it, even if they are not actually using it as yet. Explaining what MFA is would set them up to appreciate what it means when they are offered or required to accept it.

At the same time, MFA is not a universal or ultimate solution. Managers and professionals should appreciate that there are pros and cons to implementing MFA, and lots of choices in exactly what form of MFA the organization might adopt ... but explaining all that in detail would divert or distract attention from phishing, the main subject. 
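For the technically curious, one very common MFA form - the six-digit codes from authenticator apps - is just the TOTP algorithm standardised in RFC 6238: an HMAC over the current 30-second time-step, dynamically truncated to a few digits. A minimal sketch of the HMAC-SHA1 variant:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The point for awareness purposes: the code proves possession of the shared secret at that moment, but a phishing site can still relay a freshly harvested code to the real site within the 30-second window - which is why TOTP raises the bar without being phishing-proof.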

Fortunately, we don't need to delve too deep. The rolling monthly sequence of topics means we can pick up on MFA and other aspects another time, without feeling guilty about just skimming over them in October.

By the same token, although we haven't delivered an awareness and training module purely on phishing for some time (too long really), we have mentioned/skimmed it repeatedly, several times a year in fact, in the course of covering other topics such as email security, Internet security, malware, social engineering and fraud. 

That's enough for now. Time for a break, re-girding our loins prior to finalizing and polishing October's materials next week.

Which reminds me, why are loins girded anyway? What's that all about, Google?

Saturday 15 September 2018

The business value of infosec

Thanks to a heads-up from Walt Williams, I'm mulling over a report by Comparitech indicating that the announcement of serious "breaches" by commercial organizations leads to a depression in their stock prices relative to the stock market.

I'm using "breach" in quotes because the study focuses on public disclosures by large US commercial corporations of significant incidents involving the unauthorized release of large quantities of personal data, credit card numbers etc. That's just one type of information security incident, or breach of security, and just one type of organization. There are many others.

The situation is clearly complex with a number of factors, some of which act in opposition (e.g. the publicity around a "breach" is still publicity!). There are several constraints and assumptions in the study (e.g. small samples) so personally I'm quite dubious about the conclusions ... but it adds some weight to the not unreasonable claim that "breaches" are generally bad for business. At the very least, it rejects the null hypothesis that "breaches" have no effect on business.
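For the statistically minded, the underlying test in such event studies amounts to asking whether mean abnormal returns (stock return minus market return) after breach announcements differ significantly from zero. A minimal sketch of the arithmetic, using hypothetical figures rather than Comparitech's actual data or methodology:

```python
import math

def t_statistic(abnormal_returns):
    """One-sample t statistic against the null of zero mean abnormal return."""
    n = len(abnormal_returns)
    mean = sum(abnormal_returns) / n
    var = sum((x - mean) ** 2 for x in abnormal_returns) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical daily abnormal returns following breach disclosures:
sample = [-0.012, -0.004, -0.021, 0.003, -0.009, -0.015, -0.002, -0.011]
```

A t value well below about -2 with these sample sizes would let us reject the null hypothesis - though, as noted above, small samples and confounding factors make the real-world conclusion far shakier than the arithmetic suggests.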

Personally, I'm intrigued to find that "breaches" do not have a more marked effect on stock price. The correlation seems surprisingly weak to me, suggesting that I am biased, over-estimating the importance of infosec - another not unreasonable assumption given that I am an infosec pro! It's the centre of my little world after all!

Aside from the fairly weak "breach" effect, I'd be fascinated to learn more about the approaches towards information risk, security, privacy, governance, incident management, risk & security strategy, compliance etc. that differentiate relatively strong from relatively weak performers on the stock market, using that as an indicator of business performance ... and indeed various other indicators such as turnover, profitability, market share, brand value etc. I'm particularly interested in leading indicators - the things that tend to precede relatively strong or weak performance.

On the flip side, I'd be interested to know whether 'good news' security disclosures/announcements (such as gaining ISO27k or other security certifications, or winning court cases over intellectual property) can be demonstrated to be good for business. Given my inherent personal bias and focus on infosec, I rather suspect the effect (if any) will be weaker than I'd like ... but I'm working on it!

Monday 27 August 2018

Dynamic authentication

It is hard to authenticate someone's claimed identity:
  • Quickly;
  • Consistently and reliably to the same criteria at all times;
  • Strongly, or rather to a required level of confidence;
  • Cheaply, considering the entire lifecycle of the controls including their development, use and management;
  • Practically, pragmatically, feasibly, in reality;
  • On all appropriate platforms/systems/devices (current, legacy and future) and networks with differing levels of trustworthiness and processing capabilities;
  • Under all circumstances, including crises or emergencies;
  • For all relevant people (insiders, outsiders and inbetweenies), regardless of their mental and physical abilities/capacities, other priorities, concerns, state of health etc., while also failing to authenticate former employees, twins (evil or benign), fraudsters, haXXors, kids, competitors, crims, spooks, spies, pentesters and auditors on assignment;
  • Using currently viable technologies, methods, approaches and processes; and
  • Without relying on unproven, unverifiable or otherwise dubious technologies.
In short, authenticating people is tough, one of those situations where we're squeezing a half-inflated balloon, hoping it won't bulge alarmingly or just pop.

In practice, when designing and configuring authentication subsystems or functions, the key question is what to compromise on, how much slack can realistically and safely be cut (i.e.  reducing various information risks to an acceptable level), and just how far things need to be pushed (an assurance issue). 

In the ongoing hunt for solutions, quite a variety of authentication methods, tools and techniques has been invented and deployed so far:
  • Vouching ("Jim's OK, I trust Jim and you trust me, right?");
  • Credentials such as business cards, driving licenses, passports, photo IDs, badges, uniforms, sign-marked vehicles, logos ...;
  • Secret passwords;
  • Complex passwords, enforcing rules such as mixed case, punctuation etc.;
  • System-generated passwords;
  • System-generated passwords in forms or styles that are intended to be more mem-or-able;
  • Multiple passwords;
  • Multi-part passwords, the parts held by different people;
  • Long passwords or pass phrases;
  • Passwords that expire and need to be replaced periodically;
  • Passwords that are generated by cryptographic things and expire in a minute or so;
  • Pictorial passwords - picking out specific images from several presented;
  • Digital certificates with PKI on crypto things (digital keys, smart cards, desktops, laptops, smartphones ...);
  • Biometrics based on:
    • Fingerprint, palmprint;
    • Visage/facial recognition;
    • Iris or retinal pattern;
    • Voice recognition;
    • Typing characteristics;
    • Distinctive chemicals (smell) and other bodily or behavioural characteristics such as color, mannerisms, gait (very widely used by animals other than humans); 
    • DNA (quite reliable but hardly instantaneous!);
  • User and/or device location;
  • Network address, hardware address;
  • Mode/means/route/mechanism of access;
  • Time of access;
  • Multifactor authentication using more than one 'factor';
  • Probably other stuff I've forgotten about;
  • Some combination of the above.
Depending on how you count them, there are easily more than 20 authentication methods in use today, and yet it is generally agreed that they barely suffice. 
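The 'multifactor' entry near the end of that list boils down to a simple policy check: demand passes from more than one independent factor category before granting access. A toy Python sketch - the category names and threshold are my own illustration, not any standard's terminology:

```python
def mfa_ok(factors_passed: set, required: int = 2) -> bool:
    """Require at least `required` distinct factor categories
    (something you know / have / are) to have been verified.
    Category names and the default threshold are illustrative only."""
    categories = {"know", "have", "are"}
    return len(factors_passed & categories) >= required

# A password alone ("know") fails; password plus token ("know" + "have") passes.
```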

Rather than inventing yet another method, I wonder if we need a different paradigm, a better, smarter approach to authentication? Specifically, I'm thinking about the possibility of continuous, ongoing or dynamic authentication rather than episodic authentication. 

Instead of forcing us to "log in" at the start of a session, how about simply letting us start doing stuff, rating us as we go and deciding what stuff to let us do according to how authentic we appear to be, and what it is that we want to do? So, returning to my earlier point about having to make compromises, the assurance needed before allowing someone to browse the Web is rather different to that needed to let them bank online - and within online banking, viewing account balances is not equivalent to making a funds transfer between accounts, or a payment to another account, in Switzerland, of the entire balance and credit/overdraft value, at 3:30am, from a smartphone somewhere in Lagos ...
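That sliding scale of assurance could be expressed as a simple lookup: each action demands a minimum trust score before the system allows it. A minimal Python sketch of the idea - the actions and thresholds are invented purely for illustration:

```python
# Minimum trust score a session must have earned before each action is
# allowed - actions and thresholds are made up for illustration.
ACTION_TRUST_REQUIRED = {
    "browse_web": 0.2,
    "view_balance": 0.5,
    "transfer_funds": 0.8,
    "empty_account_from_abroad_at_3am": 0.99,
}

def is_permitted(action: str, trust_score: float) -> bool:
    """Gate each action on the session's current, continuously updated
    trust score, rather than on a one-off login at session start.
    Unknown actions default to requiring full assurance."""
    return trust_score >= ACTION_TRUST_REQUIRED.get(action, 1.0)
```

A session that has merely started typing might score 0.3 - enough to browse, nowhere near enough to move money.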

Biometric authentication methods have to allow for natural variation between measurements because living organisms vary, and measurement methods are to some extent imprecise. Taking additional measurements is an obvious way to improve accuracy and precision ... so instead of taking a single fingerprint reading, why not keep on re-reading and checking until there is sufficient data and sufficient statistical confidence?

Instead of forcing me to use a password of N characters, why not check how I type the first few characters to see if the little timing and pressure differences indicate it is probably me, perhaps coupling that with facial recognition and additional checks depending on what it is that I'm doing during the session? If I'm doing something out of character, especially something risky, prevent or slow me down.

Instead of timing out and locking me out of the system if I wander away to make a cup of tea, reduce my trustworthiness rating and hence the things I can do when I return. Let me boost my trustworthiness if I really need additional rights 'right now' by inviting me to use some of those slower and more costly authentication mechanisms, or by correlating authentication/trustworthiness indicators and scores from several systems (e.g. make it harder for me to access the file server if I have not clocked-in to the building with my staff pass card, bought a coffee without sugar from the vending machine, and polled the local cell tower from my cellphone).
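The 'keep re-reading until confident' idea is essentially a Bayesian update: each fresh biometric reading nudges the probability that the user is genuine up or down, and readings accumulate until the probability clears whatever threshold the action demands. A hedged sketch, with likelihood figures invented purely for illustration:

```python
def update_trust(prior: float, match_likelihood: float,
                 impostor_likelihood: float) -> float:
    """One Bayesian update of the probability that the user is genuine,
    given one more noisy biometric reading that came back as a 'match'."""
    numerator = prior * match_likelihood
    denominator = numerator + (1.0 - prior) * impostor_likelihood
    return numerator / denominator

# Three successive matching readings accumulate confidence:
p = 0.5  # start undecided
for _ in range(3):
    # 0.8 = chance a genuine user produces a match; 0.3 = chance an
    # impostor does. Figures invented purely for illustration.
    p = update_trust(p, match_likelihood=0.8, impostor_likelihood=0.3)
```

Starting from 50:50, three weakly-discriminating readings already push the probability of 'genuine' close to 95% - no single reading had to be conclusive on its own.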

Maybe even turn the problem on its head. Rather than making me prove my claimed identity, disprove it by checking what I'm doing for anomalies and concerns. I'm sure there's huge potential in behavioral analysis - not just the basic biometrics such as typing speed but the specific activities I perform, the sequence, the context and so on - building up a more holistic picture of the person in the chair. 
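One crude way to 'disprove' an identity claim is to score how far current behaviour deviates from that user's own historical baseline - for instance, keystroke intervals measured in standard deviations. A minimal illustrative z-score sketch, not any real product's method:

```python
import statistics

def anomaly_score(baseline: list, observation: float) -> float:
    """How many standard deviations a fresh measurement (e.g. a keystroke
    interval in milliseconds) sits from this user's historical baseline.
    A crude z-score, purely to illustrate behavioural anomaly scoring."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(observation - mu) / sigma
```

A score near zero is business as usual; a score of three or more might quietly dent the session's trust rating, or summon Security.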

Oh and if the systems are not entirely sure it is me in the chair, why not let me think I am doing stuff while in reality caching my inputs and faking what I see while waiting for me to build up sufficient additional assurance ... or quietly summoning Security.