Wednesday 26 April 2023

Using ChatGPT more securely

Clearly there are some substantial risks associated with using AI/ML systems and services, with some serious incidents having already hit the news headlines within a few months of the release of ChatGPT. However, having thought carefully about and researched this topic for a couple of weeks, I realised there are many more risks than the reported incidents might suggest, so I've written up what I found.

This pragmatic guideline explores the information risks associated with AI/ML, from the perspective of an organisation whose workers are using ChatGPT (as an example).  

Having identified ~26 threats, ~6 vulnerabilities and dozens of possible impactful incident scenarios, I came up with ~20 information security controls capable of mitigating many of the risks.

See what you make of it. Feedback welcome. What have I missed? What controls would you suggest? 

Thursday 13 April 2023

Hinson tip on ChatGPT


When using ChatGPT and its ilk, don't forget that the AI robot's contribution is generic and not necessarily smart, accurate, sufficient or appropriate, despite the beguiling use of language that makes it appear logical, credible and reasonable at face value ... but is it, really?

Or is it short on integrity?


When, for instance, a real-world client reads a human expert advisor's report or consultant's recommendation, they are generally:

  • Thinking critically about it, considering what is and what is not stated and how it is expressed;

  • Posing additional questions for clarity (e.g. "On what basis do you believe we can achieve all that in 8 months, given that there's only one of me and I'm stretched thin as steam-rollered chewing gum?") or credibility ("How long did your last client take for this?") and perhaps arguing the toss ("8 months? You're kidding, right? We only have 4!");

  • Taking advantage of knowledge and experience within the particular context, both their own and the advisor/consultant's;

  • Maybe offering other considerations and discussing alternative approaches*.

ISMS management reviews vs ISMS internal audits

Over on the ISO27k Forum this week, Ray asked us for "guidance on conducting and documenting 'Management Reviews' that include the agenda items required by the standard in 9.3. Any templates shall be much appreciated." 

Forumites duly offered advice and agendas. So far so good!

However, I made the point that ISO/IEC 27001 does not insist that management reviews take the form of periodic management meetings specifically, although that is the usual approach in practice. 

Personally, since they are both forms of assurance, I advise clients to plan and conduct their ISMS management reviews and ISMS internal audits similarly, with one critical and non-negotiable difference: auditors must be independent of the ISMS, whereas management reviews can be conducted by those directly involved in designing, operating or managing the ISMS. This is not merely a compliance matter or protectionist barrier: auditor independence brings a fresh perspective and valuable insight that insiders simply cannot match. 

In my considered opinion, independence and formality follow a continuum through these activities:

Wednesday 12 April 2023

mmmmmm, More Meaningful Management Metrics


For about a week, I've enjoyed following and participating in an expansive discussion thread on LinkedIn about the value of measurement and metrics for management, debating various issues that arise both in theory and in practice.


One straw-man argument is that 'managing by the numbers' can imply a myopic focus on commonplace business metrics such as stock price or annual profit, both of which can be manipulated to some extent by managers even at the expense of long term resilience and commercial success, let alone other business objectives. Despite Taylor's outmoded 'scientific management' experiments having been debunked a century ago, some LinkedInners in the thread evidently still believe that science (in the form of numeric data) and management are poles apart. 

I beg to differ. That's so last century!

Management is complex, dynamic and nuanced, hence I accept that simplistic or crude metrics can't possibly address the entire practice. For example, speed is obviously a key metric for a racing car: however, going fast is just one part of racing, even on the drag strip. Staying on-track with both vehicle and driver holding together for the duration of a meet are also important for the team manager, the whole team in fact. An exploding drag car might conceivably project sufficient material across the line to qualify in record time, but there would be nothing left to compete in the final! 

Monday 10 April 2023

Ailien beacons warn of rocks ahead


Lately, I've been contemplating how the widespread availability and use of AI might affect humankind - big picture stuff.

We are currently awash in a tidal wave of commentary about AI innovation, the information risks of AI and its naive users, the tech, the ethics and compliance aspects, the inevitable grab by greedy big tech firms, misinformation, disinformation, jailbreaking and so on. Skimming promptly past well-meaning advisories about prompt engineering from people excited to share their discoveries, I've been reading pieces about how AI can support or will supplant all manner of expert advisors on any topic sufficiently well represented in the models and datasets.

The likelihood (near certainty!) that AI-generated content will feed back into AI training datasets - and hence the potential consequences of runaway hallucinations, coupled with deliberate manipulation by those with private agendas - is quite scary, but equally the possibility of AI generating new knowledge (valid and useful insight) is intriguing. Provided the risks remain tolerable, Augmented Intelligence could turn out to be next in the line of revolutionary advances, and of course information is already the new gold. 

Sunday 2 April 2023

To what extent do you trust the robots?

This Sunday morning, fueled by two strong coffees, I'm cogitating on the issue of workers thoughtlessly disclosing all manner of sensitive personal or proprietary information in their queries to AI/ML/LLM systems and services run by third parties, such as ChatGPT.

This is clearly topical given:
(1) the deluge of publicity and chatter around ChatGPT right now, coupled with 
(2) our natural human curiosity to explore new tech toys, plus 
(3) limited appreciation of the associated information risks, and 
(4) the rarity of controls such as policies and Data Leakage Protection technologies. 

Furthermore, even if we do persuade our colleagues (and, let's be honest, ourselves!) to be more careful and circumspect about whatever we are typing or pasting into various online systems, the very nature of our interests and queries may itself be sensitive, revealing what we are working on even when the details are withheld.
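To make the Data Leakage Protection idea concrete, here is a minimal sketch of the kind of prompt-scrubbing filter an organisation might sit between workers and an external AI service. It is purely illustrative: the pattern names and the `redact` function are my own invention, not any real DLP product's API, and the handful of regular expressions here would catch only the most obvious leaks.

```python
import re

# Illustrative detection patterns only - a real DLP control would use far
# richer detection (classification labels, fingerprinting, ML classifiers).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive tokens with placeholders before the prompt
    leaves the organisation; return the scrubbed text plus the names of
    the patterns that fired, for logging and security alerting."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits
```

Even a crude filter like this would at least flag risky prompts for review, though (as the paragraph above notes) it does nothing about the sensitivity of the query topics themselves.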