Wednesday 26 April 2023

Using ChatGPT more securely

There are clearly substantial risks associated with using AI/ML systems and services: serious incidents hit the news headlines within just a few months of the release of ChatGPT. However, having thought carefully about and researched this topic for a couple of weeks, I realised there are many more risks than the reported incidents suggest, so I've written up what I found.

The resulting pragmatic guideline explores the information risks associated with AI/ML from the perspective of an organisation whose workers are using ChatGPT (as an example).

Having identified ~26 threats, ~6 vulnerabilities and dozens of possible impactful incident scenarios, I came up with ~20 information security controls capable of mitigating many of the risks.

See what you make of it. Feedback welcome. What have I missed? What controls would you suggest? 
