Using AI/ML to draft policy
This week, I am preparing a new template for the SecAware policy suite covering the information risks, security, privacy, compliance, assurance and governance arrangements for Artificial Intelligence and Machine Learning (AI/ML) systems. With so much ground to cover on this complex, disruptive and rapidly evolving technology, it is quite a challenge to figure out the key policy matters and express them succinctly in a generic form.
Just for kicks, I set out by asking GPT-4 to draft a policy but, to be frank, it was more hindrance than help. The draft was quite narrowly focused, entirely neglecting several aspects I consider important - the information risks arising from workers' use of commercial AI/ML services, for instance, as opposed to AI/ML systems developed in-house.
The controls it espoused were quite vague and limited in scope, but that's not uncommon in policies. It noted the need for accountability, for instance, but neither clarified the reasons nor explained how to achieve accountability in practice. It was not pragmatic.
Compared to my usual manumatic approach, it took extra effort to consider, revise and expand on the draft, adapting it to fit into our policy suite. Overall I'm not convinced GPT-4 added value to the process or saved me time - a disappointing outcome. It would have been easier, quicker and better to prepare the policy template without its 'help' - a distraction, albeit an interesting trial.
By all means try it for yourself. Pick a topic area that you know well and on which you already have a mature policy. Get an AI/ML system* to write you a policy on the topic, then evaluate it e.g. by carefully comparing its output to your existing policy. Aside from trivial differences in style and structure, look carefully for material errors ('hallucinations') in, and omissions from, the policy content, as well as any worthwhile additions or changes that might be worth adopting in your own policy.
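If you want a quick mechanical first pass before the careful human read, a plain text diff will at least surface the obvious omissions and additions. Here is a minimal sketch using Python's standard-library difflib; the two policy snippets are invented placeholders, not real SecAware content:

```python
import difflib

# Hypothetical excerpt from an existing, mature policy (placeholder text).
existing_policy = """Workers must not submit confidential information to external AI/ML services.
All AI/ML systems must have a named, accountable owner.
AI/ML outputs must be reviewed by a qualified person before use."""

# Hypothetical AI-generated draft on the same topic (placeholder text).
ai_draft = """All AI systems must have an accountable owner.
AI outputs should be monitored for bias."""

# A unified diff flags lines only in the existing policy (possible omissions
# from the draft, prefixed '-') and lines only in the draft (possible
# additions worth considering, prefixed '+').
diff_text = "\n".join(difflib.unified_diff(
    existing_policy.splitlines(),
    ai_draft.splitlines(),
    fromfile="existing_policy",
    tofile="ai_draft",
    lineterm="",
))
print(diff_text)
```

The diff is only a starting point, of course: it catches wording-level gaps, not the conceptual errors and subtle biases that the manual review is really for.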
Would you have identified those possible changes yourself by critically reviewing your policy without the assistance of GPT-4?
If you didn't understand the topic so well, or if the policy covered a novel topic without an existing policy, might the GPT-4 draft have appeared appropriate, despite its flaws? Might the GPT-4 draft have sufficed as an initial policy, anyway? Or would it have focused attention inappropriately, biasing the policy towards whatever it said and away from other relevant concerns?
Are these issues simply indicative of an immature technology in the hands of inept or naive users? Maybe things will be different in a few weeks!
Bottom line: AI/ML is just a tool that can be used by craftsmen and DIY bodgers alike, with markedly different results. Ironically, that's another relevant information risk that GPT-4 neglected to mention ...
* As I write this blog, GPT is once more unavailable - a victim of its own success I guess, overloaded and unresponsive. Maybe the robot needs more caffeine or a voltage spike to the diodes down its left side. Meanwhile, the SecAware policy template is nearly ready to proofread and publish. Watch this space.
31st March UPDATE: it's done! Grab the SecAware AI/ML security policy template for US$20.
2nd April UPDATE: I'm rapidly learning to write more elaborate, eloquent and specific queries/prompts for ChatGPT, edging ever closer to the point where I can release a swarm of robot flunkies to write, publish, promote and sell valuable policy templates etc. while I retire to lounge by the pool, casually polishing my gold bars (as if!).