Ailien beacons warn of rocks ahead
Lately, I've been contemplating how the widespread availability and use of AI might affect humankind - big picture stuff.
We are currently awash in a tidal wave of commentary about AI: the innovation, the information risks posed by AI and its naive users, the tech, the ethics and compliance aspects, the inevitable grab by greedy big tech firms, misinformation, disinformation, jailbreaking and so on. Skimming promptly past well-meaning advisories about prompt engineering from people excited to share their discoveries, I've been reading pieces about how AI can support - or will supplant - all manner of expert advisors on any topic sufficiently well represented in the models and datasets.
The likelihood (near certainty!) of AI-generated content feeding back into AI training datasets, and hence the potential consequences of runaway hallucinations coupled with deliberate manipulation by those with private agendas, is quite scary - but equally, the possibility of AI generating new knowledge (valid and useful insight) is intriguing. Provided the risks remain tolerable, Augmented Intelligence could turn out to be the next in a line of revolutionary advances, and of course information is already the new gold.
Organisations that own or control sufficient quantities of information are sitting on gold mines, and have new tools to dig deep themselves or sell the mining rights to third parties. Commercial interests and rampant commercialism will, of course, skew the entire gold field, along with political aims, lobbyists and reactionaries. As powerful organisations scramble to control the mining technologies and gold fields, how long before we see the emergence of 'information assay companies' feeding an emerging 'information market' while blocking the fools' gold, then 'information banks', 'information taxes', 'information heists' committed by 'information outlaws' and all that? By some accounts, we're already in the Wild Wild West, so that's a credible scenario.
Still, I'm left wondering about how the bigger 'system' from which AI is emerging - meaning human society - will change. How will AI affect us, collectively, and how will we respond? As the AI dust settles over the months and years ahead, what social and societal changes can we expect to see in how we generate and use information?
Will we drift into the fantasy world created by misinformation, disinformation and runaway hallucinations, eventually plummeting into a black hole of our own creation?
Will we attempt to put the genie back in the bottle by stifling AI research and development, constraining its availability by government edict and market economics?
Will we learn to trust and exploit AI sensibly while avoiding the pitfalls?
Or something else?
[I'm tempted to ask ChatGPT for answers, except I'm already dubious about its answers and wary of being diverted into whatever the robots and their masters want me to ponder. So I haven't. Feel free to engineer your own prompts.]
Looking still further ahead, how will the individuals, companies and organisations that eventually control AI and information react to those parallel social and societal changes? How will AI itself react? As I drift into the realm of science fiction, it occurs to me that the complexities and dynamics of socio-technical-political change present both risks and opportunities.
Looking still further afield, I wonder whether we've been looking for the wrong kinds of life Out There among the stars: perhaps the most advanced alien societies are in fact "ailiens", entirely technological rather than biological, having possibly progressed through that or some similar development phase in their own unique fashion. Perhaps pulsars are, in fact, ailien lighthouses, beacons from AI civilisations, warning us about rocks in our path ... in which case, we really ought to consider changing course or boosting our shields.
And with that thought, I bid you good day. Live long and prospect.