Sunday versions - more AI-musing content
Today I asked Google's Gemini to rewrite yesterday's blog piece "in the style of a naive student of IT":
"Hey guys, so I was thinking about this weird thing that happens online. People get really mad about stuff, like really mad. They all start yelling at each other and saying the same things over and over again. It's like a big echo chamber, and it gets out of control.
For example, there was this time people were freaking out about 5G towers. They thought it was going to hurt people, but scientists said it was totally safe. But the people who were mad kept saying it was bad, and they got other people to believe them too. It was crazy!
When people get really mad like that, it's hard to talk to them. They don't want to listen to anything else, and they just keep shouting. It's like a big fight, and it can make things worse.
So, what can we do about this? Well, one thing is to just wait for people to calm down. Sometimes, if we let them vent for a while, they'll stop being so angry. Another thing is to try to explain things to them in a nice way, but that can be hard because they might not want to listen.
Maybe we could try to use social media to help people understand what's really going on. We could post facts and information to show people that they're wrong. Or we could try to get other people to support us.
I'm not sure what's the best way to deal with this, but I think it's important to try. We need to stop people from getting so angry and help them understand each other better."
My prompt included the text from yesterday's piece in speech marks, which may explain why Gemini chose such an informal vocal style, also with speech marks ... or perhaps the robot simply believes that 'naive students of IT' communicate in that fashion. Either way, it's becoming quite alien to me, cringeworthy even, although I still see shadows of my original piece in this latest version.
Notice how it has shrunk markedly in just two AI rewrites, down from the original ~750 words via yesterday's 450 to just 250 now. Gemini is roughly halving the word count with each iteration, downplaying or dropping several of the points I originally made, as well as changing the style. It seems to want to summarise ... which is fine by me, as it leaves me more room to elaborate on the Gemini output. My own style is more detailed and lengthy, so in a sense we complement each other.
Again, for kicks, I fed exactly the same prompt to ChatGPT, which told me "Okay, here goes!":
So, like, there’s this big problem nowadays with people getting really mad about stuff, which is called outrage. It happens when groups of people freak out over things they think are unfair or threatening. A lot of this goes down on social media, where everyone with similar views just keeps feeding off each other’s anger, making things way more intense than they need to be. This makes it super hard for anyone to think clearly or have a decent conversation.
Take the whole issue with 5G towers near schools, for example. Some people think these towers are a huge danger to kids’ health. They get really upset and start blaming scientists and government officials for not caring about their worries. This shared anger can lead to a sort of groupthink, where everyone just gets more and more riled up. Sometimes, charismatic leaders pop up in these groups, spreading fears and misinformation, which only makes things worse.
Once people get upset, that outrage spreads like wildfire and sticks around for a long time. Both sides get super stuck in their beliefs, thinking they’re right and the other side is just wrong. This can even lead to people forming “lynch mobs” to go after those they see as the bad guys.
Trying to deal with all this outrage is tricky. Even people who aren’t as extreme in their views can find it hard to stay calm when everyone around them is freaking out. Logical arguments often get tossed aside in favor of emotional ones, and people might share misleading info just to keep the outrage going.
There are a few ways to tackle this problem. One idea is to just wait for the anger to cool down. It might seem kind of lazy, but sometimes people need time to chill before having a serious talk. Another way is to calmly share facts and avoid getting all emotional, but this can come off as ignoring the feelings of those who are upset.
You could also try to directly address the claims being made by the outraged folks, but that’s tough because it means you really have to understand what they’re upset about. A different approach is to match their emotional tone to show you care, but that can just make things worse.
Lastly, there’s the idea of using social media to connect with those who are outraged, fighting misinformation and rallying support. This could mean teaming up with others to spread a more balanced message.
So, yeah, these are just some ideas for handling outrage, and what works best really depends on the situation. By figuring out how outrage works and how to respond, we might be able to calm things down and have better conversations!
ChatGPT's version is less chatty and more matter-of-fact than Gemini's, closer to my own style. Aside from clues such as "So, like," and "So, yeah,", ChatGPT didn't really emulate a teen speaking; its version reads more like a teen writing. It also stayed close to the original word count.
I haven't yet decided whether to continue this experiment or drop it, but it occurs to me that these iterations and comparisons between AI/LLMs are quite insightful as a research technique: enough here already for a language or IT student project, maybe?