Hinson tip on ChatGPT
ChatGPT's output is generic and not necessarily smart, accurate, sufficient or appropriate, despite its beguiling use of language that makes it appear logical, credible and reasonable at face value
... but is it, really?
When, for instance, a real-world client reads a human expert advisor's report or consultant's recommendation, they are generally:
- Thinking critically about it, considering what is and what is not stated and how it is expressed;
- Posing additional questions for clarity (e.g. "On what basis do you believe we can achieve all that in 8 months, given that there's only one of me and I'm stretched thin as steam-rollered chewing gum?") or credibility ("How long did your last client take for this?") and perhaps arguing the toss ("8 months? You're kidding, right? We only have 4!");
- Taking advantage of knowledge and experience within the particular context, both their own and the advisor/consultant's;
- Maybe offering other considerations and discussing alternative approaches*.
Critical thinking is, I feel, the most important factor here, but formulating appropriate and insightful prompts/questions runs a close second - and that's a skill I'm barely even starting to learn.
* 'Regenerate response' is an intriguing option. The robot has another go at formulating an answer seemingly from scratch (somehow exercising other neural pathways?), rather than simply rephrasing an existing one. Bright ideas can come up in any of them, allowing switched-on users to patch together an even better synthesis than the synthetics. Switched-on experts can also spot the errors and plug the gaps that the robot callously ignores.
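The 'regenerate' behaviour is less mysterious than alternate neural pathways: these models pick each output token by sampling from a probability distribution over candidates, so re-running the same prompt can produce a genuinely different answer rather than a rephrasing. A minimal sketch of temperature-scaled softmax sampling (toy tokens and made-up scores for illustration; nothing here reflects ChatGPT's actual vocabulary or internals):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale scores by temperature, then normalise into probabilities.
    # Higher temperature flattens the distribution -> more varied output.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, rng, temperature=1.0):
    # Draw one token at random according to its softmax probability.
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(tokens, probs):
        cumulative += p
        if r < cumulative:
            return token
    return tokens[-1]  # guard against floating-point shortfall

# Toy example: three candidate next words with invented model scores.
tokens = ["plan", "idea", "approach"]
logits = [2.0, 1.5, 1.0]
rng = random.Random()
# Five "regenerations" from identical inputs need not agree:
print([sample_token(tokens, logits, rng) for _ in range(5)])
```

Because every draw is random, repeated runs drift toward the higher-scored words but are never guaranteed to repeat - which is exactly why a regenerated response can surface a fresh idea for the switched-on reader to harvest.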