Enormously entertaining and mind-bogglingly easy to use, text-generating AI tools are also unreliable and create an avalanche of misinformation. Businesses take note
The business world has been aflutter in recent months over the possibilities of the new artificial intelligence-powered tools we’ve seen unveiled.
But as anyone who has experimented extensively (or even just tinkered) with these generative tools can tell you, they have one major flaw: they make a lot of things up.
“The tech industry often refers to the inaccuracies as ‘hallucinations,’” reports The New York Times. As part of its story, the Times asked several AI bots when the newspaper first reported on artificial intelligence; both said 1956, and both produced citations to that effect. Both were wrong: the paper first used the term in 1963, and both citations were at least partially fabricated. (Another test found that five out of six references produced by ChatGPT were made up.)
Why do they do it? It’s by design: the tools are built not to be correct, but to be convincing.
“The output is a guess based on an algorithm designed to produce the most plausible or probable, realistic-reading language output relevant to the context of the prompt it has been given,” explained academic Matthew Hillier.
In other words, plausible doesn’t mean correct.
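The mechanism Hillier describes can be sketched with a toy next-token sampler. To be clear, this is a simplified illustration, not how any production model actually works, and every probability in it is invented for the example: the point is only that the model samples whatever continuation is statistically likely, with no notion of what is true.

```python
import random

# Toy "language model": a hand-built table mapping a context to the
# probabilities of possible next tokens. All numbers here are invented
# for illustration; real models learn billions of parameters from text.
model = {
    ("The", "Times", "first", "covered", "AI", "in"): {
        "1956": 0.7,  # plausible-sounding, but (per the article) wrong
        "1963": 0.2,  # the correct answer, yet less "probable" to this toy model
        "1970": 0.1,
    },
}

def next_token(context):
    """Sample the next token in proportion to its modeled probability."""
    dist = model[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

context = ("The", "Times", "first", "covered", "AI", "in")
# Most of the time this sampler will confidently answer "1956":
# the most plausible continuation, not the most accurate one.
print(next_token(context))
```

Nothing in the sampling step consults a source of truth; accuracy only emerges when the statistically likely answer happens to be the right one.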
Some believe that businesses should sit up and take notice now, before this spirals into a problem ― both for businesses that use these tools and for businesses that become their victims. A business that uses an AI tool is ultimately responsible for what it outputs, which poses a problem if the business doesn’t understand how the tool works. A business that becomes the subject of false, AI-produced information could suffer even more significant losses.
“The already distraught supply chain may suffer increased disruption because of disinformation about a supplier’s reliability or safety,” Richard Funso writes, by way of example in a warning piece to businesses about the potential pitfalls of relying on AI for accuracy.
“With the release of GPT 3.5 and other generative models, it is now the era of AI-everything,” he concludes. “Companies must be prepared to navigate the murky waters of AI-powered disinformation as they conduct their business…if there is one lesson the Covid-19 pandemic has taught, it is that a global disruptor can disrupt society permanently. ChatGPT and other AI models have disrupted society within a noticeably brief period and will continue to do so for the near future.”
Content written by Kieran Delamont for Worklife, a partnership between Ahria Consulting and London Inc.