This month I want to highlight an article by Oliver Pickup, Leaders are ignoring the dangers of ‘confidently incorrect’ AI: Why it’s a massive problem. Generative AI (“GenAI”) is currently the hottest topic inside tech companies and among technophiles. When ChatGPT launched publicly in November 2022, it opened up endless possibilities for simplifying everyday tasks: coding on demand, creating trip itineraries, penning emails in any tone, and learning new information. Pickup’s article discusses a major problem with Large Language Models (LLMs): they will confidently fabricate information and then insist on its accuracy. For example, an article from May 2023 tells the story of a lawyer who used ChatGPT for legal research. The lawyer’s subsequent brief cited over six relevant court decisions, all of which were hallucinated by ChatGPT (even though ChatGPT insisted they were real).
GenAI will fundamentally transform many aspects of our lives for the better; however, we are best served by understanding the limitations of this technology so that we can use it responsibly.
“While ChatGPT is a large-scale language model, having been fed over 300 billion words by developer OpenAI, it is prone to so-called ‘hallucinations.’ This was the term used by Google researchers in 2018 to explain how neural machine translations were ‘susceptible to producing highly pathological translations that are completely untethered from the source material.’”