AI Hallucination

Just when you thought computers couldn’t get any more like people, BANG. AI hallucinates. Or, to borrow a phrase from Verge reporter Alex Cranz, AI is a “BIG FAT LIAR.”

So what is AI Hallucination? According to IBM, it’s “a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”

Nonsensical or altogether inaccurate. Sounds like crazy lies to me. But of course the AI crowd wants to give Watson and his buddies the benefit of the doubt, so instead of “crazy lies” we get “hallucinations.”

So how does this happen? Well, computers are really good at spotting patterns in data. But computers are also stupid: they know what’s *likely*, not what’s *true*. So when a pattern runs out, they just keep extending it, inventing a whole new reality that sounds plausible and can easily be wrong. Hmmm. Sounds like a presidential candidate I’ve heard about.
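If you want to see the mechanics in miniature, here’s a toy sketch in Python. The word table and every probability in it are completely made up for illustration; a real LLM has billions of learned parameters, not nine dictionary entries. But the core move is the same: the model samples whatever word is statistically likely to come next, with no concept of whether the result is true.

```python
import random

# Toy next-word model: for each word, a list of plausible followers with
# probabilities. Everything here is invented for illustration -- this is
# a cartoon of an LLM, not a real one.
NEXT_WORD = {
    "The":    [("moon", 0.5), ("sun", 0.5)],
    "moon":   [("is", 1.0)],
    "sun":    [("is", 1.0)],
    "is":     [("made", 0.6), ("bright", 0.4)],
    "made":   [("of", 1.0)],
    "of":     [("rock", 0.5), ("cheese", 0.5)],  # both "fit the pattern"
    "bright": [(".", 1.0)],
    "rock":   [(".", 1.0)],
    "cheese": [(".", 1.0)],
}

def generate(start: str, max_words: int = 10) -> str:
    """Sample one word at a time. The model only knows what is *likely*,
    not what is *true*, so a fluent falsehood is a perfectly valid output."""
    words = [start]
    for _ in range(max_words):
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break
        followers, probs = zip(*choices)
        words.append(random.choices(followers, weights=probs)[0])
        if words[-1] == ".":
            break
    return " ".join(words).replace(" .", ".")

print(generate("The"))  # sometimes "The moon is made of cheese." -- fluent, confident, wrong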

But I digress.

So what’s the solution? Apparently, Microsoft has developed a tool that’s supposed to help detect hallucinations. And Microsoft claims its tool might fix the problem within a year.
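Microsoft hasn’t published the plumbing, but the general idea behind hallucination detectors is “groundedness” checking: compare what the model said against the source material it was given, and flag claims that have no support there. Here’s a deliberately crude sketch of that idea; the word-overlap scoring and the 0.7 threshold are my own inventions, not Microsoft’s.

```python
import re

def words(text: str) -> set[str]:
    """Lowercase a string and pull out its words, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def grounded_score(claim: str, source: str) -> float:
    """Fraction of the claim's words that also appear in the source text."""
    cw = words(claim)
    return len(cw & words(source)) / len(cw) if cw else 0.0

source = "The moon is a rocky body that orbits the Earth."
for claim in ["The moon is a rocky body.", "The moon is made of cheese."]:
    score = grounded_score(claim, source)
    flag = "OK" if score > 0.7 else "POSSIBLE HALLUCINATION"
    print(f"{score:.2f}  {flag}  {claim}")
```

Real detectors use far more sophisticated semantic comparisons than counting shared words, but the principle is the same: if the model’s output isn’t anchored to anything in the source, treat it with suspicion.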

But a lot of AI researchers don’t actually think hallucinations are solvable. Hallucinations may just be an inevitable byproduct of how large language models work. Hey, even human beings get it wrong from time to time. Why not computers? Those researchers say no worries: just dismiss AI hallucinations as a small annoyance, nothing more.

I seem to recall a time when we expected speed, reliability, and accuracy from our computers. Maybe social media platforms have lowered our expectations. After all, they’re full of crazy lies too.
