This content originally appeared on NN/g latest articles and announcements and was authored by Caleb Sponheim
Summary: It’s easy to place too much trust in genAI tools. Use only information you can verify or recognize to be true.
Generative artificial intelligence (genAI) has practical applications for UX practitioners. It can accelerate research planning, cross-functional communication, ideation, and other tasks. GenAI has also trivialized labor-intensive activities such as audio transcription. However, a dark cloud lurks over every use and implementation of genAI: can we trust it?
Hallucinations
GenAI tools consistently make mistakes when performing even simple tasks, and the consequences of these mistakes range from benign to disastrous. According to a study by Varun Magesh and colleagues at Stanford, AI-driven legal tools report inaccurate or false information between 17% and 33% of the time. According to a Salesforce genAI dashboard, inaccuracy rates of leading chat tools range between 13.5% and 19% even in the best cases. Mistakes like these, originating from genAI outputs, are often termed “hallucinations.”
Some notable instances of genAI hallucinations include:
Caleb Sponheim | Sciencx (2024-08-16T17:00:00+00:00) When Should We Trust AI? Magic-8-Ball Thinking. Retrieved from https://www.scien.cx/2024/08/16/when-should-we-trust-ai-magic-8-ball-thinking/