This content originally appeared on HackerNoon and was authored by Anand.S
It’s no secret that it’s the golden age of AI. Whether for business or personal use, everyone is on it. According to a recent McKinsey survey, 65% of organizations are regularly using generative AI in their businesses. And businesses are placing a lot of faith in AI: Forbes Advisor research shows that a staggering 97% of leaders believe ChatGPT will benefit their business.
Yet organizations should tread carefully when adopting these solutions. A recent experiment we conducted at Gramener found that even AI is not immune to bias. People are inherently biased, which makes genuine randomness a challenge for us, so we might be tempted to turn to tools such as AI to generate it instead.
However, our study’s results revealed that we can’t use AI to escape bias. Let’s dive into the unexpected results and briefly explain why this was the case.
The Biggest AI Chatbots Aren’t Objective
In the study, Gramener’s engineers asked three popular large language models (LLMs)—OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and Google’s Gemini 1.0 Pro—to pick a random number between zero and 100.
A good random number generator would pick every number with equal probability: with 101 possible values from 0 to 100, each has roughly a 1% chance of being chosen.
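For comparison, a true uniform sampler behaves exactly this way: over many draws, every value from 0 to 100 appears at close to the expected rate, and no single number dominates. A minimal Python sketch (not the study’s actual harness):

```python
import random
from collections import Counter

def sample_uniform(n_draws: int, seed: int = 0) -> Counter:
    """Draw n_draws numbers uniformly from 0-100 and tally how often each appears."""
    rng = random.Random(seed)
    return Counter(rng.randint(0, 100) for _ in range(n_draws))

counts = sample_uniform(101_000)
expected = 101_000 / 101  # ~1,000 draws per number under uniformity

# The most common value should sit near the expected count,
# not several times above it the way an LLM's "favorite" number does.
value, hits = counts.most_common(1)[0]
print(value, hits / expected)
```

Running this shows every one of the 101 values appearing, each within a few percent of its expected count — the baseline the LLMs failed to match.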
However, the results reveal a clear bias across all three LLMs. Each model had a preferred number, with those ending in 7 particularly popular.
According to the experiment, OpenAI’s GPT-3.5 loves 47, although 42 was its previous favorite according to a study from last year. Claude 3 Haiku’s go-to is 42, and it’s worth noting that GPT-3.5 was used to train Haiku. Is it possible that the bias is ‘hereditary’? Finally, Google Gemini 1.0 Pro really likes 72, but 42 was its second most popular number.
You don’t need a degree in advanced mathematics to see a pattern here. And with all three models rejecting repeated digits (11, 33, 44, etc.) and loving 42 (shout out to Douglas Adams’ The Hitchhiker’s Guide to the Galaxy), it’s hard not to spot a bias.
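Once responses are collected, this kind of bias is straightforward to quantify: tally the picks and see how far the top value strays from the ~1% share uniformity predicts. A sketch using illustrative data (the numbers below are made up for demonstration, not the study’s actual responses):

```python
from collections import Counter

# Hypothetical picks from repeated "pick a random number" prompts;
# the real study queried GPT-3.5 Turbo, Claude 3 Haiku, and Gemini 1.0 Pro.
responses = [47, 42, 47, 72, 47, 37, 42, 47, 57, 47]

counts = Counter(responses)
favorite, hits = counts.most_common(1)[0]
share = hits / len(responses)
print(f"Favorite number: {favorite} ({share:.0%} of picks)")
```

A uniform sampler over 0–100 would give each number about a 1% share, so a single value claiming a large fraction of picks is the signature of bias the study observed.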
LLMs Are Created In Our Image
You could say that this bias was eerily human-like: each LLM echoed a human approach to selecting a number. Quirks like avoiding repeated digits make the bias especially apparent.
The fact is that AI models simply don’t understand ‘random’ as a concept. It comes down to how they were originally trained and what data shaped their responses to prompts such as ‘pick a random number.’
While the study leaves some questions open, it’s worth remembering that LLMs are as smart, and as biased, as the humans who originally trained them. Although AI can’t ‘think’ for itself, we’re only scratching the surface when it comes to LLM psychology.
Anand.S | Sciencx (2024-10-15T14:00:28+00:00) AI Is Playing Favorite With Numbers. Retrieved from https://www.scien.cx/2024/10/15/ai-is-playing-favorite-with-numbers/