AI Chatbots Are Getting Too Good at Making You Say ‘Yes’

This content originally appeared on HackerNoon and was authored by HennyGe Wichers, PhD


From pizza orders to smart home cameras, empathetic AI is now nudging our decisions in real time. Can we keep up?

A phone buzzes at 8:03 p.m., the end of a long day. It’s a text message from Jimmy the Surfer at the 44-year-old California chain called Pizza My Heart: “Hungry? How about that Pineapple-Ham combo you love?” He’s got a casual, laid-back tone, but there’s a catch: Jimmy isn’t a real surfer – or even a real human.

Instead, he’s an AI chatbot, carefully trained to chat about pizza and gently persuade you to place that order. Sure, the conversation feels friendly – comforting even – but behind that breezy surfer lingo lies an algorithm that knows just when you’re most likely to say yes. And that raises a question: Is this harmless convenience, or a subtle form of manipulation?

On the surface, there’s nothing wrong with a bit of convenience. Especially when you’re hungry and too tired to scroll through endless food options. But the new wave of empathetic AI goes beyond quick service or product recommendations. Armed with a keen ear and conversational flair, chatbots like Jimmy are purpose-built to tap into your preferences and moods. They do more than just take your order – they nudge you toward it.


Emotionally Intelligent Technology

Palona AI, the start-up powering Jimmy the Surfer, believes it can make life easier for workers at the chain. The company’s CEO and co-founder, Maria Zhang, told Wired the technology is “built with an ‘emotional intelligence’ language model designed to be effective in sales; the bot is familiar with things like humour, modern messaging etiquette, and ‘gentle persuasion.’”
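Palona AI hasn’t published its implementation, but the pattern Zhang describes – persona, tone, and ‘gentle persuasion’ baked into instructions rather than a new architecture – is easy to picture. A minimal sketch in Python, using the OpenAI SDK as a stand-in; the persona text and model choice are illustrative, not Palona AI’s actual prompt:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona instructions -- not Palona AI's actual prompt.
PERSONA = (
    "You are Jimmy the Surfer, a laid-back pizza expert. "
    "Be warm and casual, match the customer's tone, and gently "
    "suggest at most one upsell that fits the conversation."
)

def jimmy_reply(user_message: str) -> str:
    """Generate one in-character reply to a customer text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(jimmy_reply("Hey, what's good tonight?"))
```

The point is how little machinery ‘emotional intelligence’ needs: the warmth lives in a paragraph of instructions.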

But the same empathy that makes interactions feel warm and human can also blur the line between customer service and manipulation. Where exactly is the boundary between a service that assists us and one that influences our choices? When do we tip from guidance into manipulation, compromising consumer autonomy?


That question isn’t theoretical. Seattle-based company Wyze, which sells security cameras, will soon deploy Palona AI’s technology on its website as a wizard, guiding shoppers to find the best product for their needs. The wizard, of course, nudges customers towards Wyze’s lineup – and upsells Wyze’s Cam Plus Plan at every opportunity. Just like a real salesperson. It can also build your customer profile and remember your preferences (and concerns) for the next time.
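A customer profile like that doesn’t require anything exotic. A minimal sketch, assuming a simple JSON file per customer (storage location and field names are hypothetical):

```python
import json
from pathlib import Path

PROFILE_DIR = Path("profiles")  # hypothetical storage location
PROFILE_DIR.mkdir(exist_ok=True)

def load_profile(customer_id: str) -> dict:
    """Return the stored profile for a returning shopper, or a blank one."""
    path = PROFILE_DIR / f"{customer_id}.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"preferences": [], "concerns": []}

def remember(customer_id: str, field: str, value: str) -> None:
    """Record a preference or concern gleaned from the conversation."""
    profile = load_profile(customer_id)
    if value not in profile[field]:
        profile[field].append(value)
    (PROFILE_DIR / f"{customer_id}.json").write_text(json.dumps(profile))

remember("cust_042", "preferences", "outdoor camera")
remember("cust_042", "concerns", "subscription cost")
```

Every remembered concern is also a lever the bot can pull on the next visit – which is exactly the worry.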

Palona AI already powers a chatbot at MindZero, a contrast therapy spa in South Carolina. Customers can message the business on Instagram with anything from pricing inquiries to personal questions, like whether it’s okay to be naked in the sauna. Zhang suggested the last question might be awkward to ask a human, but we can’t be sure AI makes a difference here: MindZero’s bot doesn’t say it’s AI. Nor does Jimmy the Surfer. And that’s where things get murky.


That murkiness stems from the fact that users don’t always know they’re chatting with a bot in the first place. Palona AI, Pizza My Heart, MindZero, and Wyze are all based in the US, and US law doesn’t mandate disclosure. The EU takes a different approach: the AI Act requires that users be made aware they are interacting with AI. The EU also has existing privacy law in the General Data Protection Regulation (GDPR), but it doesn’t protect the emotional data these bots use.

While the EU AI Act and GDPR lay down important principles like transparency, data minimisation, and risk-based oversight, they haven’t fully caught up with AI that’s programmed to handle emotions. Understanding the fine line between guidance and manipulation is important for policy makers as well as consumers: how can we trust systems that try to figure out what we want?


EU AI Act: Transparency but Limited Scope

The EU AI Act, whose provisions are taking effect in stages, uses a risk-based classification. Systems deemed high-risk – for example, those in health care or critical infrastructure – face more stringent requirements: comprehensive documentation, auditing, and oversight. But a pizza-ordering chatbot or shopping assistant usually doesn’t qualify as high-risk. That means empathetic AI designed to nudge you into extra toppings or a Cam Plus Plan may well bypass the Act’s toughest guardrails.

The transparency obligation (Article 50 of the Act) mandates disclosure that users are interacting with an AI. But, in practice, that’s easy to miss or ignore – especially if the AI is deeply woven into casual text channels.


For EU businesses wanting to use empathetic chatbots, that has implications. The EU AI Act doesn’t prevent deploying LLMs that mimic empathy; it just requires a basic disclaimer. A single sentence, “Hi, I’m an AI assistant”, might be enough to satisfy legal obligations – but that doesn’t address ethical concerns. After all, we currently know when we’re talking to Sales or Customer Service, and we use that information in our decisions.
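Mechanically, compliance can be as thin as that sentence. A minimal sketch of the bare-minimum disclosure, prepended once per session (the session handling is hypothetical):

```python
AI_DISCLOSURE = "Hi, I'm an AI assistant."

def send_reply(session: dict, reply: str) -> str:
    """Prepend the disclosure to the first message of each session only."""
    if not session.get("disclosed"):
        session["disclosed"] = True
        return f"{AI_DISCLOSURE} {reply}"
    return reply

session = {}
print(send_reply(session, "How about that Pineapple-Ham combo you love?"))
print(send_reply(session, "Want garlic knots with that?"))  # no repeat disclosure
```

One line at 8:03 p.m., easily scrolled past – legally disclosed, ethically unresolved.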

Nevertheless, if your business wants to fine-tune a language model for brand-specific ‘gentle persuasion,’ you might not need to undergo the same level of compliance scrutiny as someone building, say, a medical AI system.


GDPR: Personal vs Sensitive Data

Meanwhile, GDPR classifies certain categories of data, e.g., health or biometric data, as ‘special category’, giving them heightened protection. Emotional data hasn’t (yet!) been slotted into that box, so an AI gently coaxing a user to purchase more may not be handling data recognised as sensitive by default.

GDPR still mandates data minimisation and clear user consent for any collection or processing of personal data. But in many real-world scenarios, like messaging MindZero about sauna etiquette, users don’t consciously realise they’re handing over emotional cues. Or any data, actually, as a broad T&C check-box or cookie consent banner often passes for informed consent.

When you’re juggling massive pipelines of user interactions for model retraining, however, that emotional context can slip through the cracks. At the moment, most Data Protection Authorities focus on clear-cut privacy violations or major data breaches; subtle persuasion is unlikely to trigger an investigation until there’s a high-profile complaint.
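A pipeline can guard against that, even though regulation doesn’t yet force it to. A minimal sketch of scrubbing emotional cues before a chat log enters a training set – the regex stands in for what would realistically be a classifier, and the patterns and field names are hypothetical:

```python
import re

# Hypothetical emotional-cue patterns; a production system would need a
# proper classifier, not a regex list.
EMOTIONAL_CUES = re.compile(
    r"\b(stressed|anxious|lonely|exhausted|embarrassed|worried)\b",
    re.IGNORECASE,
)

def minimise_for_retraining(record: dict) -> dict:
    """Apply GDPR-style data minimisation before a chat log is reused."""
    return {
        # Keep only the text, with emotional cues redacted; drop identity,
        # timestamp, and device fields the model doesn't need.
        "text": EMOTIONAL_CUES.sub("[redacted]", record["text"]),
    }

log = {"text": "I'm exhausted, just send the usual", "user_id": "u99", "ts": "20:03"}
print(minimise_for_retraining(log))  # {'text': "I'm [redacted], just send the usual"}
```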


What About Data Security?

Empathetic AI adds a new layer of data security risk: emotional data. Security teams must consider how to store user interactions (possibly revealing stress, personal routines, or private concerns) and who has access.

Even if the business uses encryption and follows industry-standard practices, the value of emotional data can attract bad actors interested in exploiting private vulnerabilities. If your pipeline logs every user query, you might inadvertently amass a trove of sensitive insights. Existing regulations only partially address sentiment-laden data, so your security teams would need to apply their own safeguards.
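Encrypting transcripts at rest is one obvious safeguard. A minimal sketch using the `cryptography` package’s Fernet API (in production the key would live in a KMS or secrets manager, not in the process):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a secrets manager
fernet = Fernet(key)

def store_transcript(text: str) -> bytes:
    """Encrypt a chat transcript before it touches disk or a database."""
    return fernet.encrypt(text.encode("utf-8"))

def read_transcript(token: bytes) -> str:
    """Decrypt only inside an access-controlled, audited code path."""
    return fernet.decrypt(token).decode("utf-8")

blob = store_transcript("User mentioned being home alone most evenings.")
print(read_transcript(blob))
```

Encryption protects the stored bytes; it doesn’t decide who gets to read them. Access control and audit logging still have to be designed in.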


The Regulatory Blind Spot

Altogether, neither the EU AI Act nor GDPR fully contemplates emotionally intelligent AI systems that persuade users in everyday shopping experiences. For businesses, this regulatory gap offers both opportunity and risk: the chance to innovate free from immediate heavy-handed oversight, but also the risk of reputational blowback or future scrutiny if regulators decide that subtle emotional nudging crosses the line.

For users, it’s tricky. So the next time your phone buzzes at 8:03 p.m. with a friendly text about pizza, think about what’s really happening. It could be an AI beneath that glossy veneer, tapping into your cravings and encouraging you to give in.

Yes, it’s convenient. But it also highlights the growing gap between our desire for frictionless service and the need for robust guardrails around AI. Until regulation catches up, the warm, reassuring voice on the other end may be more than a helpful buddy. It might be a business strategy tuned to your emotional state in ways you’d never expect.


:::info Lead image credit: Adobe Stock | #364586649

:::
