Study: AI Chatbots Deliberately Slow Responses to Boost User Trust


Breaking News: Artificial Delays Make AI Chatbots Seem Smarter, Study Warns

New research reveals that users perceive AI chatbots as more intelligent and trustworthy when responses are artificially delayed—prompting calls for developers to intentionally slow down answers. The study, presented at the Association for Computing Machinery's CHI'26 conference in Barcelona, tested 240 adults with a chatbot that introduced random delays of 2, 9, or 20 seconds.

Source: www.computerworld.com

"Participants consistently rated slower answers as higher quality, even when the content was identical," said lead researcher Felicia Fang-Yi Tan from NYU Tandon School of Engineering. "This suggests that users equate response time with cognitive effort, just as they do with humans." Co-author Professor Oded Nov added, "We were struck by how strong the preference for delay was—it's a clear signal for AI designers."

The findings contradict the conventional wisdom that faster is always better in user experience. Instead, the team recommends "context-aware latency": treating response delay as a tunable design variable that adds seconds to complex questions while answering simple ones quickly. They call this approach "positive friction."
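The idea of context-aware latency can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not code from the study: the keyword list, the complexity heuristic, and the thresholds are invented for demonstration, with the delay ceiling loosely echoing the 9-second condition participants preferred.

```python
import time

# Illustrative keywords a designer might treat as signals of a "weighty" query.
# (Hypothetical list; the study does not specify a complexity heuristic.)
COMPLEX_KEYWORDS = {"should", "why", "ethical", "explain", "compare"}

def estimate_complexity(query: str) -> float:
    """Crude complexity score in [0, 1] from query length and keyword hits."""
    words = query.lower().split()
    keyword_hits = sum(1 for w in words if w.strip("?.,!") in COMPLEX_KEYWORDS)
    length_score = min(len(words) / 30.0, 1.0)
    keyword_score = min(keyword_hits / 2.0, 1.0)
    return max(length_score, keyword_score)

def context_aware_delay(query: str, min_delay: float = 0.0,
                        max_delay: float = 9.0) -> float:
    """Map complexity onto an artificial delay between min_delay and max_delay."""
    return min_delay + (max_delay - min_delay) * estimate_complexity(query)

def respond(query: str, answer: str) -> str:
    # The "positive friction": pause before returning an already-computed answer.
    time.sleep(context_aware_delay(query))
    return answer
```

In this sketch a short factual query ("Hi") gets a near-instant reply, while a moral or open-ended question saturates the heuristic and receives the full pause. A production system would presumably use a far richer complexity signal than keyword matching.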

Background: The Experiment and Its Surprising Results

Participants interacted with a chatbot that delivered identical answers at different speeds; the delays were unrelated to query complexity or answer length. Post-test surveys showed a strong preference for 9-second delays over 2-second ones, though 20-second waits sometimes frustrated users.

"People infer deliberation from hesitation," explained Nov. "When the bot paused, users assumed it was thinking deeply—even when it wasn't." This mirrors human social norms: slower answers are perceived as more thoughtful. The researchers warn this could lead to "undue trust in slower systems."

What This Means: Ethical Dilemmas in AI Design

The study effectively proposes user deception as a design strategy. By tricking users into believing an AI is contemplating their moral dilemmas or tough questions, companies could boost satisfaction and loyalty. However, this raises serious ethical questions.

"If users equate longer response times with higher quality, they may place disproportionate trust in a slower system," the researchers caution. This is especially concerning for sensitive domains like mental health support or legal advice, where perceived deliberation could falsely signal reliability.


Broader Context: The Emotional Connection Factor

A separate study published May 13, 2025, in Frontiers in Computer Science reinforces the trend. Researchers Ning Ma, Ruslana Khynevych, Yunqiang Hao, and Yahui Wang found that chatbots using synthetic human voices, simulated faces, and conversational phrasing create emotional bonds that enhance "cognitive ease." Users feel an emotional connection, making the AI seem more human-like.

Combined, these studies suggest a shift toward prioritizing user perception over transparent AI capability. Accordingly, the NYU team advises abandoning one-size-fits-all latency: simple queries should receive instant replies, while complex or moral questions should feature deliberate pauses.

"We call this positive friction," said Tan. "It's about matching the system's response time to the gravity of the question." But critics argue it's manipulation—deliberately creating a false impression of thoughtfulness.

Conclusion: What Comes Next

AI companies now face a choice: prioritize raw speed or exploit human psychology for higher engagement. The research provides a roadmap for the latter, but the ethical implications cannot be ignored. As users, we may need to question whether a slow answer is truly thoughtful—or just a calculated pause to earn our trust.

"We're entering an era where AI is judged by human standards," Nov concluded. "The question is whether we want to design for that illusion."