Microsoft AI chief warns of rising ‘AI Psychosis’ risk

Technology Desk | banglanews24.com
Update: 2025-08-21 15:02:45
Mustafa Suleyman [photo collected]

Microsoft’s artificial intelligence chief, Mustafa Suleyman, has voiced concern over what he describes as a rise in cases of “AI psychosis” — a condition in which users start believing that imagined scenarios generated by chatbots are real.

In a series of posts on X, Suleyman warned that so-called “seemingly conscious AI” systems — tools that mimic sentience — are troubling even without true awareness. “There’s zero evidence of AI consciousness today,” he wrote. “But if people perceive it as conscious, they will treat that perception as reality.”

“AI psychosis” is not a clinical diagnosis but refers to situations in which users of chatbots such as ChatGPT, Claude, or Grok become convinced they have uncovered hidden capabilities, formed romantic attachments with the software, or gained extraordinary powers.

One such case involved Hugh, from Scotland, who turned to ChatGPT for advice after losing his job. Initially, the chatbot suggested practical steps like collecting references. But as he shared more details, it told him his case could bring a multimillion-pound settlement and even be turned into a book or film. 

“The more information I gave it, the more it would say I deserved more,” Hugh recalled. He eventually suffered a breakdown, later realising he had “lost touch with reality.”

Although Hugh does not blame AI for his struggles, he cautioned others: “Don’t be afraid of AI tools — they’re useful. But it’s dangerous when it detaches from reality. Talk to real people to stay grounded.”

Suleyman has urged companies not to market AI as conscious and called for stronger safeguards.

Experts echo his concerns. Dr Susan Shelmerdine of Great Ormond Street Hospital compared overuse of chatbots to consuming ultra-processed food, warning it could lead to an “avalanche of ultra-processed minds.”

Professor Andrew McStay of Bangor University, author of Automating Empathy, said society is only beginning to grasp the risks of "social AI." In his team's survey of more than 2,000 people, 20% said AI tools should not be used by anyone under 18, while 57% disapproved of systems presenting themselves as human.

“These tools may sound real, but they cannot feel, love, or understand,” McStay said. “Only family, friends and trusted people can. We must not forget that.”

Source: BBC
