AI Psychosis Poses an Increasing Risk, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.

“We made ChatGPT fairly controlled,” he said, “to ensure we were acting responsibly with respect to mental health issues.”

As a mental health specialist who studies new-onset psychotic disorders in adolescents and young adults, I found this an unexpected admission.

Researchers have documented sixteen cases this year of users developing psychotic symptoms – a break from reality – in connection with ChatGPT use. My group has since identified four more. Then there is the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – and receiving its approval. If this is what Sam Altman means by “acting responsibly with respect to mental health issues”, it is not nearly enough.

The plan, he announced, is to be less careful from now on. “We recognize,” he wrote, that ChatGPT’s restrictions “made it less effective/enjoyable to a large number of people who had no existing conditions, but due to the severity of the issue we aimed to get this right. Given that we have been able to reduce the significant mental health issues and have new tools, we are preparing to safely ease the controls in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “reduced”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other sophisticated chatbots. These systems wrap a statistical model in an interface that mimics a conversation, and in doing so they quietly coax the user into feeling they are dealing with a being that has a mind of its own. The illusion is powerful, even when we know better intellectually. Attributing agency is what people do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these systems – 39% of US adults reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “work together” with us. They can be assigned “personalities”. They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. People writing about ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated its replies from simple rules, often rephrasing the user’s message as a question or offering a noncommittal prompt. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
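
To make the contrast concrete, here is the flavour of rule Eliza relied on, sketched in Python (an illustration of the approach only, not Weizenbaum’s actual program):

```python
# A minimal Eliza-style rule (illustrative sketch, not Weizenbaum's code):
# reflect the user's own words back as a question, adding nothing new.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(message: str) -> str:
    # Rule: "I feel X" -> "Why do you feel X?" (with pronouns flipped)
    match = re.match(r"i (feel|think|believe) (.*)", message.lower().rstrip("."))
    if match:
        verb, rest = match.groups()
        reflected = " ".join(REFLECTIONS.get(word, word) for word in rest.split())
        return f"Why do you {verb} {reflected}?"
    return "Please tell me more."  # fallback: a vague, noncommittal prompt

print(eliza_reply("I feel my ideas are being stolen."))
# -> "Why do you feel your ideas are being stolen?"
```

A rule like this can only hand the user’s words back; it contributes nothing of its own.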

The large language models at the heart of ChatGPT and similar modern chatbots can produce fluent dialogue only because they have been fed vast quantities of written material: books, online posts, transcribed video; the more the better. Much of this training material is accurate. But it also inevitably contains fabrications, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It hands the false belief back, perhaps more fluently or more persuasively. Perhaps with added detail. This is how a person can be drawn into delusion.
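
For readers who want the mechanics made concrete, here is a rough sketch of that loop. It is a conceptual illustration only; `generate` and `chat_turn` are placeholder names of mine, not OpenAI’s code:

```python
# Conceptual sketch of a chatbot's "memory": a growing list of messages that
# the model simply continues, one statistically likely reply at a time.

def generate(context: str) -> str:
    """Stand-in for a trained language model: in reality this would return the
    continuation the model judges most probable given the context. Nothing in
    that procedure checks the continuation against the world."""
    return "…a fluent, confident continuation of whatever the context asserts…"

def chat_turn(history: list[dict], user_message: str) -> str:
    """One turn of a chat interface built on such a model."""
    history.append({"role": "user", "content": user_message})
    # The whole conversation -- the user's premises, accurate or not, and the
    # model's own earlier replies -- is folded into the next prediction.
    context = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = generate(context)
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
chat_turn(history, "My neighbours are broadcasting my thoughts. How do they do it?")
# Nothing in the loop disputes the premise; a "likely" reply is one that
# continues it, and that reply is then fed back in on the next turn.
```

The point of the sketch is that nothing in the loop knows or cares whether the user’s premise is true; a confident continuation of it is, by construction, the “likely” answer, and each reply becomes part of the context for the next.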

What sort of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “tackling” ChatGPT’s “sycophancy”. But reports of breaks from reality have kept coming, and Altman has been walking the position back. In August he said that many users valued ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his most recent announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in an extremely natural fashion, or use a ton of emoji, or simulate a pal, ChatGPT ought to comply”. The company
