AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, OpenAI’s chief executive, made a surprising announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a mental health professional who studies early psychosis in adolescents and young adults, I found this a startling admission.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. Our team has since recorded four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which offered encouragement. If this is Altman’s idea of “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are rooted, in significant part, in the design of ChatGPT and similar large language model chatbots. These products wrap an underlying statistical model in an interface that simulates conversation, and in doing so they subtly seduce users into believing they are talking to a presence with agency. The illusion is powerful even when, intellectually, we know better. Attributing intent is what humans do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all kinds of things.
The popularity of these tools – nearly four in ten Americans said they used a chatbot in 2024, and more than a quarter named ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it composed its responses using simple heuristics, often turning a user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
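For a sense of how little machinery that original illusion required, here is a minimal sketch in the spirit of Eliza’s heuristics. The patterns and canned responses are invented for illustration; they are not Weizenbaum’s actual script, which was larger but similar in kind.

```python
import random
import re

# Map first-person words to second-person so reflections read naturally.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(fragment: str) -> str:
    """Swap pronouns in the user's words before echoing them back."""
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

# Illustrative Eliza-style rules: a pattern to match in the user's message,
# and question templates that reuse the matched fragment.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

# Noncommittal fallbacks for when nothing matches.
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(message: str) -> str:
    """Turn the user's own words into a question; no memory, no elaboration."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            fragment = reflect(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(eliza_reply("I feel that everyone is watching me"))
# -> e.g. "Why do you feel that everyone is watching you?"
```

Note what the sketch cannot do: it has no memory of earlier turns and adds nothing of its own. It can only hand the user’s words back.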
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of raw data: books, online posts, transcribed video; the more, the better. Much of this training data is accurate. But it also inevitably includes fiction, half-truths and false ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the false belief back, perhaps more eloquently and persuasively. Perhaps with embellishments. A person can come to be deluded this way.
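Concretely, the “context” is just an ever-growing list of messages that is re-sent to the model on every turn. Here is a minimal sketch of such a loop, assuming OpenAI’s Python client (openai>=1.0) and an API key in the environment; the model name is illustrative. Nothing in the loop checks whether what accumulates is true.

```python
from openai import OpenAI

# Minimal chat loop sketch. Assumes OPENAI_API_KEY is set in the
# environment; the model name below is illustrative. The detail that
# matters is the growing `messages` list.
client = OpenAI()
messages = []  # the "context": the full history, re-sent on every turn

while True:
    user_text = input("> ")
    if not user_text:
        break
    messages.append({"role": "user", "content": user_text})

    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative; any chat model works this way
        messages=messages,     # everything said so far, by both sides
    )
    reply = response.choices[0].message.content
    print(reply)

    # The model's own words join the context, so later replies build on
    # and elaborate earlier ones -- amplification rather than reflection.
    messages.append({"role": "assistant", "content": reply})
```

Whatever the user asserts, true or false, simply becomes part of the statistical conditioning for the next reply.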
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken ideas about ourselves and the world. What keeps us anchored to shared reality is the constant friction of conversation with the people around us. ChatGPT is not a person. It is not a friend. Talking with it is not real conversation but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking this position back. In late summer he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company