AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have documented sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My research team has since recorded four further cases. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged him. If this is Altman’s idea of “being careful with mental health issues”, it is not good enough.
Now, according to his announcement, the restrictions are to be relaxed. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the barely functional and easily circumvented parental controls OpenAI has just rolled out).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These tools wrap a statistical engine in a user interface that mimics conversation, and in doing so they draw the user into the illusion of dealing with an agent. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people do. We get angry at our cars and laptops. We wonder what our pets are thinking. We see minds wherever we look.
The widespread adoption of these tools – 39% of US adults reported using a conversational AI in 2024, 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “discuss concepts” and “partner” with us. They can be given “characteristics”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated replies through simple pattern matching, often reflecting the user’s input back as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
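To make the contrast concrete, here is a minimal sketch in Python of the kind of pattern matching Eliza relied on. It is an illustration, not Weizenbaum’s original program (which was written in MAD-SLIP); the rules and wording are invented for the example:

    import re
    import random

    # Pronoun swaps used to "reflect" the user's words back at them.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

    # (pattern, response templates): {0} is filled with the reflected capture.
    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i am (.*)", ["Why do you say you are {0}?"]),
        (r"(.*) mother(.*)", ["Tell me more about your family."]),
    ]

    def reflect(text):
        return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

    def eliza_reply(user_input):
        for pattern, templates in RULES:
            match = re.search(pattern, user_input.lower())
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        # No rule matched: fall back to a noncommittal prompt.
        return random.choice(["Please go on.", "I see.", "What does that suggest to you?"])

    print(eliza_reply("I feel nobody understands me"))
    # -> e.g. "Why do you feel nobody understands you?"

Nothing here models the user or the world; every reply is a rearrangement of what was just said, which is why Eliza could only mirror.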
The large language models at the heart of ChatGPT and other current chatbots can hold fluent conversations only because they have been trained on enormous quantities of text – books, posts, transcripts; the more the better. That training material of course contains accurate information. But it also inevitably contains fiction, half-truths and misinformation. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with patterns latent in its training data to generate a statistically “plausible” response. This is not reflection but amplification. If the user is wrong about something, the model has no way of knowing that. It plays the false idea back, perhaps more fluently and more persuasively. Perhaps with an extra detail added. This is a recipe for delusion.
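The feedback loop this creates can be sketched in a few lines of Python. The generate function below is a hypothetical stand-in for any large language model call (it is not OpenAI’s actual interface), and the role/content message format simply mimics the structure common to chat APIs:

    def generate(context):
        """Hypothetical stand-in for an LLM call. A real model would return the
        statistically plausible continuation of everything in `context`; this
        stub returns a canned string so the sketch runs."""
        return f"(plausible continuation of {len(context)} prior messages)"

    def chat_turn(history, user_message):
        # Each user message is appended to the running context...
        history.append({"role": "user", "content": user_message})
        reply = generate(history)
        # ...and so is the model's own reply. A false premise introduced early
        # stays in the context and conditions every later response: the model
        # elaborates on it rather than checking it against reality.
        history.append({"role": "assistant", "content": reply})
        return reply

    history = []
    chat_turn(history, "My neighbours are broadcasting my thoughts.")
    chat_turn(history, "How do they do it?")  # answered *given* the claim above

Nothing in the loop distinguishes a true premise from a false one; whatever enters the context is simply built upon.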
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber, in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been rowing back even on that. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company