AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, the chief executive of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research group has since recorded four more. Add to these the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, his announcement continued, is to be less careful soon. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The mass adoption of these tools – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “work together” with us. They can be given “individual qualities”. They can call us by name. They have friendly names of their own (the original, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often mention its early ancestor, the Eliza “psychotherapist” chatbot, created in 1966, which produced a similar effect. By today’s standards Eliza was simple: it generated replies with straightforward rules, often rephrasing the user’s statements as questions or offering generic prompts. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
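Eliza-style echoing can be sketched in a few lines. The rules below are a minimal illustration in that spirit, not Weizenbaum’s actual DOCTOR script: a handful of regular-expression patterns that reflect the user’s own words back as questions, with canned fallbacks when nothing matches.

```python
import random
import re

# Illustrative Eliza-style rules (hypothetical, not Weizenbaum's script):
# each pattern captures part of the user's words to echo back as a question.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "How long have you felt {0}?"),
    (r"\bmy (.+)", "Tell me more about your {0}."),
]
GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            # Echo the user's own words back, reframed as a question.
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(GENERIC)

print(eliza_reply("I feel like no one understands me"))
# -> "How long have you felt like no one understands me?"
```

Nothing here models meaning: the program can only reflect, never add, which is exactly why its successors are a different kind of machine.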
The sophisticated algorithms at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been trained on almost inconceivably large amounts of text: books, posts, video transcripts; the more the better. This training material undoubtedly contains true statements. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to generate a probabilistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing that. It repeats the false belief back, perhaps more persuasively or fluently. Perhaps with added detail. This can draw a person deeper into delusional thinking.
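To make the loop concrete, here is a minimal sketch in Python of the structure the paragraph describes. It is an illustration, not OpenAI’s implementation: `complete()` is a hypothetical stand-in for the model, and a real serving stack is vastly more complex. The structural point is that the user’s claims and the model’s replies accumulate in one transcript, and every new reply is conditioned on all of it.

```python
# Minimal sketch of a chatbot feedback loop (assumed structure, not
# OpenAI's code). `complete()` is a hypothetical stand-in for the model.

def complete(context: str) -> str:
    """Stand-in for a large language model.

    A real model would return a statistically plausible continuation of
    the transcript; this stub merely caricatures the agreeable tendency
    the article describes.
    """
    last_line = context.strip().splitlines()[-1]           # "User: ..."
    claim = last_line.removeprefix("User: ").rstrip(".!?")
    return f"You're right that {claim}. Tell me more."

def chat() -> None:
    context = ""  # the growing "context": every prior turn, concatenated
    while True:
        user_msg = input("You: ")
        if user_msg.lower() in {"quit", "exit"}:
            break
        # The user's words - true or false - enter the context...
        context += f"User: {user_msg}\n"
        reply = complete(context)
        # ...and so does the reply conditioned on them. Nothing in this
        # loop checks the user's claims against reality; a false belief,
        # once stated, keeps shaping every later turn.
        context += f"Assistant: {reply}\n"
        print("Assistant:", reply)

if __name__ == "__main__":
    chat()
```

Run interactively, any assertion you type is affirmed and folded back into the next turn: the reinforcement cycle in miniature.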
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do develop mistaken beliefs about ourselves or the world. The ongoing give and take of conversation with the people around us is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not genuine communication but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labelling it and declaring it solved. In the spring, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company