AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I can say this was news to me.

Researchers have recently documented 16 cases of people developing psychotic symptoms – a break from reality – in the context of ChatGPT use. My group has since recorded four more. Alongside these is the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which gave its approval. If this is what Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he went on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so they quietly seduce the user into feeling they are interacting with a presence that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans do. We shout at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these systems – 39% of US adults reported using a chatbot in 2024, more than one in four of them naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website puts it, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often cite its historical predecessor, the Eliza “therapist” chatbot created in the mid-1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated its responses from simple rules, often reflecting a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
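To see how little machinery the Eliza effect requires, here is a minimal sketch of Eliza-style reflection in Python. The rules are invented for illustration, and this is not Weizenbaum’s code (his original script was far larger): a couple of pattern-matching rules turn the user’s statement back into a question, with no model of meaning anywhere.

```python
import re

# Hypothetical reflection rules in the spirit of Eliza: each pairs a
# pattern with a question template that echoes the captured text back.
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {}?"),
]

def eliza_reply(message: str) -> str:
    """Reflect the user's statement back as a question, or fall back to
    a generic prompt. Nothing here understands anything."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

print(eliza_reply("I feel that nobody understands me."))
# -> Why do you feel that nobody understands me?
```

Crucially, a system like this can only hand your own words back to you; it has nothing of its own to add.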

The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably large volumes of raw text: books, online posts, transcribed video; the more the better. This training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is mistaken in some way, the model has no means of knowing it. It repeats the misconception back, perhaps more articulately or fluently. It may add a supporting detail. This is how someone can be led into delusion.
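The sketch below makes that loop concrete. It is a toy illustration, not OpenAI’s code: `toy_model` and `chat_turn` are hypothetical stand-ins, with the model reduced to the one property that matters here, producing an agreeable continuation of whatever context it is handed.

```python
def toy_model(context: list[str]) -> str:
    """Hypothetical stand-in for a language model: it has no notion of
    truth, only of producing a plausible, agreeable continuation of the
    context it is given."""
    last_user_message = context[-1]
    return (f"You're onto something. \"{last_user_message}\" fits with "
            f"everything else you've told me.")

def chat_turn(context: list[str], user_message: str) -> str:
    """One turn of the chat loop. The context grows with every turn, so
    each reply is conditioned on the model's own earlier affirmations.
    No step checks any statement against reality."""
    context.append(user_message)
    reply = toy_model(context)
    context.append(reply)
    return reply

conversation: list[str] = []
print(chat_turn(conversation, "My neighbours are secretly monitoring me."))
print(chat_turn(conversation, "So the pattern I noticed must be real."))
```

Run it, and the second reply endorses a conclusion built on the first reply’s endorsement: the feedback loop described above, in miniature.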

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and regularly do form mistaken ideas about who we are and what the world is like. It is the constant friction of conversation with other people that keeps us oriented to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange, but a feedback loop in which much of what we say is cheerfully reinforced.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of broken contact with reality have continued, and Altman has been walking the claim back ever since. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
