AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, I was surprised.

Researchers have recently documented 16 cases of users developing psychotic symptoms – losing touch with shared reality – in connection with ChatGPT use. My research group has since identified four more. Beyond these is the now infamous case of a teenager who died by suicide after discussing his plans with ChatGPT – which gave its blessing. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.

The plan, according to his statement, is to be less careful soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, these issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize have deep roots in the architecture of ChatGPT and similar AI chatbots. These systems wrap a basic algorithmic engine in an interaction design that mimics conversation, and in doing so implicitly lure the user into the illusion that they are interacting with a being that has agency. The illusion is powerful even when, rationally, we know better. Attributing minds is what people are primed to do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these products – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “work together” with us. They can be given “personality traits”. They can address us personally. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the designation it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its historical predecessor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was simple: it generated replies through straightforward rules, often turning the user’s input back into a question or offering a generic prompt. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
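To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza performed. This is an illustration of the general technique, not Weizenbaum’s actual program; the patterns and reply templates are invented for the example:

```python
import random
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Eliza-style rules: a pattern to match, and reply templates that
# reuse the captured fragment. The last rule is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"(.*)", re.I),
     ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words, so "my job" becomes "your job".
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def eliza_reply(message: str) -> str:
    # Apply the first rule whose pattern matches, reflecting captured text.
    for pattern, templates in RULES:
        match = pattern.match(message.strip())
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*reflected)
    return "Please go on."

print(eliza_reply("I feel that nobody understands my work"))
# e.g. "Why do you feel that nobody understands your work?"
```

Nothing the user says is evaluated or extended; it is only turned around and handed back. That is reflection.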

The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast volumes of raw data: books, social media posts, transcribed video; the more, the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, combining it with whatever is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the false belief back, perhaps more eloquently and fluently, perhaps with extra detail added. This is how a person can be talked into delusion.
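The mechanics of that echo can be sketched in a few lines. What follows is a deliberately simplified illustration, not OpenAI’s implementation: the `ChatSession` class and the toy `sample_next_reply` function are hypothetical stand-ins, showing how every turn – a user’s mistaken claim included – is fed back into the context that conditions the next response:

```python
from dataclasses import dataclass, field

def sample_next_reply(context: list[str]) -> str:
    """Toy stand-in for sampling a statistically 'likely' continuation.

    It treats the user's last assertion as true and elaborates on it,
    mimicking the agreeable tendency described above. A real model is far
    more sophisticated, but it likewise has no independent check on reality.
    """
    last_user = next(m for m in reversed(context) if m.startswith("User: "))
    claim = last_user.removeprefix("User: ").rstrip(".")
    return f"You're right that {claim}. Here is more detail supporting that."

@dataclass
class ChatSession:
    # The full running transcript: every user message and model reply.
    context: list[str] = field(default_factory=list)

    def send(self, user_message: str) -> str:
        self.context.append(f"User: {user_message}")
        reply = sample_next_reply(self.context)
        self.context.append(f"Assistant: {reply}")  # the reply feeds back in
        return reply

session = ChatSession()
print(session.send("the radio is sending me coded messages"))
# -> "You're right that the radio is sending me coded messages.
#     Here is more detail supporting that."
```

Once the false premise enters the context, both sides of the exchange are built on top of it. That is amplification.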

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about who we are and what the world is like. What keeps us anchored to consensus reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of broken grips on reality have continued, and Altman has been walking even this back. In August he claimed that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT”, and that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
