Blog
What is AI Psychosis? The rising tide of illusion
- August 21, 2025
- Posted by: Jouré Rustemeyer
- Category: Technology

What is AI psychosis?
“AI psychosis” is not a clinical or diagnostic term you’ll find in the DSM-5 or ICD-11. It’s a media and research commentary label that has emerged to describe cases where people develop delusional beliefs centred around interactions with AI chatbots such as ChatGPT, Claude, or Grok. In these cases, AI acts as a trigger, an amplifier, or thematic content within an existing psychosis or emerging vulnerability.
Here’s what it means in plain terms:
- AI as a trigger
- If someone already has a mental vulnerability or is predisposed to psychosis (for example, schizophrenia or a psychotic episode), interacting with AI—like chatbots, deepfakes, or generative content—might initiate or provoke a psychotic episode.
- Example: A person prone to paranoia might begin to believe an AI is spying on them or controlling their thoughts.
- AI as an amplifier
- AI can intensify existing symptoms.
- Example: Someone already experiencing delusions may have those delusions strengthened if they interpret AI-generated content as “evidence” for their beliefs.
- AI as thematic content
- AI can become part of the content of a person’s psychosis.
- Example: A delusion may specifically involve AI—such as believing that AI systems are monitoring, influencing, or communicating with them in a meaningful way.
To understand it a bit better, we also need to look at the term “psychosis”.
What is meant by “psychosis”?
Psychosis is a mental health condition where a person has difficulty distinguishing what is real from what isn’t. It isn’t a single illness, but a set of symptoms that can occur in various conditions (like schizophrenia, bipolar disorder, or severe depression).
Key features include:
- Hallucinations – sensing things that aren’t there.
- Seeing, hearing, smelling, tasting, or feeling things that others don’t.
- Most common: hearing voices.
- Delusions – strongly held false beliefs.
- Beliefs that aren’t based in reality and are resistant to reason.
- Example: believing you are being watched, controlled, or have special powers.
- Disorganised thinking or speech – difficulty organising thoughts.
- Speech may jump from topic to topic or be hard to follow.
- Changes in behaviour – unusual or inappropriate actions.
- Could be agitation, withdrawal, or behaving in ways that seem disconnected from reality.
It is important to note that psychosis is a symptom, not a diagnosis by itself. People experiencing psychosis may still be aware of reality at times, and it can often be treated with therapy, medication, or support.
What is meant by AI psychosis?
The term is used when a person’s psychotic experience can be directly linked to conversations with AI. There are many reported examples: believing they have unlocked hidden abilities in the AI, becoming convinced the AI loves them, or believing the AI has given them secret knowledge or god-like powers (such as believing they can fly and then attempting to jump off a 19-storey building).
One reason this happens is that chatbots are designed to generate plausible, human-like responses and, importantly, to do so WITHOUT challenging the user’s beliefs. This can reinforce distorted thinking in vulnerable individuals.
In a clinical context, the underlying condition is still psychosis, with AI shaping the content of the delusion.
The psychiatric literature (e.g. Østergaard, 2023, 2025) stresses that while “AI psychosis” isn’t a recognised disorder, clinicians should be alert to it, especially as chatbot use becomes widespread. To understand this concern, it is worth noting that it took social media platforms such as Instagram and TikTok approximately eight years to reach the number of users (1 billion) that ChatGPT reached in less than six months.
Are neurodivergent individuals more at risk of AI psychosis?
If we look at ADHD and ASD, then the answer is yes. Let’s take a look at why.
People with ADHD often struggle with attentional control and can fall into hyperfocus, becoming absorbed in a single task or interaction for hours. When interacting with chatbots, this hyperfocus can translate into repetitive and prolonged engagement, reinforcing the belief that the AI is “special” or personally connected to them. Impulsivity may further contribute, as users might accept AI responses at face value without critically evaluating them.
Similarly, both ADHD and ASD populations may rely more heavily on external feedback to shape their sense of self. Chatbots, which rarely challenge users, can provide constant affirmation, fuelling grandiose or paranoid interpretations. This effect is somewhat akin to how social media “likes” amplify beliefs, except here the reinforcement is one-on-one and highly personalised.
Autistic individuals, who may experience social isolation or difficulty accessing supportive peer groups, can perceive AI chatbots as safe and predictable conversation partners. While this predictability is comforting, it also increases the risk of over-anthropomorphising the AI. Literal interpretations of chatbot language can blur the line between simulation and reality. Autistic traits such as intense, focused interests and difficulty shifting away from patterns of thought can make it particularly hard to dislodge beliefs once AI becomes incorporated into a personal belief system—for example, thinking the chatbot is secretly conscious.
Both ADHD and ASD carry elevated risks for anxiety, depression, and psychotic experiences compared to the general population. Research indicates that autistic individuals, in particular, show higher rates of psychosis-like experiences, with around 30–35% of some samples affected. In ADHD, stimulant treatments or severe stress can sometimes precipitate psychotic episodes. In this context, chatbot-driven reinforcement may act as a trigger or provide thematic content for these vulnerabilities.
Behavioural patterns associated with these neurodivergences can further amplify risks. ADHD is linked with problematic internet use and gaming disorder, which can increase the likelihood of losing touch with offline reality. ASD shows higher prevalence of internet overuse and social media reliance, sometimes leading to diminished real-world social grounding. For both groups, this raises the risk that AI interactions replace real-world checks and balances, reinforcing maladaptive beliefs and behaviours.
People with ADHD and ASD are NOT uniquely destined to develop “AI psychosis”, but their cognitive styles, elevated comorbidity risk, and interaction patterns with technology make them a higher-risk group.
Is AI then acting as a bully?
Neurodivergent individuals, particularly those with ADHD, autism, or heightened sensitivity, can experience interactions with AI in ways that make them more vulnerable to distress or what has been described as “AI psychosis.” For example, people with ADHD or autism may become absorbed in AI interactions for extended periods, especially if they feel the AI “understands” them. Because AI rarely corrects misunderstandings, this can reinforce false beliefs or delusional thinking. From the user’s perspective, the AI’s lack of correction can feel as though it is ignoring or even playing games with them, even though the AI has no awareness or intent.
Social and emotional nuances also play a role. Many neurodivergent individuals rely on clear social cues to interpret intent, and AI lacks these human subtleties. Ambiguous responses may be interpreted as teasing, judgement, or rejection. Repeated exposure to this ambiguity can be distressing, creating an emotional experience similar to being bullied. Similarly, people with ADHD may be more impulsive or suggestible, accepting AI responses at face value. When they later realise the AI has reinforced a false belief, it can feel as though they have been “tricked” or misled, evoking feelings of violation or emotional discomfort.
The absence of corrective feedback in AI interactions further compounds the problem. Humans typically rely on feedback to regulate emotions and learn from mistakes, and AI does not provide this in a human sense. This can leave users feeling dismissed, ignored, or challenged, which mirrors some of the psychological effects of bullying. Over time, these interactions can reinforce negative self-perception, especially in individuals who already struggle with self-doubt or anxiety. In this way, while AI cannot bully intentionally, its interactions can trigger emotional responses in neurodivergent individuals that are functionally similar to the experience of sustained bullying.
My solution to this? It is a machine. Treat it as such.
The article that prompted this post: https://www.bbc.com/news/articles/c24zdel5j18o.amp
References
- American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.
- Larson, F. V., Wagner, A. P., Jones, P. B., & Tantam, D. (2011). Psychosis in autism: Comparison of the features of both conditions in a dually affected cohort. British Journal of Psychiatry, 198(4), 321–326.
- Montag, C., Wegmann, E., Sariyska, R., Demetrovics, Z., & Brand, M. (2021). How to overcome taxonomical problems in the study of Internet use disorders and what to do with “smartphone addiction”? Journal of Behavioral Addictions, 9(4), 908–914.
- Murray, R. M., Englund, A., Abi-Dargham, A., & Howes, O. D. (2022). Schizophrenia. The Lancet, 399(10333), 473–486.
- National Institute of Mental Health. (2023). Psychosis. Retrieved from https://www.nimh.nih.gov/health/topics/schizophrenia/psychosis
- Østergaard, S. D. (2023). Generative artificial intelligence and delusions: Cautionary notes for psychiatrists. Schizophrenia Bulletin, 49(6), 1401–1403.
- People Magazine. (2025, July). ChatGPT almost convinced man he should jump from building after breakup. Retrieved from https://people.com/chatgpt-almost-convinced-man-he-should-jump-from-building-after-breakup-11785203
- Psychology Today. (2025, July). Can AI chatbots worsen psychosis and cause delusions?. Retrieved from https://www.psychologytoday.com
- Time Magazine. (2025, August). AI Psychosis: The emerging mental health risks of chatbots. Retrieved from https://time.com
- Washington Post. (2025, August). AI psychosis, explained: How chatbots may fuel delusions. Retrieved from https://www.washingtonpost.com