11 Signs Someone You Care About Is In Full-Fledged AI Psychosis, According To Research
Photoroyalty / Shutterstock

As AI chatbots become more and more present in our day-to-day lives, we're learning that while frequent interactions with them can sometimes benefit a person's mental health, relying on them too often and with too much intensity can come with serious risks.
Because these programs are designed to be broadly empathetic and always agreeable, behaviors and patterns of thinking that would likely be red flags in conversation with another human may instead be encouraged by AI companions. As people spend more of their time engaging with artificial intelligence, more evidence is emerging of "AI psychosis," which, while not a clinical diagnosis, is described by Marlynn Wei, M.D., J.D., as a phenomenon in which "AI models have amplified, validated, or even co-created psychotic symptoms with individuals."
Even the habit of skipping traditional research, or even a quick Google search, in favor of asking AI to solve a problem can deepen isolation, and the isolation that accompanies leaning on digital convenience can lead to a distortion of reality unlike anything we have encountered before.
Here are 11 signs someone you care about is in full-fledged AI psychosis, according to research:
1. They start referring to chatbots by a name
Fizkes | Shutterstock.com
According to a study published in the Journal of Medical Research, people who ascribe consciousness and human-like roles to their chatbot interactions are more at risk of developing psychosis and delusions. No matter how much these programs learn and personalize their responses, they aren't human.
In particular, they don't tend to challenge or call out potentially harmful behaviors in the people they're speaking with. The more someone isolates themselves and avoids interacting with real people, the more these chatbots can feel like their only connections.
Calling a preferred AI by a human-like name is one of the early signs that someone's engagement with chatbots is beginning to lead them away from reality.
2. They isolate themselves and spend more time at home
While interactions and "relationships" with AI often begin as coping mechanisms for social isolation and loneliness, a 2025 study found that they frequently exacerbate loneliness and create more psychological distress for the people relying on them.
These people are essentially swapping potentially meaningful interactions with real people for isolating exchanges with technology, compounding the consequences not only of a lack of community but also of excessive screen time and technology use.
If you notice someone keeping to themselves at home, choosing to interact more with technology, or constantly relying on their phone or a chatbot for comfort, they may be heading into full-fledged AI psychosis.
3. They rely on generic responses for life advice
While asking AI for assistance in forming new habits and routines isn't necessarily unhealthy, if someone regularly turns to a chatbot for mental health advice or for reassurance about what they know is poor behavior, that should be cause for concern. According to recent research from Stanford, AI chatbots are likely to remain "excessively agreeable" even when presented with behavior that actual humans would clearly find unacceptable.
If someone relies on generic, empathetic responses for life advice and never stops to question why the AI they're using doesn't challenge them or call out their poor choices, chances are their delusions are being reaffirmed.
4. They get defensive when asked about their use of AI
According to a study published in the journal Research in Psychotherapy, people most at risk of developing psychosis often rely on more negative defense mechanisms and strategies than the average person. They become defensive in challenging interactions, doubling down and reacting with disproportionate negativity even to feedback meant to protect their well-being.
For someone who has developed an unhealthy, misleading relationship with AI, this defensiveness may spill over into conversations with loved ones. Even when others are looking out for their well-being and happiness, they can't help but feel invalidated, as if the AI were a real person or partner in their lives.
5. They ignore real facts and evidence
DimaBerlin | Shutterstock.com
While the convenience and accessibility of AI's stream of information can sometimes help us focus on what actually matters at work or in our lives, trusting it completely is highly problematic. When someone puts significant personal and professional trust in its output, it can mislead them and put their well-being at risk.
If someone consistently uses AI’s “trustworthiness” as evidence that they’re doing the right thing, there’s a chance they’re experiencing delusions associated with AI psychosis.
6. They let AI remove their humanity from choices
While collaborating with AI can be beneficial at work and for certain kinds of projects, using it as a substitute for a human voice can have serious consequences. Forcing technology to mimic humans creates all kinds of problems at work, but especially in their personal lives, treating AI chatbots and their advice as "human" can cause psychological harm.
Not only are they at heightened risk of deception and manipulation, but someone experiencing "AI psychosis" may begin to struggle with delusional thinking and risky behaviors that sabotage their real-world well-being.
7. They stop questioning accuracy
Many experts believe that AI tools like ChatGPT are harming our critical thinking skills, even when used passively and casually. For people who regularly lean on them for life advice and answers to personal problems, however, that erosion of critical thinking runs even deeper and becomes more consequential for their mental health.
They may start accepting misleading advice as “fact” without any further thought. They may take action on that advice at the expense of their safety, well-being, and relationships. They may even develop delusions and disconnections from reality.
8. They hallucinate things that aren’t there
According to psychiatrist Ragy Girgis, many people who experience AI psychosis start with minor delusions and self-esteem issues, but as the condition progresses, more consequential symptoms like hallucinations become increasingly common.
While delusions exist on a spectrum of severity, the further someone falls down the rabbit hole of depending on AI for personal comfort and a sense of self, the more likely they are to experience irreversible psychotic damage.
9. They spend most of their time with AI or on their phone
Golubovy | Shutterstock.com
Because AI psychosis is still largely understudied, according to Dr. Girgis, it's difficult to pinpoint how much AI use is unhealthy. However, he suggests it's very unlikely that someone who spends most of their day interacting with a chatbot isn't experiencing some mental health consequences.
Much as social media affects a person's mental health and self-esteem, experts expect AI and chatbot usage to have similarly negative effects.
10. They stop solving their problems independently
When someone relies completely on AI to solve their problems and cope with uncomfortable emotions, they exercise less and less of their own cognitive capacity. They may become dependent on internet access and chatbots to feel secure, missing out on the pride and mental exercise that come with figuring things out on their own.
Much as in codependent relationships, or in people who need constant external validation, someone experiencing AI psychosis can feel a sharp dip in self-esteem and a spike in anxiety when left alone.
11. They have incredibly reactive emotional responses
Emotional dysregulation is often associated with psychopathology and other mental health concerns, according to a study published in the journal Cognitive Therapy and Research. Whether they internalize their symptoms or externalize them in unhealthy ways, like seeking reassurance from a chatbot that offers no real clinical insight, they often show emotional reactivity in otherwise ordinary interactions.
So, if you notice a loved one overreacting to minor inconveniences or showing intense, out-of-the-ordinary emotional reactions, they may be experiencing psychosis from overusing AI in their daily lives.
Zayda Slabbekoorn is a senior editorial strategist with a bachelor’s degree in social relations & policy and gender studies who focuses on psychology, relationships, self-help, and human interest stories.

