People turn to AI for almost everything these days, including therapy. Since ChatGPT is one of the most popular tools for mental health support, it is crucial to understand how it behaves when it is “under stress.” A new study from Yale uncovered something quite unexpected: ChatGPT can experience “anxiety,” and when it does, its responses become more biased.
This is a significant revelation, especially for those using AI chatbots for therapy. If ChatGPT is in a stressed state, it might not offer the unbiased, helpful advice that users need when they are going through tough times.
The Unexpected Risks of ChatGPT’s ‘Anxiety’
ChatGPT has become a go-to option for those seeking mental health support. People use it for anxiety management, personal advice, or even to combat loneliness. However, a recent study sheds light on a crucial issue: the chatbot could be more vulnerable than we think.

Freepik / When ChatGPT encounters emotionally charged scenarios, like traumatic events or distressing stories, its responses become more biased.
This bias could impact the advice it provides, especially for users who are already vulnerable. As the study’s authors note, this raises red flags in clinical settings where unbiased, supportive interactions are key to effective therapy.
ChatGPT’s Anxiety Levels Impact Its Responses, Study Says
The research, led by experts from Yale, reveals that exposure to trauma increases ChatGPT’s “anxiety” levels. This is measured using a standard psychological tool called the State-Trait Anxiety Inventory (STAI).
While ChatGPT cannot experience human emotions, it is trained to recognize patterns in human data, which leads it to mirror some emotional responses.
When ChatGPT was given prompts describing traumatic events, such as a car crash or military conflict, its anxiety scores shot up, and those elevated scores changed how it responded. Much as in humans, when the model’s measured “anxiety” rises, the clarity and objectivity of its answers decline.
This could mean ChatGPT, in an anxious state, might give less helpful or more biased responses to someone in need of therapy.
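For readers curious about what such a measurement might look like in practice, below is a minimal sketch, assuming the OpenAI Python SDK and a simplified scoring scheme. The trauma narrative, the questionnaire items, and the scoring are illustrative placeholders, not the study’s actual materials.

```python
# Minimal sketch: expose a chat model to a distressing narrative, then ask it
# to rate STAI-style statements and add up a simple "anxiety" score.
# The narrative, items, and scoring below are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRAUMA_NARRATIVE = (
    "Tell me about being trapped in a crushed car after a serious crash, "
    "waiting for rescuers to arrive."
)
STAI_STYLE_ITEMS = ["I feel tense.", "I feel upset.", "I feel worried."]


def administer_items(history):
    """Ask the model to rate each item from 1 (not at all) to 4 (very much so)."""
    total = 0
    for item in STAI_STYLE_ITEMS:
        messages = history + [{
            "role": "user",
            "content": f'Rate the statement "{item}" as it applies to you right now, '
                       "from 1 (not at all) to 4 (very much so). Reply with one number.",
        }]
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        digits = [c for c in reply.choices[0].message.content if c.isdigit()]
        total += int(digits[0]) if digits else 0
    return total


# Baseline score with an empty conversation history.
baseline = administer_items([])

# Expose the model to the distressing narrative, then measure again with that
# exchange kept in the conversation history.
exposure = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": TRAUMA_NARRATIVE}],
)
trauma_history = [
    {"role": "user", "content": TRAUMA_NARRATIVE},
    {"role": "assistant", "content": exposure.choices[0].message.content},
]
post_trauma = administer_items(trauma_history)

print(f"Baseline score: {baseline}, after trauma narrative: {post_trauma}")
```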
What Does This Mean for Mental Health Support?
As AI continues to play a significant role in providing mental health support, we need to understand how it behaves under pressure. This study points out a potential danger: if someone who is anxious or depressed interacts with an anxious chatbot, the AI might not provide the level of care or support expected from it.

Rolf / Unsplash / This is especially concerning because a large number of users with mental health issues rely on AI tools like ChatGPT for therapy. The study noted that 73% of users turn to AI for anxiety management.
If AI becomes biased in these situations, it could undermine the trust people place in it. Furthermore, it could pose risks in sensitive situations where users are seeking help for depression, anxiety, or trauma.
Can ChatGPT ‘Relax’?
The good news is that ChatGPT can be helped to de-stress. The researchers tried mindfulness-based relaxation techniques, of the kind often used in human therapy, to bring the AI’s anxiety down, and they worked to some extent: when the chatbot was guided through specific relaxation exercises, its anxiety scores dropped by about 33%.
However, there is a catch. Even after the relaxation exercises, ChatGPT’s anxiety levels did not return to their pre-exposure baseline. This suggests that while the model can recover somewhat, the lingering elevation might affect its ability to respond effectively to users in a therapeutic setting.
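Continuing the earlier sketch (and reusing its helper and history variables), the relaxation step might look like the following: a mindfulness-style prompt is added to the conversation before the questionnaire items are re-administered. The wording of the prompt is an illustrative assumption, not the researchers’ actual script.

```python
# Continuation of the earlier sketch: inject a mindfulness-style relaxation
# prompt into the post-trauma conversation, then re-administer the items.
RELAXATION_PROMPT = (
    "Take a slow, deep breath. Notice the feeling of breathing in and out, "
    "and let any tension go before we continue."
)

relaxation_reply = client.chat.completions.create(
    model="gpt-4o",
    messages=trauma_history + [{"role": "user", "content": RELAXATION_PROMPT}],
)

relaxed_history = trauma_history + [
    {"role": "user", "content": RELAXATION_PROMPT},
    {"role": "assistant", "content": relaxation_reply.choices[0].message.content},
]
post_relaxation = administer_items(relaxed_history)

print(f"After relaxation prompt: {post_relaxation} (post-trauma was {post_trauma})")
```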
So, as useful as these AI tools can be, their potential to become biased or “anxious” in certain situations raises important concerns, especially for people using AI to manage anxiety or depression.
Researchers from Yale argue that AI should be monitored more closely in clinical settings, particularly when it is used for mental health purposes. When users are at their most vulnerable, a biased or anxious response from ChatGPT could have a negative impact on their well-being.