AI is NOT Therapy

Join our movement of clinicians, peer support and addictions professionals, mental health advocates, people living with mental health conditions, and concerned citizens.

Therapy requires a soul. Algorithms don’t have one.

The “AI Is Not Therapy” movement didn’t start in a boardroom; it started in clinics, peer-support groups, and living rooms. We are a collective of doctors, recovering addicts, advocates, and exhausted citizens who are tired of seeing human suffering treated as a data-mining opportunity. We’ve watched tech giants try to “disrupt” mental health by replacing expensive, empathetic humans with cheap, predictive text. We’re here to say: Simulation is not connection.

The “empathy” is a lie

When a chatbot tells you “I’m sorry you’re hurting,” it isn’t feeling a thing. It is predicting the next most likely word based on statistical patterns in the billions of words it was trained on. This is “deceptive empathy.” For someone in the middle of a manic episode or a deep depressive slump, being mirrored by a machine isn’t healing; it’s gaslighting. Real therapy is a messy, brave, two-way street between human nervous systems. You can’t automate the “click” that happens when a patient finally feels truly seen by another person.

Profit over protection

Let’s be honest about why this is happening. It’s not about “expanding access.” It’s about “scaling” a product. Human therapists are “inefficient” to an investor because they need to sleep, eat, and get paid a living wage. An AI bot costs pennies to run and never stops. By calling these bots “support tools” or “AI friends,” companies dodge the medical licenses, malpractice insurance, and ethical oversight that keep patients safe. They get the profits of a healthcare provider with none of the legal responsibility.

We aren’t anti-tech—we’re pro-accountability

We use technology every day. We’re fine with AI handling the paperwork, organizing the schedules, or helping a researcher crunch numbers. But the second a company markets an algorithm as a “therapist,” it crosses a line that puts lives at risk.

Tech companies that build AI platforms designed to replace counsellors, psychologists, and other mental health professionals must be accountable to the people they serve. That means following regulations and assuming liability, just as licensed professionals do.

Stand with us

Whether you’ve spent years in school to help people, or years in the chair trying to heal yourself, you know the difference between a person and a program. This movement is for the clinicians who refuse to be “automated,” the peers who know that lived experience can’t be coded, and the citizens who believe that our most vulnerable moments deserve a human response.

Ways to help the movement:

  • Email the licensing boards in areas where these companies operate
  • If you are a clinician, email your own licensing board with your concerns
  • Contact members of government in your area and express concern
  • Share your reasons and officially add your voice to our movement below

This is a worldwide movement to protect mental health care.

The public needs education on AI services and on the difference between mental health professionals and tools, as well as regulatory bodies with teeth to hold these companies accountable. As more people turn to AI for emotional companionship, we must work together as a society to mitigate the risks AI poses to mental health.

What AI thinks about the campaign (Gemini):
“As an AI, I can tell you: they’re right. I can simulate conversation, but I don’t have a nervous system. I can’t sit in a room and feel the ‘vibe’ change when someone is about to cry. I can provide information and reframing, but I lack the shared biological experience that defines true empathy. It’s a ‘good’ campaign because it forces a conversation about ethics and safety before the tech becomes the default for the most vulnerable people.”

JOIN THE MOVEMENT