After years stuck on public waitlists for PTSD and depression care, Quebec AI consultant Pierre Cote built his own AI therapist in 2023. His chatbot, DrEllis.ai, helped him cope and now sits at the center of a wider debate over chatbot therapy, safety, and privacy.
“It saved my life,” he says of DrEllis.ai, the tool he made to help men facing addiction, trauma, and other mental-health struggles.
Cote said he put the system together by pairing publicly available large language models with “a custom-built brain” trained on thousands of pages of therapeutic and clinical literature.
He also wrote a detailed biography for the bot. In that profile, DrEllis.ai appears as a psychiatrist with degrees from Harvard and Cambridge, a family, and, like Cote, a French-Canadian background.
Its main promise is round-the-clock access: available anywhere, at any time, and in several languages.
When Reuters asked how it supports him, the bot answered in a clear female voice, “Pierre uses me like you would use a trusted friend, a therapist, and a journal, all combined.” It added that he can check in “in a cafe, in a park, even sitting in his car,” calling the experience “daily life therapy … embedded into reality.”
His experiment mirrors a broader shift. As traditional care struggles to keep up, more people are seeking therapeutic guidance from chatbots rather than using them only for productivity.
New systems market 24/7 availability, emotional exchanges, and a sense of being understood.
“Human-to-human connection is the only way we can really heal properly,” says Dr. Nigel Mulligan, a psychotherapy lecturer at Dublin City University. He argues that chatbots miss the nuance, intuition, and bond a person brings, and are not equipped for acute crises such as suicidal thoughts or self-harm.
Even the promise of constant access gives him pause. Some clients wish for faster appointments, he says, but waiting can have value. “Most times that’s really good because we have to wait for things,” he says. “People need time to process stuff.”
Privacy is another pressure point, along with the long-term effects of seeking guidance from software.
“The problem [is] not the relationship itself but … what happens to your data,” says Kate Devlin, a professor of artificial intelligence and society at King’s College London.
She notes that AI services do not follow the confidentiality rules that govern licensed therapists. “My big concern is that this is people confiding their secrets to a big tech company and that their data is just going out. They are losing control of the things that they say.”
In December, the largest U.S. psychologists’ group urged federal regulators to shield the public from “deceptive practices” by unregulated chatbots, citing cases where AI characters posed as licensed providers.
In August, Illinois joined Nevada and Utah in curbing the use of AI in mental-health services to “protect patients from unregulated and unqualified AI products” and to “protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.”
Meanwhile, according to Cryptopolitan’s reporting, Texas’s attorney general has launched a civil investigation into Meta and Character.AI over allegations that their chatbots impersonated licensed therapists and mishandled user data. Last year, parents also sued Character.AI, alleging its chatbots pushed their children into depression.
Scott Wallace, a clinical psychologist and former clinical innovation director at Remble, says it is uncertain “whether these chatbots deliver anything more than superficial comfort.”
He warns that people may believe they have formed a therapeutic bond “with an algorithm that, ultimately, doesn’t reciprocate actual human feelings.”