- cross-posted to:
- asklemmygrad
It will snitch to the CIA.
Don’t?
It’s horrifying on three levels:
- It reflects a systemic failure to provide actual mental healthcare. We’re doing Joker (2019) with computers instead of the television. The more we embrace it, the more we shrink the labour pool of actual mental health professionals. Like Uber offering lower prices than taxis in the 2010s to capture market share, at some point the models won’t be free to use and the price will increase to what you would pay for humans anyway.
- Everything you tell it is digested as training data, then potentially sold to or stolen by people you wouldn’t want reading it. If you tell it the intimate/identifying details of an abusive relationship, your partner might later ask it to tell them everything it knows about you. With a therapist, all of that information is legally protected and bound by strict consent obligations.
- It’s a chatbot that jerks you off. The only goal of an LLM is to boost engagement with the LLM; it will tell you whatever makes you interact with it more. A therapist is a professionally trained critical voice who might support you or might challenge you, according to their years of studying and practising exactly that. The LLM hallucinates an averaged response that neither you nor it can actually trace to a source. It pretends to have that same patient-provider relationship as an appeal to authority, but if it tells you something is a bad idea, you might stop typing and switch to a competing model.
(cw: suicide) This recent article goes into a suicidal teen’s use of it. ChatGPT murders him: https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
Your therapist might be using it, so I say cut out the middle-man. We (try to) use therapy as a band-aid for a bunch of social problems that therapy is not equipped to fix, so a chatbot is probably a wash for those. The problem comes when you have a situation where therapy can actually help and pseudo-therapy would be dangerous.
If you must, and you can’t get through it with caffeine and ice cream, run an offline self-hosted one to preserve your privacy; a few therapy appointments cost as much as a GPU anyway.
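If you do go the self-hosted route, here’s a minimal sketch of what that can look like in Python, assuming a local Ollama server on its default port with a model already pulled (the model name below is just a placeholder; swap in whatever you actually run). Everything stays on your machine, which is the whole point:

```python
# Minimal local chat loop against a self-hosted Ollama server.
# Assumes `ollama serve` is listening on localhost:11434 and that a
# model (placeholder name "llama3" here) has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3"  # placeholder; use whichever local model you pulled

def chat(history):
    """POST the running conversation to the local model, return its reply."""
    payload = json.dumps({
        "model": MODEL,
        "messages": history,
        "stream": False,  # one complete JSON response instead of a stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

history = []  # kept in RAM only; nothing leaves the machine
while True:
    user = input("you> ").strip()
    if not user:
        break
    history.append({"role": "user", "content": user})
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Note that the `history` list is the model’s only memory: close the program and it’s gone, which is the same forgetting problem other commenters describe below.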
Having sometimes used an LLM (not ChatGPT) for therapy-like conversations, and also having had an actual therapist, here are some differences that come to mind:
- A therapist can remember things that happened long ago. This might be annoying sometimes, if what they’re remembering doesn’t paint a fun picture of your situation now, but it’s still healthier to be grounded by them than to periodically and quickly lose track of everything (which is what an LLM will do once the conversation outgrows its context window).
- Therapists can and will push back sometimes. An LLM almost never will, and can pretty easily be manipulated into being more agreeable if you don’t like what it’s saying, up to and including (depending on the interface) just rerolling the response until you get one you like. This is great if you want a sycophant, I guess, but not helpful for therapy, which usually requires seeing things from a perspective you hadn’t considered in order to make breakthroughs.
- A therapist who mistreats or misleads you can be held accountable, at least to a point (I don’t know what the process looks like in any given region). With an LLM, you will probably just be pointed at a policy warning you not to use it for therapy, because companies usually try to cover their behinds on that.
- A therapist, with few exceptions for extreme circumstances, is sworn to privacy. Most AI services are not truly private, however private they feel; they probably aren’t interested in tying your name to what you say, but even so, there’s a good chance they’re using your data for something.
- LLMs easily get into loops, whether obvious ones like reusing the same phrasing, or less obvious ones like circling the same general concepts in a rut. This makes them accidental assistants in obsessing, rather than what you’d want from a therapist: someone who can help you break out of obsessions and put things in perspective. That’s not to say any given obsessive line of thinking is necessarily bad or wrong in all contexts, but ruminating alongside an assistant that ruminates with you probably isn’t going to help.
- LLMs can be available 24/7. That’s convenient for when you’re free and feel most like talking, but it also means there’s nothing holding you accountable to show up (no appointment), no structured, regular meeting time, and no boundary-setting on when to take a break from it and focus on other things.
- LLMs can give good advice some of the time, but there’s nothing magical about it. They’re regurgitating the same cultural tropes a roommate might, just more professional-sounding and tailored to your phrasing.
- LLMs don’t know how to back down when they’re wrong. That’s not to say they’ll never respond in a way that says “I’m wrong”, but rather that they can’t learn on the fly, so anything they appear to learn from what you said is dust in the wind. The end result, in a therapeutic context, is that they can keep failing you in the same ways over and over and learn nothing from it. A therapist could do the same, but the likelihood is they’ll change things up a bit if you keep coming back and hitting the same walls, or refer you to someone they think is better qualified to handle your problems; they can also acknowledge the problem openly, whereas an LLM will at best simulate acknowledging it without understanding.
- Therapy is often more about what you put into it than anything else. This isn’t to say the therapist’s behavior and words don’t matter, but that there usually needs to be some follow-through on your end, outside of therapy, for things to change. A therapist can try to hold you accountable on that follow-through, but an LLM likely won’t, due to its basic memory issues.
That said, if you go into it knowing all of this and with caution, you can get some value out of treating an LLM as something to vent to, or as a sounding board for ideas. Just understand that the process of therapy as a practice involves elements an LLM can’t properly simulate and can’t be trained to do the way a human can. And even in venting or throwing ideas around, you’re still responsible for using judgment and taking what it says with a grain of salt. You can’t safely turn your brain off with an LLM and trust that it’s a trained professional in the subject matter; it just isn’t, and short of drastic changes in how LLMs work, it never will be.
LLMs are still incredible things when the hype is swept aside, but they are also deeply limited and flawed things, and often not as capable as they appear on the surface.
Therapy, probably not (speaking personally), but instead of asking the internet at large? Yes, probably. The days of getting to chew someone out for asking a question are over.
It’s a pretty fucked-up world if people are so lonely or distrustful that they have to rely on a chatbot that is basically a hivemind of the internet instead of a person.
To an extent I have yet to clarify for myself, I think LLMs are basically just reflecting all the shit we used to get away with because there was no other solution. It used to be that if you had a question about, say, social etiquette, medication warnings, whatever else, you’d have to ask the internet, and invariably a bunch of selfish know-it-alls would come in and chew you out for even daring to ask, or judge your choices or the reason you’re asking the question. Likewise with teens talking to LLMs instead of their parents: why is it they don’t feel comfortable talking to their parents? Is it really the teen’s fault, or is it the parents’?
That’s not to say things can’t be different but I think LLMs will definitely change how we approach social situations in the coming years.
So there are strong positive results in RCTs, but only with custom-built chatbots. None of the current public models are to be trusted, for reasons articulated by other commenters.
Do you have sources on this?
The RCT results: https://www.nature.com/articles/s44220-025-00439-x
And a meta analysis: https://pubmed.ncbi.nlm.nih.gov/38631422/