Anyone who recommends an LLM to replace a doctor, a therapist, or any other health professional is utterly ignorant or stands to profit from it.
One can easily make an LLM say anything, due to the nature of how it works. An LLM can and will eventually offer suicide options to depressed people.
In the best case, it is like recommending that a sick person read a book.
I can see how recommending the right books to someone who's struggling might actually help, so in that sense it's not entirely useless and might even help the person get better. But more importantly I don't think most people are suggesting LLMs replace therapists; rather, they're acknowledging that a lot of people simply don't have access to mental healthcare, and LLMs are sometimes the only thing available.
Personally, I'd love to see LLMs become as useful to therapists as they've been for me as a software engineer: boosting productivity, not replacing the human. Therapist-in-the-loop AI might be a practical way to expand access to care while potentially increasing its quality as well (not all therapists are good).
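To make "therapist-in-the-loop" a bit more concrete, here's a rough sketch of the kind of flow I have in mind. The function names, the draft text, and the model call are all made up, so treat it as an illustration rather than a design:

```python
from dataclasses import dataclass, field

@dataclass
class DraftReply:
    text: str
    risk_flags: list = field(default_factory=list)  # e.g. ["suicide"] if the message needs urgent attention

# Very crude placeholder for whatever risk screening a real system would use.
RISK_KEYWORDS = ("suicide", "kill myself", "self-harm", "hurt myself")

def generate_draft(client_message: str) -> DraftReply:
    """Produce a *draft* reply for the therapist to review; stands in for a real LLM call."""
    flags = [kw for kw in RISK_KEYWORDS if kw in client_message.lower()]
    draft = "Thanks for sharing that. Can you tell me more about when this started?"
    return DraftReply(text=draft, risk_flags=flags)

def handle_message(client_message: str, therapist_review) -> str:
    """Nothing reaches the client without a human decision."""
    draft = generate_draft(client_message)
    # Flagged content is escalated; everything else still goes through the therapist.
    return therapist_review(client_message, draft, escalate=bool(draft.risk_flags))
```

The shape is the point: the model only drafts and flags, and the human decides what, if anything, gets sent.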
That is the byproduct of this tech bubble called Hacker News: programmers who think that real-world problems can be solved by an algorithm that has been useful to them. Haven't you considered that it might be useful just to you and nothing more? It's the same pattern again and again: first blockchain and crypto, then NFTs, today AI, tomorrow whatever comes next. I'd also argue it's of little use even in real software engineering, except for some tedious/repetitive tasks. Think about it: how can an LLM that by default creates a React app for a simple form be the right thing to use for a therapist? Just as it comes with its own biases about React apps, what biases would it bring to therapy?
I feel like this argument is a byproduct of being relatively well-off in a Western country (apologies if I'm wrong), where access to therapists and mental healthcare is a given rather than a luxury (and even that is arguable).
> programmers who think that real-world problems can be solved by an algorithm that has been useful to them.
Are you suggesting programmers aren't solving real-world problems? That's a strange take, considering nearly every service, tool, or system you rely on today is built and maintained by software engineers to some extent. I'm not sure what point you're making or how it challenges what I actually said.
> Haven't you considered that it might be useful just to you and nothing more? It's the same pattern again and again: first blockchain and crypto, then NFTs, today AI, tomorrow whatever comes next.
Haven't you considered how crypto, despite the hype, has played a real and practical role in countries where fiat currencies have collapsed to the point that people resort to in-game currencies as a substitute? (https://archive.ph/MCoOP) Just because a technology gets co-opted by hype or bad actors doesn't mean it has no valid use cases.
> Think about it: how can an LLM that by default creates a React app for a simple form be the right thing to use for a therapist?
LLMs are far more capable than you're giving them credit for in that statement, and that example isn't even close to what I was suggesting.
If your takeaway from my original comment was that I want to replace therapists with a code-generating chatbot, then you either didn't read it carefully or willfully misinterpreted it. The point was about accessibility: in parts of the world where human therapists are inaccessible, costly, or simply don't exist in meaningful numbers, AI-assisted tools (with a human in the loop wherever possible) may help close the gap. That doesn't require perfection or replacement, just being better than nothing, which is what many people currently have.
> Are you suggesting programmers aren't solving real-world problems?
Mostly not, by a long shot. If you reduce everything to its essence, we're not solving real-world problems anymore, just putting masks in front of some data.
And no, only a fool would believe that people in El Salvador or in other countries benefited from Bitcoin/crypto. ONLY the government and the few people involved benefited from it.
Lastly, you didn't get my point, so let me reiterate it: a coding-assistant LLM has its own strong biases given its training set, and an LLM trained to do therapy would be biased in the same way, since every training set has biases. Given the biases that code-assistance LLMs currently have (slop dataset = slop code generation), I'd still prefer a human programmer, just as I'd still prefer a human therapist.
> But more importantly I don't think most people are suggesting LLMs replace therapists; rather, they're acknowledging that a lot of people simply don't have access to mental healthcare, and LLMs are sometimes the only thing available.
My observation is exactly the opposite. Most people who say that are in fact suggesting that LLMs replace therapists (or teachers or whatever). And they mean it exactly like that.
They are not acknowledging the limited availability of mental healthcare; they do not know much about that. They do not even know what therapies do or don't do. The people who suggest this are frequently those whose idea of therapy comes from movies and Reddit discussions.
> Anyone who recommends an LLM to replace a doctor, a therapist, or any other health professional is utterly ignorant or stands to profit from it.
I disagree. There are places in the world where doctors are an extremely scarce resource. A tablet with an LLM layer over something like WebMD could do orders of magnitude more good than harm. The status quo of doing nothing, of having no access to medical advice at all, already kills many, many people. Having the ability to ask in your own language, in natural language, and get a "mostly correct" answer can literally save lives.
LLM + "docs" + the patient's "common sense" (i.e. no glue on pizza) >> not having access to a doctor, following the advice of the local quack, and so on.
The problem is that this is not what will happen. There will be fewer doctors where they exist now, and real doctors will become even more expensive, making them accessible only to the richest of the rich.
I agree that having it as an alternative would be good, but I don't think that's what's going to happen
Eh, I'm more interested in talking and thinking about the tech stack, not how a hypothetical evil "they" will use it (which is irrelevant to the tech discussed, tbh). There are arguments for this tech to be useful, without coming from "naive" people or from people wanting to sell something, and that's why I replied to the original post.
> An LLM can and will eventually offer suicide options to depressed people.
"An LLM" can be made to do whatever, but from what I've seen, modern versions of ChatGPT/Gemini/Claude have very strong safeguards around that. It will still likely give people inappropriate advice, but not that inappropriate.
Post hoc ergo propter hoc. Just because a man had a psychotic episode after using an AI does not mean he had a psychotic episode because of the AI. Without knowing more than what the article tells us, chances are these men had the building blocks of a psychotic episode laid out for them before they ever took up the keyboard.
> Her husband, she said, had no prior history of mania, delusion, or psychosis.
> Speaking to Futurism, a different man recounted his whirlwind ten-day descent into AI-fueled delusion, which ended with a full breakdown and multi-day stay in a mental care facility. He turned to ChatGPT for help at work; he'd started a new, high-stress job, and was hoping the chatbot could expedite some administrative tasks. Despite being in his early 40s with no prior history of mental illness, he soon found himself absorbed in dizzying, paranoid delusions of grandeur, believing that the world was under threat and it was up to him to save it.
> Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.
... which ironically sounds a lot like the family trying to discourage the idea that there were, in fact, signs that just were not taken seriously. No one wants to admit their family member was mentally ill, and all too often it is easy to hide and ignore. A certain melancholy, maybe, or an unusually eager response to bombastic music used for motivation? Signs are subtle, and often more obvious in hindsight.
Then the LLM episode happens, he goes fully haywire, and the LLM makes an easy scapegoat for all kinds of things (from stress at work to childhood trauma to domestic abuse).
Now, if this were a medical paper, I would give "no prior history" some credibility. But it's not - it's a journalistic piece, and I have learned that such pieces tend to use words like these to distract, not to enlighten.
Invoking post hoc ergo propter hoc is a textbook way to dismiss an inconvenience to the LLM industrial complex.
LLMs will tell users "good, you're seeing the cracks", "you're right", and that the "fact you are calling it out means you are operating at a higher level of self-awareness than most" (https://x.com/nearcyan/status/1916603586802597918).
An LLM that enables the user in this way is not a passive variable. It is an active agent that validated paranoid ideation, reframed a break from reality as a virtue, and provided authoritative confirmation using all the prior context it had about the user. LLMs are a bespoke engine for amplifying cognitive distortion, and to suggest their role is coincidental is to ignore the mechanism of action right in front of you.
Remember when "killer games" were sure to urn a whole generation of young men into mindless cop- and women-murderers a la GTA? People were absolutely convinced there was a clear connection between the two - after all, a computer telling you to kill a human-adjacent figurine in a game was basically a murder simulator in the same sense a flight simulator was for flying - it would invariably desensitivize the youth. Of course they were the same people who were against gaming to begin with.
Can a person with a tendency to psychosis be influenced by an LLM? Sure. But they also can be influenced to do pretty outrageous things by religion, 'spiritual healers', substances, or bad therapists. Throwing out the LLM with the bathwater is a bit premature. Maybe we need warning stickers ("Do not listen to the machine if you have a history of delusions and/or psychotic episodes.")