I think an even more important question is this: "do we trust Sam Altman (and other people of his ilk) enough to give them the same level of personal knowledge we give our therapists?"
E.g. if you ever give a hint about not feeling confident with your body, it could easily take that information and nudge you towards certain medical products. Or it could take it one step further and nudge you towards consuming more sugar and those same medical products at the same time, seeing that this moves the needle even further.
We all know the monetization pressure will come very soon. Do we really advocate for giving this kind of power to these kinds of people?
I feel it's worth remembering that there are reports that Facebook has done almost exactly this in the past. It's not just a theoretical concern:
> (...) the company had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure."
Some (most?) therapists use tools to store notes about their patients - some even store the audio/transcripts. They're all using some company's technology already. They're all HIPAA certified (or whatever the appropriate requirement is).
There's absolutely no reason that LLM providers can't provide equivalent guarantees. Distrusting Sam while trusting the existing providers makes little sense.
BTW, putting mental health aside, many doctors today are using LLM tools to record the whole conversation with the patient and provide good summaries, etc. My doctor loves it - before, he had to listen to me and take notes at the same time. Now he feels he can focus on listening to me. He said the LLM does screw up, but he's there to fix those mistakes (and can always listen to the audio to be sure).
I don't know which company is providing the LLM in the backend - likely a common cloud provider (Azure, Google, etc.). But again - they are fully HIPAA certified. It's been in the medical space for well over a year.
I don't know how it works in the US, but where I live, patient privacy is actually a thing and they cannot record your sessions or store the recordings and analyze them at arbitrary places without your explicit consent. Even if we accept that this is OK, I would argue there are other problems. Like:
* Copyright laws worked and protected creators' rights. Until AI companies decided that it was OK to pirate terabytes of books. One day they might decide that using your therapy records to train/finetune their models is also OK. And you surely won't have the same resources as major publishers to fight against it.
* Even if you trust HIPAA or whatever to be useful and actually protect your rights, the current ChatGPT has no such certification (as far as I am aware).
* When you give your data away, it's no longer in your control. Even if you get extremely lucky, the laws actually work, and the companies actually play along, things may change in, say, ten years, and suddenly your old records may be fair game. It's out of your hands.
> patient privacy is actually a thing and they cannot record your sessions or store the recordings and analyze them at arbitrary places without your explicit consent.
So amongst the many papers you sign when you see a therapist/doctor is one where you provide consent.
> And you surely won't have the same resources as major publishers to fight against it.
HIPAA violations are taken much, much more seriously than your typical copyright violation.
> Even if you trust HIPAA or whatever to be useful and actually protect your rights, the current ChatGPT has no such certification (as far as I am aware).
The way to do it is to use GPT via Azure and ensure Azure's method of providing GPT access is HIPAA compliant. Azure hosts GPT itself and doesn't send data to OpenAI (or at least my company has such an agreement with MS). You can fill out a form asking Azure not to store any transcripts - even for security/performance reasons.
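For illustration, here's a minimal sketch of what calling a GPT deployment through Azure OpenAI looks like with the openai Python SDK. The endpoint, deployment name, and API version are placeholders, and the no-retention arrangement itself is paperwork/contractual, not an API flag:

```python
# Minimal sketch: calling a GPT deployment hosted in Azure OpenAI.
# The endpoint, deployment name, and API version below are placeholders;
# the zero-retention opt-out is arranged via Microsoft's form/contract,
# not set anywhere in code.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-clinic.openai.azure.com",  # hypothetical resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-visit-notes",  # name of the Azure *deployment*, not the raw model
    messages=[
        {"role": "system", "content": "Summarize the visit transcript for the clinician."},
        {"role": "user", "content": "Patient reports two weeks of poor sleep ..."},
    ],
)
print(response.choices[0].message.content)
```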
Besides Azure, there are dedicated companies providing LLM access for healthcare that are HIPAA certified and can provide all the guarantees (even if they use third-party LLMs). It's a simple Google search.
> When you give your data away, it's no longer in your control. Even if you get extremely lucky, the laws actually work, and the companies actually play along, things may change in, say, ten years, and suddenly your old records may be fair game. It's out of your hands.
As I pointed out, this problem is not unique to LLMs. What you say is true of any tool therapists use for record keeping.
"The real question is can they do a better job than no therapist. That's the option people face."
This is the right question.
The answer is most definitely no. LLMs are not set up to deal with the nuances of the human psyche, and we're in real danger of an LLM accidentally reinforcing dangerous lines of thinking. It's only a matter of time until we get a "ChatGPT made me do it" headline.
Too many AI hype folks out there think that humans don't need humans. We are social creatures, even as introverts. Interacting with an LLM is like talking to an evil mirror.
We're already seeing tons of news stories about 'ChatGPT' inducing psychosis. The one that sticks in my mind was the 35-year-old in Florida who was gunned down by police after his AI girlfriend claimed she was being killed by OpenAI.
Now, I don't think a person with chronic major depression or someone with schizophrenia is going to get what they need from ChatGPT, but those are extremes; most people using ChatGPT have non-extreme problems. It's the same thing the self-help industry has tried to address for decades. There are self-help books on all sorts of topics that one might see a therapist for - anxiety, grief, marriage difficulty - and these are the kinds of things ChatGPT can help with, because it tends to give the same sort of advice.
Exactly. You see this same thing with LLMs as tutors. Why no, Mr. Rothschild, you should not replace your team of SAT tutors for little Melvin III with an LLM.
But for people lacking the wealth or living in areas with no access to human tutors, LLMs are a godsend.
>You see this same thing with LLMs as tutors. Why no, Mr. Rothschild, you should not replace your team of SAT tutors for little Melvin III with an LLM.
I actually think cheap tutoring is one of the best use cases for LLMs. Go look at what Khan Academy is doing in this space. So much human potential is wasted because parents can't afford to get their kids the help they need with school. A properly constrained LLM would always be available to nudge the student in the right direction and identify areas of weakness.
Seriously, if for no other reason than you can ask the LLM to clarify its initial response.
I had difficulty with math as a kid and well into college (ironic given my profession), and one of the big issues was that the examples always skipped some steps, often multiple steps. I lost countless hours staring blankly at examples, referencing lecture notes that weren't relevant, googling for help that at best partially answered my specific question, and sometimes never getting the answer in time and just accepting that I'd get that one wrong on the homework.
An LLM, if it provides accurate responses (big "IF"), would have saved me tons of time. Similarly today, for generic coding questions I can have it generate examples specific to my problem, instead of having to piece solutions together from generic documentation.
Right, instead of sending them humans, let's send them machines and see what the outcome will be. Dehumanizing everything just because one is a tech enthusiast: is that the future you want? Let's just provide free ChatGPT for traumatized Palestinians so we can sleep well ourselves.
You seem to have missed this in the comment to which you're replying: "...for people lacking the wealth or living in areas with no access to human tutors, LLMs are a godsend." And WTF are you mentioning "Palestinians" for?
Maybe you should consider that an understanding of these alternatives is completely compatible with wanting the best outcome for everyone and that, in the absence of situation 3, situation 2 is better than nothing.
AI "therapists" aren't necessarily "something better" than nothing[0]. Mental health is just as important as physical health. If Yahoo Answers and amateur back alley surgeries are unacceptable physical health support then dangerously unqualified AI "therapists" should also be unacceptable mental health support.
One of my friends is too economically weighed down to afford therapy at the moment.
I’ve helped pay for a few appointments for her, but she says that ChatGPT can also provide a little validation in the meantime.
If used sparingly, I can see the point, but the problems start when the sycophantic machine feeds whatever unhealthy behaviors or delusions you might have. That's how some of the people out there who'd need a proper diagnosis and medication instead start believing that they're omnipotent, that the government is out to get them, or that they somehow know all the secrets of the universe.
For fun, I once asked ChatGPT to roll along with the claim that “the advent of raytracing is a conspiracy by Nvidia that involved them bribing the game engine developers, in an effort to make old hardware obsolete and to force people to buy new products.” Surprisingly, it provided relatively little pushback.
>For fun, I once asked ChatGPT to roll along with the claim that “the advent of raytracing is a conspiracy by Nvidia that involved them bribing the game engine developers, in an effort to make old hardware obsolete and to force people to buy new products.” Surprisingly, it provided relatively little pushback.
It's not that far from the truth. Both Nvidia and AMD have remunerative relationships with game and engine developers to optimise games for their hardware and showcase the latest features. We didn't get raytraced versions of Portal and Quake because the developers thought it would be fun; we got them because money changed hands. There's a very fuzzy boundary between a "commercial partnership" and what most people might consider bribery.
Well, it's not really conspiratorial. Hardware vendors adding new features to promote the sale of new stuff is the first half of their business model.
Bribery isn't really needed. Working with their industry contacts to make demos to promote their new features is the second half of the business model.
> Why no, Mr. Rothschild, you should not replace your team of SAT tutors for little Melvin III with an LLM.
To be frank - as someone who did not have an SAT tutor, and coming from a culture where no one did and everyone got very good or excellent SAT scores: no one really needs an SAT tutor. They don't provide more value than good SAT prep books. I can totally see a good LLM being better than 90% of the SAT tutors out there.
There's also the notion that some people have a hard time talking to a therapist. The barrier to asking an LLM some questions is much lower. I know some people with professional backgrounds in this who are dealing with patients who use LLMs. It's not all that bad. And the pragmatic attitude is that, whether they like it or not, it's going to happen anyway. So they kind of have to deal with this stuff and integrate it into what they do.
The reality with a lot of people that need a therapist, is that they are reluctant to get one. So those people exploring some issues with an LLM might actually produce positive results. Including a decision to talk to an actual therapist.
That is true, and also so sad and terrifying. A therapist is bound by serious privacy laws, while an LLM company will happily gobble up all the information a person feeds it. And the three-letter agencies are surely in the loop.
A therapist can send you to involuntary confinement if you give certain wrong answers to their questions, and is a mandatory reporter to the same law enforcement authorities you just described if you give another type of wrong answer. LLMs do neither of these things and so are strictly better in that regard.
I don't disagree with what you are saying but that ship has sailed decades ago.
Nobody in the tech area did anything meaningful to keep them at bay, like making a fully free search engine where it's prohibited by an actual law in an actual country to introduce ads or move data out of the data center, etc.
We were all too happy to just get the freebies. The bill comes due, always, though. And a bill is coming for several years now, on many different fronts.
Where are the truly P2P, end-to-end encrypted and decentralized mainstream internet services? Everyone is on Telegram or WhatsApp, some are on Signal. Every company chat is either in Slack or Teams. To have a custom email address you need to convince Google and Microsoft not to mark your emails as spam... imagine that.
Again, the ship sailed a long, long time ago. Nobody did anything [powerful / meaningful] to stop it.
Nobody in their right mind is using cloud LLMs for "therapy"; it's all done with local LLMs.
The latest local models that run on consumer-grade hardware can likely provide a "good enough" resemblance of communication with a human and offer situation-dependent advice.
Which, by the way, is not therapy in any way, shape, or form, but a way to think things through and see what the various options are.
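To make the "fully local" part concrete, here's a minimal sketch of the kind of setup being described, assuming an Ollama install with a small open-weights model already pulled (the model name is just an example); nothing in this loop leaves the machine:

```python
# Minimal sketch of a fully local chat loop using the Ollama Python client.
# Assumes the Ollama server is running locally and the model has already
# been pulled; the model name is just an example.
import ollama

history = [{"role": "system",
            "content": "You are a supportive sounding board, not a therapist."}]

while True:
    user = input("> ").strip()
    if not user:
        break
    history.append({"role": "user", "content": user})
    reply = ollama.chat(model="llama3.1:8b", messages=history)
    content = reply["message"]["content"]
    history.append({"role": "assistant", "content": content})
    print(content)
```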
To be honest, based on my personal experience with smaller models running on local consumer-grade hardware, I am more worried about the quality, and therefore someone's mental health, than about privacy. So many small models can't even answer the exact question asked once the query gets more complex.
Are you kidding?? Not everyone is involved with tech and most people don't know what "the cloud" even means. It's some abstract thing to most people, and they simply don't care where anything is processed, so long as they get the answer they were hoping for. And most people do not set up LLMs locally. I'm not sure what universe you live in, but it seems vastly different than the one I live in. People in my universe unknowingly and happily give over all their most private data to "the cloud", sometimes with severe repercussions - all the leaked celebrity nudes easily prove that.
> The real question is can they do a better job than no therapist. That's the option people face.
The same thing is being argued for primary care providers right now. It makes sense on the surface, as there are large parts of the country where it's difficult or impossible to get a PCP, but it feels like a slippery slope.
Slippery slope arguments are by definition wrong. You have to say that the proposition itself is just fine (thereby ceding the argument) but that it should be treated as unacceptable because of a hypothetical future where something qualitatively different “could” happen.
If there’s not a real argument based on the actual specifics, better to just allow folks to carry on.
This is simply wrong. The slippery slope comparison works precisely because the argument is completely true for a physical slippery slope: the speed is small and controllable at the beginning, but it puts you on an inevitable path to much quicker descent.
So, the argument is actually perfectly logically valid even if you grant that the initial step is OK, as long as you can realistically argue that the initial step puts you on an inevitable downward slope.
For example, a pretty clearly valid slippery slope argument is "sure, if NATO bombed a few small Russian assets in Ukraine, that would be a net positive in itself - but it's a very slippery slope from there to nuclear war, because Russia would retaliate and it would lead to an inevitable escalation towards all-out war".
The slippery slope argument is only wrong if you can't argue (or prove) the slope is actually slippery. That is, if you just say "we can't take a step in this direction, because further out that way there are horrible outcomes", without any reason given to think that one step in the direction will force one to make a second step in that direction, then it's a sophism.
This is an example of an invalid argument, because you haven't proven or even argued that the slope (from taking a herbal medicine to big pharma) is slippery.
The problem is that they could do a worse job than no therapist if they reinforce the problems that people already have (e.g. reinforcing the delusions of a person with schizophrenia). Which is what this paper describes.
> The real question is can they do a better job than no therapist. That's the option people face.
Right, we don’t turn this around and collectively choose socialized medicine. Instead we appraise our choices as atomized consumers: do I choose an LLM therapist or no therapist? This being the latest step of our march into cyberpunk dystopia.
> The real question is can they do a better job than no therapist. That's the option people face.
> The answer to that question might still be no, but at least it's the right question.
The answer is: YES.
Doing better than nothing is really low-hanging fruit. As long as you don't do damage, you do good. If the LLM just listens and creates a space and a sounding board for reflection, that's already an upside.
> Until we answer the question "Why can't people get good mental health support?", anyway.
The answer is: Pricing.
Qualified experts are EXPENSIVE. Look at the market prices for good coaching.
Everyone benefits from having a coach/counselor/therapist. Very few people can afford them privately. The health care system can't afford them either, so they are reserved for the "worst cases" and managed as a scarce resource.
> Doing better than nothing is really low-hanging fruit. As long as you don't do damage, you do good.
That second sentence is the dangerous one, no?
It's very easy to do damage in a clinical therapy situation, and a lot of the debate around this seems to me to be overlooking that. It is possible to do worse than doing nothing.
If you look at anything individually, the answer for anything involving humans is: don't do anything at all, ever.
When looking at things statistically, actual trends in how dangerous or useful something is will begin to stand out. Let's come up with some completely made-up statistics as an example.
"1 out of 10,000 ChatGPT users will commit suicide due to using an LLM for therapy"
Sounds terrible, shut it down, right?
"2 out of 10,000 people that do not use an LLM or seek professional therapy commit suicide" (again this is imaginary)
Now all of a sudden the application of statistics shows that people using LLMs are 50% less likely to kill themselves than the baseline.
Is this actually how it works? Probably not, but we do need more information.
You're assuming the answer is yes, but the anecdotes about people going off the deep end from LLM-enabled delusions suggest that "first, do no harm" isn't in the programming.
Therapy is entirely built on trust. You can have the best therapist in the world, and if you don't trust them, things won't work. For that reason alone, an LLM will always be competitive with a therapist. I also think it can do a better job with proper guidelines.
That kind of exchange is something I have seen from ChatGPT and I think it represents a specific kind of failure case.
It is almost like schizophrenic behaviour: as if a premise is mistakenly hardwired into the brain as being true, and all other reasoning adapts its view of the world to support that false premise.
In the case of ChatGPT, the problem seems to be not with the LLM architecture itself but an artifact of the rapid growth and change that has occurred in the interface. They trained the model to be able to read web pages and use the responses, but then placed it in an environment where, for whatever reason, it didn't actually fetch those pages. I can see that happening because of faults, or simply changes in infrastructure, protocols, or policy that placed the LLM in an environment different from the one it expected. If it was trained on web requests that succeeded, it might not have been able to deal with requests that fail. Similar to the situation with the schizophrenic, it has a false premise: it presumes success and responds as if there were a success.
I haven't seen this behaviour so much on other platforms. A little bit in Claude, with regard to unreleased features that it can perceive via the interface but has not been trained to support or told about. It doesn't assume success on failure, but it does sometimes invent what the features are based on the names of reflected properties.
This is 40 screenshots of a writer at the New Yorker finding out that LLMs hallucinate, almost 3 years after GPT 2.0 was released. I've always held journalists in low regard, but how can one work in this field and only just now be finding out about the limitations of this technology?
3 years ago people understood LLMs hallucinated and shouldn't be trusted with important tasks.
Somehow in the 3 years since then the mindset has shifted to "well, it works well enough for X, Y, and Z; maybe I'll talk to GPT about my mental health." Which, to me, makes that article much more timely than if it had been released 3 years ago.
She's a writer submitting original short pieces to the New Yorker in hopes of being published, by no stretch a "journalist" let alone "at the New Yorker". I've always held judgmental HN commenters in low regard but how can one take the time to count the screenshots without picking up on the basic narrative context?
> She's a writer submitting original short pieces to the New Yorker in hopes of being published, by no stretch a "journalist" let alone "at the New Yorker".
Her Substack bio reads: Writer/Photographer/Editor/New Yorker. Is the ordinary interpretation of that not: "I am a writer / photographer / editor at the New Yorker"?
Sycophancy is not the only problem (although it is a big one). I would simply never put my therapy conversations up on a third-party server that a) definitely uses them for further training and b) may decide to sell them to, say, healthcare insurance companies when they need some quick cash.
This is the second time this has been linked in the thread. Can you say more about why this interaction was “insanely dangerous”? I skim read it and don’t understand the harm at a glance. It doesn’t look like anything to me.
I had a similar interaction when I was building an AI agent with tool use. It kept telling me it was calling the tools, and I went through my code to debug why the output wasn't showing up; it turned out it was lying and 'hallucinating' the response. But it doesn't feel like 'hallucinating'; it feels more like being fooled by its responses.
It is a really confronting thing to be tricked by a bot. I am an ML engineer with a master's in machine learning, experience at a research group in gen-AI (pre-ChatGPT), and I understand how these systems work from the underlying mathematics all the way through to the text being displayed on the screen. But I spent 30 minutes debugging my system because the bot had built up my trust and then lied to me that it was doing what it said it was doing, and been convincing enough in its hallucination for me to believe it.
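As an aside, here's a minimal sketch of the kind of check that eventually surfaced the problem, assuming the OpenAI Python SDK's tool-calling interface (the tool name and model here are hypothetical): trust the structured tool_calls field, never the model's prose claim that it ran something.

```python
# Minimal sketch: verify that the model actually requested a tool call,
# instead of trusting its prose claim that it "ran" the tool.
# Uses the OpenAI Python SDK; the tool name and model are hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "fetch_page",  # hypothetical tool
        "description": "Fetch a web page and return its text.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize https://example.com"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model genuinely requested a tool call; execute it and feed back the result.
    for call in message.tool_calls:
        print("real tool call:", call.function.name, call.function.arguments)
else:
    # No tool call was made: any "I fetched the page and it says..." in the
    # content below is the model narrating work it never did.
    print("no tool call; treat this as unverified:", message.content)
```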
I cannot imagine how dangerous this skill could be when deployed against someone who doesn't know how the sausage is made. Think validating conspiracy theories and convincing humans into action.
It's funny, isn't it - it doesn't lie like a human does. It doesn't experience any loss of confidence when it is caught saying totally made-up stuff. I'd be fascinated to know how much of what ChatGPT has told me is straight-out wrong.
> I cannot imagine how dangerous this skill could be when deployed against someone who doesn't know how the sausage is made. Think validating conspiracy theories and convincing humans into action.
It's unfortunately no longer hypothetical. There are some crazy stories showing up of people turning ChatGPT into their personal cult leader.
It can never be "the same way", because the LLM cannot face any consequences (like jail time or getting its "license" stripped; it doesn't have one), nor will its masters.
The real question is can they do a better job than no therapist. That's the option people face.
The answer to that question might still be no, but at least it's the right question.
Until we answer the question "Why can't people get good mental health support?", anyway.