
I have seen, more than once, people using ChatGPT as a psychologist and psychiatrist to self-diagnose mental illnesses. It would be hilarious if it were not so disastrous. People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on.

Tbh, and I usually do not like this line of thinking, but these are lawsuits waiting to happen.



  > I have seen, more than once, people using ChatGPT as a psychologist and psychiatrist to self-diagnose mental illnesses.
I'm working with a company that writes AI tools for mental health professionals. The output of the models _never_ goes to the patient.

I've heard quite a few earfuls about how harmful LLM advice is for mental health patients, with specific examples. I'm talking on the order of suicide attempts (and at least one success) and other harmful effects to patients and those around them. If you know of someone using an LLM for this purpose, please encourage them to seek professional help. Or just point them at this comment; anybody is encouraged to contact me. I'm not a mental health practitioner, but I work with them, and I'll do my best to answer your questions or point you to someone who can. My Gmail username is the same as my HN username.


Professionals have confirmation bias too, and the LLM will hook into it regardless of whether it's talking to a patient or a professional. It's so good at it that even the "training" the professional received can be subtly bypassed.

Basically there needs to be an LLM that can push back.


I'd love to hear more about how you see it. You're absolutely not wrong. I'm writing tools in this space, so if you have experience or examples to share - I've got a huge file of them - I am interested.

You are invited to answer here or in private. My Gmail username is the same as my HN username.


LLMs generate tokens. They don’t have judgement. They would need judgement in order to have a basis for pushing back.


Judgement can be expressed in the form of tokens. Yes, LLMs are just token generators. But the tokens they generate express judgement in a way that is indistinguishable from actual judgement.

So who cares? It's like saying nobody drives a car because the thing that's moving them is not really a car, it just acts like a car to an extent that's indistinguishable from one. OK then. It's a car. Good day, sir.


It is awful, but at the same time this isn't new. People have long used Google searches to self-diagnose their issues. ChatGPT just makes that process even easier.

From my viewpoint it speaks more to a problem with the healthcare system.


I agree with everything you said, but ChatGPT does have an insidious way of confirming your biases. It kind of senses what you want and runs with it. For truly unbiased responses you literally have to hide your intention from ChatGPT. But even so, ChatGPT can often still sense your intent and give you a biased answer.


There is something recently (for a few months?) that has made AI so extremely sycophantic that it actually drives me crazy.

"You are completely right..." [insert some emoji]

Shaking my head.

It would be an interesting experiment to see models which aren't sycophantic being used.

On that note, what is the least sycophantic LLM, if I may ask?


Even before Google, people got books to self-diagnose problems.


Completely agree. Let's trust the experts we've relied on for years: put your symptoms into WebMD. Now please excuse me; according to WebMD, apparently this stubbed toe means I have cancer, so I have to get that treated.


The model's default mode is to be helpful and agreeable, which is exactly the wrong dynamic when someone is dealing with mental health issues or looking for a diagnosis.


You’re absolutely right! Just given that behavior, I’d say there’s a pretty good chance that there is some real mental illness there.


> I have seen, more than once, people using ChatGPT as a psychologist and psychiatrist to self-diagnose mental illnesses. It would be hilarious if it were not so disastrous.

What is the disaster?


"People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on."

That wasn't too buried IMHO


> "People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on."

> That wasn't too buried IMHO

I read that, but still fail to see evidence of a concrete "disaster". For example, are we seeing a huge wave of suicides that are statistical outliers triggered by using chatbots? Or maybe the worry (unsubstantiated) is that there's a lagging effect and the disaster is approaching.

I suspect the outcomes are going to be less catastrophic; specifically, it seems to me that more people will have access to a much better level of support than they could get with a therapist (who is not available 24/7, who has less to draw upon, is likely too expensive, etc.). Even if there's an increase in some harm, my instinct from first principles is that the net benefit will be clear.


This isn’t accurate, though; the models do push back and do question.

I would say ChatGPT is way, way better than the average therapist. I’ve seen maybe 15-20 psychotherapists over the course of my life, and I would wager ChatGPT is better than 85% of them.

I’ve had a therapist recommend me take a motorbike trek across Europe (because that’s what he did) to cure my depression.

I think people tend to radically overestimate the skills of the average therapist, many are totally fucking shit.


> I’ve had a therapist recommend me take a motorbike trek across Europe (because that’s what he did) to cure my depression.

When I try to help people in a support group or therapy group, although I am not a therapist, I also try to explain how I fixed my issues, or how I go about it.

To be honest, I feel like that isn't a bad take, in the sense that there are times when we lose sight of the fact that our lives have purpose, and your therapist grappled with that by setting a single, concrete goal for himself.

If you felt like that was a bad idea, just ask him what it was he got out of the motorbike trek that cured his depression; I'd be more curious about that. Then maybe you could see whether your life has or had the same problems, what the common theme is, and discuss it later.

Personally, I feel like ChatGPT is extremely sycophantic, and I feel really weird interacting with it. For one, I really like interacting with humans, at least in a therapy mindset, I suppose.


But telling someone living in New York with a full-time job and a girlfriend, etc., that the solution to their depression is to quit their job and take several months to travel across Europe on a motorbike isn't exactly practical advice. Which was my point.

I could get better advice by asking an LLM what to do about my depression.


Ah, context matters. For someone currently single, let's say, with no job and feeling a lack of purpose, it wouldn't have been so bad. Maybe I was projecting my own context onto it; and I mean, it's understandable why you might feel this way about him.

I still have doubts about asking an LLM about depression, if I am being honest. I just don't think it's the best thing overall, or something that should become the norm, I suppose; but I am not sure, since even in things like this, context matters.


> I’ve had a therapist recommend me take a motorbike trek across Europe (because that’s what he did) to cure my depression.

It's not bad advice.


It can help and it can also hurt; it depends. You have some mild situation that is quite common? There's a good chance it will give you good advice.

Something more complex, leaning more towards clinical psychiatry than a shoulder to cry on or complaining to a friend over coffee (psychologists)? Then you are playing Russian roulette on whether the model will hallucinate something harmful in your case, all while acting far more confident than any relevant doctor would be.

There have been cases of models suggesting suicide, for example, or otherwise encouraging harmful behavior. There is no responsibility, no oversight, no expert to confirm or refute claims. It's just a faster version of Reddit threads.


Yes! They are extremely susceptible to user influence, based on what I have read and experienced in personal use. If your initial message contains a seed, e.g., "Should I break up with my boyfriend? He doesn't listen to me," the LLM will stick with that seed of negativity rather than making an objective analysis of the situation or taking the initiative to challenge your (or its) original perspective.



