You're hypothesizing that it gave him a medically dangerous answer, with the only evidence being that he blamed it. Conveniently, the chat where he claimed it did is unavailable.

Would you at least agree that, if he got an answer like the one ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI?



Do you not understand that ChatGPT gives different answers to different prompts, and sometimes different answers to the same prompt?

You don't know the specifics of questions he asked, and you don't know the answer ChatGPT gave him.


Nor does anyone else. Including, in all likelihood, the guy himself. That's not a basis for a news story.


Precisely. So how can you claim that, because it gave you a specific answer to a specific question, it surely gave him a correct answer and it's his fault, when you don't even know what the hell he asked it?


> You're hypothesizing that it gave him a medically dangerous answer,

No. I'm saying AI is not infallible (regardless of context or field). It may have given him a medically sound answer, a medically dangerous one, or something else altogether, and could have phrased it in any number of ways that may or may not have made sense.

Most importantly, I'm saying that just because it gave YOU an answer YOU understood (regardless of its medical merit), it may not have given HIM that same answer.

> Would you at least agree that, if he got an answer like the one ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI?

If you trust AI without critically reviewing its output, you shoulder some of the blame, yes. But AI giving out bad medical advice is absolutely a problem for OpenAI no matter how you try to spin it.

It's entirely capable of giving a medically sound answer, yes. That doesn't mean it will do so for everyone, every time, even when the same question is asked.
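
For anyone who wants to see this for themselves, here's a minimal sketch of re-asking the same question a few times. It assumes the official openai Python package (v1+) with an API key in the OPENAI_API_KEY environment variable; the model name and prompt are just illustrative:

    # Ask the same question several times; with nonzero temperature the
    # model samples its output tokens, so the wording (and sometimes the
    # substance) of the answer can differ from run to run.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "Is it safe to take ibuprofen with alcohol?"  # hypothetical prompt

    for i in range(3):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,      # stochastic sampling, the API default
        )
        print(f"--- attempt {i + 1} ---")
        print(response.choices[0].message.content)

Even pinning temperature to 0 only makes outputs more repeatable, not guaranteed identical, and it says nothing about whether any given answer is medically correct.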



