
There is a lot of research in psychology on this. ChatGPT will happily refer you to various books written over the decades on how the strong hierarchical structure of medicine makes it particularly prone to cognitive dissonance, with lots of concrete examples of doctors behaving that way.


I asked GPT-5 Mini and it said, in part, "some studies find clinicians update more when evidence is clear and actionable (better calibration), others show clinicians exhibit the same biases as laypeople (e.g., positivity bias, anchoring). Results depend on task design, sample, and how “evidence” is presented. (...) Experimental psychology and related fields show measurable differences in how some doctors update beliefs—many update appropriately to clear, high‑quality evidence and some update better than average—but there is no simple universal claim that all doctors are more willing than the average person to change their minds. The evidence supports a nuanced conclusion: clinicians can be more evidence‑responsive in domain‑relevant tasks, but updating varies widely and is strongly shaped by task framing, institutional context, performance level, and incentives."

But that's because GPT-5 Mini's responses are strongly shaped by task framing and context. You can get it to say just about anything as long as you stay away from taboo areas.

It did refer me to a lot of books, and the ones I looked up did in fact exist, but none of them seemed relevant.


Huh. Some of the books will be ones about doctors, like https://www.amazon.com/Doctors-Think-Jerome-Groopman-2007-03..., which you have to read to see how they talk about cognitive dissonance. Others will be about cognitive dissonance in general, like https://www.amazon.com/Mistakes-Were-Made-but-Third/dp/03583..., which you have to read to see how they use the medical profession as their starring examples.

Sorry for not having left you an easier breadcrumb.


No worries. I was sort of poking fun at your bringing up ChatGPT as a source (especially in a discussion about epistemic hygiene), but I probably shouldn't have done that.



