"Risk of internalizing spurious explanations" is an excellent way of putting it. LLM output is, essentially, a polished-looking, authoritative-sounding summary of what the top few Google results probably say about a topic. Nine times out of ten, the explanation may be spot on. But "the first few google results" are not, in general, a reliable source. And after getting nine correct answers in a row from the LLM, it's unfortunately very tempting to accept the tenth at face value without consulting any primary sources.

I've been finding that ChatGPT is helpful when taking a "first dive" into an unfamiliar topic. But after studying the topic at greater depth through primary sources, I'll start to see, in the ChatGPT answers, many subtle errors, oversimplifications, and claims stated as fact that are actually controversial among experts. Overall, I'd say ChatGPT can provide a good approximation of the truth, which can speed up research by giving instant context. But it should not, by any means, be the final destination when researching a topic.
