
A climate scientist I follow uses Perplexity AI in some of his YouTube videos. He said once that he uses it for formatting, graphs, and synopses, but knows enough about what he's asking to judge whether the output is correct.

An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.





This is the root of the problem with LLMs.

At best, they can attempt to recall sections of scraped information, which may happen to answer a question. That's no different from searching the web yourself, except that when you search yourself you immediately know the source and how much to trust it. I've found LLMs tend to invent sources when queried (although that seems to be getting better), so it's often slower than searching for information I already know exists.

If you have to be more of an expert than the LLM to verify the output, checking it takes more careful attention than going back to the original source. Useful, but the output is always written in a style that differs from previous models/conversations and from your own writing.

LLMs can be used to suggest ideas and summarize sources, if you can verify and mediate the result. They can serve as a potential source of information (and the more independent data that agrees, the better). However, they cannot reliably infer new information, so the best they can do there is guess. It would be useful if they could provide a confidence indicator in all scenarios.
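
For a rough sense of what such an indicator could look like today, some APIs already expose per-token log probabilities. A minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; this measures how consistently "sure" the model was of each token, which is a crude proxy for certainty, not for factual accuracy:

    # Crude confidence proxy from per-token log probabilities.
    # Assumes the OpenAI Python SDK (v1.x) with OPENAI_API_KEY set;
    # the model name and question are illustrative only.
    import math
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "Who proposed plate tectonics?"}],
        logprobs=True,        # request per-token log probabilities
    )

    tokens = resp.choices[0].logprobs.content
    # Geometric-mean token probability: values near 1.0 mean the model was
    # consistently confident in each token, which is not the same as being correct.
    confidence = math.exp(sum(t.logprob for t in tokens) / len(tokens))

    print(resp.choices[0].message.content)
    print(f"rough confidence proxy: {confidence:.2f}")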



