
> Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources.

Isn’t summarization precisely one of the biggest values people are getting from AI models?

What prevents one from mitigating hallucination problems with editors, as I mentioned? Can you think of other ways this might be mitigated?

> You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.

How is this different from Wikipedia already? It seems that if the frequency of additions and changes is really a problem, you can slow it down. Wikipedia doesn't just automatically let every edit stand without bots and humans reviewing changes.



> Isn’t summarization precisely one of the biggest values people are getting from AI models?

If I want an AI summary of a Wikipedia article, I can just ask an AI and cut out the middle-man.

Not only that, once I've asked the AI to do so, I can do things like ask follow-up questions or ask it to expand on a particular detail. That's something you can't do with the copy-pasted output of an AI.


The good news is that you don’t have to use it. I see ways this idea can be improved, some of which I already mentioned in this thread. It just launched recently, so judging it solely by what it is today misses the point.


It’s just a different class of problem.

Human editors making mistakes is a more tractable problem than an LLM making a literally random guess (what’s the temperature for these articles?) at what to include.


I recall a similar argument being made about why encyclopedias written by paid academics and experts were better than some randos editing Wikipedia. They’re probably still right about that, but Wikipedia won for reasons beyond purely being another encyclopedia. And it didn’t turn out too badly as an encyclopedia either.


Yeah, but that act of "winning" was only possible because Wikipedia raised its own standards considerably and reined in the randos: by insisting on citing reliable sources, prohibiting original research, and setting up a whole system of moderators and governance to determine what even counts as a "reliable source", etc.


> Isn’t summarization precisely one of the biggest values people are getting from AI models?

I would say it’s one of the biggest illusory values people only think they are getting. An incorrect summary is worse than useless, and LLMs are very bad at ‘summarising’.



