
> I don't see how LLMs can do that.

If it's in the training data, then it should be able to do that. That is to say, a comment's points matter, and so do the subreddit it's on, who said it, and how well the rest of their comments do and where they were posted. The LLM could annotate the unredacted reddit dataset with metadata rating each comment on the words used, the accuracy of the information, the sarcasm quotient, the hilarity quotient, and how condescending it is; an LLM could generate all of that metadata and feed it back into itself to get better and better. A rough sketch of what that annotation loop could look like is below.
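Something like the following, as a minimal sketch only: the function names, rating axes, and weighting formula here are hypothetical illustrations, and `llm` stands in for whatever model call you'd actually use. The point is just that community metadata (points, subreddit, author history) and LLM-generated quality scores can sit alongside each comment and weight it as training data.

    import json
    from dataclasses import dataclass

    @dataclass
    class RedditComment:
        text: str
        points: int          # score the comment received
        subreddit: str       # where it was posted
        author_karma: int    # rough proxy for how the author's other comments do

    # Hypothetical rating axes the annotating LLM would score, 0.0 to 1.0 each.
    RATING_AXES = ["accuracy", "sarcasm", "hilarity", "condescension"]

    def score_comment(comment: RedditComment, llm) -> dict:
        """Ask an LLM (any callable that takes a prompt and returns text)
        to rate the comment on each axis and return the scores as a dict."""
        prompt = (
            "Rate the following reddit comment on these axes, each 0.0-1.0, "
            f"and reply as JSON with keys {RATING_AXES}:\n\n{comment.text}"
        )
        raw = llm(prompt)
        return json.loads(raw)   # assumes the model complied with the JSON format

    def training_weight(comment: RedditComment, scores: dict) -> float:
        """Combine community signal and LLM annotations into one sampling weight."""
        community = max(comment.points, 0) / 100 + comment.author_karma / 10_000
        quality = scores["accuracy"] - scores["condescension"]
        return max(community + quality, 0.0)

The annotated dataset (comment text plus scores plus weight) is then what gets fed back in, rather than the raw dump.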



