
I don't agree with everything, but I certainly relate to the comment.

> That's the tell-tale sign of LLM content: zero substance and boring prose.

This is absolutely true. With the right constraints you can get some good answers (and many more parlor tricks, as mentioned). Left to their own devices, though, as when answering vague questions or producing longer answers, LLMs, including GPT-4, generate empty, substanceless drivel. That doesn't make them worthless, but it definitely makes them overhyped.
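
For concreteness, here's a minimal sketch of the kind of constraint I mean, using the OpenAI Python SDK (the model name and prompt wording are just illustrative, not a recipe):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A vague prompt invites filler; pinning down format, scope, and
    # audience usually gets a tighter answer.
    constrained = (
        "In at most 5 bullet points, compare PostgreSQL and SQLite "
        "for a single-user desktop app. Concrete trade-offs only, "
        "no introductions or summaries."
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": constrained}],
    )
    print(resp.choices[0].message.content)

The same question asked as "tell me about databases" tends to come back as exactly the substanceless drivel described above.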



> On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.


Though with LLMs we now have fun examples of putting in the right questions and getting the wrong answers out, too. That's a neat evolution!


How could vague questions be rewarded with any substance? I'm sure that could be baked in somehow, but one still has to pick a type of substance; otherwise it's too broad to make any sense. But substance really isn't their goal: tools don't provide any of that, and LLMs are nothing but a tool. There is a lot of hype, I agree, but that doesn't mean LLMs aren't useful and that it's all hype.


So most university papers?



