
[flagged]


Can we have a line in the HN guidelines saying "telling people you asked ChatGPT about <topic> doesn't make for interesting discussion; please refrain from doing so"?

Especially in this case, where you don't even say what you found out, only that you asked ChatGPT. Good for you.


It's the new LMGTFY


But posing as helpful instead of sarcastic, yet equally annoying.


Why feed yourself potential misinformation when you don't know its true source and might never double-check it?


> Why feed yourself potential misinformation when you don't know its true source and might never double-check it?

Because I enjoy reading HN?


When you run out of HN, do you hit up ChatGPT?


I ask ChatGPT to write me a comment thread in the style of Hacker News.


Oh, that was just too easy... :)


As opposed to what? Unless you're actually going to read individual papers, vet the authors, have the baseline statistical knowledge to understand the results, etc., you're always going to be susceptible to potential misinformation. This HN topic is a good example of the difficulty of verifying information.

Who's to say that ChatGPT doesn't do a better job of filtering it out?


Yes, I would start with Google Scholar, or at least Wikipedia, because it tries to provide sources for each statement it makes.

> Who’s to say that ChatGPT doesn’t do a better job of filtering it out?

No one, because no one knows which sources ChatGPT is blending together in its sentences.


> I would start with Google Scholar

You should actually vet everything you read through Google Scholar too. There is a pervasive belief in consensus in academic science, but unfortunately individually verifying information is the only thing we know actually works, not political consensus.


[flagged]


I am not trolling. I don't recommend trying to engage someone in good-faith dialogue by prefacing it with your belief that they are trolling.

If you're basing your trust in ChatGPT on the claim that it is trained on Wikipedia, you might as well read Wikipedia instead, because there you also see the sources for each claim, or the fact that certain claims are unsourced. ChatGPT will not tell you whether a given claim is controversial, nor give you further sources to read if you want to know the background of a claim.


I don't have trust for things; I build trust with other humans when it's earned, and potentially with dogs. Beyond that, there's no objective or reasoned basis for trust.

Digression aside, I would impress upon you that I haven't claimed, nor do I believe, what you're claiming I have. I am simply providing you with a slice of the larger original academic paper, which offers rigorous, peer-reviewed documentation of precisely what GPT is and what it's trained on.

From this paper you'll perhaps gather that the folks working on this problem are well aware of the extremely well-known concept of "open source knowledge" and were deliberate in how they pruned the data to their needs.

I believe you'll perhaps further gather that GPT isn't doing anything more than giving you T9-style predictive autotext for the entirety of that dataset; meaning you can try to coax it into saying anything you want, but if the probabilities aren't right, or the predictive potential of a given token "coming up next" isn't there, that's just how it goes.
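A minimal sketch of that predictive-autotext idea, in Python. The vocabulary and the probabilities here are invented for illustration; this is not OpenAI's model or API, just weighted next-token sampling:

    import random

    def sample_next_token(probs):
        # Weighted random pick over the model's predicted distribution.
        tokens = list(probs.keys())
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Hypothetical probabilities a model might assign after "The cat sat on the".
    next_probs = {"mat": 0.62, "floor": 0.21, "sofa": 0.09, "moon": 0.08}

    print("The cat sat on the", sample_next_token(next_probs))

If a continuation has near-zero probability under the model, you can prompt all you like; it simply won't come up often.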

It's data. It's not quite the Tower of Babel, but we'll get there soon enough. Kind of like how all the fancy 3D video game rendering software that looks incredible these days is still just manipulating tuples and vectors with matrix calculations: math you could do on paper, though why would you grind through it by hand to describe a picture when you could just draw it?

ChatGPT is just drawing the pictures (in this poorly chosen analogy) and giving us the cool graphics. The math is all pretty benign and grounded in the fundamentals of neural networks; it doesn't even reach the more fanciful computer-vision and deeply trained NLP work.
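For the rendering aside, a back-of-the-envelope Python sketch of the kind of matrix arithmetic meant here: rotating one 2D vertex by hand. The function name and numbers are illustrative, not taken from any real engine:

    import math

    def rotate2d(vertex, angle):
        # Multiply a 2x2 rotation matrix by the (x, y) column vector.
        c, s = math.cos(angle), math.sin(angle)
        x, y = vertex
        return (c * x - s * y, s * x + c * y)

    print(rotate2d((1.0, 0.0), math.pi / 2))  # -> roughly (0.0, 1.0)

A renderer does this (in 3D, or 4D homogeneous coordinates) millions of times per frame; the point is that it's plain arithmetic, just at scale.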

Info-scientists: I wonder what the "rainbow table" of all language, ideas, etc. relevant to a large language model would look like... I wonder because I lack the knowledge and field expertise.


Because ChatGPT assembles letters into words and sentences without attribution and so is untrustworthy.



