Hacker News

Heck, sometimes that answer is even correct!


I was recently forking off a subproject from a Git repo. After spending a lot of time messing around with it and running into a lot of unforeseen trouble, I finally asked ChatGPT how to do it, and of course ChatGPT knew the correct answer all along. I felt like an idiot. Now I always ask ChatGPT first. These LLMs are way smarter than you would think.

GPT4 with the WolframAlpha plugin even gave me enough information to implement a Taylor polynomial approximation of the Gaussian function (don't ask why I needed that), which would otherwise have taken me hours of studying, if I could even solve it at all.
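The comment doesn't say which form of the Gaussian was approximated or where the expansion was centered, so here's a minimal sketch assuming the unnormalized exp(-x^2/2) and a Maclaurin (Taylor-at-0) expansion, both my assumptions:

```python
import math

def gaussian_taylor(x, terms=10):
    """Maclaurin approximation of exp(-x**2 / 2).

    Substituting u = -x**2/2 into the series exp(u) = sum_k u**k / k!
    gives a polynomial in x of degree 2*(terms - 1).
    """
    u = -x * x / 2.0
    return sum(u ** k / math.factorial(k) for k in range(terms))

# The series converges quickly near 0; farther out you need more terms.
print(gaussian_taylor(1.0, 20))  # close to math.exp(-0.5), about 0.6065
```

The function name and term count are illustrative; a real implementation would pick the truncation degree from the error bound on the interval of interest.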

PS: GPT4 somehow knows even things that are really hard to find online. I recently needed the standard error, not of the mean, but of the standard deviation. GPT4 not only understood my vague query but gave me a formula that is really hard to find online even if you already know the keywords. I know it's hard to find, because I went ahead and double-checked ChatGPT's answer via search.
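For reference, one common closed-form approximation for roughly normal data is SE(s) ≈ s / sqrt(2(n-1)); whether that matches what GPT4 returned is my assumption, and the function name below is my own:

```python
import math

def se_of_sd(sample_sd, n):
    # Approximate standard error of the sample standard deviation s,
    # valid for approximately normal data: SE(s) ~= s / sqrt(2 * (n - 1)).
    if n < 2:
        raise ValueError("need at least two observations")
    return sample_sd / math.sqrt(2 * (n - 1))

print(se_of_sd(1.0, 50))  # about 0.101
```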


So you implemented a polynomial approximation for a Gaussian function without understanding what you were doing (implying that if you wanted to do it yourself it would take hours of studying).

Good luck when you need to update or adjust it - this is the equivalent of copying/pasting a function from Stack Overflow.


I double-checked everything, but that's beside the point. I was replying to GGP's insinuation that ChatGPT is unreliable. In my experience, it's more likely to return correct results than the first page of search. Search results often resemble random rambling about tangentially related topics, whereas ChatGPT gets its answer right on the first try. ChatGPT understands me when I have only a vague idea of what I want, whereas search engines tend to fail even when given exact keywords. ChatGPT is also way more likely to do things right than I am, except in my narrow area of expertise.


i use a tool for programming that's based on ChatGPT

i find it most helpful when i am not sure how to phrase a query so that direct search would find something. but i also found that in at least half the cases the answer is incomplete or even wrong.

the last one i remember explained in its text what functions or settings i could use, but the code example it presented did not do what the text suggested. it really drove home the point that these are just haphazardly assembled responses that sometimes get things right by pure chance.

with questions like yours i would be very careful to verify that the solution is actually correct.


> don't ask why I needed that

But now I'm curious!


In the same way you can tell if a search result is "good", you can usually tell if what ChatGPT is telling you is truthful.

And you face the same problem when looking for something in a domain you are not an expert in - no way to tell if a web page is truthful and no way to tell if ChatGPT is right. ChatGPT just lets you make more mistakes more efficiently.

But for those cases where you kind of know the answer, ChatGPT is usually better than search.


Well, you say that flippantly, but if you ask it correctly, in most cases the answer is correct as well. You should obviously double-check the solution, but that applies to anything, be it a Google search or a Wikipedia article.


Think of it like floating point logic.


A broken clock is right twice a day!




