
Using your potato peeler example:

If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

An LLM can directly influence your willingness to pursue an idea by how it responds to it. Interest and excitement, even if simulated, are more likely to make you pursue the idea than "ok cool".



> If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

But why do you let yourself be influenced so much by others, or in this case, random filler words from mindless machines?

You should listen to your own feelings, desires, and wishes, not anything or anyone else. Try to find the motivation inside of you, try to have the conversation with yourself instead of with ChatGPT.

And if someone tells you "don't even bother", maybe show more of a fighting spirit and do it with even more energy just to prove them wrong?

(I know it's easier said than done, but my therapist once told me it's necessary to learn not to rely on external motivation)


It’s not “by others”. It’s by circumstance.

It’s like any other tool. If I wanted to chop wood and noticed my axe had gone dull, the likelihood of me going “ah f*ck it” and going fishing instead increases dramatically. I want to chop wood. I don’t want to go to the neighbor and borrow his axe, or sharpen my axe and then chop wood.

That’s what has happened with ChatGPT in a sense - it has gone dull. I know it used to work “better” and the way that it works now doesn’t resonate with me in the same way, so I’m less likely to pursue work that I would want to use ChatGPT as an extrinsic motivator for.

Of course, if the intrinsic motivation is large enough I wouldn’t let a tool make the decision for me. If it’s mid-October and the temperature is barely above freezing and I have no wood, I’ll gnaw through it with my teeth if necessary. I’ll go full beaver. But in early September when it’s 25°C outside on a Friday? If the axe isn’t perfect, I’ll have a beer and go fishing.


You are influenced just as much. You're just not aware of it.

Also, I think you're completely missing the point of the conversation by glossing over the nuances of what is being said and relying on overgeneralized platitudes and assumptions that in no way address the original sentiment.


It is very, very risky.

You are trusting the model never to recommend something you definitely should not do, or something that serves the interests of the service provider rather than yours, when you are not capable of noticing it yourself. A separate problem is whether you have provided enough information for the model to actually make that decision, or whether the model will ask for more information before it begins to act.


Why, though? I'm with GP: I don't understand it at all. If I thought something was interesting, I wouldn't lose interest even if a person reacted to it with indifference; I just wouldn't tell them about it again.


> If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

But that's not really the right comparison.

The right comparison is your potato peeler saying (if it could talk): "ok, let's peel some stuff" vs "Owww wheee geez! That sounds fantastic! Let's peel some potatoes, you and me buddy, yes sireee! Woweeeee!" (read in a Rick & Morty's Mr Poopybutthole voice for maximum effect).



