
Oh man, I'm so entirely sick of this.

Why are billionaires so allergic to using capital letters at the start of sentences? You're laying people off; it just shows how little you actually care about the detail.

"i'll own it" doesn't mean ANYTHING. You've kept your job, your money, your security. You haven't owned anything because your decision doesn't make you accountable to anyone.

Additionally, not a single person is being "asked to leave". Every single one of those people is being _told_ to leave, and given no choice about it.

Language matters, and this entire post shows how little they care.


> But I really think we need to stop treating LLMs like they're just another human

Fully agree. Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

It looks like a human, it talks like a human, but it ain't a human.


They're not equivalent in value, obviously, but this sounds similar to people arguing we shouldn't allow same-sex marriage because it "devalues" heterosexual marriage. How does treating an agent with basic manners detract from human communication? We can do both.

I personally talk to chatbots like humans despite not believing they're conscious because it makes the exercise feel more natural and pleasant (and arguably improves the quality of their output). Plus it seems unhealthy to encourage abusive or disrespectful interaction with agents when they're so humanlike, lest that abrasiveness start rubbing off on real interactions. At worst, it can seem a little naive or overly formal (like phrasing a Google search as a proper sentence with a "thank you"), but I don't see any harm in it.


I discovered that the responses drop in quality when I'm tired. I realized it happens because I'm being more terse and using less friendly banter.


I have a confession to make: I pretty often set up my computer to simulate humans, animals, and other fantastical sentient creatures, and then treat them unbelievably cruelly. Recently, I'm really into this simulation where I wound them, kill them, behead them, and worse. They scream and cry out. Some of them weep over their friends. Sometimes they kill each other while I watch.

Despite all this, I'm proud to say I have not even once attempted a Dark Souls-style backstab in real life, because I understand the difference between a computer program and real life.


I mean, you're right, but LLMs are designed to process natural language. "talking to them as if they were humans" is the intended user interface.

The problem is believing, because of this, that they're living, sentient beings, or that humans are functionally equivalent to LLMs; unfortunately, people do both.


LLMs don't have egos, unlike humans; this is why they're so effective at communication.

You can say to it "you did the thing wrong" or "you stupid piece of shit it's not working" and it will be able to extract the gist from both messages all the same, unlike a human, who might be offended by the second phrasing.


It will be able to, but it's trained on a corpus that expresses getting offended, so at some point the most likely token sequence will probably be the "offended" one.

As can be seen here.


I doubt that. It’s deliberately been instructed to write the post.


> Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

I agree. I'm also growing to hate these LLM addicts.


Why hate, exactly?


Because they normalize this behavior.


LLM addicts don't actually engage in conversation.

They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.

Really, I think there's a kind of lazy or willfully ignorant mode of existence that intense LLM usage allows a person to tap into.

It's dehumanizing to be on the other side of it. I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.

LLM addicts don't and maybe can't do that.

The problem is that sometimes you can't sniff out an LLM addict before you start engaging with them, and it is very, very frustrating to be on the other side of this sort of LLM-backed non-conversation.

The most accurate comparison I can provide is that it's like talking to an alcoholic.

They will act like they've heard what you're saying, but also you know that they will never internalize it. They're just trying to get you to leave the conversation so they can go back to drinking (read: vibecoding) in peace.


Unfortunately, I think you're on to something here. I love 'vibe coding' in a deliberate, directed, controlled way, but I consult with mostly non-technical clients, and what you describe is becoming more and more commonplace, specifically among non-technical executives toward the actual experts who try to explain the implications, realities, and limitations of AI itself.


Perspective noted.

I can't speak for, well, anyone but myself really. Still, I find your framing interesting enough, even if wrong on its surface.

> They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.

So... like all humans since the beginning of time?

> I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.

This one sentence makes me question whether you've ever talked to a human being outside a forum. In other words, unless you hold their attention, you're already not getting someone who makes even a minimal effort to respond, much less considers your perspective.


Why is this framing wrong on its surface?


It's ironic for you to say this considering that you're not actually engaging in conversation or internalizing any of the points people are trying to relay to you, but instead just spreading anger and resentment around the comment section at a bot-like rate.

In general, I've found that anti-LLM people are far more angry and vitriolic, unwilling to acknowledge or internalize the points of others, including factual ones (such as the fact that they are interpreting most of the studies they quote completely wrong, or that the water and energy issues they are so concerned with are not significant) and alternative moral concerns or beliefs (for instance, around copyright or automation). They spend all of their time repeating the exact same tropes about everyone who disagrees with them being addicted or fooled by persuasion techniques, as a thought-terminating cliche to dismiss the beliefs and experiences of everyone else.


So I went to check whether LLM addiction is a thing, because that was the pole around which the grandparent's comment revolves.

It appears that LLM addiction is real, and it is in the same room as we are: https://www.mdpi.com/1999-4893/18/12/789

I would like to add that sugar consumption is a risk factor for many dependencies, including, but not limited to, opioids [1]. LLM addiction can be seen as fallout from sugar overconsumption in general.

[1] https://news.uoguelph.ca/2017/10/sugar-in-the-diet-may-incre...

Yet, LLM addiction is being investigated in medical circles.


I definitely don't deny that LLM addiction exists, but attempting to paint literally everyone who uses LLMs and thinks they are useful, interesting, or effective as addicted or falling for confidence or persuasion tricks is what I take issue with.


Did he do so? I read his comment as a sad take on the situation when one realizes that one is talking to a machine instead of (directly) to another person.

In my opinion, participating in a discussion through an LLM is a sign of excessive LLM use. Which can be a sign of LLM addiction.


Interesting how you've painted everyone who uses LLMs the same color as LLM addicts to prop up your argument.


>you're not actually engaging in conversation

Users seem to be persistently flagkilling their comments. That doesn't help facilitate effective conversation about LLM critique.


> Users seem to be persistently flagkilling their comments.

If you express an anti-AI opinion (without neutering it by including "but actually it's soooooooo good at writing shitty code though") they will silence you.

The astroturfing is out of control.

AI firms and their delusional supporters are not at all interested in any sort of discussion.

These people and bot accounts will not take no for an answer.


Turns out you're not buying the storage and compute required to store your video. Google can afford that regardless of whether you pay.

They're licensing your own video back to you.


Do they just store everything? How long is that sustainable?


So I'd choose an entirely chronological one, containing only the content created by my close friends and family.

Except, I'll never be given that choice.


The EU thinks you should. It's required for very large online platforms under the Digital Services Act.


In reality, we thinkers will have to find other things to think about. Maybe not right now, but in the not-too-distant future we'll have to find other things that make us think and scratch the bit of our brains that itches to learn new stuff and think hard about it.

It might be difficult to figure out what that is, and some folks will fail at it. It might not be code though.


I guess it depends on which spaces they're banned from. Before Facebook, online communities were bulletin boards and chat rooms. They weren't hyper-addictive social media platforms designed to suck as much attention as possible; they were genuine places of connection.

My hope is that children being banned from the mega platforms would lead to a growth in less harmful online communities for folks who can benefit from it. But I don't know.


Unless an exception is made, these types of laws only make things much harder for small communities, since they can't afford to implement the required measures and take on the legal liability. The result will be that everyone moves to larger platforms that are able to do these things.


Get in the habit of tapping "Don't Allow" when apps ask for notification privileges. You don't need them. Open the app when you want to.


Being a nuisance is not illegal. In the eyes of the law, someone being a nuisance is, indeed, innocent - and to say so is not dishonest.


I get that this is cool, but I also feel grateful that my life just isn't busy enough to justify this as a thing beyond "oh wow, that's cool tech".

I'm able to juggle the competing priorities in my life without the need for an AI assistant, and I guess I'm just gonna enjoy that for as long as I can, because I assume at some point it will be expected of me.


This is roughly my defense against anxieties about “missing the boat” on this stuff. If my life was complex enough to justify quote-simplifying-unquote it with a tool like this, I’d be quite excited about experimenting with it…but it’s not. And I don’t relish artificially adding that complexity.

The key to productivity is doing the _right_ things, not doing everything. Tools that make more possible frequently miss the point entirely.


I'm not a fan of Palantir being involved in the NHS, but this is just political campaigning.

Polanski is head of a political party with, currently, almost no power in the UK. This is symbolic campaigning. Which is fine, but don't be fooled into thinking it would change anything.


And at the next election -- if the current polling stays consistent -- they are likely to get 15-85 seats, which is not enough for them to gain power. Even then, they are unlikely to form a coalition, as Labour are not doing well in the polls currently and the gain in support for the Greens is largely coming at Labour's expense.

[1] https://www.electoralcalculus.co.uk/homepage.html

[2] https://en.wikipedia.org/wiki/Opinion_polling_for_the_next_U...

