You’re correct, you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.

At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. They’re integrated into my workflow. But it takes practice to use them correctly.


> for some reason HN has an extremely strong anti-AI sentiment

It's because I've used it and it doesn't come even close to delivering the value that its advocates claim it does. Nothing mysterious about it.


I think what it comes down to is that the advocates making false claims are relatively uncommon on HN. So, for example, I don't know what advocates you're talking about here. I know people exist who say they can vibe-code quality applications with 100k LoC, or that guy at Anthropic who claims that software engineering will be a dead profession in the first half of '26, and I know that these people tend to be the loudest on other platforms. I also know sober-minded people exist who say that LLMs save them a few hours here and there per week trawling documentation, writing a 200-line SQL script to seed data into a dev db, or finding some off-by-one error in a haystack. If my main or only exposure to AI discourse were HN, I would really only be familiar with the latter group and I would interpret your comment as very biased against AI.

Alternatively, you are referring to the latter group and, uh, sorry.


The whole point I tried to make when I said “you need to learn how to use it” is that it’s not vibe coding. It has nothing to do with vibes. You need to be specific and methodical to get good results, and use it for appropriate problems.

I think the AI companies have over-promised in terms of “vibe” coding, as you need to be very specific, not at all based on “vibes”.

I’m one of those advocates for AI, but on HN it consistently gets downvoted no matter how I try to explain things. There’s a super strong anti-AI sentiment here.


My suspicion is that it's because they (HN) are very concerned this technology is pushing hard into their domain expertise, and they feel threatened (and rightfully so).


While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.


Or they might know better than you. A painful idea.


Painful? What's painful when someone has a different opinion? I think that is healthy.


There is no scenario where AI is a net benefit. There are three possibilities:

1. AI does things we can already do but cheaper and worse.

This is the current state of affairs. Things are mostly the same except for the flood of slop driving out quality. My life is moderately worse.

2. Total victory of capital over labor.

This is what the proponents are aiming for. It's disastrous for the >99% of the population who will become economically useless. I can't imagine any kind of universal basic income when the masses can instead be conveniently disposed of with automated killer drones or whatever else the victors come up with.

3. Extinction of all biological life.

This is what happens if the proponents succeed better than they anticipated. If recursively self-improving ASI pans out then nobody stands a chance. There are very few goals an ASI can have that aren't better accomplished with everybody dead.


What is the motivation for killing off the population in scenario 2? That's a post-scarcity world where the elites can have everything they want, so what more are they getting out of mass murder? A guilty conscience, potentially for some multiple of human lifespans? Considerably less status and fame?

Even if they want to do it for no reason, they'll still be happier if their friends and family are alive and happy, which recurses about 6 times before everybody on the planet is alive and happy.


It's not a post-scarcity world. There's no obvious upper bound on resources AGI could use, and there's no obvious stopping point where you can call it smart enough. So long as there are other competing elites, the incentive is to keep improving it. All the useless people will be using resources that could be used to make more semiconductors and power plants.


Don’t attribute to malice that which can equally be attributed to incompetence.

I think you’re over-estimating the capabilities of these tech leaders, especially when the whole industry is repeating the same thing. At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics: if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.

If, however, AI ended up delivering and they missed the boat, they’re going to be held accountable.

It’s much less risky to just follow industry trends. It takes a lot of technical knowledge, guts, and confidence in your own judgement to push back against an industry-wide trend at that level.


I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos, but will fail pretty badly when deployed.

If it works 99% of the time, then a demo of 10 runs is 90% likely to succeed. Even if it fails, as long as it's not spectacular, you can just say "yeah, but it's getting better every day!", and "you'll still have the best 10% of your human workers in the loop".

When you go to deploy it, 99% is just not good enough. The actual users will be much more noisy than the demo executives and internal testers.

When you have a call center with 100 people taking 100 calls per day, replacing those 10,000 calls with 99% accurate AI means you have to clean up after 100 bad calls per day. Some percentage of those are going to be really terrible, like the AI did reputational damage or made expensive legally binding promises. Humans will make mistakes, but they aren't going to give away the farm or say that InsuranceCo believes it's cheaper if you die. And your 99% accurate-in-a-lab AI isn't 99% accurate in the field with someone with a heavy accent on a bad connection.
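
To make that arithmetic concrete, a quick sketch (the 99% figure and the 100x100 call volume are just the hypothetical numbers above, not measurements):

    fn main() {
        // At 99% per-call success, a 10-call demo usually goes clean.
        let demo_all_pass = 0.99_f64.powi(10); // ~0.904
        // But at 100 agents x 100 calls/day, that same 1% failure rate
        // is a steady cleanup queue.
        let bad_calls_per_day = 100.0 * 100.0 * (1.0 - 0.99); // ~100
        println!("demo all-pass probability: {demo_all_pass:.3}");
        println!("bad calls per day: {bad_calls_per_day:.0}");
    }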

So I think that the parties all "want to believe", and to an untrained eye, AI seems "good enough" or especially "good enough for the first tier".


Agreed, but 99% is being very generous.


A big task my team did had measured accuracy in the mid-80% range, FWIW.

I think the line of thought in this thread is broadly correct. The most value I’ve seen from AI is in problems where the cost of being wrong is low and it’s easy to verify the output.

I wonder if anyone is taking good measurements on how frequently an LLM is able to do things like route calls in a call center. My personal experience is not good and I would be surprised if they had 90% accuracy.


I think these kinds of problems were already solved using ML and to a pretty high accuracy.

But now everyone is trying to make chatbots do that job and they are awful at it.


And that's for tasks it's actually suited for


>I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos

Sort of a repost on my part, but the LLMs are all really good at marketing and other similar things that fool CEOs and executives. So they think it must be great at everything.

I think that's what is happening here.


> if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.

Understatement of the year. At this point, if AI fails to deliver, the US economy is going to crash. That would not be the case if executives hadn't bought in so hard earlier on.


Race to "Too big to fail" on hype and your losses are socialized


And if it does deliver, everyone's gonna be out of a job and the US economy is also going to crash.

Nice cul-de-sac our techbro leaders have navigated us into.


Yep, either way things are going to suck for ordinary people.

My country has had a bad economy and high unemployment for years, even though the rest of the world is doing mostly OK. I'm scared to think what will happen once the AI bubble either bursts or eats most of the white-collar jobs left here.


There’s also a case that without the AI rush, the US economy would look even weaker now.


> Don’t attribute to malice that which can equally be attributed to incompetence.

At this point I think it might actually be both rather than just one or the other.


> Don’t attribute to malice that which can equally be attributed to incompetence.

This discourse needs to die. Incompetence + lack of empathy is malice. Even competence in the scenario they want to create is malice. It's time to stop sugar-coating it.


I keep fighting this stupid platitude [0]. By that logic, I fail to find anything malicious. Everything could be explained by incompetence, stupidity, etc.

[0] https://news.ycombinator.com/item?id=46147328


“Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally.” - Keynes.

Convention here is that AI is the next sliced bread. And big-tech managers care about their reputation.


It's pretty pathetic that they can build a brand based on "doing the exact same thing everyone else is doing" though


> At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics

Isn't that the whole mythos of these corporate leaders though? They are the ones with the vision and guts to go against the grain and stand out from the crowd?

I mean it's obviously bullshit, but you would think at least a couple of them actually would do something to distinguish themselves. They all want to be Steve Jobs but none of them have the guts to even try to be visionary. It is honestly pathetic


What you have is a lot of middle managers imposing change with random fresh ideas. The ones that succeed rise up the ranks. The ones that fail are forgotten, leading to survivorship bias.


Ultimately it's a distinction without a difference. Maliciously stupid or stupidly malicious invariably leads to the same place.

The discussion we should be having is how we can come together to remove people from power and minimize the influence they have on society.

We don't have the carbon budget to let billionaires who conspire from island fortresses in Hawaii do this kind of reckless stuff.

It's so dismaying to see these industries muster the capital and political resources to make these kinds of infrastructure projects a reality when they've done nothing comparable w.r.t. climate change.

It tells me that the issue around the climate has always been a lack of will, not ability.


It's mass delusion


That is, until you only allow approved vendors (Microsoft, Cloudflare, etc) to provide these types of services. It’s very easy to pass laws like that, and it seems like centralization is the direction everything is headed.


So if you could get Google/Apple/MS on board, then you could embed controls on most people's endpoints, and that'd actually work better than trying to put the burden on websites or controlling the network. The trick is that those are all US corporations, who may or may not want to be responsible for that level of control.

While we still have alternate operating systems, that won't be a universal control of course. You'd have to stop people owning general purpose computing devices for that to be fully effective.


> You'd have to stop people owning general purpose computing devices for that to be fully effective.

That's been the corporate and probably governmental wet dream since the iPhone was released. I think the only thing keeping the x86_64 scene from doing the same thing is legacy software support, and open alternatives existing. If Microsoft could've viably banned getting software from anywhere outside their store, they would have.

I would argue that, with all the computers they sold in "S mode" a few years ago, they earnestly tried it in the home market.


SOC2 is mainly about checking boxes, and it forces you to think about a few things. There’s no real / actual audit, and in my experience the pen tests are very much a money grab. You’re paying way too much money for some automated “pentesting” suite to run.

The auditors themselves pretty much only care that you answered all questions, they don’t really care what the answers are and absolutely aren’t going to dig any deeper.

(I’m responsible for the SOC2 audits at our firm)


When I worked for a consulting firm some years back I randomly got put on a project that dealt with payment information. I had never had to deal with payment information before so I was a bit nervous about being compliant. I was pointed to SOC2 compliance which sounded scary. Much to my relief (and surprise), the SOC2 questionnaire was literally just what amounted to a survey monkey form. I answered as truthfully as I could and at the end it just said "congrats you're compliant!" or something to that effect.

I asked my manager if that's all that was required and he said yes, just make sure you do it again next year. I spent the rest of my time worrying that we missed something. I genuinely didn't believe him until your comment.

Edit: missing sentence.


Once this type of issue gets publicized, does that in any way affect the certification?


Sometimes scandals affect these things. But it's hard to predict.


It’s pretty crazy that Amazon’s $8B investment didn’t even get them a board seat. It’s basically a lot of cloud credits though. I bet both Google and Amazon invested in Anthropic at least partially to stress test and harden their own AI / GPU offerings. They now have a good showcase.


Yeah. I bet there’s a win-win in the details, where it gets to sound like a lot of investment so both parties look good, but there wasn’t actually much real risk.

Like if I offered you $8 billion in soft serve ice cream so long as you keep bringing birthday parties to my bowling alley. The moment the music stops and the parents want their children back, it’s not like I’m out $8 billion.


Echoes of Enron-style accounting "tricks"...


Why does everybody keep insisting on this “Enron accounting” stuff? LLM companies need shitloads of compute for a specialized use case. Cloud vendor wants to become a big player in selling compute for that specialized use case, and has compute available.

Cloud provider gives credit to LLM provider in exchange for a part of the company.

These are really normal business deals.


Amazon gave away datacenter time share in exchange for stock in a startup. That has nothing to do with electricity futures and private credit revolvers.


They said echoes of the tricks.


I've read thousands of pages of Enron history, this has nothing to do with it.

This is my thought too. They de-risked choosing AWS as a platform for other AI startups. If the hype continues, AWS will get their 30% margin on something growing like a rocket emoji; if it doesn't, at least they didn't miss the boat.


And that’s why you should always edit your original prompt to explicitly address the mistake, rather than replying to correct it.


I think you’re severely underestimating the complexity of http/1.1. It’s definitely much simpler than http/2, but it’s a lot of code that needs to be maintained.


To write the code from scratch, sure.

But I'm thinking a few lines of nginx config to proxy http 1.1 to 2


Nginx can't use HTTP/2 upstreams; some other reverse proxies can, though.
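
Caddy is one that can; a minimal Caddyfile sketch (hostname and backend port are placeholders), where clients speak HTTP/1.1 (or h2) and the upstream is cleartext HTTP/2:

    example.com {
        # Clients connect with HTTP/1.1 or HTTP/2; the h2c:// scheme
        # tells Caddy to speak cleartext HTTP/2 to the backend.
        reverse_proxy h2c://localhost:8080
    }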


Yes; the web server I use for my site is about twice the size of that blog post. Though I think that if you drop the file-listing functionality you may be able to get it closer.


Xerox was also famously early with a lot of things but failed to create proper products out of them.

Google falls somewhere in the middle. They have great R&D but just can’t make products. It took OpenAI to show them how to do it, and they managed to catch up fast.


"They have great R&D but just can’t make products"

Is this just something you repeat without thinking? It seems to be a popular sentiment here on Hacker News, but it really makes no sense if you think about it.

Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...

So many widely adopted products. How many other companies can say the same?

What am I missing?


I don't think Google is bad at building products. They definitely are excellent at scaling products.

But I reckon part of the sentiment stems from many of the more famous Google products being acquisitions originally (Android, YouTube, Maps, Docs, Sheets, DeepMind) or originally built by individual contributors internally (Gmail).

Then there were also several times where Google came out with multiple different products with similar names replacing each other. Like when they had I don't know how many variants of chat and meeting apps replacing each other in a short period of time. And now the same thing with all the different confusing Gemini offerings. Which leads to the impression that they don't know what they are doing product-wise.


Starting with an acquisition is a cheap way of accelerating once your company reaches a certain size.

Look at Microsoft: PowerPoint was an acquisition. They bought most of the team that designed and built Windows NT from DEC. FrontPage was an acquisition. Azure came after AWS and was led by a series of people brought in via acquisitions (Ray Ozzie, Mark Russinovich, etc.). It's how things happen when you're that big.


I think it's a little unfair to give DEC credit for NT. Sure, they may have bought the team, but they did most (all?) of the work on NT at Microsoft.

That's not like Google buying Android when they already had a functioning (albeit not at all polished) smartphone OS.


Why wouldn't you count things initially made by individual contributors at Google?


Because those were "free time" projects. It wasn't directed by the company; somebody at the company, using their flex time, just thought it was a good idea and did it. Googlers don't get this benefit any more for some reason.


Because they're not a good measure of the company's ability to develop products based on the direction from leadership.


Leadership's direction at the time was to spend 20% of your time on unstructured exploration and cool ideas like that, though the other poster makes a good point that this is no longer policy.

Those are all free products, and some of them are pretty good. But free is the best business strategy to get a product to the top of the market. Are others better? Are you willing to spend money to find out? Clearly, most people are not interested. The fact that they can destroy the market for many different types of software by giving it away and still stay profitable is amazing. But that's all they are doing. If they started charging for everything, there would be better competition and innovation. You could move a whole lot of okay-but-not-great cars, topping every market segment you want, if you gave them away for free. Only enthusiasts would remain to pay for slightly more interesting and specific features. Literally no business model can survive when its primary product is competing with good-enough free products.


They come up with tons and tons of products like Google Glass and Google+ and so on and immediately abandon them. It is easy to see that there is no real vision. They make money off AdSense and their cloud services. That's about it.


Google does abandon a lot of stuff, but their core technologies usually make their way into other, more profitable things (collaborative editing from Wave into Docs; loads of stuff from Google+; tagging and categorizing in Photos from Picasa (I'm guessing); etc)


It annoyed me recently that they dropped support for some Nest/Google Home thermostats. Of course, they politely offered to let me buy a replacement for $150.


> Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...

Many of those are acquisitions. In-house developed ones tend to be the most marginal on that list, and many of their most visibly high-effort in-house products have been dramatic failures (e.g. Google+, Glass, Fiber).


I was extremely surprised that Google+ didn't catch on. The week before Google+ launched, me and all my friends agreed that Facebook was toast: Google would do the same thing but better, and everyone already had a Gmail account, so there would be basically zero barrier to entry. Obviously, we were wrong; Google+ managed to snatch defeat from the jaws of victory, never got significant traction, and Facebook managed to keep growing, and now they're yet another Big Evil Tech Corporation.

Honestly, I still don't really know how Google managed to mess that up.


I got early access to Google+ because of where I worked at the time. The invite-only thing had worked great for GMail but unfortunately a social network is useless if no-one else is on it. Then the real names thing and the resulting drumbeat of horror stories like "Google doxxed me to my violent ex-husband" killed what little momentum they had stone dead. I still don't know why they went so hard on that, honestly.


I think the sentiment is usually paired with discussion about those products as long-lasting, revenue-generating things. Many of those ended up feeding back into Search and Ads. As an exercise, out of the list you described, how many of those are meaningfully-revenue-generating, without ads?

A phrasing I've heard is "Google regularly kills billion-dollar businesses because that doesn't move the needle compared to an extra 1% of revenue on ads."

And, to be super pedantic about it, Android and YouTube were not products that Google built but acquired.


They bought YouTube, but you have to give Google a hell of a lot of credit for turning it into what it is today. Taking ownership of YouTube at the time was seen by many as taking ownership of an endless string of copyright lawsuits that would sue them into oblivion.


YouTube maintains a campus independent of the Google/Alphabet mothership; I'm curious how much direction they get, as (outwardly, at least) they appear to run semi-autonomously.


Before Google touched Android it was a cool concept but not what we think of today. Apparently it didn't even run on Linux. That concept came after the acquisition.


That is because the DoubleClick parasite has long infected the host.


Notably, all of them other than Gemini are from a decade or more ago. They used to know how to make products, but then they apparently took an arrow in the knee.


Didn't they buy lots of those, actually?


And to my point, it took Apple to make the iPhone for Google to make Android.

It took OpenAI for Google to finally understand how to make a product out of their years, if not decades, of AI research.

YouTube and Maps are both acquisitions indeed.


Search was the only mostly original product. With the exception of YouTube (a purchase), Android, and ChromeOS, all the other products were initially clones.


Google had less incentive. Their incentive was to keep their AI bottled up and brewing as long as possible so that their existing moats in Search and YouTube could extend into other areas. With OpenAI, they are forced to compete or perish.

Even with Gemini in the lead, it's only until they extinguish ChatGPT or make it unviable as a business for OpenAI. OpenAI may lose the talent war and cease to be the leader in this domain against Google (or Facebook), but in the longer term their incentive to break fresh ground aligns with average user requirements. With Chinese AI just behind, maybe Google/Microsoft have no choice either.


Google was especially well positioned to catch up because they have a lot of the hardware and expertise and they have a captive audience in gsuite and at google.com.


I still have PTSD from how hard Watson was being pushed to C-levels by external consultants, despite it being absolutely useless and incredibly expensive. A/B testing? Watson. Search engine? Watson. Analytics? Watson. No code? Watson.

I spent days, even weeks, arguing against it, and ended up having to dedicate resources that could have been used elsewhere to building a PoC just to show it didn’t work.


It's like poetry, it rhymes


This is going on all over again.


Agentic AI really is changing things. I've had a complete change of heart about it. It's good enough now to boost productivity MASSIVELY for devs.


I think this is one of those things that can be situationally useful, but also come with huge risks to the majority of users.


So UdpSocket should really be called DatagramSocket, UDP being the protocol that operates on these datagrams?

Surprising that they got such a fundamental thing wrong.


That happens when someone's learning project ("I'll rewrite a library in the new language I want to learn") ends up in production code.


This is in the standard library; it's not a learning project. And it also isn't even incorrect - see erk__'s comment.

Rust is an excellent language and fully capable of production use.


It's not, it's the `socket2` library. The standard sockets don't allow (ab)using actual `UdpSocket`s as a different kind of datagram socket.
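
For instance, socket2 can open a datagram socket that isn't UDP at all. A minimal sketch (Unix-only; the socket path is made up):

    use socket2::{Domain, SockAddr, Socket, Type};

    fn main() -> std::io::Result<()> {
        // Datagram semantics without UDP: a Unix-domain DGRAM socket.
        let sock = Socket::new(Domain::UNIX, Type::DGRAM, None)?;
        sock.bind(&SockAddr::unix("/tmp/demo.sock")?)?;
        Ok(())
    }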


Sure it does.

    // Open a plain file and take ownership of its file descriptor...
    let f = std::fs::File::open("/dev/null").unwrap();
    let f: std::os::fd::OwnedFd = f.into();
    // ...then reinterpret that descriptor as a UdpSocket.
    let socket: std::net::UdpSocket = f.into();
This is really no different. In this example it's not even a socket.



Yes, I know, but the point is that the standard UdpSocket is correctly named, as it doesn’t represent any other kind of datagram socket. Uh, we’re probably in agreement here, actually.


Yeah exactly! :-D

