Hacker News | Aerolfos's comments

Apart from being all AI-written:

> Reality: It’s not fear; it’s math. If 30% of the workforce is displaced, and the remaining 70% have to pay for the social safety net (or UBI) required to keep the displaced alive, the math breaks.

The argument being given simply avoids the question of where the economy itself is in all this. The workers pay taxes, the taxes pay for infrastructure, sure. But the workers aren't doing work (it got taken), so they aren't earning money, so what do they pay taxes on?

It's all just efficiency gains and everyone currently employed stays employed? Not a single AI company wants that. Not a single tech company wants that. Not only do they want layoffs, they're already happening. So that's not going to work out.

Which means there's less workers being paid, less taxes, less money to be spent on the economy, which means less money to pay workers, which means... the logical conclusion is "no economy at all". Taxes are the last thing to worry about then.


> Which means there's less workers being paid, less taxes, less money to be spent on the economy, which means less money to pay workers, which means... the logical conclusion is "no economy at all".

Except that's not how the economy works.

Suppose you automate web development. Fewer people get paid for that anymore. Does it increase long-term unemployment? Not really, because it creates surplus. Now everybody else has a little extra money they didn't have to spend on web development, and they'll want to buy something with it, so you get new jobs making whatever it is they want to spend the money on instead.

The only way this actually breaks down is if people stop having anything more they want to buy. But that a) seems pretty unlikely and b) implies that we've now fully automated the production of necessities, because otherwise there would be jobs providing healthcare, growing food, building houses, etc.


> Now everybody else has a little extra money

The flaw is assuming that lower costs “free up” money.

Money isn’t "freed". Money is created. Banks create it when they lend against future income. If automation removes wage income, banks don’t create replacement demand: they redirect credit into assets.

That’s why you can have rising productivity, stagnant wages, booming asset prices, and weak consumption at the same time. The missing variable is where credit is created, not how efficient production is. (Think Japan in the 90s)
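To make the credit-redirection point concrete, here's a toy sketch in Python (all ratios and splits are made-up illustrative numbers, not claims about any real banking system):

```python
# Toy fractional-reserve sketch: each loan becomes a deposit somewhere,
# so lending expands broad money well beyond the base money.
# All numbers below are illustrative assumptions.
reserve_ratio = 0.1      # banks hold 10% of deposits as reserves
base_money = 100.0       # initial central-bank money

# Geometric-series limit: total deposits = base / reserve_ratio
deposits = base_money / reserve_ratio
new_credit = deposits - base_money

# Where the credit is created matters: credit against wages funds
# consumption; credit against assets bids up asset prices instead.
wage_share = 0.2         # hypothetical split
asset_share = 0.8
print(f"broad money: {deposits:.0f}")        # roughly 10x the base
print(f"to consumption: {new_credit * wage_share:.0f}")
print(f"to asset markets: {new_credit * asset_share:.0f}")
```

On these made-up numbers, most of the newly created credit flows to asset markets, which is the "rising asset prices, weak consumption" pattern described above.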

If you think the AI threat is real, buy real assets now (not financial IOUs in computer systems).

"How Do Banks Create Money?" https://www.youtube.com/watch?v=3N7oD5zrBnc


>Now everybody else has a little extra money they didn't have to spend on web development, and they'll want to buy something with it, so you get new jobs making whatever it is they want to spend the money on instead.

Why assume a business that just boosted profits by reducing headcount would want to spend that surplus on hiring more workers elsewhere? Seems like it would mostly go towards stock buybacks and higher executive pay packages. There might be some leakage into new hiring, but I reckon the overall impact will be intensifying the funneling of money to the top and further hollowing out of the labor market.


But that implicitly assumes all jobs are comparable financially. Sure, there'll always be jobs to do, but x number of web devs or whatever is not the same as x number of nursing home care workers.

Also, in terms of extra money and spending, the logic breaks a bit because we know that by age cohorts, older cohorts have more money but tend to have less consumer spending than the 25-40 cohort.


Your implicit assumption with the web dev is that this scales. Unfortunately, that may not be the case.


It's not a matter of scale. If people don't have to spend as much on X then they end up with extra money and will spend it on Y. Jobs then shift from X to Y.

This has been happening for centuries. The large majority of people used to work in agriculture. Now we can produce food with a low single digit percentage of the population. Textiles, transportation, etc. are all much less labor intensive than they were in the days of cobblers and ox carts, yet the 20th century was not marked with a 90% unemployment rate.

It's one of two things. Either post-scarcity is possible, because machines that can collect and assemble resources into whatever anybody wants at no cost are possible, and then nobody needs to work because everything is free. Or it isn't, there are still things machines can't do, and then people have jobs doing that.


Look at any non-Western country with a massive population, how is their excess labour faring?


It's always hilarious to see lazy, innumerate people claiming that "it’s math" when in reality they just made up numbers and didn't do any calculations.


You're assuming it's even a person when the author admitted they used an LLM for writing this article.


> Apart from being all AI-written

I think nearly 100% of blog posts are run through an LLM now. The author was lazy and went with the default LLM "tone" and so the typical LLM grammar usage and turns of phrase are too readily apparent.

It's really disheartening, as I read blogs to get a perspective on what another human thinks and how their thought process works. If I wanted to chat with an LLM, I could open a new tab.


You might be interested in this: https://news.ycombinator.com/item?id=45722069


I never use an LLM for blog posts. Seems like you need to hear more people telling you that.

Only amateurs and scammers use LLMs for writing.


> It's all just efficiency gains and everyone currently employed stays employed? Not a single AI company wants that.

I disagree with your assertion. Efficiency and production improvements are exactly what many companies are going for. We already have a huge backlog of software that needs to be written but cannot be written with the human programming resources available. We have plenty of things, infrastructure and otherwise, that don’t get built because of a lack of human labor to do them. We haven’t colonized the solar system yet due to a lack of resources, etc…

It’s really pessimistic to think that all this tech is going to go to maintaining the current status quo with just much less labor.


100% this. I run a software company and we never run out of new things to build. Our issue is velocity; we would be foolish to lay off employees. Rather, we must use AI to build things 50% faster if we can. What AI will do is further level the playing field in software, just like tractors allowed farms to expand in size.


> It's all just efficiency gains and everyone currently employed stays employed? Not a single AI company wants that. Not a single tech company wants that. Not only do they want layoffs, they're already happening. So that's not going to work out.

Every single AI company and tech company would be 100% OK with just efficiency gains. They want to make money, and proving efficiency is more than enough for that.


> Which means there's less workers being paid, less taxes, less money to be spent on the economy, which means less money to pay workers, which means... the logical conclusion is "no economy at all". Taxes are the last thing to worry about then.

Assuming the hype pans out and we get AGI, the end result won't be "no economy at all," it'll be a really weird one that does nothing to satisfy the common man's needs (because they will be of no economic use to the owners of the technology).

All the world's resources will be harnessed to satisfy the whims of a very few trillionaires, and there will be no place for you (except perhaps as a cultish sycophant, if you're lucky).


> the logical conclusion is "no economy at all"

Marx called it 150 years ago. It's happening precisely how he said it would.


Exactly. A lot of people think he only wrote about communism, but he also wrote about how capitalism has been failing since Adam Smith first wrote of it.

Is pure communism the right answer? Of course not. But mixing elements would avoid the worst of capitalism and communism.


Billionaires (who REALIZE gains on their investments via loans based on the investments' current value, not the original value) and AI companies are about representation without taxation, while those who are taxed get their voices drowned out by those same actors.


[flagged]


That sounds evasive. I share the parent's view that the article appears to be largely LLM-written. Given that you cite "your AI assistant", I'm guessing you did lean on an LLM here, perhaps without realizing that it imparted a very distinctive tone.

And honestly, it just cracks me up that it's usually the authors writing about AI who lean on the tech. Including the critics...


Busted. I am a human-in-the-loop.

I think everybody should use LLMs to polish their language. This topic is important to me and I want to communicate as effectively as possible.

I stand by every character of the article regardless of which fancy autocomplete I used to polish it. I use spellcheck, too, and a digital tuner for my guitar.

Just want to reiterate: https://alec.is/posts/ai-employees-dont-pay-taxes/#:~:text=I...


But people don't want to read something they can tell is AI, and thus you lose authority and respect from your readers. If you are interested in getting your words out, and presumably you are as otherwise this wouldn't be a public article, the use of AI does in fact hurt that goal, which is ironic.

Finally, I will link this, about how "it's insulting to read your AI generated article:" https://news.ycombinator.com/item?id=45722069


I have a quibble here: the people who post in response don’t want to read content they know is AI-generated.

There exists some proportion of people who don’t mind and don’t care about it enough to comment on the topic.


The problem is deeper than that: the majority of HN readers seem to prefer the AI voice.

De gustibus non est disputandum

(The writers of HBO’s Westworld deserve a retroactive Emmy. We’re speedrunning to their speculative fantasy much sooner than anyone could have imagined.)


Or it's more like website bounces, they do care about it enough to close the tab but not enough to comment or specify exactly why they left on some comment section or feedback form.


I personally don't like using LLMs for doing anything creative, but I find it hilarious how if you're against AI for coding you're considered a Luddite by most in these parts. But blog posts? Now that's too far and you deserve to be lambasted.

I don't see anything wrong with what you've done.


At least many of the comments here still seem to be human written and so are much more interesting to read than the increasing number of AI written articles that get linked.


You're absolutely right.

Although to be fair, the fact that the comment section of HN is often more interesting than TFA is something that long predates LLMs.


LLMs are worse at things where performance is based on subjective preference, it's as simple as that.

It's not just blog posts: the staunchest AI supporters are the quickest to call out slop in the default aesthetics of vibe-coded websites, or images, or music.

Pretty much anywhere that "taste" is supposed to be involved.


And you made your communication vastly less effective, spinning conversation off about LLM-generated text.


Also, when are people going to get tired of assuming that em dashes automatically mean something was written by an LLM? At this stage, this is such a tired observation that it's unproductive to repeat.


They literally conclude by quoting Gemini 3:

> As my AI assistant gemini-3-flash-preview put it so blindly today:

So I can excuse someone for jumping to the conclusion the rest of it was LLM assisted too.

I read it and while I didn't think it was LLM writing (until the literal LLM writing), it's an incredibly grating style of writing that would earn ridicule before LLMs.

Obnoxiously building up straw men in section headers then knocking them down with "Reality:" is just not a way to have a useful conversation about a topic. Yuck.


Author confirmed they used an LLM: https://news.ycombinator.com/item?id=46427769


I don't pay attention to em dashes; as a tell, they're overused for sure.

The actually useful tips are here: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


One problem with any shibboleths people are looking for is that, as people see more AI writing, they're more likely to begin to pick up AI phrasing. I mean, if the power of Web apps can revive the past participle "gotten" in the UK, it shouldn't be surprising that AI can get people to use the verb "delve" more often.


The shibboleths aren’t the problem with AI writing; the problem is that it’s bad writing.

Good writing uses novel combinations of tools (vocabulary, rhetoric, metaphor) to communicate novel ideas. Bad writing is a cargo cult of those tools.


That’s a bit of a different problem. I’m talking about people specifically wanting to know whether AI was used. There are lots of bad writers out there with or without AI assistance.


I find it crazy on the basis that the LLMs shouldn't/wouldn't be using them if they weren't in use... in the training set, in the first place.


Did you, or did you not, use any AI or LLMs in the process of writing this article?


Who cares?

A lot of people with disabilities are also using LLMs to level the playing field and to compete with non-disabled people.

Are you going to be the person who tells them they need to stay out, stop using this technology, and stay in their "place"?


Don't argue against a strawman, no one was disagreeing that disabled people use AI.


Em dashes aren't rare because of disuse. It's a Unicode character that can't be easily typed on most Western-language keyboards -- the idea being that real humans wouldn't normally produce it. Hence it's considered an AI giveaway.


Eh if anyone is all in on AI and it replacing human writing it would be an AI company

But then that means if you're a PR or communications person working at this startup (or at Meta?), your job is not secure and your days there are probably numbered, which I'm sure is great for morale...


> They are retrained every 12-24 months and constantly getting new/updated reinforcement learning layers

This is true now, but it can't stay true, given the enormous costs of training. Inference is expensive enough as is; the training runs are 100% venture-capital "startup" funding, and pretty much everyone expects them to go away sooner or later.

Can't plan a business around something that volatile


You don't need to retrain the whole thing from scratch every time.


GPT-5.1 was based on data over 15 months old IIRC, and it wasn’t that bad. Adding new layers isn’t that expensive.


Google's training runs aren't funded by VC. The Chinese models probably aren't either.


Don't forget "we're obligated to try and sell it, so here's an AI-generated article to fill up our quota, because nobody here wanted to actually sit down and write it".


FWIW all of the content on our eng blog is good ol' cage-free grass-fed human-written content.

(If the analogy, in the first paragraph, of a Roomba dragging poop around the house didn't convince you)


> but surely consumer SSDs wouldn't just completely ignore fsync and blatantly lie that the data has been persisted?

That doesn't even help if fsync() doesn't do what developers expect: https://danluu.com/fsyncgate/

I think this was the blog post that had a bunch more stuff that can go wrong too: https://danluu.com/deconstruct-files/

But basically fsync itself (sometimes) has dubious behaviour, then the OS on top of the kernel handles it dubiously, and then even on top of that most databases can ignore fsync erroring (and lie that the data was written properly).

So... yes.
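For what it's worth, a minimal sketch of the careful pattern (a hypothetical helper, with Linux-ish assumptions): write, fsync the file, then fsync the containing directory, and treat any fsync failure as fatal rather than retryable, since after an fsync error the page cache state is undefined (the fsyncgate issue):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and try to actually get it onto stable storage.

    Note: even this can be defeated by drives that lie about flushes;
    it only guards against the OS/application layers discussed above.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # if this raises, do NOT retry fsync and assume
                      # success: the dirty-page state is now unknown
    finally:
        os.close(fd)
    # Also fsync the directory so the file's *existence* is durable
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Even this only raises the floor; as the links above show, what happens below the syscall boundary is out of the application's hands.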


> https://github.com/DGoettlich/history-llms/blob/main/ranke-4...

Given the training notes, it seems like you can't get the performance they give examples of?

I'm not sure about the exact details, but there is some kind of targeted distillation of GPT-5 involved to try and get more conversational text and better performance. Which seems a bit iffy to me.


Thanks for the comment. Could you elaborate on what you find iffy about our approach? I'm sure we can improve!


Well, it would be nice to see examples (or weights, to be completely open) for the baseline model, without any GPT-5 influence whatsoever. Basically, let people see what the "raw" output from historical texts is like, and for that matter actively demonstrate why the extra tweaks and layers are needed to make a useful model. Show, don't tell, really.


Ok so it was that. The responses given did sound off: while it has some period-appropriate mannerisms, it has entire sections basically rephrased from some popular historical texts, and it seems off compared to reading an actual 1900s text. The overall vibe just isn't right; it seems too modern, somehow.

I also wonder whether you'd get this kind of performance with only actual pre-1900s text. LLMs work because they're fed terabytes of text; if you just give one gigabytes, you get a 2019-era word model. The fundamental technology is mostly the same, after all.


what makes you think we trained on only a few gigabytes? https://github.com/DGoettlich/history-llms/blob/main/ranke-4...


> To make `Z` a column vector, we would need something like `Z = (Y @ X)[:,np.newaxis]`.

Doesn't just (Y @ X)[None] work? None adding an extra dimension works in practice but I don't know if you're "supposed" to do that


It seems `(Y @ X)[None]` produces a row vector of shape (1,3),

   (Y @ X)[None]
   
   # array([[14, 32, 50]])
   
but `(Y @ X)[None].T` works as you described:

   (Y @ X)[None].T
   
   # array([[14],
   #        [32],
   #        [50]])

I don't know either re: whether you're "supposed" to do that, though I know np.newaxis is an alias for None.
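To summarize the shapes involved, a small sketch (X and Y here are stand-ins with the shapes from the discussion):

```python
import numpy as np

X = np.arange(9).reshape(3, 3)   # (3, 3) matrix
Y = np.array([1, 2, 3])          # (3,) vector

v = Y @ X                        # (3,): a 1-D result, neither row nor column
print(v.shape)                   # (3,)
print(v[None].shape)             # (1, 3): [None] prepends an axis -> row vector
print(v[:, np.newaxis].shape)    # (3, 1): column vector
print(v[None].T.shape)           # (3, 1): same column vector via transpose
```

So `[None]` and `[:, np.newaxis]` both insert an axis; they just insert it at different positions.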


> Why not use X.transpose()?

Or just X.T, the shorthand alias for that
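A quick sanity check that the two are identical (toy array assumed):

```python
import numpy as np

X = np.arange(6).reshape(2, 3)
# .T is an attribute alias for transpose() with no arguments
print((X.T == X.transpose()).all())   # True
print(X.T.shape)                      # (3, 2)
```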


The article is AI-written, so it obviously can't be relied on to accurately convey the original information.

"But Arduino isn’t SaaS. It’s the foundation of the maker ecosystem." is a giveaway, but the whole set of paragraphs of setup before that is ChatGPT's style.

