I don't think that article makes a strong case; it deliberately phrases examples in the most ridiculous ways and pretends that this is a damning criticism of the phrase itself; it's 'you're telling me a shrimp fried this rice' but with a pretence of rationality.
I think it makes a pretty compelling case that most invocations of the statement are either blindingly obvious or probably false. Can you give a counterexample?
> most invocations of the statement are either blindingly obvious or probably false
So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.
Secondly, look at this one specifically:
> The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.
For one, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.
For another, it's an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.
The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.
I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.
I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.
This all seems like a lot of effort so that an agent can run `npm run build` for you.
I get the article's overall point, but if we're looking to optimise processing and reduce costs, then 'only using agents for things that benefit from using agents' seems like an immediate win.
You don't need an agent for simple, well-understood commands. Use them for things where the complexity/cost is worth it.
Feedback loops are important to agents. In the article, the agent runs this build command and notices an error. With that feedback loop, it can iterate towards a solution without requiring human intervention. But the fact that the build command pollutes the context in this case is a double-edged sword.
If you really need that, the easy solution is to get a list of errors using an LSP (or any other way of getting a list of errors, even `grep "Error:"`), and only give that list of errors to the LLM if the build fails. Otherwise just tell the LLM "build succeeded".
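A minimal sketch of that filter (all names here are hypothetical, and it assumes build errors appear on stdout/stderr containing `Error:`):

```python
import subprocess

def build_summary(cmd=("npm", "run", "build")):
    """Run the build and return a short summary instead of the full log."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode == 0:
        return "build succeeded"
    # Keep only the error lines; the rest of the log never reaches the model.
    errors = [line for line in (proc.stdout + proc.stderr).splitlines()
              if "Error:" in line]
    return "build failed:\n" + "\n".join(errors)
```

The agent only ever sees the return value, so a clean build costs two words of context rather than a full build log.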
That's an extremely simple solution. I don't see the point in this LLM=true bullshit.
Are you saying that somebody took translations that had already been written and replaced them with AI generated worse translations? That has got to be a rare exception, no?
But more to your point: you might not have run into languages that lacked proper translations, but billions of other people have. I once read a machine-translated book; it was almost like a derivative work, because it would randomly differ from the source material by huge amounts.
1. "Defends" suggests some level of explanation and justification; the White House did not present any here.
2. "AI image showing arrested woman" could mean a fully-generated image of a woman, rather than editing an image of an existing person under law enforcement control to disguise the actual facts. The first one would be bizarre, the second one is much more problematic.
There is no meaningful difference between a 100% fabricated image and a digitally manipulated one when either is presented by the government as fact. There's no need to split hairs.
It's the difference between drawing a cartoon and editing a photograph; the second one is a definite attempt to misrepresent matters of fact, the first could be argued to be illustrative only.
The article title is overly kind; the White House didn't defend the image, they dismissed it as an issue.
This reporting presents it as a debate with reasoning on both sides, rather than a brazen act with no defence supplied. It's not good journalism to legitimise a position that didn't even attempt to legitimise itself.
This has been the story the whole time. Coupled with the insistence that the media is unfair, they've managed to shift the window of what is acceptable. It's been remarkably effective, and most news sources seemingly have no counter.
It's not even about what's acceptable, it's about what they can frame as a narrative for their supporters in as incendiary a manner as possible. Remember that the FCC investigations into Comcast and NBCUniversal weren't predicated on political bias or uneven reporting, but rather that they "...may be promoting invidious forms of DEI in a manner that does not comply with FCC regulations."
Matthew Gertz, a senior fellow at Media Matters, summarises its mechanisms and intent quite succinctly: “This is the path that Viktor Orbán took in Hungary, where you use the power of the state to ensure that the media is compliant, that outlets are either curbed and become much less willing to be critical, or they are sold to owners who will make that happen."
I don’t disagree that’s a lot of it, and since Hungary is my possible second citizenship, I have been following Orbán closely. I do think there’s something different happening here though. The loop is:
- Do something wildly unacceptable
- Media writes an article declaring the action is indefensible
- Those involved complain publicly about the unfair nature of the story; their supporters back them up
- Next time, to avoid controversy, the media writes a slightly fairer story
It doesn’t even require state power, because technically in the US the state can’t wield it that way. The threat of power is clearly there (kicking journalists out of the Pentagon is one example), but it’s much more about creating a permission structure through the public airing of grievances.
Worse, it seems that these institutions have internalized this as a good thing. "Liberal columnists criticizing the left" is seen as a sign of intellectual righteousness while criticizing the right is seen as behavior that is beneath elite institutions like the New York Times.
The net effect is that when Trump says "we are going to fix housing prices by deporting fifty million people", the Times writes that, while the policy may not work, it does seem like Trump is trying to tackle the rising cost of housing.
Counter to what? Most news sources are owned by people who support this administration’s positions, and are glad they don’t have to do this whole charade of pretending to care about the truth or normal people.
I mean, Donald Trump on Tuesday posted an AI-generated image of himself holding an American flag next to a sign that read "Greenland". Previously he had posted fake videos of Obama being arrested. We're a long way past traditional notions of journalism in this post-satire reality - and meanwhile the BBC is subject to 'rules for thee, not for me' moral outrage after its recent gaffe broadcasting an edited speech of Trump's.
It's incoherent to be anti-copyright because it's used to freeze out competition by corporations and be pro-AI (which is exactly that, at vastly greater scale).
> people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles
We're already seeing people use AI to express themselves in several contexts, but it doesn't lead to an increased range of styles. It leads to one style, the now-ubiquitous upbeat LinkedIn tone.
Theoretically we could see diversification here, with different tools prompting towards different voices, but at the moment the trend is the opposite.