Hacker News | Meekro's comments

I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.

Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.

What, then, is this really about?


you mean beyond this: [0]

>In 2025, reportedly Anthropic became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as their LLM, Claude’s, constitution here.

[0]: https://news.ycombinator.com/item?id=47160226


It’s about punishing a company that is not complying. It’s a show of force to deter any future objections on moral grounds from companies that want to do business with the US gov.

My understanding is that it’s about the contract allowing Anthropic to refuse service when they deem a red line has been crossed. Hegseth and friends probably don’t want any discussions to even start, about whether a red line may be in the process of being crossed, and having to answer to that. They don’t want the legality or ethicality of any operation to be under Anthropic’s purview at all.

I think you're right, this isn't about a specific request but about defense contractors not getting to draw moral red lines. Palmer Luckey's statement on X/Twitter reflects the same idea: https://x.com/PalmerLuckey/status/2027500334999081294

The thinking seems to be that you can't have every defense contractor coming in with their own, separate set of red lines that they can adjudicate themselves and enforce unilaterally. Imagine if every missile, ship, plane, gun, and defense software builder had their own set of moral red lines and their own remote kill switch for different parts of your defense infrastructure. Palmer would prefer that the President wield these powers through his Constitutional role as commander-in-chief.


There's a hell of a difference between "we don't like your terms so we're going to use a different supplier" and "we don't like your terms, so we're going to use the power of the federal government to compel you to change them". The president is the commander-in-chief of the military, but Anthropic is not part of the military! Outside serving the public interest in a crisis, the president has no right to compel Anthropic to do anything. We are clearly not in a crisis, much less a crisis that demands kill bots and domestic surveillance. This is clear overreach, and claiming a constitutional justification is mockery.

I'd encourage you to look up the Defense Production Act. Its powers are probably broad enough that the President could unilaterally force Anthropic to do this whether or not it wants to. It's the same logic that would allow him to force an auto manufacturer to produce tanks. And the law doesn't care whether we are in a crisis or not. It's enough that he determine (on his own) that this action is "necessary or appropriate to promote the national defense."

However, it looks like Trump isn't going to go that route-- they're just going to add Anthropic to a no-buy list, and use a different AI provider.


We'll see where that goes.

Of course a contractor could not decide to unilaterally shut off their missile system, because that would be a contract violation.

A contractor may try to negotiate that unilateral shut off ability with the government, and the government should refuse those terms based on democratic principles, as Luckey said.

But suppose the contractor doesn’t want to give up that power. Is it okay for the government to not only reject the contract, but go a step further and label the contractor as a “supply chain risk?” It’s not clear that this part is still about upholding democratic principles. The term “supply chain risk” seems to have a very specific legal meaning. The government may not have the legal authority to make a supply chain risk designation in this case.


It sounds like the "supply chain risk" designation is just about anyone who works with the DoD not using them, so their code doesn't accidentally make it into any final products through some sub-sub-subcontractor. Since they've made it clear that they don't want to be a defense contractor (and accept the moral problems that go with it), the DoD is just making sure they don't inadvertently become one.

That is not what is happening, and it's weird that people keep insisting that is all that is happening.

I think this is different. It’s a statement that this product is not qualified to perform that function (autonomous killing decisions). I think it is pure madness to think AI is currently up to this task. I also think it should be a war crime. I think Congress should pass a law forbidding it.

There seem to be two separate lines of thought in this conversation: first, that the AI tech isn't smart enough for us to trust it with autonomously killing people. Second, even if it was smart enough, maybe such weapons are immoral to produce?

The first line of thought is probably true, but could change in the next 5 years-- so maybe we should be preparing for that?

The second line of thought is something for democracies to argue about. It's interesting that so many people in this thread want to take this power away from democratic governments, and give it to a handful of billionaire tech executives.


What democratic government are we talking about? Surely you don't mean the U.S. We do not live in a democracy right now.

> My understanding is that it’s about

What is "it" in your comment?

The refusal to sign a contract with Anthropic, or their designation as a supply chain risk?


I was answering “What, then, is this really about?” By “this”, presumably they meant “the dispute”.

The dispute is over the supply chain risk designation though, not over the refusal to sign a contract. If only the latter had happened, we wouldn't be talking here. You're explaining why the department wouldn't want contractors to dictate the terms of usage of their products and services (the latter), but not why this designation would be seen as necessary even in their own eyes (the former).

This priest agrees with you, and has expressed concerns about mediocre homilies that don't speak to the concerns of the particular community: https://youtu.be/pgZXCPCATmc?si=FM4uj2owYBVK_8Mh

I don't see the problem. He's offering a completely legal product to an eager audience. If people want to propose banning social media in some capacity, that could and should be voted on-- but Zuck isn't violating any legal or moral law I've ever heard of, and he shouldn't have to guess what products will be illegal in 20 years and preemptively withdraw them.

If it's harming your mental health, stop using it. The "Delete App" button is right there.


And just stop buying those cigarettes. This is where cultural differences matter: the US has much less concern about the negative societal impact of products than many countries, particularly its erstwhile allies. It's also precisely why it's imperative that other countries decouple from US-owned social media unless they want to import US values.

in the USA, we largely did "just stop buying cigarettes"

Same with asbestos, I mean what could go wrong in 20 years?

The main discussion here is about offering this to kids. So, no! "If it's harming your mental health, stop using it" is not appropriate.

Maybe kids shouldn’t be using it. It’s reaching the point where parents aren’t doing their jobs so the government should ban it to protect the kids.

Just like with tobacco, alcohol, and porn: we didn’t make them cancer- and addiction-free or remove the nudity; we banned kids from accessing them.


Banning something just for kids is an easy win for any politician, since that's one of the few groups that can't punish you in the next election. For that reason alone, I assume we'll get some law within 5-15 years mandating that Facebook ban kids. I assume the kids will trivially bypass the block, or switch to foreign social media, and we'll go back to business as usual.

Not everything that's legal is good. One presumes that if it's found to be bad and Congress isn't extremely corrupt (as if) it'll become illegal.

Right-- at which point, companies like Facebook will (hopefully) have to obey the law. But we're not there yet. Currently, people are moralizing at Zuck for not voluntarily killing his own products because they're "obviously harmful."

I mean, you realise that legal over-the-counter heroin used to be a thing, right? Cigarettes are still legal. There is a gap between “obviously harmful thing is legal” and “it is ethical to make great piles of money out of selling the obviously harmful thing (to children, at that)”. The CEO of Philip Morris, say, isn’t doing anything illegal, but they are a _bad person_ who is knowingly harming society. Same for Zuckerberg.

What is a "moral" law as opposed to a "legal" one? If he is actively promoting a harmful product, I think that would fall into many people's definition of 'morally wrong'.

(I'm basing this on the headline because the article is paywalled)


A product can be helpful to one person and harmful to another. Most products are like that. All sorts of things can be addictive to some people, from potato chips to video games.

There was a major public campaign in the 1950s to ban rock & roll music, and in the 1980s to ban heavy metal. In each case, there were legions of "experts" calling those genres "harmful", and they were taken seriously -- congressional hearings were held, etc.

Point is, "promoting a harmful product" is very much in the eye of the beholder, and doesn't work as an objective moral standard.


I tried that too, and wound up unfollowing everyone except the people who never post. Then my page gets filled with "suggested" content.


The EU would have to put the US on a list of "foreign adversaries", with whatever political fallout comes from that. Not saying they shouldn't, but there will be downsides.

Since controlling these platforms is probably the best ROI for swinging public opinion, I'm sure it's a matter of time before they get seized (one way or another) and redistributed to reliable political allies.


Would they? There are already data residency laws, and the US didn’t have to be on any foreign adversary list for those to work, right?


whether or not they formally put the US on such a list for trade purposes, clearly they see the US as such.


I'm considering switching to US Mobile from Verizon-- it seems way cheaper, and you can still use the Verizon network. Any downsides I should know?


I started mid last year and haven't run into any problems. Haven't needed to switch off of Dark Star yet.

I was previously on Google Fi, and MVNO roaming is bloody fantastic - I always had reception.


What is dark star?


It's the name of their AT&T MVNO.


One tradeoff with MVNOs is you're generally lower priority on the network.


The only time that's going to matter is during a high-usage event, like sports or perhaps a mass casualty event. At that point, the network is overloaded for the primary users anyway.


Priorities vary across MVNOs. I’m not an expert, but I think USM is on the better end of that priority spectrum.


I switched over a year ago and all I can say is … it’s been excellent. $25/month per line is perfect and service is just as good as our Verizon postpaid.


I think one of the tells might be that all major providers go down, not just one.


My wife and I are on the same Verizon family plan. One of us can be down while the other is fine, then 30 minutes later it's the opposite. It's been like that all day.


Same here (central-western NJ), except that when someone "recovers", we go from SOS to a few bars but no LTE or 5G indicator. Yikes.


Infrastructure in general seems worse than 20 years ago. Our talent for black-bagging dictators has never been stronger, though!


20 years ago much less of the infrastructure of everyday life depended on an always-on network connection. Smartphones in particular were a relatively niche product. I didn’t even have a cell phone (and not because I was too young), much less expect it to work all the time.


The stated purpose of the law was to get TikTok out of the hands of a foreign adversary, and that was accomplished. Remember when Trump took office, and lots of people were worried he would refuse to enforce this law?

It sounds like the author would have preferred that a different group of billionaires take over.


> and that was accomplished

It's very optimistic to assume that China was beaten here.

Bytedance still owns the algorithm and 30% of the new company. This new wrapper firm is just being granted the license to serve as Bytedance's operations, essentially. All the stuff about it being 'trained on US content' and 'overseen' by Oracle is smoke and mirrors. This is really just the zombie of the deal that was done four years[1] ago and then quietly scrubbed.

This isn't significantly different from the way TikTok has been operating all along; the only difference is that a few of the administration's cronies are able to get their heads into the feeding trough.

[1] https://www.cnbc.com/2020/09/19/trump-says-he-has-approved-t...


I wish no one had taken over. The threat of TikTok is easy to understand right now. It’s going to be much more murky after this deal is complete.


From a libertarian perspective, I also thought this was a bad law. It totally abandons faith in the idea of free speech, and admits that China’s “great firewall” was the right idea. I think it’s better to document any lies that were being spread on TikTok, and counter them with truth.

If your first reaction is “but that won’t work!” then you don’t really believe in a free speech based society, and all that’s left to do is argue over which group of shadowy billionaires should get to control everyone.


>and all that’s left to do is argue over which group of shadowy billionaires should get to control everyone

Whichever is better for the majority of people. This is the same answer as for democracy.


I think the "but that won't work" is about visibility.

Who are you intending to tell about these TikTok lies? How do you know if you've told the right people? What algorithm is going to pick up your corrections as equally viral as the lies were?

If you're actually going to do it, I think you need your own shadowy billionaire funding, paying the various social media companies to pretend that your version of the truth is popular. Maybe multiple shadowy billionaires.


> If your first reaction is “but that won’t work!” then you don’t really believe in a free speech based society

While I believe in free speech, free speech isn't some panacea. Nor does it magically exist without protection from powerful interests. What good does speaking up do, if "algorithms" managing the majority of speech have big money riding on promoting irresponsible speech at the expense of sidelining responsible speech.

This isn't a neutral open marketplace of ideas, battling on merit. It is a pervasively manipulated market for profit, and those who will pay to tilt it.

The right way to deal with surveillance and dossier based manipulation by external actors, is not to pick on one actor, but to make surveillance and dossier based manipulation illegal for all actors.

Nobody buys a TV wanting their watching habits to end up impacting what ads they see in web views, and vice versa.

That kind of behind the scenes coordination of unpermissioned data, as leverage against the sources of the data, is deeply anti-libertarian. Anti-liberty in both right and left formulations. (The idea that "libertarian" means the rich have a pass to do anything they can achieve with money, underhanded or not, is a corruption of any concept of individual liberty.)

The enshittification of the world is being driven by this hostile business model, via permissionless (or permissioned-by-dark-pattern) coordinated privacy violations. And it isn't just foreign adversaries who are benefiting at society's cost.

The constant collecting, collating, and converging of data on anyone doing anything that pervades the private/public economy now is deeply parasitical.

Free speech, like every other right, only achieves its real value in a healthy environment. I.e. a healthy idea competitive environment. I believe in voting too. But similarly, voting only matters in a healthy competitive candidate environment.


> The stated purpose of the law was to get TikTok out of the hands of a foreign adversary, and that was accomplished.

I don't know how we conclude that:

> The new U.S. operations of TikTok will have three “managing investors” that will collectively own 45 percent of the company: Oracle Corporation, Silver Lake, and MGX.

> the private equity firm Silver Lake (which has broad global investments in Chinese and Israeli hyper-surveillance)

> 30.1 percent will be “held by affiliates of certain existing investors of ByteDance; and 19.9 percent will be retained by ByteDance.”

Now we have oligarchs, plus a major surveillance investor group, plus the Chinese.

This doesn't seem to be a solution to anything except that "a deal was made", and any further attempts at cleaning up credible risks have so many players to deal with, they would be DOA.

