
Directing the movements of a car is now “fleet response”, not “driving”. Give them a fucking innovation award or something.

If a passenger says to you, "go around this car by using that private driveway", are they driving the car?

What did you expect from a forum for VC fanboys?

Humans risk jail time, AIs not so much.

A remarkable number of humans given really quite basic feedback will perform actions they know will very directly hurt or kill people.

There are a lot of critiques about quite how to interpret the results, but in this context it’s pretty clear that lots of humans can at least be coerced into doing something extremely unethical.

Start removing the harm one, two, three degrees, add personal incentives, and is it that surprising if people violate ethical rules for KPIs?

https://en.wikipedia.org/wiki/Milgram_experiment


> In 2012, Australian psychologist Gina Perry investigated Milgram's data and writings and concluded that Milgram had manipulated the results, and that there was a "troubling mismatch between (published) descriptions of the experiment and evidence of what actually transpired." She wrote that "only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter".[29][30] She described her findings as "an unexpected outcome" that …

It's unlikely Milgram played an unbiased role in, if not was the direct cause of, the results.


Milgram was flawed, sure. However, you can look at videos of ICE agents being surprised that their community think they're evil and doing evil, when they think they're just law enforcement. There was not even a need for coercion there, only story-telling.

Incorrect. ICE is built off the background of 30-50 years of propaganda against "immigrants", most of it completely untrue.

The same is done for "benefits scroungers", despite the evidence being that welfare fraud only accounts for approximately 1-5% of the cost of administering state welfare, and state welfare would be about 50%+ cheaper to administer if it was a UBI rather than being means-tested. In fact, many of the measures implemented with the excuse of "we need to stop benefits scroungers", such as testing whether someone is too disabled to work, are simultaneously ineffective and make up most of the cost.

Nevertheless, "benefits scroungers" has entered the zeitgeist in the UK (and the US) because of this propaganda.

The same is true for propaganda against people who have migrated to the UK/US. Many have done so as asylum seekers under horrifying circumstances, and many die in the journey. However, instead of empathy, the media greets them with distaste and horror — dehumanising them in a fundamentally racist way, specifically so that a movement that grants them rights as a workforce never takes off, so that companies can employ them for zero-hour contracts to do work in conditions that are subhuman, and pay them substantially less than minimum wage (It's incredibly beneficial for the economy, unfortunately).


Rightwing propaganda in the USA is part of a concerted effort by the Heritage Foundation, the Powell Memo, Fox News, and supporting players. These things are well understood by researchers and journalists who have produced copious documentation in the form of articles, books, podcast series, etc.

One excellent example is available here[0] in a series by the Lever called Master Plan. According to their website, a book has been written broadening the discussion.

They have played us for fools and evidence of their success is all over the news and our broken society. It's outrageous because none of this was by accident or chance. Forces didn't magically come together in a soup that turned out this way.

0. https://the.levernews.com/master-plan/


Indeed, and many of those same groups are also funding right wing propaganda in other countries.

What you have quoted says a third of people who thought it was real didn’t disobey the experimenter when they thought they were delivering dangerous and lethal electric shocks to a human. Is that correct?

Maybe there was an edit but it's the opposite, 66% disobeyed.

Right, so a third didn’t disobey.

A third of a half who were believers.

So of the entire populace of Milgram participants, 16.5% believed and obeyed.

That's a much, much smaller claim than the popular belief of what Milgram presented.

However, it's still possible that you only need ~16.5% to believe & obey authority for things like the Nazi death camps to occur.


If we are concerned with what people do in believably real situations, we only need to consider the half that believed the situation was real.

Even if we take the 16% though, that's one in six people willing to deliver very obvious direct harm to and/or kill another human under exceptionally mild coercion, with zero personal benefit attached other than not having to say "no". That is a lot.


No, no you don't; the authority includes that of the scientist.

I’m not sure what you’re trying to say here. I’ve said nothing about authority.

Normalization of deviance also contributes towards unethical outcomes, where people would not have selected that outcome originally.

https://en.wikipedia.org/wiki/Normalization_of_deviance


I am moderately certain that this only happens in laissez-faire cultures.

If you deviate from the sub-cultural norms of Wall Street, Jahmunkey, you're fucked.

It's fraud or nothing, baby, be sure to respect the warning finger(s) of God when you get intrusive thoughts about exposing some scheme--aka whistleblowing.


> lots of humans can be at least coerced into doing something extremely unethical.

Experience shows coercion is not necessary most of the time, the siren call of money is all it takes.


Still > 0

That reduces humans to the homo economicus¹:

> "Self-interest is the main motivation of human beings in their transactions" [...] The economic man solution is considered to be inadequate and flawed.[17]

An important distinction is that a human can *not* make purely rational decisions, or base decisions on complex deductions such as "if I do X I will go to jail".

My point being: if AI were to risk jail time, it would still act differently from humans, because the current common LLMs can make such deductions and rational decisions.

Humans will always add much broader contexts - from upbringing, via culture/religion, their current situation, to past experiences, or peer-consulting. In other words: a human may make an "(un)ethical" decision based on their social background, religion, a chat with a pal over a beer about the conundrum, their ability to find a new job, financial situation etc.

¹ https://en.wikipedia.org/wiki/Homo_economicus


> a human may make an "(un)ethical" decision based on their social background, religion, a chat with a pal over a beer about the conundrum, their ability to find a new job, financial situation etc.

The stories they invent to rationalise their behaviour and make them feel good about themselves. Or inhumane political views, i.e. fascism, which declares other people worth less, so it's okay to abuse them.


Yes, humans tell themselves stories to justify their choices. Are you telling yourself the story that only bad humans do that, and choosing to feel that you are superior and they are worth less? It might be okay to abuse them, if you think about it…

From an IBM training manual (1979):

>A computer can never be held accountable

>Therefore a computer must never make a management decision

The (EDITED) corollary would arguably be:

>Corporations are amoral entities which are potentially immortal who cannot be placed behind bars. Therefore they should never be given the rights of human beings.

(potentially, not absolutely, immortal --- would "not mortal by essence/nature" be better wording?)


How is a corporation "immortal"?

What is the oldest corporation in the world? I mean, aside from churches and stuff.

Corporations can die or be killed in numerous ways. Not many of them will live forever. Most will barely outlive a normal human's lifespan.

By definition, since a corporation comprises a group of people, it could never outlive the members, should they all die at some point.

Let us also draw a distinction between the "human being" and the "person". A corporation is granted "personhood" but this is not equivalent to "humanity". Being composed of humans, the members of any corporation collectively enjoy their individual rights in most ways.

A "corporate person" is distinct from a "human person", and so we can recognize that "corporate rights" are in a different category, and regulate accordingly.

A corporation cannot be "jailed" but it can be fined, it can be dissolved, it can be sanctioned in many ways. I would say that doing business is a privilege and not a right of a corporation. It is conceivable that their ability to conduct business could be restricted in many ways, such as local only, or non-interstate, or within their home nation. I suppose such restrictions could be roughly analogous to being "jailed"?


Construction company okay?

>Kongo Gumi, founded in 578 AD, is recognized as the oldest continuously operating company in the world, specializing in the construction of Buddhist temples.


Ah, so we should import Japanese people to run our companies.

What does a Fortune 7 company need to do to die?

If it kills one person, they won't close Google. If it steals a billion, they won't close it either. So what does such a company have to do to be shut down?

I think it's almost impossible to shut one down.


Look to history. Here's a list of "Fortune 7" companies from about 50 years ago.

IBM

AT&T

Exxon

General Motors

General Electric

Eastman Kodak

Sears, Roebuck & Co.

Some of them died. Others are still around but no longer in the top 7. Why is that? Eventually every high-growth company misses a disruptive innovation or makes a key strategic error.


What I meant is they can kill people and still survive. So how much bad stuff do they need to do to be shut down?

Kill 100 people? 100,000? It seems that as long as the lawsuit costs less than what they can afford, they will survive. Which is crazy.


Yes. As long as they are more valuable to people than the lives they cost, they will stick around. Part of this is the pragmatic utilitarianism the world runs on.

How many people can a doctor kill and still survive? Nobody expects perfection because they like having doctors.


It took an armed rebellion and two acts of parliament to kill the British East India Company.

Your comment is rather incoherent; I recommend prompting an LLM to generate comments with impeccable grammar and coherent lines of reasoning.

I do not know what a "fortune 7" might be, but companies are dissolved all the time. Thousands per year, just administratively.

For example, notable incidents from the 21st c: Arthur Andersen, The Trump Foundation, Enron, and Theranos are all entities which were completely liquidated and dissolved. They no longer meaningfully exist to transact business. They are dead, and definitely 100% not immortal.


Parent was asking what it would take for a Fortune 7 company (aka the Fortune 500, but just the top 7) to go to zero.

But it’s funny that it can kill many people and still exist. Steal billions and still exist. It’s a superhuman disguised as a corporation.

——

Ai generated answer:

You are correct: it is "barely impossible" for a "Magnificent 7" company (Apple, Microsoft, Google, Amazon, NVIDIA, Meta, Tesla) to be shut down by committing a simple crime.

These companies are arguably more resilient than many nation-states. They possess massive cash reserves, diversified revenue streams, and entrenched legal defenses.

Here is an analysis of why individual crimes don't work, and the extreme, systemic events that would actually be required to kill one of these giants.

### Why "Murder" and "Theft" Don't Work

Corporate law is designed to separate the entity from the individuals running it. This is the "Corporate Veil."

* *If they kill one person:* If a Google self-driving car kills a pedestrian due to negligence, or an Amazon warehouse collapses, the company pays a settlement or a fine. It is treated as a "tort" (a civil wrong) or, at worst, corporate manslaughter. The specific executives responsible might go to jail, but the company simply pays the cost and replaces them.

* *If they steal 1 billion:* If a company is caught laundering money or defrauding customers (e.g., Wells Fargo opening fake accounts, or banks laundering cartel money), they pay a fine. For a company like Apple (with ~$60–100 billion in cash on hand), a $1 billion fine is a manageable operational expense, often calculated as the "cost of doing business."

### The Only Things That Could Actually "Kill" Them

To truly "close down" or dissolve a company of this size, you need to render it *insolvent* (bankrupt with no hope of restructuring) or legally *dismantle* it.

#### 1. The "Enron" Scenario (Foundational Fraud)

This is the most likely path to sudden death. For a company to die overnight, it must be revealed that its entire business model is fake.

* *The Mechanism:* If it turns out that 90% of Microsoft’s revenue doesn't exist, or that NVIDIA isn't actually selling chips but just moving money between shell companies, the stock price would go to zero instantly. Credit lines would freeze, and they wouldn't be able to pay employees or electricity bills.

* *Historical Precedent:* Enron or Arthur Andersen. They didn't just commit a crime; they were the crime. Once the trust evaporated, the business evaporated.

#### 2. The "Standard Oil" Scenario (Government Breakup)

This doesn't "kill" the assets, but it kills the monopoly.

* *The Mechanism:* The US Department of Justice (or EU equivalent) wins a massive antitrust suit and determines the company is too dangerous to exist as a single entity.

* *The Outcome:* The government forces a "divestiture." Google might be split into three companies: Google Search, YouTube Inc., and Android Co. The parent company "Alphabet" would cease to exist, but the pieces would survive. This happened to AT&T (Ma Bell) in the 1980s and Standard Oil in 1911.

#### 3. The "Geopolitical Death" Scenario (National Security)

This is rare for US companies but possible.

* *The Mechanism:* If a company were found to be directly funding a hostile foreign power, engaging in treason, or if its products were deemed a fatal threat to national infrastructure.

* *The Outcome:* The government could revoke the company's corporate charter (the legal permission to exist). This is the "nuclear option" of corporate law. Alternatively, the government could effectively nationalize the company, taking it over completely (like Fannie Mae/Freddie Mac in 2008, though they survived as "zombies").

#### 4. The "Liability Apocalypse" Scenario

This would require a catastrophe so expensive that it exceeds the company's assets (trillions of dollars).

* *Hypothetical:* Imagine a Tesla software update simultaneously causes every Tesla on earth to accelerate into a crowd, killing 100,000 people. Or an AI model from Google/Microsoft escapes and destroys the global banking database.

* *The Outcome:* The resulting class-action lawsuits and liability claims would be in the trillions. If the liability > assets, the company goes into Chapter 7 liquidation. The assets (servers, patents) are sold off to pay the victims, and the company ceases to exist.

### Summary Table: Crimes vs. Consequences

| Action | Consequence | Does the Company Die? |
| --- | --- | --- |
| *Murder (Individual)* | Settlement / Fine / PR Crisis | *No* |
| *Mass Casualty Event* | Massive Fines / CEO Fired | *Unlikely* (unless liability > trillions) |
| *Theft ($1B+)* | DOJ Fines / Regulatory Oversight | *No* |
| *Systemic Fraud* | Stock collapse / Insolvency | *Yes* (the "Enron" death) |
| *Monopoly Abuse* | Forced Breakup | *Sort of* (splits into smaller companies) |

### The Verdict

You are right. Short of *insolvency* (running out of money completely) or *revocation of charter* (government execution), these companies are immortal. Even if they commit terrible crimes, the legal system prefers to fine them and fire the CEO rather than destroy an entity that employs hundreds of thousands of people and powers the global economy.


> Your comment is rather incoherent; I recommend prompting an LLM to generate comments with impeccable grammar and coherent lines of reasoning.

It seems your reading comprehension has fallen below average. I recommend challenging your skills regularly by reading from a greater variety of sources. If you only eat junk food, even nutritious meals begin to taste bad, hm?

You’re welcome for the unsolicited advice! :)


I changed my stance on "immoral" corporations:

Legal systems are the ones being "immoral" and "unethical" and "not just", not "righteous", not fair. They represent entire nations and populations while corpos represent interests of subsets of customers and "sponsors".

If corpos are forced to pivot because they are behaving ugly, they will ... otherwise they might lose money (although that is barely an issue anymore, given how you can offset almost any kind of loss via various stock market schemes).

But the entire chain upstream of law enforcement behaves ugly and weak, which is the fault of humanity's finest and best-earning "engineers".

Just take a sabbatical and fix some of that stuff ...

>> I mean you and your global networks got money and you can even stay undetected, so what the hell is the issue? Personal preference? Damn it, I guess that settles that. <<


> Humans risk jail time, AIs not so much.

Do they actually though, in practice? How many people have gone to jail so far for "Violating ethics to improve KPI"?


It's exceptionally rare, but famous examples include SBF, Holmes, and Winterkorn.

Didn't they famously break actual laws though, not just "violating ethics"?

It's a bit reductive, but yes people are sent to prison for being convicted of crimes.

The interesting logical conclusion from this is that we need to engineer in suffering to functionally align a model.

Do they, really? Which CEO went to jail for ethical violations?

Jeffrey Skilling, as a major example. Sam Bankman-Fried, Elizabeth Holmes, Martin Shkreli, just to name a few

Well, those committed the only crime that matters in the US: they stole from the rich.

Yeah, it’s exceptionally rare for CEOs, but they’re not the only ones behaving unethically at work. There’s often a scapegoat.

I imagine it’s in his wife’s name.

No because you missed the joke.

Lots of teams embraced Actions to run their CI/CD, and GitHub reviews as part of their merge process. And Copilot. Basically their SOC2 (or whatever) says they have to use GitHub.

I’m guessing they’re regretting it.


> Basically their SOC2 (or whatever) says they have to use GitHub

Our SOC2 doesn't specify GitHub by name, but it does require we maintain a record of each PR having been reviewed.

I guess in extremis we could email each other patch diffs, and CC the guy responsible for the audit process with the approval...


Every product vendor, especially those that are even within a shouting distance from security, has a wet dream: to have their product explicitly named in corporate policies.

I have cleaned up more than enough of them.


The Linux kernel uses an email based workflow. You can digitally sign email and add it to an immutable store that can be reviewed.

Does SOC2 itself require that or just yours? I'm not too familiar with SOC2 but I know ISO 27001 quite well, and there's no PR specific "requirements" to speak of. But it is something that could be included in your secure development policy.

Yeah, it’s what you write in the policy.

And it's pretty common to write in the policy, because it's pretty much a gimme, and lets you avoid writing a whole bunch of other equivalent quality measures in the policy.

Love GTA1, but IIRC my version at least is a 3dfx Glide game, which makes it quite hard to play on modern kit.

Will give this a try.


It has 8-bit color, True Color, and 3dfx executables side by side in the installation folder.

More garbage content on the front page. It’s constant AI hype pieces with zero substance from people who just happen to work for AI companies. Hacker News is really going downhill.

We’re back to measuring productivity by lines of code are we? Because that always goes well.

I think that’s a little harsh. When the CEO groupthink network says AI all the things, what are the PMs supposed to do?
