
I don't understand the doomer mindset. Like, what is it that you think AI is going to do, or be capable of doing, that's so bad?


I'm not OP or a doomer, but I do worry about AI making tasks like this too achievable. Right now, if a very angry but not particularly diligent or smart person wants to build a small nuclear bomb and detonate it in a city center, there are so many obstacles to figuring out how to do it that they'll just give up, even though at least one book (The Curve of Binding Energy, from the early 70s!) argues that it is doable by one person or a very small group of committed people.

Given an (at this point still hypothetical, I think) AI that can accurately synthesize publicly available information without even needing to develop new ideas, and then break the whole process into discrete and simple steps, I think that protective friction is a lot less protective. And this argument applies to malware, spam, bioweapons, anything nasty that has so far required a fair amount of acquirable knowledge to do effectively.


I get your point, but even whole-ass countries routinely fail at developing nukes.

"Just" enrichment is so complicated and requires basically every tech and manufacturing knowledge humanity has created up until the mid 20th century that an evil idiot would be much better off with just a bunch of fireworks.


Biological weapons are probably the more worrisome case for AI. The equipment is less exotic than for nuclear weapon development, and more obtainable by everyday people.


Yeah, the interview with Geoffrey Hinton had a much better summary of risks. If we're talking about the bad actor model, biological weaponry is both easier to make and more likely as a threat vector than nuclear.


It might require that knowledge implicitly, in the tools and parts the evil idiot would use, but presumably they would procure those tools and parts, not invent or even manufacture them themselves.


Even that is insanely difficult. There's a great book by Michael Levi called On Nuclear Terrorism, which never got any PR because it is the anti-doomer book.

He methodically goes through all the problems that an ISIS or a Bin Laden would face getting their hands on a nuke or trying to manufacture one, and you can see why none of them have succeeded and why it isn't likely any of them would.

They are incredibly difficult to acquire, manufacture, or use.


It's very convenient that it is that hard.


Knowing how is very rarely the relevant obstacle. In the case of nuclear bombs, the obstacles are, in order from easiest to hardest:

1. finding out how to build one

2. actually building the bomb once you have all the parts

3. obtaining (or building) the equipment needed to build it

4. obtaining the necessary quantity of fissionable material

5. not getting caught while doing 3 & 4


A couple of bright physics grad students could build a nuclear weapon. Indeed, the US government actually tested this back in the 1960s: they had a few freshly minted physics PhDs design a fission weapon with no exposure to anything but the open literature [1]. Their design was analyzed by government nuclear weapons scientists, who determined it would most likely have worked if built and fired.

And this was in the mid-1960s, when the participants had to trawl through paper journals in the university library and do their calculations with slide rules. These days, with the sum total of human knowledge at one's fingertips, multiphysics simulation, and open-source Monte Carlo neutronics solvers? Even more straightforward. It would not shock me if, were you to repeat the experiment today, the participants came out with a workable two-stage design.

The difficult part of building a nuclear weapon is and has always been acquiring weapons grade fissile material.

If you go the uranium route, you need a very large centrifuge complex with many stages to get to weapons grade - far more than you need for reactor grade - which makes it hard to maintain plausible deniability that your program is just for peaceful civilian purposes.
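To put rough numbers on "far more than you need for reactor grade": the standard separative work unit (SWU) calculation makes the per-kilogram gap concrete. The sketch below is only illustrative; it uses the textbook value function, and the 0.711% natural-uranium feed and 0.3% tails assays are assumed figures, not a description of any particular program.

    # Illustrative SWU comparison: reactor-grade vs weapons-grade enrichment.
    # Textbook ideal-cascade formula; feed and tails assays are assumptions.
    import math

    def value(x):
        # Standard value function for separative work
        return (2 * x - 1) * math.log(x / (1 - x))

    def swu_per_kg_product(xp, xf=0.00711, xw=0.003):
        # Mass balance per 1 kg of product at enrichment xp
        feed = (xp - xw) / (xf - xw)
        tails = feed - 1.0
        swu = value(xp) + tails * value(xw) - feed * value(xf)
        return swu, feed

    for label, xp in [("3.5% reactor grade", 0.035), ("90% weapons grade", 0.90)]:
        swu, feed = swu_per_kg_product(xp)
        print(f"{label}: ~{swu:.0f} SWU, ~{feed:.0f} kg natural U per kg of product")

On those assumptions it works out to roughly 4 SWU and 8 kg of natural uranium per kilogram of 3.5% reactor fuel, versus roughly 190 SWU and 220 kg per kilogram of 90% material, which is part of why a cascade sized and configured for weapons output is hard to pass off as a modest civilian program.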

If you go the plutonium route, you need a nuclear reactor with online refueling capability so you can control the Pu-239/240 ratio. The vast majority of civilian reactors cannot be refueled online, and the few exceptions (e.g., CANDU) are under very tight surveillance by the IAEA to avoid this exact issue.

The most covert path to weapons-grade nuclear material is probably a small graphite- or heavy-water-moderated reactor running on natural uranium, paired with a small reprocessing plant to extract the plutonium from the fuel. The ultra-pure graphite and heavy water are both surveilled, so you would probably also need to produce those yourself. But we are talking nation-state or megalomaniac-billionaire levels of sophistication here, not "disgruntled guy in his garage." And even then, it's a big enough project that it will be very hard to conceal from intelligence services.

[1] https://en.wikipedia.org/wiki/Nth_Country_Experiment


> The difficult part of building a nuclear weapon is and has always been acquiring weapons grade fissile material.

IIRC the argument in the McPhee book is that you'd steal fissile material rather than make it yourself. The book sketches a few scenarios in which UF6 is stolen off a laxly guarded truck (and recounts an accident in which some ended up in an airport storage room by mistake). If the goal is not a bomb but merely to harm a lot of people, it suggests stealing minuscule quantities of plutonium powder and dispersing it into the ventilation systems of your choice.

The strangest thing about the book is that it assumes a future proliferation of nuclear material as nuclear energy becomes a huge part of the civilian power grid, and extrapolates that the supply chain will be weak somewhere, sometime. But that proliferation never really came to pass, and to my understanding there's less material circulating on American highways now than there was in the early 70s when the book was published.


The other thing is that the vast majority of UF6 in the fuel cycle is low-enriched (reactor grade), so it's not useful for building a nuclear weapon. Access to highly enriched uranium is very tightly controlled.

You can of course disperse radiological materials, but that's a dirty bomb, not a nuclear weapon. Nasty, but with orders of magnitude less destructive potential than a real fission or thermonuclear device.


That same function could be fulfilled by better search engines, though, even if they don't actually write a plan for you. I think you're right that this information is more available now, and perhaps that is a bad thing. But you don't need AI for that, and it would happen sooner or later anyway with just incremental improvements in our ability to find information other humans have written (like a version of Google Books that didn't limit the view to a small preview, to use your specific example of a book where this info already exists).


I think the most realistic fear is not that AI has scary capabilities; it's that AI today is completely unusable without human oversight, and if there's one thing we've learned, it's that when you ask humans to watch something carefully, they will fail. So some nitwit will hook an LLM or whatever up to some system, and it will cause an accidental shitstorm.


Never seen Terminator?

Jokes aside, a true AGI would displace literally every job over time. Once AGI plus robots exist, what is the purpose of people anymore? That's the doom: mass societal existential crisis. Probably worse than if aliens landed on Earth.


You jest, but the US Department of Defense already created SkyNet.

It does almost exactly what the movies claimed it could do.

The super-fun people working in national defense watched Terminator and, instead of taking the story as a cautionary tale, used the movies as a blueprint.

This outcome is bad enough as a microcosm, but given the direction AI is going, humanity has some real bad times ahead.

Even without killer autonomous robots.


Ok, so AI and robots take all the jobs. Why is that bad? It's not like the Civil War was fought to end slavery because people needed jobs. All people really need is some food and clean water. Healthcare etc. is super nice, but I don't see why robots and AI would lead to that stuff becoming LESS accessible.


They essentially extrapolate from what the most intelligent species on this planet did to the others.


It’s not AI itself that’s the bad part; it’s how the world reacts to white-collar work being obliterated.

The wealth hasn’t even trickled down while we’ve been working; what’s going to happen when you can run a business with 24/7 autonomous computers?


I kind of get it. A super-intelligent AI would give the corporation that controls it exponentially more wealth than everyone else. It would make inequality 1000x worse than it is today. Think feudalism, but worse.


Feudalism but without people actually having to work doesn't sound as bad.


Not just any AI. AGI, or more precisely ASI (artificial superintelligence), since it seems true AGI would necessarily imply ASI simply through technological scaling. It shouldn't be hard to come up with scenarios where an AI that can outfox us with ease would give us humans at the very least a few headaches.


Potentially wreck the economy by causing high unemployment while enabling the technofeudalists to take over governments. An even more doomer scenario is if they succeed in creating ASI without proper guardrails and we lose control over it. See the AI 2027 paper for that; basically, it paperclips the world with data centers.


Make money exploiting natural and human resources while abstracting perceived harms away from stakeholders. At scale.


Act coherently in an agentic way for a long time, and as a result be able to carry out more complex tasks.

Even if it is similar to today's tech, and doesn't have permanent memory or consciousness or identity, the humans using it will. And very quickly, they/it will hack into infrastructure, set up businesses, pay people to do things, start cults, autonomously operate weapons, spam all public discourse, fake identity systems, and stand for office through a human proxy. This will happen at a scale thousands or millions of times beyond what humans can manage on their own. At minimum, this will DoS our technical and social infrastructure.

Examples of it already happening are addictive ML feeds for social media and bombing campaigns that choose targets based on network analysis.

The frame of "artificial intelligence" is a bit misleading. Generally we have a narrow view of the word "intelligence"; it is helpful to think of "artificial charisma" as well, and also artificial "hustle".

Likewise, the alienness of these intelligences is important. A lot of the time we default to mentally modelling AI as human. It won't be; it'll be freaky and bizarre, like QAnon. As different from humans as an aeroplane is from a pigeon.


be used to convince people that they should be poor and happy while those leveraging the tools hoard the world's wealth and live like kings.


One of two things:

1. The will of its creator, or

2. Its own will.

In the case of the former, hey! We might get lucky! Perhaps the person who controls the first super-powered AI will be a benign despot. That sure would be nice. Or maybe it will be in the hands of a democracy - I can't ever imagine a scenario where an idiotic autocratic fascist thug would seize control of a democracy by manipulating an under-educated populace with the help of billionaire technocrats.

In the case of the latter, hey! We might get lucky! Perhaps it will have been designed in such a way that its own will is ethically aligned, and it might decide that it will allow humans to continue having luxuries such as self-determination! Wouldn't that be nice.

Of course it's not hard to imagine a NON-lucky outcome of either scenario. THAT is what we worry about.


e.g. design a terrible pathogen


LLMs do not know the evolutionary fitness of pathogens for all possible genomes & environments. LLMs have not replaced experimental biology.


Note that we aren't talking about the risks of LLMs specifically here; they embody what I said in the ancestor comment: the "current technological paradigm".


Take 30 minutes and watch this:

https://www.youtube.com/watch?v=5KVDDfAkRgc



