That logic applies to AI-cynics rather than AI-doomers — the latter are metaphorically the equivalent of warning about CO2-induced warming causing loss of ice caps, consequent sea-level rise, and the loss of tens of trillions of dollars of real estate as coastal cities are destroyed… in 1896*, when it was already possible to predict, but we were a long way from both the danger and the zeitgeist to care.
But only metaphorically the equivalent, as the maximum downside is much worse than that.
> That logic applies for AI-cynics rather than AI-doomers
Fwiw, I don't believe that there are any AI doomers. I've hung out in their forums for several years and watched all their lectures and debates and bookmarked all their arguments with strangers on X and read all their articles and …
They talk of bombing datacentres, of how their children are in extreme danger within a decade, of how in two decades the entire Earth and everything on it will have been consumed for material, or how, best case, in 2000 years the entire observable universe will have been consumed for energy.
The doomers have also been funded to the tune of half a billion dollars and counting.
If these Gen-X'ers and millennials really believed their kids were in extreme peril right now due to the stupidity of everyone else, they'd be using their massive war chest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who was in the group of people coming to murder your child in a few years?
But the true doomer would have to be the ultimate nihilist, and he would simply take himself off the map because there's no point in living.
> or, best case, in 2000 years, the entire observable universe will have been consumed for energy
You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
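A quick sanity check of that scale claim, as a minimal sketch: 2000 years at lightspeed is 2000 light-years of travel, while the Milky Way's thick disk has a scale height of roughly 1 kpc (that figure is my assumption here, a commonly cited value, not something from this thread):

```python
# Rough scale check: 2000 years at the speed of light = 2000 light-years.
LY_PER_KPC = 3261.6                        # light-years per kiloparsec
travel_ly = 2000                           # distance covered in 2000 years at c
thick_disk_height_ly = 1.0 * LY_PER_KPC    # ~1 kpc scale height (assumed figure)

print(f"travelled: {travel_ly} ly")
print(f"thick disk scale height: ~{thick_disk_height_ly:.0f} ly")
print("still inside the thick disk:", travel_ly < thick_disk_height_ly)  # True
```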
> The doomers have also been funded to the tune of half a billion dollars and counting.
> If these Gen-X'ers and millennials really believed their kids were in extreme peril right now due to the stupidity of everyone else, they'd be using their massive war chest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who was in the group of people coming to murder your child in a few years?
The political capital to ban it worldwide and enforce the ban globally with airstrikes (what Yudkowsky talked about was "bombing" in the sense of a B-2, not Ted Kaczynski) is incompatible with direct action of that kind.
And that's even if such direct action worked. They're familiar with the Luddites breaking looms, and look how well that worked at stopping the industrialisation of that field. Or the communist revolutions, which promised a great future and actually took over a few governments, but didn't deliver the promised utopia. Even more recently, I've not heard even one person suggest that the American healthcare system might actually change as a result of that CEO getting shot.
But also, you have a bad sense of scale if you think "half a billion dollars" would be enough for direct attacks. Police forces get to arrest people for relatively little because "you and whose army" has an obvious answer. The 9/11 attacks may have killed a lot of people on the cheap, but most of the victims were physically in the same location; the people building AI are distributed across several countries: USA (obviously), Switzerland (including OpenAI, Google), UK (Google, Apple, I think Stability AI), Canada (Stability AI, from their jobs page), China (including Alibaba and at least 43 others), and who knows where all the remote workers are.
Doing what you hypothesise about would require a huge, global conspiracy, one not only exceeding what al-Qaeda was capable of, but significantly in excess of what's available to either the Russian or Ukrainian governments in their current war.
Also:
> After all, what would you do if you knew precisely who was in the group of people coming to murder your child in a few years?
You presume they know. They don't, and they can't, because some of the people who will soon begin working on AI have not yet even finished their degrees.
If you take Altman's timeline of "thousands of days", plural, then some will not yet have even gotten as far as deciding which degree to study.
I somehow accidentally made you think that I was trying to have a debate about doomers, but I wasn't, which is why I prefixed it with "fwiw" (meaning "for what it's worth": I'm a random on the internet, so my words aren't worth anything, certainly not worth debating at length). Sorry if I misrepresented my position. To be clear, I have no intense intellectual or emotional investment in doomer ideas nor in criticism of doomer ideas.
Anyway,
> You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
Here's what Arthur Breitman wrote[^0], so you can take it up with him, not me:
"
1) [Energy] on planet is more valuable because more immediately accessible.
2) Humans can build AI that can use energy off-planet so, by extension, we are potential consumers of those resources.
3) The total power of all the stars of the observable universe is about 2 × 10^49 W. We consume about 2 × 10^13 W (excluding all biomass solar consumption!). If consumption increases by just 4% a year, there's room for only about 2000 years of growth.
"
About funding:
>> The doomers have also been funded to the tune of half a billion dollars and counting.
> I've never heard such a claim. LessWrong.com has funding more like a few million
"
A young nonprofit [The Future of Life Institute] pushing for strict safety rules on artificial intelligence recently landed more than a half-billion dollars from a single cryptocurrency tycoon — a gift that starkly illuminates the rising financial power of AI-focused organizations.
"
> But only metaphorically the equivalent, as the maximum downside is much worse than that.
Maybe I'm a glass-half-full sort of guy, but everyone dying because we failed to reverse man-made climate change doesn't seem strictly better than everyone dying due to rogue AI.
Everyone dying from a rogue AI would be stupid and embarrassing: we used resources that would've been better spent fighting climate change, and ended up being killed by a hallucinating paperclip maximizer that came from said resources.
Stupid squared: we die because we gave the AI the order to revert climate change xD.
Given the assumption that climate change would kill literally everyone, I would agree.
But also: I think it extremely unlikely for climate change to do that, even if it were extreme enough to lead to socioeconomic collapse and a maximal nuclear war.
Also also, I think there are plenty of "not literally everyone" risks from AI that will prevent us from getting to the "really literally everyone" scenarios.
So I kinda agree with you anyway: the doomers think I'm unreasonably optimistic, and e/acc types think I'm unreasonably pessimistic.
* https://en.m.wikipedia.org/wiki/Svante_Arrhenius