I think that the ideas of AI boosters and other tech maximalists will pretty much always "struggle to land" with normal people. (See also: the ring ad.)
I don't think "normal people", especially run-of-the-mill office workers, like the idea of AI or want it to succeed. Not that that's going to stop Silicon Valley from ramming it down everyone's throats.
Only when the underlying product sucks. "Here's how the Torment Nexus is going to torment you - subscribe now!" is never going to be a popular message, because the thing being sold actively makes the world worse.
People aren't being Luddites or failing to understand innovation. They know perfectly well what is being sold, and they hate it.
Contrast it with the dot-com bubble, where people mainly thought it wasn't for them or that they didn't need it. Look at interviews from back then: the services being advertised are at worst described as "unnecessary", and you would've had very little trouble convincing those people that there would be some market for them.
But with those extreme AI examples? Normal people understand it, and they hate it.
The problem is inflation. We have no way to reliably measure it over any long time-frame. Make the time long enough, and it even stops making sense as a concept.
Using an LLM for a "financial workflow" makes as much sense as integrating one with Excel. But who needs correct results when you're just working with money, right? ¯\_(ツ)_/¯
"Humans make math errors, yet they do math anyway, therefore this calculator that makes errors is also OK."
What do you call the fallacy where the universe is imperfect, therefore nobody can have higher standards for anything?
Mankind has spent literal centuries observing deficiencies and faults in human bookkeeping and calculation, constantly trying to improve them with processes and machinery. There's no good reason to suddenly stop caring about those issues simply because the latest proposal is marketed as "AI".
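To make that concrete, here's a minimal sketch (entries and amounts are made up for illustration) of the kind of deterministic check those processes rely on: a double-entry ledger either balances exactly or it doesn't, with no probabilistic "close enough".

```python
from decimal import Decimal

# Hypothetical ledger entries: (description, debit, credit), kept in exact
# decimal arithmetic the way bookkeeping software does.
entries = [
    ("Invoice #1001", Decimal("1250.00"), Decimal("0.00")),
    ("Payment received", Decimal("0.00"), Decimal("1250.00")),
    ("Office supplies", Decimal("89.95"), Decimal("0.00")),
    ("Petty cash reimbursement", Decimal("0.00"), Decimal("89.95")),
]

total_debits = sum(e[1] for e in entries)
total_credits = sum(e[2] for e in entries)

# The double-entry check: the books balance exactly or they don't.
# There is no confidence score and no plausible-looking wrong answer.
assert total_debits == total_credits, (
    f"Books out of balance: debits {total_debits} vs credits {total_credits}"
)
print(f"Balanced: {total_debits} = {total_credits}")
```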
I think stochastic modeling can be useful, but if that's not what they're aiming for, then they're misunderstanding the technical limitations and would be better served by learning how their tools actually work instead of believing and trusting the corporate marketing from AI companies.
> An October 2024 report by the United Nations Office on Drugs and Crime described the use of Starlink in fraud operations. About 80 “Starlink satellite dishes linked to cyber-enabled fraud operations” were seized between April and June 2024 in Myanmar and Thailand, the report said. Starlink is prohibited in both countries.
They knew about it over a year ago.
From a Wired article ("Elon Musk’s Starlink Is Keeping Modern Slavery Compounds Online"):
> Starlink connections appeared to be helping criminals at Tai Chang to “scam Americans” and “fuel their internet needs,” West alleged at the end of July 2024. She offered to share more information to help the company in “disrupting the work of bad actors.”
> SpaceX and Starlink never replied, West claims.