My thought as well, but the question is: does it matter for what the survey is trying to achieve?
Some people will interpret it one way, some a subtly different way, but is there a reason to think people's interpretations change over time faster and more significantly than the underlying thing the question measures, namely how good their life broadly is? Probably not.
There may be cultural differences that make it tricky to do comparisons between cultures / countries, but it should give something useful when looking at the same culture / country over time.
In particular, the bulk of the order's substantive text has a pretty clear culture-war bent, with all the talk about how truthful AI is. This is in large part a fight over the political leaning of AI models.
That's the whole point. They aren't law, and they were (probably) never meant to be so far-reaching, and yet the clear purpose of this Executive Order is to tell the states what laws they can enact. The EO doesn't have the legal power to do that directly, but it clearly outlines the intention to withdraw federal funding from states that refuse to toe the line.
That's a fair and good interjection. The truth is probably that at society scale, both approaches are traditional.
The open sharing approach is traditional for research and academia, while the information restricting approach is traditional for business-oriented thinking.
So, a young field will typically start out fairly open and then get increasingly closed down. The long-term trajectory differs by field, and the modern open-source landscape shows that there can be a fair bit of oscillation.
We're seeing the same basic shape of story play out in generative AI.
It is quite interesting to ponder these usage statistics, isn't it?
According to their charts they're at a throughput of something like 7T tokens/week total now. At $1/Mtok, that's $7M per week, or less than half a billion dollars per year. How much is that compared to the total inference market? Then again, their throughput grew something like 20x in one year, so who knows what's to come...
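Spelled out, a quick sanity check on the arithmetic (the flat $1/Mtok is just an assumed round number, not anyone's actual pricing):

```python
# Back-of-the-envelope annualization of the throughput figure.
# $1/Mtok is an assumed round number, not actual OpenRouter pricing.
tokens_per_week = 7e12                    # ~7T tokens/week from the charts
revenue_per_week = tokens_per_week / 1e6  # Mtok/week * $1/Mtok
print(f"${revenue_per_week:,.0f}/week")       # $7,000,000/week
print(f"${revenue_per_week * 52:,.0f}/year")  # $364,000,000/year
```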
Yes, but that token growth chart looks linear to me. There's the usual summer slump and then growth catches up once the autumn begins, but if you plot a line from the winter growth period at the start of 2025 you end up roughly in the right place except for an unusual spike in the most recent month (maybe another big user).
I'd have liked to see a chart of all tokens broken down by category rather than just percentages, but what this data seems to be saying is that growth isn't exponential and is dominated by programming. A lot of the spending on AI is driven by the assumption that it'll be used for everything, everywhere. Perhaps it's just OpenRouter's user base, but if this data is representative, it implies AI adoption isn't growing all that fast outside the tech industry (especially as "science" is nearly all AI-related discussion).
This feels intuitively likely. I haven't seen many obvious signs of AI adoption around me once I leave the office. Microsoft has been struggling to sell its Copilot offerings to ordinary MS Office users, who apparently aren't that keen. The big wins are going to be existing apps and data pipelines calling out to AI, and it'll just take time to figure out what those use cases are and integrate them. Integrating even present-day AI into the long tail of non-tech industries is probably going to take decades.
Also odd: no category for students cheating on homework? I notice that "editing services" is a big chunk of the "academia" category. Probably most of that traffic goes direct to chatgpt.com and bypasses OpenRouter entirely.
If you'll look at the Guidelines for HN linked at the bottom of the page, you'll note that whether a submission is productive is not a criterion.
You could perhaps make an argument that among the flood of AI-related submissions, this one doesn't particularly move the needle on intellectual curiosity. Although satire is generally a good way to allow for some reflection on a serious topic, and I don't recall seeing AI-related satire here in a while.
Others have given some answer to who was made poorer by Ballmer holding Microsoft shares, but I'd argue that this is the wrong question. Instead of looking at a specific individual, we should look at systems.
A system that allows this kind of extreme wealth accumulation is quite fundamentally at odds with democracy because extreme wealth can be and is in practice used to influence politics in a way that undermines democracy.
Some people might not care about that, but if your goal is improving the outcomes of the largest number of people, then pretty much everything else is secondary to having a functioning democracy.
This is genuinely the only response I ever get - that wealth can be used to influence politics. In my view this is a poor argument for two reasons.
1. How much wealth actually influences politics is hard to measure but likely much lower than most people assume. Trump was significantly outspent both times he won. Bloomberg dropped $1B in a couple of months and won nowhere but American Samoa. Probably the two biggest bogeymen, Koch and Soros, have spent billions over the years on their causes, and the present administration and the general Overton window are something neither of them likes! The unelected kingmakers in American and EU politics are not actually wealthy people at all, just those with a lot of accumulated political capital; for instance, Jim Clyburn, who single-handedly gave the 2020 nomination to Biden.
2. The amount it takes to finance initiatives is far below the centibillionaire level. Is the original $20B OP mentioned not sufficient to finance some ballot initiative? Why is it the increase to $130B that causes concern? The truth is that even a wealthy non-billionaire can easily do that, or bankroll someone's run for Congress, or fund a partisan think tank. The wealth cap you'd have to set to prevent this kind of influence would be problematically low.
I just find it hard to take the 3x claims at face value because actual code generation is only a small part of my job, and so Amdahl's law currently limits any productivity increase from agentic AI to well below 2x for me (see the rough sketch below).
(And I believe I'm fairly typical for my team. While there are more junior folks, it's not that I'm just stuck with PowerPoint or something all day. Writing code is rarely the bottleneck.)
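To make the Amdahl's law point concrete, here's a minimal sketch; the fractions are made-up assumptions for illustration, not measurements:

```python
# Amdahl's law: if only a fraction p of the work is sped up by factor s,
# the overall speedup is 1 / ((1 - p) + p / s).

def overall_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the job gets s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Assume code generation is 30% of the job and AI makes it 3x faster:
print(overall_speedup(0.30, 3.0))           # ~1.25x overall
# Even with infinitely fast code generation, the ceiling is 1/(1-p):
print(overall_speedup(0.30, float("inf")))  # ~1.43x overall
```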
So... either their job really is just churning out code (where do these jobs exist, and do any of them still care about quality?), or, the most generous explanation I can think of, people are really, really bad at self-evaluating their productivity.
Good on you for having the meta-cognition to recognize it.
I graded many exams in my university days (and set some myself), and it was exceedingly obvious that that's what many students were doing. I do wonder, though, how often they manage to fly under the radar. I'm sure it happens, as you described.
(This is also the reason why I strongly believe that in exams where students write free-form answers, points should be subtracted for incorrect statements even if a correct solution is somewhere in the word salad.)