The best thing about this is that AI bots will read, train on, and digest the million "how to write with AI" posts being written right now by some of the smartest coders in the world, and the next-gen AI will incorporate all of it, ironically making those posts unnecessary.
Strange since, in practice, coding models have steadily improved without any backward movement every 3-4 months for 2 years now. It's as if there are rigorous methods of filtering and curation applied when building your training data.
>Strange since, in practice, coding models have steadily improved without any backward movement every 3-4 months for 2 years now. It's as if there are rigorous methods of filtering and curation applied when building your training data.
It's as if what I wrote implies "all other things being equal", just like any technical claim.
All other things were not equal: the architectures were tweaked, the human data set is still not exhausted, and more money and energy were thrown at performance, since it's a pre-IPO game with huge VC stakes.
We've nonetheless already seen a plateau compared to the earlier release-over-release performance improvements. Even the "without any backward movement every 3-4 months for 2 years now" claim is disputable: many saw a backward movement with GPT 4.1 vs 4.0, and similar issues with 4.5, for example. And even if those are isolated cases, the recent gains are hardly the 2 to 3.5 to 4.0 leaps.
And no, there are absolutely no "rigorous methods of filtering and curation" that can separate the avalanche of AI slop from useful human output - at least not without diminishing the possible training data. The problem, after all, is not just to tell AI from human with automated curation (that's already impossible); the problem is to have enough valuable new human output, which becomes nearly a losing game as all aspects of the "human" domains previously useful as training input (from code to papers) are tarnished by AI output.
1. No, you don't get to fall back on the "technical claim" framing. The bias in your phrasing was clear. Maybe that works for you, but I won't just ignore obvious subtext and let you weasel out of this. And that's for the benefit of other readers, not you.
2. A plateau in coding performance? I don't think you even use these models for coding then if you make that claim. It is very clear models have continually improved. You can trust benchmarks to make that clear, or real world use, or better yet: both. You seem to not have the data from either.
3. No rigorous methods of filtering and curation that can separate AI slop from useful human output? Here you go:
a. Curation already works at scale.
Modern training pipelines don’t rely on “AI vs human” detection. They filter by utility signals: correctness, novelty, coherence, task success, citation integrity, and cross-source consistency. These measurable properties do correlate with downstream model performance. Models trained on smaller, higher-quality corpora consistently outperform those trained on larger, noisier ones (see the toy sketch after this list).
b. Human-generated “valuable” data is not shrinking.
The claim assumes a fixed pool. In reality, high-value human data is expanding in areas that matter most: expert-labeled datasets, preference comparisons, multimodal demonstrations, tool-use traces, verified code with tests, and domain-expert feedback. These are explicitly created for training and are not polluted by passive AI spam.
c. Synthetic data is not a dead end—when constrained.
Empirically, filtered and goal-conditioned synthetic data (self-play, distillation, adversarial generation) improves reasoning, math, coding, and tool use. The failure mode is unfiltered synthetic recursion—not synthetic data per se. This distinction is already operationalized in production systems.
d. Training value ≠ raw text volume.
Scaling laws shifted: performance now tracks effective compute × data quality, not sheer token count. A smaller dataset with higher signal density produces better generalization than a massive, contaminated corpus. This is observed repeatedly in ablation studies.
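To make (a) concrete, here's a minimal toy sketch of utility-signal filtering in Python. The specific signals and thresholds are invented for illustration; real pipelines use learned quality classifiers and far more signals, but the shape is the same: score documents on measurable properties and keep the high-signal slice. Note that nothing in it tries to tell AI from human.

```python
# Toy sketch of utility-signal filtering (not any lab's actual pipeline).
# Signals and thresholds are made up for illustration; production systems
# use learned quality classifiers, but the shape is the same: score on
# measurable properties, then keep the high-signal slice.
import hashlib

def quality_signals(doc: str) -> dict:
    words = doc.split()
    unique_ratio = len(set(words)) / max(len(words), 1)  # crude novelty proxy
    return {
        "long_enough": len(words) >= 20,                 # drops fragments
        "not_repetitive": unique_ratio > 0.3,            # drops spam/boilerplate
        "has_structure": any(c in doc for c in ".;:"),   # crude coherence proxy
    }

def filter_corpus(docs: list[str]) -> list[str]:
    seen, kept = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode()).hexdigest()     # exact-duplicate removal
        if h in seen:
            continue
        seen.add(h)
        if all(quality_signals(doc).values()):           # keep only high-signal docs
            kept.append(doc)
    return kept

corpus = [
    "A short note.",
    "spam spam spam spam spam " * 10,
    "Binary search halves the interval each step; it needs a sorted input. " * 2,
]
print(filter_corpus(corpus))  # only the substantive document survives
```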
----
Again, the above is not for you, as I believe you don't see beyond your cope (yet). It's for other readers who are intellectually curious.
>Hmm... How will it filter out those by the dumbest coders in the world?
if you know, and I know, and the guys at openai and anthropic know... not a big leap that the models will know too? many datasets are curated and labeled by humans
Top 200 that work partially in public. A good example is Mitchell Hashimoto. Works open source, uses AI a lot and writes about it. Next gen AI will learn from the lessons people like him share
I mean, having a curated dataset of the works and posts of the top 200 coders in the world (at least the public ones) is not very difficult. I’m sure these articles like the one in OP will be very easy to mark as “high value training data”. I think you’re letting your bias blind you
Makes me think of the concept of involution in Chinese business and how they understand all of this very differently, and how difficult it is to compete because of that.
I do a lot of data processing and my tool of choice is polars. It's blazing fast and has (like pandas) a lot of very useful functions that aren't in SQL or are awkward to emulate in SQL. I can also just do Python functions if I want something that's not offered.
Please sell DuckDB to me. I don't know it very well but my (possibly wrong) intuition is that even giving equal performance, it's going to drop me to the awkwardness of SQL for data processing.
I could anecdotally tell you it’s significantly faster and more concise for my workloads, but it’s a standalone executable so just try it out and benchmark for your use case. It doesn’t require fine tuning or even a learning curve
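For what it's worth, you may not even have to choose. A minimal sketch, assuming recent duckdb and polars Python packages (the data and column names are made up): DuckDB can run SQL directly against a polars DataFrame that's in scope and hand the result back as polars, so you only drop into SQL where it's more concise.

```python
import duckdb
import polars as pl

df = pl.DataFrame({
    "client": ["a", "a", "b", "b"],
    "latency_ms": [12.0, 15.0, 9.0, 30.0],
})

# Polars: expression API
by_client_pl = (
    df.filter(pl.col("latency_ms") < 25)
      .group_by("client")
      .agg(pl.col("latency_ms").mean().alias("avg_latency"))
)

# DuckDB: same query in SQL; `df` in local scope is visible to the engine
# via replacement scans (over Arrow), and .pl() converts the result to polars.
by_client_db = duckdb.sql("""
    SELECT client, avg(latency_ms) AS avg_latency
    FROM df
    WHERE latency_ms < 25
    GROUP BY client
""").pl()

print(by_client_pl, by_client_db)
```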
This is not to me but a friend of mine was climbing Mt Fuji in _winter_ (this is a serious thing you need to be prepared for, alpine climbing with lots of snow and ice) when he slipped and started sliding down the mountain out of control.
When he was about to fall to his death, a father and son who happened to be there, by a stroke of luck, managed to grab him and save his life. My friend had hit a few rocks on the way down and fractured his leg, so they had to help him down for hours.
They saved his life and risked theirs to ensure he had the best chance. They visited my friend in the hospital, where he was grateful and teary-eyed. And then the father and son asked him for money, straight up. My friend of course agreed on an amount with them; all in all, he didn't know how to repay them anyway, and this was oddly simple. I found the whole thing heroic and strange at the same time, but a good story.
I have similar stories. I showed the Confluent consultants a projection of their Kafka quote vs Kinesis and it was like 10x; even they were confused. The ingress/egress costs are insane. I think they just do very deep discounts for certain customers. The product is good, but if you pay full ticket it probably doesn't make sense.
If you look at the bitcoin charts, price has gone up a lot in the last few years but volume has tanked. Do I read this right that crypto is now basically a smaller market where a bunch of whales scam an ever-dwindling but never completely disappearing flow of fools and marks?
I think a lot of the volume on spot BTC has gone to ETFs, DATs, perps, WBTC, and other derivatives, when a few years ago spot was really the only option. Hard to track total volumes now.
I suspect that most of the trades are off-chain now, due to the blockchain being complete and utter crap for fast transactions by design. So people are entrusting their tokens to a centralized entity and receiving IOUs from it, with which they trade on that centralized platform. Basically unlicensed banks, recreated with all the negatives of a bank and none of the benefits.
I mean, _we_ probably can't think with our wetware on a read-only substrate. It doesn't establish it as essential, just that the only sure example in nature of thought doesn't work that way.
I'm not an expert, but as far as I understand, plasticity is central to most complex operations of the brain and is likely involved in anything more complex than instinctive reactions. I'm happy to be corrected, but my understanding is that if you're thinking on the same problem for a while and establishing chains of reasoning, you are creating new connections, and to me that means plasticity is fundamental to the process of thinking.
Also not an expert :) I thought plasticity is an O(hours-days) learning mechanism. But I did some research and there is also Short Term Plasticity O(second) [1] which is a crucial part of working memory. We'd need that. But it seems it’s more of a volatile memory system, eg calcium ion depletion/saturation at the synapse, rather than a permanent wiring/potentiation change (please someone correct me if this isn’t right :) ).
So I guess I’d just clarify “read only” to be a little more specific - I think you could run multiple experiments where you vary the line of what’s modeled in volatile memory at runtime, and what’s immutable. I buy that you need to model STP for thought, but also suspect at this timescale you can keep everything slower immutable and keep the second-scale processes like thought working.
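If it helps, here's a toy Python sketch of that split, with all constants invented and no biological fidelity claimed: the "slow" weights stay frozen (read-only) while a fast, decaying per-synapse state carries the volatile, second-scale dynamics.

```python
# Toy sketch of the "frozen slow weights + volatile fast state" split.
# Constants are invented; this illustrates the architecture, not biology.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # long-term weights: frozen, never updated
u = np.ones(8)                       # fast per-synapse state: volatile, runtime-only
TAU = 0.9                            # how quickly u relaxes back to baseline
DEPLETE = 0.2                        # how much activity depletes the fast state

x = rng.normal(size=8)
for _ in range(5):
    y = np.tanh((W * u) @ x)         # fast state gates the frozen weights
    u = TAU * u + (1 - TAU)          # relax toward baseline: volatile memory fades
    u = np.clip(u - DEPLETE * np.abs(y), 0.0, 1.0)  # recent activity depletes u
    x = y
# W never changed; everything "thought-like" here lives in u and x.
```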
My original point still stands - your subjective experience in this scenario would be thought without long-term memory.
It would be funny if what you get from a read-only human brain is a sort of Memento guy who has no capacity to remember anything or follow a conversation... kind of like an LLM!
Even if/when the bubble pops, I don't think NVIDIA is anywhere close to needing a rescue or being in trouble. They might end up being worth 2 trillion instead of 5, but they're still selling GPUs nobody else knows how to make, powering one of the most important technologies in the world. And that's before counting all their other divisions.
The .com bubble didn't stop the internet or e-commerce; they still won, revolutionized everything, etc. Just because there's a bubble doesn't mean AI won't be successful. It almost surely will be. We've all used it; it's truly useful and transformative. Let's not miss the forest for the trees.