My take: Any gains from an "LLM-oriented language" will be swamped by the massive training set advantage held by existing mainstream languages. In order to compete, you would need to very rapidly build up a massive corpus of code examples in your new language, and the only way to do that is with... LLMs. Maybe it's feasible, but I suspect that it simply won't be worth the effort; existing languages are already good enough for LLMs to recursively self-improve.
Blackpill is that, for this reason, the mainstream languages we have today will be the final (human-designed) languages to be relevant on a global scale.
Eventually AIs will create their own languages. And humans will, of course, continue designing hobbyist languages for fun. But in terms of influence, there will not be another human language that takes the programming world by storm. There simply is not enough time left.
My impression is that AI models need large amounts of quality training data. "Data contamination", i.e. AI output in the training data set, has been a problem for years.
The skill isn’t being right. It’s entering discussions to align on the problem.
Clarity isn’t a style preference - it’s operational risk reduction.
The punchline isn’t “never innovate.” It’s “innovate only where you’re uniquely paid to innovate.”
This isn’t strictly about self-promotion. It’s about making the value chain legible to everyone.
The problem isn’t that engineers can’t write code or use AI to do so. It’s that we’re so good at writing it that we forget to ask whether we should.
This isn’t passive acceptance but it is strategic focus.
This isn’t just about being generous with knowledge. It’s a selfish learning hack.
Insist on interpreting trends, not worshiping thresholds. The goal is insight, not surveillance.
Senior engineers who say “I don’t know” aren’t showing weakness - they’re creating permission.
There are some really solid insights here, but the editing with AI to try to make up for an imperfect essay just makes the points they’re trying to convey less effective.
The blurred line between what are the author’s own ideas and what is AI trying to finish a half-baked (or even mostly baked) idea just removes so much of the credibility.
And it’s completely counter to the “clarity vs cleverness” idea, and to the advice to just get something out there instead of trying to get it perfect.
Thank you for doing this. It allowed me to skip reading the article altogether, immediately knowing it is AI-generated slop. Usually I'm a little ways into it before my LLM detector starts going off, but these "This isn't X. It's Y." phrases are such a dead giveaway.
This is conflating two things: The stuck, and the suck.
As the author says, the time you spend stuck is the time you're actually thinking. The friction is where the work happens.
But being stuck doesn't have to suck. It does suck, most of the time, for most people; but most people have also experienced flow, where you are still thinking hard, but in a way that does not suck.
Current psychotechnology for reducing or removing the suck is very limited. The best you can do is like... meditate a lot. Or take stimulants, maybe. I am optimistic that within the next few decades we will develop much more sophisticated means of un-suckifying these experiences, so that we can dispense with cope like "it's supposed to be unpleasant" once and for all.
You certainly do not need to play music at the speed the performer intended! There are whole genres (and subgenres) based on this. :) Personally, I have found that slowing a familiar piece down by ~5% tricks my brain into perceiving it as novel again, which helps me attend to it more closely and appreciate it more.
Conversely, I use this "block pattern" a lot, and sometimes it causes lifetime issues:
let foo: &[SomeType] = {
    let mut foo = vec![];
    // ... initialize foo ...
    &foo
};
This doesn't work: the memory is owned by the Vec, whose lifetime is tied to the block, so the slice is invalid outside of that block. To be fair, it's probably best to just make foo a Vec, and turn it into a slice where needed.
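For illustration, a minimal sketch of that Vec-based approach (with i32 standing in for SomeType, and sum as a made-up consumer that wants a slice):

// Keep ownership in a Vec, and borrow a slice only where one is needed.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let mut foo: Vec<i32> = Vec::new();
    foo.extend([1, 2, 3]); // ... initialize foo ...
    // &Vec<i32> coerces to &[i32] at the call site, so the Vec
    // keeps owning the data for as long as it's needed.
    let total = sum(&foo);
    assert_eq!(total, 6);
}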
Unless I'm misunderstanding, you'd have the same lifetime issue if you tried to move the block into a function, though. I think the parent comment's point is that it causes fewer issues than abstracting to a separate function, not necessarily compared to inlining everything.
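Something like this sketch (again with i32 standing in for SomeType) fails to borrow-check for the same reason; it intentionally does not compile:

// Moving the block into a function hits the same wall: the Vec is
// dropped when the function returns, so the borrow checker rejects
// the returned slice, which would otherwise dangle.
fn build_foo<'a>() -> &'a [i32] {
    let foo = vec![1, 2, 3];
    &foo // ERROR: cannot return reference to the local variable `foo`
}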
Years ago there was a YouTuber, "Surveillance Camera Man," who went around pointing a camera at people with no pretense. Frequently the subjects were upset by this and became aggressive, even violent. I believe the intended message was that this is a natural and justified reaction to being surveilled, and yet there is little outcry because public surveillance is largely invisible and/or faceless (e.g. just a CCTV camera mounted on a building, rather than a stranger invading your personal space).
My take on that is that they're different situations: a CCTV camera has thousands of hours of footage to scrub through and will likely only be looked at if and when something bad happens, whereas the guy pointing a camera at me probably only has a couple of hours, which means I'm likely relevant to the cameraman (i.e., I'll end up in that final video) in a way I'm not relevant to the CCTV.
I know more recent cameras are using AI analysis to constantly track and catalog people, which is more worrying, but old-school surveillance cameras don't bother me as much.
I like the OP's idea for an art project more because it shows you what is really happening (rather than trying to convince people that filming someone on a 4K camera is the same as CCTV surveillance): CCTV cameras are constantly monitoring, and many can be publicly accessed.
I don't even think that is the best defense, because it amounts to very passive acceptance. On the flip side, if someone steals my bike or assaults me in public, I'd like there to be some accountability, which would otherwise never happen (and vice versa). In the past, if a white lady accused a black man of some crime, it was practically impossible to fight. With CCTV, you can prove innocence or guilt a lot more conclusively.
"Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say." -Edward Snowden
This year I've been working on a bytecode compiler for it, which has been a nice challenge. :)
When I want to get on the leaderboard, though, I use Go. I definitely felt a bit handicapped by the extra typing and lack of 'import solution' (compared to Python), but with an ever-growing 'utils' package and Go's fast compile times, you can still be competitive. I am very proud of my 1st place finish on Day 19 2022, and I credit it to Go's execution speed, which made my brute-force-with-heuristics approach just fast enough to be viable.
yep, https://github.com/lukechampine/slouch. Fair warning, it's some of the messiest code I've ever written (or at least, posted online). Hoping to clean it up a bit once the bytecode stuff is production-ready.
There was that Friends episode where Joey got halfway through Little Women thinking "Jo" was a guy and he had to start over. Blood Meridian was kinda like that: I realized I had lost track of what was happening in a chapter, who was killing whom and why, and had to backtrack. So yeah, the book makes you feel like Joey on Friends.
Maybe this is sacrilege, but I find audiobooks help with this because the narrator just keeps going with little effort on my part. Even if I miss things it’s okay, and getting through it helps me get into it.
My friend who recommended it to me told me the best way to consume it was via audiobook even though he prefers to read. He said it's the one book he prefers that way.