Yeah, many programming languages have been advertised as fulfilling precisely this goal: that people can program computers via natural language instead of having to think too hard about too many details.
Usually programming languages aim to make editing as easy as possible, but also to make it easy to understand what the program does and to reason about performance, with different languages putting different emphasis on these aspects.
It's the induced demand or river length/flow/sediment kind of situation. Doesn't matter what level of abstraction the language provides, we always write the code that reaches the threshold of our own mental capacity to reason about it.
Smart people know how to cap this metric in a sweet spot somewhere below the threshold.
If you are an ISP running dual-stack IPv4 with NAT plus IPv6, the more connections and traffic go via IPv6, the better, because that traffic doesn't have to pass through the NAT infrastructure, which is more expensive. Its cost scales with traffic (each packet needs its header rewritten) and with the number of parallel open connections (each public v4 address only gives you 65k port numbers, and each mapping has to be stored in RAM and databases).
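To make the port/state cost a bit more concrete, here is a back-of-envelope sketch in Go; every number in it is an assumption for illustration, not data from the comment above.

    // Back-of-envelope CGNAT sizing with made-up numbers.
    package main

    import "fmt"

    func main() {
        const (
            subscribers        = 100_000 // assumed subscribers behind the CGNAT
            connsPerSubscriber = 300     // assumed average parallel connections
            portsPerIPv4       = 65_535  // ports available per public IPv4 address
            portsPerSubscriber = 1_024   // an assumed static port-block allocation
        )

        // Static port blocks: each subscriber gets a fixed slice of one address.
        subsPerAddr := portsPerIPv4 / portsPerSubscriber
        addrsNeeded := (subscribers + subsPerAddr - 1) / subsPerAddr
        fmt.Printf("static blocks: %d subscribers per address, %d public addresses needed\n",
            subsPerAddr, addrsNeeded)

        // Dynamic mapping: state scales with total concurrent connections,
        // on top of rewriting the header of every single packet.
        totalConns := subscribers * connsPerSubscriber
        fmt.Printf("dynamic NAT: roughly %d concurrent mappings to keep in RAM\n", totalConns)
    }

Every flow you can push onto IPv6 simply disappears from both of those numbers.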
7621 devices include hardware NAT. And anything Qualcomm in the recent past does. Most home WiFi 5 and above routers can do hardware NAT just fine. Hardware NAT allows using cheap and old CPUs for CPE. ISP hardware is a different story. There are some decent routers that can do that and don't cost a lot.
> Not really, this is only true for mobile devices.
Tell that to my fixed-line provider, with their CGNAT ... And it's just about every provider in Germany pulling that crap. Oh, and a dynamic IPv6 prefix too, because we can't have you running any servers!
Yes, there are plenty of ways to bypass it, but when you have ISPs still stuck in a 1990s attitude, with dynamic IPv4/IPv6, limited upload (1/3 to 1/5 of your download), etc ...
I have mostly been writing Rust for the last 10 years, but recently (the past year) I have been writing Go as well as Rust.
The typical Go story is to use a bunch of auto-generation, so a small change quickly blows up as all of the auto-generated code is checked into git. Like easily a 20x blowup.
Rust on the other hand probably does much more of this kind of code generation (build.rs for stuff like bindgen, macros for stuff like serde, and monomorphized generics for basically everything). But none of this code is checked into git (with the exception of some build.rs tools which can also be configured to run as standalone commands), or at least 99% of the time it's not.
This difference has an impact on the developer story. In Go land, you need to manually invoke the generator, and it's easy to forget until CI reminds you. The generator is usually quite slow and probably has much less caching smarts than the Rust people have figured out.
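To make that concrete, here is a minimal sketch of the Go pattern, using stringer as a stand-in for whatever generator a project actually uses (the package and type names are made up):

    // color.go: the go:generate line below is just a marker; nothing runs at
    // build time. Someone has to run `go generate ./...` by hand, and the
    // resulting color_string.go gets committed next to this file.
    package paint

    //go:generate stringer -type=Color

    // Color is a toy enum whose String() method comes from the generator.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

One common (but by no means universal) guard is a CI step that re-runs the generators and then fails on `git diff --exit-code`, which is exactly the "CI reminds you" moment.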
In Rust land, the code generation in the worst case runs on every build, and in the best case the various cache layers (cargo level, rustc level) take care of it. But still, everyone who does a git pull has to re-run it, while with checked-in generated code, in theory only the folks whose changes actually affect the generated output have to run the generator; everyone else gets the result via git pull.
So in Go, your IDE is ready to go immediately after git pull and doesn't have to compile a tree of hundreds of dependencies. Go IDEs and compilers are so fast, it's almost like cheating from a Rust POV. Rust IDEs are nowhere near as fast even if everything is cached, and in the worst case you have to wait a long, long time.
On the other hand, these code generation tools in Go are only somewhat standardized: you don't have a central tool that takes care of everything (or at least I'm not aware of one). In Rust land, cargo creates some level of standardization.
You can always look at the auto-generated Go code and understand it, while Rust's auto-generated code usually is not IDE-inspectable and needs special tools to access (except for the build.rs-generated stuff, which usually lands inside the target directory).
I wonder how a language that is designed from scratch would approach auto generation.
> On the other hand, these code generation tools in Go are only somewhat standardized: you don't have a central tool that takes care of everything (or at least I'm not aware of one).
Yeah, this is a hard problem, and you're right that both have upsides and downsides. Metaprogramming isn't easy!
I know I don't want to have macros if I can avoid them, but I also don't foresee making code generation à la Go a first-class thing. I'll figure it out.
> The typical Go story is to use a bunch of auto-generation, so a small change quickly blows up as all of the auto-generated code is checked into git. Like easily a 20x blowup.
Why do you think the typical Go story is to use a bunch of auto generation? This does not match my experience with the language at all. Most Go projects I've worked on, or looked at, have used little or no code generation.
I'm sure there are projects out there with a "bunch" of it, but I don't think they are "typical".
Same here. I've worked on one project that used code generation to implement a DSL, but that would have been the same in any implementation language; it was basically transpiling. And protobufs, of course, but again, that's true in all languages.
The only thing I can think of that Go uses a lot of generation for that other languages have other solutions for is mocks. But in many languages the solution is "write the mocks by hand", so that's hardly fair.
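For illustration, a hand-written Go mock is often small enough that generating it is mostly a convenience; the Mailer interface and names below are made up:

    package notify

    // Mailer is a made-up interface a service might depend on.
    type Mailer interface {
        Send(to, body string) error
    }

    // MockMailer is a hand-written test double: it records calls so a test
    // can assert on them, and returns a configurable error.
    type MockMailer struct {
        Sent []string
        Err  error
    }

    func (m *MockMailer) Send(to, body string) error {
        m.Sent = append(m.Sent, to)
        return m.Err
    }

Tools like mockgen or mockery just churn out many structs of this shape, which is where most of the checked-in generated files come from in codebases that use them.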
Me neither. My Go code doesn't have any auto-generation. IMO it should be used sparingly, in cases where you need a practically different language for expressivity and correctness, such as a parser generator.
Anything and everything related to Kubernetes in Go uses code generation. It is overwhelmingly "typical" to the point of extreme eye-rolling when you need to issue "make generate" three dozen times a day for any medium sized PR that deals with k8s types.
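For anyone who hasn't touched that ecosystem: the pattern hangs code generators off marker comments, roughly like the made-up, kubebuilder-style type below, and the generated files (e.g. zz_generated.deepcopy.go) are committed to the repo. Exact markers and Makefile targets vary by project.

    // widget_types.go: a hypothetical custom resource, sketched for illustration.
    // The marker comments drive generators such as controller-gen / deepcopy-gen;
    // running something like `make generate` writes the deepcopy code, which is
    // then checked in.
    package v1alpha1

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // +kubebuilder:object:root=true

    // Widget is a made-up custom resource type.
    type Widget struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec WidgetSpec `json:"spec,omitempty"`
    }

    // WidgetSpec holds the desired state of a Widget.
    type WidgetSpec struct {
        Replicas int32 `json:"replicas"`
    }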
The "just generate go code automatically then check it in" is a massive miswart from the language, and makes perfect sense because that pathological pattern is central to how google3 works.
A ton of google3 is generated, like output from JavaScript compilers, protobuf serialization/deserialization code, Python/C++ wrappers, etc.
So it's an established Google standard, which has tons of help from their CI/CD systems.
For everyone else, keeping checked-in auto-generated code is a continuous toil and maintenance burden. The Google go developers don't see it that way of course, because they are biased due to their google3 experience. Ditto monorepos. Ditto centralized package authorities for even private modules (my least fave feature of Go).
> For everyone else, keeping checked-in auto-generated code is a continuous toil and maintenance burden. The Google go developers don't see it that way of course, because they are biased due to their google3 experience.
The golang/go repo itself has various checked-in generated files.
When Go was launched, it was said it was built specifically for building network services. More often than not that means using protobuf, and as such protobuf-generated code ends up being a significant part of your application. You'd have that problem in any language, theoretically, due to the design of protobuf's ecosystem.
Difference is that other languages are built for things other than network services, so protobuf is much less likely to be a necessary dependency for their codebases.
What I've found over the years is that protobuf is actually not that widespread, and, given that it generates structs that are terrible for Go's GC, with pointers for every field (if you ignore the gogoprotobuf package), it hasn't been terribly popular in the Go community either, despite both originating at Google.
Machine translation has certainly become better, and that's amazing and wonderful to see. Definitely one of the great things to come out of the AI boom.
However, it has led many websites (like Reddit) to enable it automatically, and one has to find a way to opt out on each website if one already speaks the language. Colloquial language that uses lots of idioms especially still gets translated quite weirdly.
It's a bit sad that websites can't rely on the languages the browser advertises, as every browser basically advertises English, so they often auto-translate from English anyway if they detect a non-English IP address.
The problem is that websites don't respect the browser language but translate based on IP. Which is stupid for people who want to read the original English content in English and not their native language.
Early in my career I spent a lot of time thinking that HTML was antiquated. "Obviously they had 20th century ideas on what websites would be. As if we're all just publishing documents." But the beauty of HTML eventually clicked for me: it's describing the semantics of a structured piece of data, which means you can render a perfectly valid view of it however you want if you've got the right renderer!
I imagine language choice to be the same idea: they're just different views of the same data. Yes, there's a canonical language which, in many cases, contains information that gets lost when translated (see: opinions on certain books really needing to be read in their original language).
I think Chrome got it right at one point where it would say "This looks like it's in French. Want to translate it? Want me to always do this?" (Though I expect Chrome to eventually get it wrong as they keep over-fitting their ad engagement KPIs)
This is all a coffee morning way of saying: I believe that the browser must own the rendering choices. Don't reimplement pieces of the browser in your website!
The parent comment is essentially correct that translations of the same material into different languages represent different views of the same data. A human translator must put in quite a bit of effort establishing what underlying situation is being described by a stretch of language.
Machine translations don't do this; they attempt to map one piece of language to another piece of (a different) language directly.
Relatedly, I tend to think of translations as somewhat similar to a lossy system like those used in (say) image compression.
i.e. a compressed JPEG of an image can retain quite a lot of the detail of the original, but it can introduce its own artifacts and lose some of the details.
For things where the overall shape and picture is all that's required, that's fine. For things where the fine details matter, it's less fine.
Not sure that every browser advertises English, but mine certainly does. However, as I'm in Portugal, many websites ignore what my browser says and send me to translated versions, I assume based on my IP. That causes problems because the translations are often quite bad, and they do it with redirects to PT URLs so I can't share links with people who don't speak the language.
Does "advertises" in this context mean what's put in the "Accept-Language" HTTP header? Might be worth seeing what that value specifically is the next time this happens. A "clever" IP-based language choice server-side seems far too complicated and error prone, but I guess that's what makes it so "clever."
Yeah, I've seen this a few times in the backend code that makes this decision. The standard should be to use the Accept-Language header (see the sketch below), but time and again, when people write their own code on top of frameworks (or maybe use niche shitty ones), they just geo-IP the language.
For business use cases it's sometimes based on the default language of the company you're an employee of.
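As a minimal sketch of the header-based approach (the supported-language list and handler below are made up; golang.org/x/text/language does the actual matching):

    package main

    import (
        "fmt"
        "net/http"

        "golang.org/x/text/language"
    )

    // supported lists the languages we actually have content for;
    // the first entry doubles as the fallback.
    var supported = []language.Tag{
        language.English,
        language.Portuguese,
        language.German,
    }

    var matcher = language.NewMatcher(supported)

    func handler(w http.ResponseWriter, r *http.Request) {
        // ParseAcceptLanguage tolerates an empty or malformed header;
        // we ignore the error and let the matcher fall back.
        prefs, _, _ := language.ParseAcceptLanguage(r.Header.Get("Accept-Language"))
        tag, _, _ := matcher.Match(prefs...)
        fmt.Fprintf(w, "serving content in %s\n", tag)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }

If the header is missing or unusable, the matcher just returns the first supported tag, which is a saner default than guessing from the IP.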
Try using any Google site while traveling. I have two languages in my Accept-Language header, but Google always gives me a language based on location if I'm not logged in. There are also many other sites that do the same, often without any option to change the language.
I have the same problem in Argentina. Worse, I'm pretty sure that Google and other search engines decide that I don't deserve to receive good information because I live in a Spanish-speaking country, so they send me to terrible low-quality pages because often that's all that's available in Spanish.
Centralized planning is needed in any civilization. You need some mechanism to decide where to put resources, whether it's to organize the school's annual excursion or to construct the national highway system.
But yeah, in the end companies behave in trends: if some companies do it, then the other companies have to do it too, even if this makes things less efficient or is even hurtful. We can put that down to the human factor, but I think even if we replaced all CEOs with AIs, those AIs would all see the same information and make similar decisions based on that information.
There is a Pascal's wager argument to be had: for each individual company, the punishment for not playing the AI game and missing out on something big is bigger than the punishment for wasting resources on AI efforts and for annoying customers with AI features they don't want or need.
> Right now we are living through a new gilded age with a few barons running things, because we have made the rewards too extreme and too narrowly distributed.
The USA has rid itself of its barons multiple times. There are mechanisms in place, but I am not sure that people are really going to exercise those means any time soon. If this AI stuff is successful in the real world as well, then increasing amounts of power will shift away from the people and towards those controlling the AI, with all the consequences that has.
All information technology is killing privacy; it stems from the trend that it keeps getting easier to collect and copy data.
Of course it doesn't help that people tell their most secret thoughts to an LLM, but before ChatGPT people did that to Google.
The recent AI advancements do, however, make it easier to process the large amounts of data that are already being collected through existing means, and to distill them, which has negative consequences for privacy.
But the distillation power of LLMs can also be used for privacy-preserving purposes, namely local inference. You don't need to go to recipe websites any more, or to Wikipedia, or Stack Overflow, but can ask your local model. Sadly though, the non-local models are still noticeably better than the locally running ones, and this is probably going to stay that way.
Another instance of GEMA fighting an American company. Anyone who was on the German internet in the first half of the last decade remembers the "not available in your country" error messages on YouTube because Google hadn't made a deal with GEMA.
I don't think we will end up with such a scenario here: lyrics are pervasive and probably also quoted in a lot of other publications. Furthermore, it's not just about lyrics; one can make a similar argument about any published literary work. GEMA is for music, but for literary publications there is VG Wort, which in fact already has an AI license.
I rather think that OpenAI will license the works from GEMA instead. Ultimately this will be beneficial for the likes of OpenAI because it can serve as a means to keep out the small players. I'm sure that GEMA won't talk to the smaller startups in the field about licensing.
Is this good for the average musician/author? These organizations will probably distribute most of the money to the most popular ones, even though AI models benefit from the quantity of content rather than its popularity.
LLMs (or LLM-assisted coding), if successful, will more likely make the number of compilers go down, as LLMs are better with mainstream languages than with niche ones. Same effect as with frameworks. Fewer languages, fewer compilers needed.
First, LLMs should be happy to use made-up languages described in a couple thousand tokens without issues (you just have to have a good LLM-friendly description and some examples). That, and having a compiler they can iterate with / get feedback from.
Second, LLMs heavily reduce ecosystem advantage. Before LLMs, presence of libraries for common use cases to save myself time was one of the main deciding factors for language choice.
Now? The LLM will be happy to implement any utility / API client library I want, given the API. It may even be more thoroughly tested than the average open-source library.
Have you tried having an LLM write significant amounts of, say, F#? Real language, lots of documentation, definitely in the pre-training corpus, but I've never had much luck with even mid-sized problems in languages like it -- ones where today's models absolutely wipe the floor in JavaScript or Python.
Even best-in-class LLMs like GPT5 or Sonnet 4.5 do noticeably worse in languages like C#, which are pretty mainstream but not on the level of TypeScript and Python - to the degree that I don't think they are reliably able to output production-level code without a crazy level of oversight.
And this is for generic backend stuff, like a CRUD server with a REST API; the same thing with an Express/Node backend works without trouble.
I’m doing Zig and it’s fine, though not significant amounts yet. I just had to have it synthesize the latest release changelog (0.15) into a short summary.
To be clear, I mean specifically using Claude Code, with preloaded sample context and giving it the ability to call the compiler and iterate on it.
I’m sure one-shot results (like asking Claude via the web UI and verifying after one iteration) will go much worse. But if it has the compiler available and writes tests, it shouldn’t be an issue. It’s possible it causes 2-3 more back-and-forths with the compiler, but that’s an extra couple of minutes, tops.
In general, even if working with Go (what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the code base, so it follows the (somewhat peculiar) existing code style / patterns, and understands what’s where.
See, I'm coming from the understanding that language development is a dead-end in the real world. Can you name a single language made after Zig or Rust? And even those languages haven't taken over much of the professional world. So when I say companies will maintain compilers, I mean DSLs (like Starlark or RSpec), application-specific languages (like CUDA), variations on existing languages (maybe C++ with some in-house rules baked in), and customer-facing config languages for advanced systems and SaaS applications.
Yes, several, e.g., Gleam, Mojo, Hare, Carbon, C3, Koka, Jai, Kotlin, Reason ... and r/ProgrammingLanguages is chock full of people working on new languages that might or might not ever become more widely known ... it takes years and a lot of resources and commitment. Zig and Rust are well known because they've been through the gauntlet and are well marketed ... there are other languages in productive use that haven't fared well that way, e.g., D and Nim (the best of the bunch and highly underappreciated), Odin, V, ...
> even those languages haven't taken over much of the professional world.
Non sequitur goalpost moving ... this has nothing to do with whether language development is a dead-end "in the real world", which is a circular argument when we're talking about language development. The claim is simply false.
Bad take. People said the same about C/C++, and now Rust and Zig are considered potential rivals. The ramp-up is slow and there's never going to be a moment of viral adoption the way we're used to with SaaS, but change takes place.
AIs replacing jobs is not the only way those companies can see a return on investment; it's not necessarily zero-sum. If the additional productivity given by AI unlocks additional possibilities of endeavor, jobs might stay, they'll just change.
Say idk, we add additional regulatory requirements for apps, so even though developers with an AI are more powerful (let's just assume this for a moment), they might still need to solve more tasks than before.
Kind of how oil prices influence whether it makes sense to extract it from some specific reservoir: if better technology makes it cheaper to extract oil, those reservoirs will be tapped at lower oil prices too, leading to more oil being extracted in total.
When it comes to the valuations of these AI companies, they certainly have valuations that are very high compared to their earnings. It doesn't necessarily mean though that replacement of jobs is priced in.
But yeah, once AI is capable enough to do all tasks humans do in employment, there will be no need to employ any humans at all for any task whatsoever. At that point, many bets are off as to how it will hit the economy. Modelling that is quite difficult.
> once AI is capable enough to do all tasks humans do in employment, there will be no need to employ any humans at all for any task whatsoever
AI has no skin in the game; you can't shame it, fire it, or jail it. In all critical tasks, where we take risks with life, health, money, investment, or resources spent, we need that accountability.
Humans, besides being consequence sinks, are also task originators and participate in task iteration by providing feedback and constraints. Those come from the context of information that is personal and cannot be owned by AI providers.
So, even though AI might do the work, humans spark it, maintain/guide it, and in the end receive the good or bad outcomes and pay the cost. There are as many unique contexts as there are people; contextual embeddedness cannot be owned by others.
> But yeah, once AI is capable enough to do all tasks humans do in employment,
Also at this point the current ideas of competition go wonky.
In theory most companies in the same industry should homogenize at a maximum, which leads to rapid consolidation. Lots of individual people think they'll be able to compete because they 'also have robots', but this seems unlikely to me except in the case of some boutique products. Those companies with the most data and the cheapest energy costs will win out.