Oracle’s massive bet on OpenAI might be financially risky, but its investments in AI farms could accelerate Java’s evolution for AI. While Python dominates training, inference is where the money is. Projects like OpenJDK Babylon hint at a future where the JVM becomes a serious player in AI inference.
I sometimes wonder whether the political attitudes of some immigrants and their descendants are shaped by negative experiences or family histories from their countries of origin. This might help explain why people with backgrounds in places like South Africa, the Levant, or Eastern Europe often express strong libertarian views or criticize institutions such as the European Union, progressive politics, or European bureaucratic culture.
Many people don’t realize how much European countries have advanced. Poland and Romania, for example, have been among the major beneficiaries of EU integration. At the same time, American tech companies enjoyed relatively easy access to the European market for years, operating with limited regulation, while countries like Russia and China restricted foreign platforms early on and invested heavily in developing their own cloud and digital infrastructures.
I like to think their economy would be on par with France and Germany if it weren't for the Russian occupation. But with the present growth, they'll get there soon enough!
Clojure has never been a popular language, nor has it aimed to be mainstream. That is the Lisp curse. It has never positioned itself as a "better Java". It shines in applications where immutable, consistent, and queryable data is crucial, and it has found another niche in UIs through ClojureScript.
Kotlin hasn’t made much of an impact in server-side development on the JVM. I’m not sure where the perception that it has comes from; in my experience, it’s virtually nonexistent in the local job market.
Maven is excellent! Once you understand it, you can work with almost any Maven project without needing to learn the specifics. I’d take Maven or Cargo any day over anything in the JavaScript or Python ecosystem.
This has always been the case. The Java and C# ecosystems prioritise stability and scale. They wait for ideas to prove themselves in other languages like Erlang, Python, Go, Scala, and so on, and then adopt the successful ones. Last-mover advantage. That said, there are some caveats: Java is still missing value types, and C# committed early to async/await rather than adopting models like goroutines or virtual threads, a choice that can limit concurrency ergonomics for the developer.
Java was doing "cloud-native, stripped-down (jlink) image, self-contained runtime with batteries included" long before Bun existed. There's also GraalVM for a single executable binary if one's ambitious.
The real barrier is the C++ ecosystem. It represents the cost of losing decades of optimized, highly integrated, high-performance libraries. C++ maps almost perfectly to the hardware with minimal overhead, and it remains at the forefront of the AI revolution: it is the true engine behind Python's scientific libraries and even Julia (e.g. EnzymeAD). Rust does not offer advantages that would meaningfully improve how we approach HPC. Once the necessary unsafe operations are encapsulated in their own layer, C++ code in practice becomes mostly functional and immutable, and lifetimes matter less beyond a certain threshold when building complex HPC simulations, or that concern is outsourced to a scripting layer in Python altogether.
> C++ maps almost perfectly to the hardware with minimal overhead
Barely.
The C++ aliasing rules map quite poorly into hardware. C++ barely helps at all with writing correct multithreaded code, and almost all non-tiny machines have multiple CPUs. C++ cannot cleanly express associative floating-point semantics, and SIMD optimizations care about this. C++ exceptions have huge overhead when actually thrown.
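To make the reassociation point concrete, here's a minimal sketch (mine, not the parent commenter's): a plain float reduction can't be split into SIMD lanes without changing its rounding, so in practice you either hand-roll extra accumulators or reach for -ffast-math/-fassociative-math.

```cpp
#include <cstddef>

// A straightforward reduction. The abstract machine requires the additions
// to happen strictly left-to-right; since float addition is not associative,
// the compiler may not split this across SIMD lanes unless flags like
// -ffast-math / -fassociative-math are passed.
float sum_scalar(const float* x, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        acc += x[i];
    return acc;
}

// Hand-reassociated version: four independent accumulators give the compiler
// the freedom the language itself does not, at the cost of slightly different
// rounding than sum_scalar.
float sum_reassoc(const float* x, std::size_t n) {
    float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        a0 += x[i + 0];
        a1 += x[i + 1];
        a2 += x[i + 2];
        a3 += x[i + 3];
    }
    float acc = (a0 + a1) + (a2 + a3);
    for (; i < n; ++i)
        acc += x[i];
    return acc;
}
```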
> C++ exceptions have huge overhead when actually thrown
Which is why exceptions should never really be used for control flow. In our code, an exception basically means "the program is closing imminently, you should probably clean up and leave things in a sensible state if needed."
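A minimal sketch of that convention (names and structure are illustrative, not from the poster's codebase): expected failures travel through return values, and an exception is caught exactly once, at the top, where it means "shut down cleanly".

```cpp
#include <cstdio>
#include <exception>
#include <optional>
#include <string>

// Expected failure: modelled as a return value, not an exception.
std::optional<std::string> load_config(const std::string& path) {
    std::FILE* f = std::fopen(path.c_str(), "rb");
    if (!f) return std::nullopt;  // "file missing" is a normal outcome
    // ... read contents ...
    std::fclose(f);
    return std::string{"config contents"};
}

int main() {
    try {
        auto cfg = load_config("app.cfg");
        if (!cfg) {
            std::puts("no config found, using defaults");
        }
        // ... run the program ...
    } catch (const std::exception& e) {
        // An exception reaching this point means "we are going down":
        // clean up what we can and exit, rather than trying to recover.
        std::fprintf(stderr, "fatal: %s\n", e.what());
        return 1;
    }
    return 0;
}
```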
Agree with everything else mostly. C/C++ being a "thin layer on top of hardware" was sort of true 20? 30? years ago.
In simulations or in game dev, the practice is to use an SoA data layout to avoid aliasing entirely. Job systems or actors are used for handling multithreading. In machine learning, most parallelism is achieved through GPU offloading or CPU intrinsics. I agree in principle with everything you’re saying, but that doesn’t mean the ecosystem isn’t creative when it comes to working around these hiccups.
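For anyone who hasn't seen the SoA trick, a minimal sketch of the two layouts (types are illustrative): in the SoA form a pass over positions streams through contiguous buffers, which is friendlier to SIMD and mostly sidesteps the aliasing question by construction.

```cpp
#include <cstddef>
#include <vector>

// Array-of-Structs: convenient, but a loop over positions drags the velocity
// and mass fields through the cache as well, and the compiler must reason
// about aliasing of whole Particle objects.
struct ParticleAoS {
    float pos[3];
    float vel[3];
    float mass;
};

// Struct-of-Arrays: each field is a dense, independent buffer.
struct ParticlesSoA {
    std::vector<float> pos_x, pos_y, pos_z;
    std::vector<float> vel_x, vel_y, vel_z;
    std::vector<float> mass;
};

// Updating positions only touches the position and velocity buffers.
void integrate(ParticlesSoA& p, float dt) {
    const std::size_t n = p.pos_x.size();
    for (std::size_t i = 0; i < n; ++i) {
        p.pos_x[i] += p.vel_x[i] * dt;
        p.pos_y[i] += p.vel_y[i] * dt;
        p.pos_z[i] += p.vel_z[i] * dt;
    }
}
```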
> The C++ aliasing rules map quite poorly into hardware.
But how much does aliasing matter on modern hardware? I know you're aware of Linus' position on this; I personally find it very compelling :)
As a silly little test a few months ago, I built whole Linux systems with -fno-strict-aliasing in CFLAGS; everything I've tried on them is within 1% of the original performance.
If they somehow magically didn't map poorly, how much could be gained?
I've never seen an attempt to answer that question. Maybe it's unanswerable in practice. But the examples of aliasing optimizations always seem to be eliminating a load, which in my experience is not an especially impactful thing in the average userspace widget written in C++.
Don't get me wrong, I'm not saying aliasing is a big conspiracy :) But it seems to have one of the higher hype-to-reality disconnects for compiler optimizations, in my limited experience.
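For what it's worth, the textbook version of that "eliminated load" looks roughly like this (a sketch; compiling it at -O2 with and without -fno-strict-aliasing and diffing the assembly is one way to see whether the reload actually costs anything on a given machine):

```cpp
// Under strict aliasing, the compiler may assume a store through a float*
// cannot modify an int object, so it can keep *count in a register for the
// whole loop instead of reloading it after every store to data.
void scale(float* data, const int* count, float factor) {
    for (int i = 0; i < *count; ++i)
        data[i] *= factor;
}

// With -fno-strict-aliasing the compiler must allow for data and count
// overlapping and reload *count each iteration -- exactly the "one extra
// load" being discussed.
```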
Back in 2015, when the Rust project first had to disable use of LLVM's `noalias`, they found that performance dropped by up to 5% (depending on the program). The big caveat here is that `noalias` was being disabled because it caused miscompilations, so some of that apparent performance could have come from incorrect code.
Of course, that was also 10 years ago, so things may be different now. There'll have been interest from the Rust project in improving the optimisations `noalias` enables, as well as work on Clang's side to improve optimisations under C and C++'s aliasing model.
Is there a better way to test the contribution of aliasing optimizations? Obviously the compiler could be patched, but that sort of invalidates the test because you'd have to assume I didn't screw up patching it somehow.
What I'm specifically interested in is how much more or less of a difference the class of optimizations makes on different calibers of hardware.
Well, the issue is that "aliasing optimizations" means different things in different languages, because what you can and cannot do is semantically different. The argument against strict aliasing in C is that you give up a lot and don't get much, but that doesn't apply to Rust, which has a different model and uses these optimizations much more.
For Rust, you'd have to patch the compiler, as they don't generally provide options to tweak this sort of thing. For both Rust and C this should be pretty easy to patch, as you'd just disable the production of the `noalias` attribute when going to LLVM; GCC instead of Clang may be harder, I don't know how things work over there.
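For context on what that attribute buys (not specific to Rust): C and C++ opt into the same thing manually with restrict/__restrict, which Clang also lowers to LLVM `noalias`, whereas Rust applies it far more broadly (e.g. to `&mut` references). A sketch:

```cpp
// Without __restrict the compiler must assume dst and src may overlap,
// which blocks vectorization or forces a runtime overlap check.
void add_may_alias(float* dst, const float* src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] += src[i];
}

// With __restrict (lowered to LLVM `noalias`, the same attribute the Rust
// discussion is about), the compiler may assume the buffers are disjoint
// and vectorize freely.
void add_noalias(float* __restrict dst, const float* __restrict src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] += src[i];
}
```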
Your post could be (uncharitably) paraphrased as: "once you have written correct C++ code, the drawbacks of C++ are not relevant". That is true, and the same is true of C. But it's not really a counterargument to Rust. It doesn't much help those of us who have to deliver that correct code in the first place.
https://openjdk.org/projects/babylon/articles/auto-diff