
Except he's not even a successful developer, just a figurehead.

Linux is, like most successful open source projects, a resource pool. Without corporate code donations and support it would be just another hobby OS, and his personal contribution to the success of Linux is negligible.

Linus just got lucky with the timing and the license, and that is pretty much the extent of his personal contribution.

Which raises the question: why exactly has the community tolerated him for so long?


Yeah except he pulled it off twice with git, which happened after he crawled into a hole for a few weeks. He’s proof of greatness if I’ve ever seen it. Not an anonymous worm like you.


The comment you're replying to was definitely a bad one, but we need you to resist the urge to respond with a personal attack. Your comment would be fine without the last sentence.

We appreciate your many positive contributions to HN, and we just ask you keep in mind that HN can only be a good place to participate if enough people make the effort to keep it that way.


Sure thing Tom! Appreciate your work too!


Many thanks!


It's the other way around. You are the real programmer, and the committee and the "modern C++" crowd are more interested in playing with Legos than in shipping actual software.

No way anything std::meta gets into serious production; too flexible in some ways, too inflexible in others, too much unpredictability, too high an impact on compilation times - just like always with newer additions to the C++ standard. It takes one look at the coding standards of real-world projects to see how irrelevant this stuff is.

And like always, the problem std::meta is purported to solve has been solved for years.


The stream of modern C++ features has been a godsend for anyone who cares about high-performance, high-reliability software. Maybe that doesn't apply to your use case, but C++ is widely used in critical data infrastructure. For anyone who does care about things like performance and reliability, the changes in modern C++ have largely been obvious and immediately useful improvements. Almost all C++ projects I know of in the high-performance data infrastructure space live as close to the bleeding edge of new C++ features as the compiler implementations make feasible.

And no, reflection hasn’t “been solved for years” unless you have a very misleading definition of “solved”. A lot of the C++ code I work with is heavily codegen-ed via metaprogramming. Despite the relative expressiveness and flexibility of C++ metaprogramming, proper reflection will dramatically improve what is practical in a strict and type-safe way at compile-time.


You sound like you have rose-tinted glasses on. I think your glass is half full if you recheck the actual versions and features. And mine is half empty in gamedev.

Anecdata: A year or so ago I was in a discussion about whether the beta C++20 features on the platforms were good enough to use at large scale. That makes it not a sum but an intersection of partial implementations. Anyway, it looked positive until we needed a pilot project to try it. One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'. After confirming it was indeed not an error on our side, it was kind of obvious. A proportional increase in remote compilation cloud costs for a few minor features is a 'no'. A year later the beta support is no longer beta but still partial on platforms, and there are no improvements on build times in the community. YMMV of course, because gamedev mostly supports closed-source platforms with a closed set of build tools.


> One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'.

I think this just proves that your team is highly inexperienced with C++ projects, which you implicitly attest to by admitting this was the first C++ upgrade you had to go through.

Let me be very clear: there is never an upgrade of the C++ version targeted by a project that does not require full regression tests and a few bugs to squash. Why? Because even if the C++ side of things is perfectly fine, libraries often introduce all sorts of unexpected issues.

For example, once I had to migrate a legacy project to C++14 and flipping the compiler flag to c++14 caused a wall of compiler errors. It turned out the C++ was perfectly fine, but a single library behaved very poorly with a constexpr constructor they enabled conditionally with C++14.

You should understand that upgrades to the core language and standard library are exceptionally stable, and a clear focus of the standardization committee. But they only have a say in how the core language and standard library should be. The bulk of the code any relatively complex project consumes is not core language + stdlib, but third-party libraries and frameworks. These are often riddled with flags that toggle whole components only in specific versions of the C++ language, mainly for backwards compatibility. Once you target a new version of C++, that often means you replace whole components of upstream dependencies, which in turn often requires fixing your code. This happens very frequently, even with the likes of Boost.

So, what you're complaining about is not C++ but your inexperience in software engineering in general. I mean, what is the rule of thumb about major version upgrades?


I am sorry for the confusion. It's fine to have some downvotes if it's not what people like to see. I was not complaining. The message was purely informational, from a single point of view: a) game platforms still have only partial C++20 support in 2025; b) there are features in the C++ standard that do not fit the description 'god-send'.


> One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times

Given that C++20 introduced modules, which are intended to make builds faster, I think just flipping C++20 switch with no changes and checking build times should not be the end of checking whether C++20 is worth it for your setup.


> Given that C++20 introduced modules, which are intended to make builds faster

Turning on modules effectively requires that all of your project dependencies themselves have turned on modules. Fail to do so, and a lot of the benefits start to become hindrances (Clang is currently debating going to 64-bit source locations because modularizing in this manner tends to exhaust the current 32-bit source locations).


> A proportional increase in remote compilation cloud costs for a few minor features is a 'no'.

How high are those compilation costs compared to the developer time that might be saved with even minor features?


Tbh I don't have exact numbers from 2024 at hand. I remember that the decision was unanimous. An increase in build times is a very sensitive topic for us in gamedev.


I still have to learn C++20 concepts and now we have a full-fledged reflection system?

Good, but I think what happens is there are people on the bleeding edge of C++, usually writing libraries that ship with new code. Each new feature is a godsend for them -- it's the reason why the features are proposed in the first place. It allows you to write libraries more simply, more generally, more safely, and more efficiently.

The rest of us are dealing with old code that is a hodgepodge of older standards and toolchains, that has to run in multiple environments, mostly old ones. It's like yeah, this C++26 feature will come in handy for me someday, but if that day comes then it will be in 2036, and I might not be writing C++ by then.


>The rest of us are dealing with old code that is a hodgepodge of older standards and toolchains, that has to run in multiple environments, mostly old ones. It's like yeah, this C++26 feature will come in handy for me someday, but if that day comes then it will be in 2036, and I might not be writing C++ by then.

Things seem to be catching up. I had the same view up until recently, but now I'm able to use most of the C++23 features on an embedded platform (granted, some are still missing, as we're limited to GCC 11.2).


I am interested; could you provide some links, articles, etc?


[flagged]


You sound like you subscribe to "Orthodox C++".

Speaking seriously, I agree there's definitely a lot of bloat in the new C++ standards. E.g. I'm not a fan of the C++26 linalg stuff. But most performance-focused trading firms still use the latest standard with the latest compiler. Just a small example of new C++ features that are used every day in those firms:

Smart pointers (C++11), Constexpr and consteval (all improvements since C++11), Concepts (C++20), Spans (C++20), Optional (C++17), String views (C++17)
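
To make that concrete, here is a toy sketch of how a few of those compose (nothing firm-specific, just the flavor; assumes C++20):

    #include <concepts>
    #include <cstddef>
    #include <memory>
    #include <optional>
    #include <span>
    #include <string_view>
    #include <vector>

    // Concepts (C++20): constrain the element type without SFINAE tricks.
    template <std::integral T>
    constexpr std::optional<std::size_t> find_first(std::span<const T> data, T needle) {
        for (std::size_t i = 0; i < data.size(); ++i)
            if (data[i] == needle) return i;   // optional (C++17): "not found" without sentinels
        return std::nullopt;
    }

    int main() {
        [[maybe_unused]] std::string_view name = "order_book";               // string_view (C++17): no copy
        auto book = std::make_unique<std::vector<int>>(std::vector{1, 2, 3}); // smart pointers (C++11)
        auto idx  = find_first<int>(*book, 2);                                // span (C++20): view over the vector
        return idx ? 0 : 1;
    }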


> I'm not a fan of the C++26 linalg stuff.

I don't agree at all. For most, linear algebra is the primary reason they pick up C++. Up until now, the best option C++ newbies had was to go through arcane processes to onboard a high-performance BLAS implementation, which then requires even more arcane steps such as tuning.

With C++26, anyone can simply jump into implementing algorithms.
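
For a taste of what that looks like, here is a sketch based on my reading of P1673; the header name and exact spellings may differ once implementations actually ship:

    #include <linalg>    // C++26 per P1673; availability and header name may vary
    #include <mdspan>
    #include <array>

    int main() {
        std::array<double, 6> a{1, 2, 3, 4, 5, 6};  // 2x3 matrix, row-major
        std::array<double, 3> x{1, 1, 1};
        std::array<double, 2> y{};

        std::mdspan A(a.data(), 2, 3);   // non-owning views (C++23 mdspan)
        std::mdspan vx(x.data(), 3);
        std::mdspan vy(y.data(), 2);

        // y = A * x, with no external BLAS to onboard or tune
        std::linalg::matrix_vector_product(A, vx, vy);
    }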

If anything, BLAS support was conspicuously missing from C++ (and also C).

This blend of comments is more perplexing given that a frequent criticism of C++ is its spartan standard lib, and how the selling point of some commercial software projects such as Matlab is that, unlike C++, linear algebra work is trivial.


Except the devil is in the details as usual, the way linalg is specified doesn't guarantee numeric stability across library implementations or compilers.

Just like the std::random mess, most people are in for a surprise when they attempt to write portable numeric code with it.
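
Concretely, the gotcha is that the engines are specified bit-for-bit but the distributions are not, so a snippet like this prints the same first number everywhere and potentially a different second number per standard library:

    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 gen(42);

        // Engine output is fully specified: identical on every conforming
        // implementation for the same seed.
        std::cout << gen() << '\n';

        // Distribution algorithms are unspecified: libstdc++, libc++ and MSVC
        // are free to (and do) produce different values from the same engine state.
        std::normal_distribution<double> dist(0.0, 1.0);
        std::cout << dist(gen) << '\n';
    }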


> Except the devil is in the details as usual, the way linalg is specified doesn't guarantee numeric stability across library implementations or compilers.

I think this criticism is silly. You're complaining about the C++ standard not enforcing something that is virtually impossible and no implementation under the sun even suggests it would conceivably support. And we should just assume C++ would be able to force it across implementations, target platforms, and even architectures? Ridiculous.

> Just like the std::random mess (...)

The comments I've seen regarding std::random mainly refer to gotchas, such as std::rand returning values in [0, RAND_MAX] and the latter being a platform-specific constant.

There is a reason, after all, why you only throw out vague complaints of a "mess" instead of pointing out specific concerns or grievances. You need to complain, regardless of merit.

Overall, this blend of criticism is no different from the cliche criticism targeting the STL. Sure, some niche applications can and do have better ways to scratch their niche itches. In the meantime, the STL perfectly serves the needs of 99.9% of all common usages. Is this not the point? Don't linalg and rand achieve this?

Of course vapid nitpickers can still pull out the last-resort card of complaining that the implementation is too complicated, a grievance also directed at the STL. But that's just how far this silliness goes.


Not silly at all, if it can't be enforced in a standard portable way, its place doesn't belong in the standard, at all.

I would have voted SA on this matter, if I had a life that would allow me to go around voting at WG21 meetings.

We have vcpkg and conan now, the standard library cannot be a distribution vehicle for organisations that refuse to adopt C++ package managers.


> I don't agree at all. For most, linear algebra is the primary reason they pick up C++.

Out of the hundreds and hundreds of projects I've interacted with, maybe less than 1% have used linear algebra in any non-basic capacity (e.g. more than multiplying two 4x4 matrices) and had to use Eigen or BLAS.


> Out of the hundreds and hundreds of projects I've interacted with, maybe less than 1% have used linear algebra in any non-basic capacity (e.g. more than multiplying two 4x4 matrices)

Are you really trying to argue that if you ignore the bulk of applications that have linear algebra then only a few have linear algebra?

> and had to use Eigen or BLAS

What if I told you that if C++ provided basic support for linalg, your average C++ developer wouldn't ever have to hear about Eigen or blas?

It's perfectly fine if you want to use the likes of Eigen. It's also perfectly fine if any developer opts to ditch the STL and use any performance-oriented container.

But is it ok to force everyone to onboard a third party library just to be able to do very basic things like use a stack or do a dense matrix-vector multiplication? I don't think that leads to a good developer experience.


> This blend of comments is more perplexing given that a frequent criticism of C++ is its spartan standard lib

The frequency doesn't make the criticism more valid and those repeating it would be better served to let go of their fear of third-party libraries.


> The frequency doesn't make the criticism more valid (...)

The criticism is not valid. It's specious reasoning at best, fueled by a hefty dose of gatekeeping.

The only rationale that is relevant is whether the standard library provides a way to do a very basic and fundamental task instead of having to force anyone to onboard and manage third party dependencies. That's the whole point of a standard library, isn't it?


No the point of a standard library is to provide vocabulary types (so that third-party libraries can interoperate) as well as basic operations that are essentially set in stone. Anything beyond that needs to have its usefulness weighted against its maintenance burden, which for a standard library that is serious about backwards compatibility is enormous. C++ is already also heavily criticized for being complex with many problems having multiple outdated solutions that you're not supposed to use.

"Onboarding" a third party library isn't this herculean task that you make it out to be but is in fact a very basic part of software development that almost any project will have to deal with anyway unless you are into reinventing the wheel - even excessively bloated standard libraries don't manage to cover everything.


Prediction: it will be used heavily for things like command-line arg parsing, configuration files, deserialization, and reflection into other languages. It will probably be somewhat of a pain to use, but better than the current alternative mashup of macros/codegen/template metaprogramming that we have now for some of these solutions. It will likely mostly be used in library code, where someone defines some nice utilities for you that do something useful, so that you don't have to worry about it. I don't think it has to hurt compile times for the most part - it might even be faster than the current mess, since it means less use of templates.
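
For the deserialization/config case, the shape of it would be something like this. This is a rough sketch going by P2996: the header, the exact std::meta function names, and whether `template for` expansion lands in the same revision are all assumptions on my part, so treat it as pseudocode shaped like the proposal:

    #include <meta>       // placeholder header name; implementations may differ
    #include <iostream>
    #include <string>

    struct Config {
        int         port    = 8080;
        std::string host    = "localhost";
        bool        verbose = false;
    };

    // Print every non-static data member by name - the kind of thing that
    // today needs macros or external codegen.
    template <typename T>
    void dump(const T& obj) {
        template for (constexpr auto m : std::meta::nonstatic_data_members_of(^^T)) {
            std::cout << std::meta::identifier_of(m) << " = " << obj.[:m:] << '\n';
        }
    }

    int main() { dump(Config{}); }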

I don't think the "legos" vs "shipping" debate here is really valid. One can write any type of code in any language. I'm a freak about C++, but if someone wants to ship in Python or JS, the more power to them - one can write code that's fast enough to not matter, but takes advantage of those languages' special features.


I know the trading firm I work at will be making heavy use of reflection the second it lands… we had a literal party when it made it into the standard.


sure, but instagram was created by a handful of people with python and got a billion dollar exit in 2012.


What does that have to do with the topic? Warren Buffett made billions without any knowledge of programming or any deeper knowledge of computers.


Mansa Musa was so rich he decreased the local gold to silver exchange rate in Egypt by 12% without ANY modern technology!

https://en.wikipedia.org/wiki/Mansa_Musa#Wealth


> sure, but instagram was created by a handful of people with python and got a billion dollar exit in 2012.

Facebook famously felt compelled to hire eminent C++ experts to help them migrate away from their PHP backend. I still recall reading posts on the Instagram Engineering blog on how and where they used C++.


And then HipHop failed to provide as much gains as they hoped for versus the Hack JIT implementation, so Facebook keeps writing mostly PHP-like code for many of its workloads.


> And then HipHop failed to provide as much gains as they hoped (...)

What point do you think you're making?


That the C++ migration did not, in the end, achieve everything they were trying to get out of it, and another, more productive approach was chosen instead.

https://en.wikipedia.org/wiki/HipHop_for_PHP

https://en.wikipedia.org/wiki/HHVM


What is this culture of judging everything by the amount of money?

No one needs a billion dollars; it is practically irrelevant unless you are running on greed.


money is a proxy for value. the post i was responding to seemed to be pointing out how little value there is in something else.


The post you responded to said they were very happy about their tools becoming better and your reply read as a dismissal of that, citing someone having made billions by writing an advertisement platform in Python.

So either I and others misread you or it is just a matter of different views on value.


And Youtube used Python almost exclusively at the start AFAIK.

Then again Scott Meyers said he's never written a C++ program professionally.


> Then again Scott Meyers said he's never written a C++ program professionally.

I think you're inadvertently misrepresenting Scott Meyers' claim.

Cited from somewhere else:

> I'll begin with what many of you will find an unredeemably damning confession: I have not written production software in over 20 years, and I have never written production software in C++. Nope, not ever.

He went on to clarify that he made a living out of consultancy, not writing software. He famously retired from C++ in 2015, too.


> And like always, the problem std::meta is purported to solve has been solved for years.

It is rare to read something more moronic than that

The Rust equivalent of std::meta (procedural macros) is heavily used everywhere, including in serialization frameworks, debugging, and tracers.

And that's not surprising at all: compile-time introspection is much more powerful and lightweight than codegen for exactly the same usage.


> It is rare to read something more moronic than that

It's not actually wrong though is it - real codebases have been implementing reflection and introspection through macro magic etc. for decades at this point.

I guess it's cool they want to fix it in the language, but as always, the approach is to make the language even more complex than it already is - e.g. two new operators (!) in the linked article


> been implementing reflection and introspection through macro magic etc. for decades at this point.

Having a flaky pile of junk as an alternative has never been an excuse not to fix the problem properly.

Every proper modern language (Rust, Kotlin, Zig, Swift, even freaking Golang) has some form of runtime reflection or static introspection.

Only C++ does not. Historically it was done with a mess of macros or a pre-compiler (qt-moc), all of which have an entire pile of issues.

> the approach is to make the language even more complex than it already is - e.g. two new operators

The problem of rampant complexity in C++ is not so much about the new features when they bring something and make sense.

It is about its inability to remove the old stuff even when there is consensus that it is garbage (e.g. iostreams).


> Having a flaky pile of junk as an alternative has never been an excuse not to fix the problem properly.

Thank you. Some people use the phrases "real projects" and "production code" as if they imply some standard of high quality.


I embrace Modern C++, but slower than the committee, when the big three have the feature.

I really think reflection + annotations will give us the chance to have much better serialization and probably something more similar to Python decorators.

That will be plenty useful and is going to transform part of the C++ ecosystem; for example, I am thinking of editors that need to reflect on data structures, web frameworks such as Crow or Drogon, database access libraries...


I bet CERN might eventually replace their Python based code generators with C++26 reflection.


Which problem would this solve for them?


It would standardize something they've done in an ad-hoc way for decades. They have a library called "reflex" [1] which adds some reflection, and which was (re)written by cannibalizing a lot of llvm code. They actually use the reflection to serialize a lot of the LHC data.

It's kind of neat that it works. It's also a bit fidgety: the cannibalized code can cause issues (which, e.g. prevented C++11 adoption for a while in some experiments), and now CERN depends on bits of an old C++ compiler to read their data. Some may question the wisdom of making a multi-billion dollar dataset without a spec and dependent on internals of C++ classes (indeed experiments are slowly moving to formats with a clear spec), but for sure having a standard for reflection is better than the home-grown solution they rely on now.

[1]: https://indico.cern.ch/event/408139/contributions/979831/att...


The library you refer to has not been in use for a long time now. The document you pointed out is from 2006 (you can check the creation date).

Since then, a lot has changed, and now it is all based on cling ( https://root.cern/cling/ ), which originates from clang and llvm. cling is responsible for generating the serialization / reflection of the classes needed within the ROOT framework.


Good catch: I was confusing reflex with the cling code that came later. All the issues I mentioned are still there in (or caused by) cling, though. Either way, standardization of reflection would help.


The two-language problem - a fairly well-known issue in engineering tradeoffs.


As an example, most of the big js/ts ecosystem expansion to the server (RSC/Next/RR7/Expo/...) over the last few years is driven by the wish to have everything under one roof and one language.

People just don't want to maintain two completely different stacks (one on the server, one on the client).


> No way anything std::meta gets into serious production

Rust proc macros get used in serious production, even though they're quite slow to compile. Sure, std::meta is probably a bit clunkier, but that's expected from new C++ features as you say.


Sadly, Rust proc macros operate on tokens and any serious macro implementation needs third-party crates.

Compile-time reflection with a good built-in API, akin to C#'s Roslyn, would be a real boon.


Any serious anything needs third-party crates. Coming from C++, this has been the most uncomfortable aspect of Rust for me, but I am acclimating.


Every problem is solved. We should stop making anything. Especially CRUD apps, because how is that even programming? What do they solve that hasn't been solved?

This line of thinking is not productive. It is a mistake to see yourself as what you do, because then you're cornering yourself into defending it, no matter what.


> the problem std::meta is purported to solve has been solved for years.

What solution is that? A Python script that spits out C++ code?


Yeah, wait till you find out what's behind the curtain in your web engine and AI.

Hint: it's C++, and yes, it will eventually use stuff like std::meta heavily.


If you would check my comments, you would see I am quite aware. And no, it will not, just like it was with streams, ranges and whatever else.


What's the solution that's been around for years?

> ... just like always with newer additions to the C++ standard.

This is objectively laughable.


I was literally running into something a couple of days ago on my toy C++ project where basic compile-time reflection would have been nice to have for some sanity checking.

And even if it's true that some things can be done already with specific compilers and implementation-specific hacks, it would be really nice to be able to do those things more straightforwardly.

My experience with recent C++ changes has been that the additions to compile-time metaprogramming improve compile times rather than make them worse, because you don't have to resort to things like std::enable_if<> hacks and recursive templates for things that a simple generic lambda or constexpr conditional will do, which are harder on both you and the compiler.
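
For example, a dispatch that used to need two SFINAE'd overloads collapses into one function (C++17 and later):

    #include <string>
    #include <type_traits>

    // Old style: two overloads selected via std::enable_if.
    template <typename T, std::enable_if_t<std::is_integral_v<T>, int> = 0>
    std::string describe(T) { return "integral"; }

    template <typename T, std::enable_if_t<!std::is_integral_v<T>, int> = 0>
    std::string describe(T) { return "something else"; }

    // Newer style: one template, one branch discarded at compile time.
    template <typename T>
    std::string describe_new(T) {
        if constexpr (std::is_integral_v<T>)
            return "integral";
        else
            return "something else";
    }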


Constexpr if and fold expressions have been a godsend!


The history of C++ has been one long loop of:

1. So many necessary common practices of C++ are far too complicated!

2. Std committee adds features to make those practices simpler.

3. C++ keeps adding features. It’s too big. They should cut out the old stuff!

4. The std committee points at the decade-long Python 3 fiasco.

5. Repeat.


Do they point at python 3? They were committed to backward compatibility long before python3 happened.

To me it feels like they have fleshed out the key paradigms so that it is not a mess anymore. They are not there yet with compile-time evaluation (constexpr, consteval, ...), at least as of C++20; I'm not sure if it's mostly finished with C++23/26.

The language itself and the std library are quite bloated, but writing modern C++ isn't that complicated anymore in my experience.


It's pure Stockholm syndrome. There's even a nice C++ committee paper that summarizes this as "Remember the Vasa!": https://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0...


> What's the solution that's been around for years?

Build tools that generate C++ code from some other source. Interface description languages, for example, or (going back decades here) even something like lex and yacc.
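
As a toy illustration of the shape of it (everything here is hypothetical - the IDL, the file names, the generator): a description like `struct Point { double x; double y; }` fed through a small generator might emit a header such as:

    // point.gen.h - GENERATED FILE, do not edit; regenerate from point.idl
    #pragma once
    #include <cstddef>
    #include <string_view>

    struct Point {
        double x;
        double y;
    };

    // The generator can also emit the "reflection" the language itself lacks:
    struct PointField { std::string_view name; std::size_t offset; };
    inline constexpr PointField kPointFields[] = {
        {"x", offsetof(Point, x)},
        {"y", offsetof(Point, y)},
    };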


Great. But you can do anything you want by generating code. Why not have a standard solution instead of everyone doing their own, possibly buggy thing complicating their build process even more?


Reframe it as "you can do precisely what you need by generating code" and there is your answer.

Which is far better than relying on a party which, as I said, has precisely nothing to do with what anyone needs - and which will inevitably produce solutions that can only partially (I am being generous here) be used in any particular situation.

As for "possibly buggy" - look, I can whip up a solid *DL parser complete with a C++ code generator in what, a week? And then polish it from that.

The committee will work for several years, settle on a barely working design, then it will take some years to land in major compilers, and then it will turn out it is unusable because someone forgot a key API or it was infeasible on a VAX or something like that.

And my build process is not complicated, and never will be. It can always accommodate another step. Mainly because I don't use CMake.


My perception is that C++XY features are widely used in general. Of course there are some nobody uses, but that's not generally true. So your basic assumption is wrong.

We are at C++20 and I wouldn't like to work for a company that uses an earlier standard.


Well, either you carefully vet which C++ features you use and my assumption still stands, or you don't - in which case I would rather not work at your company.


You can write a parser for an IDL, but you can’t reasonably write a parser for C++. So you have to move the definition of whatever types of methods or fields you want to reflect on into the IDL, instead of defining them natively in C++. (Or worse, have separate definitions in both the IDL and C++.) Which tends to be cumbersome – especially if you want to define a lot of generic types (since then the code generator can’t statically determine the full list of types). It can work, but there’s a reason I rarely see anyone using this approach.


Why would I want to write a C++ parser?

The IDL/DDL is the source of truth; moving the type definitions there is the whole point. There is only one definition for each type, which lives in the *DL; the corresponding C++ headers are generated and everything is statically known.


Debugging/modifying code generated from someone's undocumented c++ code generator is pretty close to the top of my list of unpleasant things to do. Yes, you can eventually figure out what to do by looking at the generated code and taking apart the code generator and figuring out how it all works but I'll take built-in language features any day.


I've been down this road. I ended up with a config YAML (basically, an IDL) that goes into a pile of Jinja files and C++ templates - and it always ended up better and easier to read to minimize the amount of Jinja (broken syntax highlighting, the fact that you are writing meta-meta code - it's a hot mess). I'd much prefer to generate some bare structs with some minimal additional inline metadata than to generate both those structs and an entire separate set of structs describing the first ones. std::meta lets me do the former; the latter is what's possible right now.


For example, the Boost library's "describe" and similar macro-based solutions. I've been using this for many years.
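
For reference, the core of the Boost.Describe approach looks roughly like this (member-descriptor spelling from memory, so double-check against the docs):

    #include <boost/describe.hpp>
    #include <boost/mp11.hpp>
    #include <iostream>
    #include <string>

    struct Settings {
        int retries = 3;
        std::string endpoint = "localhost:8080";
    };

    // One macro per type is the price of admission.
    BOOST_DESCRIBE_STRUCT(Settings, (), (retries, endpoint))

    // Generic "serializer": iterate the described members at compile time.
    template <class T>
    void dump(const T& t) {
        using members = boost::describe::describe_members<T, boost::describe::mod_any_access>;
        boost::mp11::mp_for_each<members>([&](auto d) {
            std::cout << d.name << " = " << t.*d.pointer << '\n';
        });
    }

    int main() { dump(Settings{}); }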


Whip up some kind of in-house IDL/DDL parser, codegen from that.

Which additions, precisely, do not fit my points?


Completely inadequate for many use cases. IDL/DDL is one of the least interesting things you could do with reflection in C++. You can already do a lot of that kind of thing with existing metaprogramming facilities.


Which use cases? What exactly can you do with "existing metaprogramming facilities"?


Most of the time, I will prefer standard C++ over a fully hand-made layer of complexity that needs maintenance.


> It's the other way around. You are the real programmer and the committee and the "modern C++" crowd are more interested playing with legos instead of shipping actual software.

I think this is the most clueless comment I have ever read on HN. I hope the site is not being hit with its blend of September.

I was going to explain to you how fundamentally wrong your comment was, but it's better to just kindly ask you to post in Reddit instead.


Sure, it took only what, 40 years of intensive hardware improvements* for assembly to move to the fringe? And we still reach for it more often than I would like, because reasons?

Yep, I guess you can train an LLM on a bunch of binaries to get it to mimic a SotA compiler with some accuracy, which may or may not improve over time, but come on. The times when there were free performance increases are gone, and this is not an area where shipping any bullshit really fast will get you any sort of advantage.

* Which are unlikely to happen again in the foreseeable future.


Rust is - by design - antithetical to pretty much every idea of the rapid application development paradigm that Delphi/VCL, and to a lesser extent Qt, adhere to.

It doesn't matter how many Rust UI toolkits there are. Consider that there are a lot of Rust game engines, and pretty much zero games written in it, because even C++ gives you better trade-offs in that particular space.


LLVM is basically a resource pool for C++ compiler development. As such, it is highly C++ specific and leaks C++ semantics everywhere.

It's especially funny when this happens in Rust, which is marketed as a "safer" alternative.

Would you like a segfault out of nowhere in safe Rust? The issue is still open after two years by the way: https://github.com/rust-lang/rust/issues/107975


It's not clear to me what you mean by that with regards to that issue. As far as I can tell, there's not really any indication that this is undefined behavior. Yes, there seems to be a bug of some sort in the code being generated, but it seems like a stretch to me to imply that any bug that generates incorrect code is necessarily a risk of UB. Maybe I'm missing some assumption being made about what the pointers not being equal implies, but given that you can't actually dereference `*const T` in safe Rust, I don't really see where you're able to draw the conclusion that having two of them incorrectly not compare as equal could lead to unsafety.


If you read the GitHub issue, this one was weaponized fairly straightforwardly by taking the difference between the two pointers.

The difference is zero, but the compiler thinks it is non-zero because it thinks they are unequal.

From there you turn it into type confusion through an array, and then whatever you want. Almost any wrong compiler assumption can be exploited. This particular way of doing it has also been used several times to exploit bugs in JavaScript engines.


I did read through the issue, and reading through it again, I still see nothing about how a segfault can be generated from safe Rust. I'm not saying it can't happen from this bug, but it's not obvious to me what exact code I could write that could cause this to happen, because none of the examples in that issue seem to be doing that.



Yeah, using LLVM for anything trying to avoid UB is crazy.

I got involved in a discussion with a Rust guy when trying to get C with SIMD intrinsics into wasi-libc, where something that the C standard explicitly states is "implementation defined" (and so, sane, as we're targeting a single implementation - LLVM) can't be trusted, because LLVM may turn it back into UB because "reasons."

At this point Go and Zig made the right choice to dump it. I don't know about Rust.

https://github.com/WebAssembly/wasi-libc/pull/593


It sounds like you have a fundamental misunderstanding about undefined behavior. It's easy to emit LLVM IR that avoids undefined behavior. The language reference makes it quite clear what constitutes undefined behavior and what does not.

The issue is that frontends want to emit code that is as optimizeable as possible, so they opt into the complexity of specifying additional constraints, attributes, and guarantees, each of which risks triggering undefined behavior if the frontend has a bug and emits wrong information.


Hi Andy. Did you read the linked thread?

I was not the one making this claim:

> However, I believe that currently, there is no well-defined way to actually achieve this on the LLVM IR level. Using plain loads for this is UB (even if it may usually work out in practice, and I'm sure plenty of C code just does that).

My claim is that the below snippet is implemention defined (not UB):

  // Casting through uintptr_t makes this implementation-defined,
  // rather than undefined behavior.
  uintptr_t align = (uintptr_t)s % sizeof(v128_t);
  const v128_t *v = (v128_t *)((uintptr_t)s - align);
Further, that this is actually defined by the implementation to do the correct thing, by any good faith reading of the standard:

> The mapping functions for converting a pointer to an integer or an integer to a pointer are intended to be consistent with the addressing structure of the execution environment.

I further suggested laundering the pointer with something like the below, but was told it would amount to nothing, again the blame being put on LLVM:

  asm ("" : "+r"(v))
I honestly don't know if LLVM or clang should be to blame. I was told LLVM IR and took it in good faith.


No, I hadn't read the linked thread until you prodded me. Now I have and I understand the situation entirely. I'll give a brief overview; feel free to ask any followup questions.

A straightforward implementation of memchr, i.e. finding the index of a particular byte inside an array of bytes, looks like this:

    // Scalar scan: return the index of the first byte equal to `search`, or null.
    for (bytes, 0..) |byte, i| {
        if (byte == search) return i;
    }
    return null;
This is trivial to lower to well-defined LLVM IR.

But it's desirable to use tricks to make the function really fast, such as assuming that you can read up to the page boundary with SIMD instructions[1]. This is generally true on real world hardware, but this is incompatible with the pointer provenance memory model, which is load-bearing for important optimizations that C, C++, Rust, and Zig all rely on.

So if you want to do such tricks you have to do them in a black box that is exempt from the memory model rules. The Zig code I link to here is unsound because it does not do this. An optimization pass, whether implemented in the Zig pipeline or the LLVM pipeline, would be able to prove that it reads outside a pointer's provenance, mark that particular control flow unreachable, and thereby cause undefined behavior if it happens.

This is not really LLVM's fault. This is a language shortcoming in C, C++, Rust, Zig, and probably many others. It's a fundamental conflict between the utility of pointer provenance rules, and the utility of ignoring that crap and just doing what you know the machine allows you to do.

[1]: https://github.com/ziglang/zig/blob/0.14.1/lib/std/mem.zig#L...


Thanks for taking the time!

I was the original contributor of the SIMD code, and got this… pushback.

I still don't quite understand how you can marry "pointer provenance" with the intent that converting between pointers and integers be "consistent with the addressing structure of the execution environment", and want to allow DMA in your language, but then have this be UB.

But, well, a workable version of it got submitted, I've made subsequent contributions (memchr, strchr, str[c]spn...), so all good.

It just makes me salty about C, as if I needed more reasons to be.


That's totally fair to be salty about a legitimately annoying situation. But I think it's actually an interesting, fundamental complexity of computer science, as opposed to some accidental complexity that LLVM is bringing to the table.


Which is why most frontends have nowadays been migrating to MLIR; there is ongoing work for clang as well.


How does migrating to MLIR address the problem?


The higher abstraction level it provides over LLVM IR, which makes language frontends and compiler passes less dependent on its semantics.


As the guy currently handling Zig's LLVM upgrades, I do not see this as an advantage at all. The more IR layers I have to go through to diagnose miscompilations, the more of a miserable experience it becomes. I don't know that I would have the motivation to continue doing the upgrades if I also had to deal with MLIR.


The LLVM project sees it otherwise, and the adoption across the LLVM community is quite telling of where they stand.


That doesn't seem like a good argument for why Zig ought to target MLIR instead of LLVM IR. I think I'd like to see some real-world examples of compilers for general-purpose programming languages using MLIR (ClangIR is still far from complete) before I entertain this particular argument.


Would Flang do it? Fortran was once general purpose.

https://github.com/llvm/llvm-project/blob/main/flang/docs/Hi...

Maybe the work in Swift (SIL), Rust (MIR), and Julia (SSAIR) that was partially the inspiration for MLIR, alongside the work done at Google designing the TensorFlow compiler?

The main goal being an IR that would accommodate all the use cases of those high-level IRs.

Here are the presentation talk slides at European LLVM Developers Meeting back in 2019,

https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLatt...

Also you can find many general purpose enough users around this listing,

https://mlir.llvm.org/users/


Are you saying that Fortran was once a general purpose programming language, but somehow changed to no longer be one?


Yes, because we are no longer in the 1960's - 1980's.

C and C++ took over many of the use cases people were using Fortran for during those decades.

In 2025, while it is a general purpose language, its use is constrained to scientific computing and HPC.

Most wannabe CUDA replacements keep forgetting that Fortran is one of the reasons the scientific community ignored OpenCL.


So you're saying that the changes made to Fortran have made it more specialized?


Huh?? That can only make frontends' jobs more tricky.


Yet it has been embraced by everyone since its introduction in 2019, with its own organization and conference talks.

So maybe all those universities, companies and the LLVM project know kind of what they are doing.

- https://mlir.llvm.org/

- https://llvm.github.io/clangir/

- https://mlir.llvm.org/talks/


No need to make a weird appeal to authority. Can you just explain the answer to my question in your own words?


I am only familiar with MLIR for accelerator-specific compilation, but my understanding is that by describing operations at a higher level, you don’t need the frontend to know what LLVM IR will lead to the best final performance. For instance you could say "perform tiled matrix multiplication" instead of "multiply and add while looping in this arbitrary indexing pattern", and an MLIR pass can reason about what pattern to use and take whatever hints you’ve given it. This is especially helpful when some implementations should be different depending on previous/next ops and what your target hardware is. I think there’s no reason Zig can’t do something like this internally, but MLIR is an existing way to build primitives at several different levels of abstraction. From what I’ve heard it’s far from ergonomic for compiler devs, though…


You see it as an appeal to authority; I see it as the community of frontend developers - based on the Swift and Rust integration experience, and the work done by Chris Lattner while at Google - feeding back into what the evolution of LLVM IR is supposed to look like.

Mojo and Flang were designed from scratch using MLIR, as were many other newer languages in the LLVM ecosystem.

I see it as the field experience of folks who know a little bit more than I ever will about compiler design.


It seems the community is severely overexposed to bad practices and implementations of OOP, and conversely severely underexposed to the success stories.

153 comments as of time of writing, let's see.

Ctrl-F: Java: 21, C++: 31, Python: 23, C#: 2

And yet: Pascal: 1 (!), Delphi: 0, VCL: 0, WinForms: 0, Ruby: 2 (in one comment)

This is not a serious conversation about the merits of OOP or the lack thereof, just like Casey's presentation is not a serious analysis - it's just a man venting his personal grudges.

I get that, and it's completely justified - Java has a culture of horrible overengineering, and C++ is, well, C++; the object model is not even the worst part of that mess. But still, it feels like there is a lack of voices from people for whom the concept works well.

People can and will write horrible atrocities in any language with any methodology; there is at least one widely used "modern C++" ECS implementation built with the STL, for example (which itself speaks volumes), and there is a vast universe of completely unreadable FP-style TypeScript code out there, written by people far too consumed by what they can do to stop for a second and think about whether they should.

I don't know why Casey chose this particular hill to die on, and I honestly don't care, but we as a community should at least be curious if there are better ways to do our jobs. Sadly, common sense seems to have given way to dogma these days.


WebGPU is supposed to be properly sandboxed.

The bigger issue is that WebGPU is basically dead on arrival for the same reasons WebGL is - it is impossible to get enough data to the client for it to actually matter, Awwwards-style design tricks notwithstanding.

I suppose browser vendors understand this and don't really care for either.


“WebGPU is supposed to be properly sandboxed.“

GPU must therefore provide open API for proprietary processing space.

Magic “packets” are therefore possible to execute arbitrary functions on “sandboxed” DMA devices.

Still a problem until we can audit the hardware. NV, and to a lesser extent AMD and ARC, play somewhat open, with a few omnipotent cards in their pockets. The crux of the issue is that gamers don't care, only security professionals do, because they're the ones who see the 0-days fly by every day.


What do you think could fix this? Access to genuinely permanent storage?


That's one part of it, yes. A browser API providing a few GBs of persistent storage with proper isolation and user management, obviously with some kind of compression/decompression going on to save both download times and loading times.

As an example, consider Infinity Blade, the poster child of mobile gaming: released in 2010, 595 MB download, 948 MB installed. Even the first version of WebGL is capable of providing this kind of experience, we just cannot get it to the user via browser.


I've found another fellow soul - that is exactly one of my complaints with Web 3D: the failure, 15 years later, to provide the same experience as Infinity Blade, which Apple used to show off the iPhone's OpenGL ES 3.0 capabilities.

Or Unreal Engine Citadel demo, originally done in Flash / C++.


That may be true (for now) for web apps, but what if you serve your app/game as a desktop app, for example with tauri?


Could you please tell us what languages and technologies the other teams use?

Comparison statements are meaningless if there's nothing to compare to.


Have you considered the bicameral mind hypothesis? Because what you are describing sounds like strong evidence supporting it.


From someone who has been lonely for the better part of his life: there is a solution that works.

Build a bit of confidence (the gym being the simplest way) and get out there, because whoever you are, there is someone else out there who is looking precisely for you. But you aren't gonna meet them if you sit tight, so get out.

Internalize this: you are attractive and you have worth, right now. You are good enough. And someone is waiting for you out there.


"Just b urself"


Build confidence _and then_ just be yourself. Because if you lack confidence in yourself, how can you convince another person to have confidence in you?


You imply that lack of confidence is the most common issue one has to fix.

What if you're introverted, intellectual and not part of the bread & circus tribe (which should be the case for some people on HN)? What does "get out" mean, then? You won't see me at a nightclub, a bar or any place made for people to socialize over inane discussions, alcohol and crap "music".


Going out mainly means going out through your door and to nice and interesting places.

That can be parks, museums, gym, hacker spaces and yes also clubs.

A new (ancient) trend that maybe works for you is called ecstatic dance. The basic idea is: no alcohol, no drugs, no talking (inside the main area) - just good music and dancing, till ecstasy. Expressing the language of the body. This connects people.


I was feeling lonely and hopeless, then I read your post, saw the cure was "ecstatic dance" and lol'd, loudly. I was reminded of the episode of Peep Show (the great British sitcom) where the lads try out ecstatic dance. I will have to rewatch that today. I thank you for the smile and for one of the most unique loneliness cures I've ever read, and I've read many. For me the supermarket has been the easiest way to encounter other humans, but maybe I'll try ecstatic dance.


Glad I could make you laugh. I have not seen Peep Show, so I have no idea what was in it, or whether it has anything to do with what I know as ecstatic dance, but in either case I do recommend going for it if you have the chance. Worst case, you don't like it and leave. Best case, you have fun and connect with just the right people.

Every ecstatic dance I attended was different (also on different places, organised by different people) but all of them were worth it.


When you are in a place with perhaps the most mundane music, if it gets even the least bit decent and you get up there and dance ecstatically, the band will love it.

If there are no dancers and you get up there at all the band will probably love it.

They might just not play so mundane after that.

And others who may be the least bit inspired will often get right up there with you, hesitating much less than they would have normally done.

Even if it's all by yourself.

There's a song about that, Dancing with Myself by Billy Idol:

https://www.youtube.com/watch?v=1s2qZ6zc6ts

Where he basically draws the dead up onto their feet, he ecstatically blows them all away, they love him anyway and everybody ends up boogieing like zombies.

>You won't see me at a nightclub, a bar or any place made for people to socialize over inane discussions, alcohol and crap "music".

You won't see most people, they're usually hanging out socially with others they met like that or at equivalent places they prefer to gather whether or not alcohol was prominent, discussions were inane or music was crap.

Or whether there was anything like music at all.

But if there is music, you know what to do ;)

Trust me, I'm a scientist.

OTOH there's a lot to be said for striving to widen social circles using remote technology more so than direct contact. Haven't gotten around to that yet so I don't have much to add there.

EDIT added anyway: Helpful tip: It's probably better to leave your phone at home so it doesn't get broken while ecstatically dancing or anything else. People that are interested in what they see will often be understanding and more than willing to text your phone while it is still in repose back in its coffin. You can then power back up and raise the phone from the grave when the time is right. But it's well recognized that a lot of people need to raise their gaze and their fingers well above the plane of a touchscreen, to further points of interest more than they do.


"When you are in a place with perhaps the most mundane music, if it gets even the least bit decent and you get up there and dance ecstatically, the band will love it. If there are no dancers and you get up there at all the band will probably love it.

They might just not play so mundane after that.

And others who may be the least bit inspired will often get right up there with you, hesitating much less than they would have normally done."

Definitely, but it takes a lot of courage to do that. I am a very good dancer and I was often the only one dancing - and yes, the band of course loves it, and usually most other people do too.

But it is always a struggle to really let go and ignore all the thoughts of what others might think and just take the empty space in front of the band and go wild.

Ecstatic dance, the way I experienced it, is specifically set up not to have that crowd of judging outsiders, and rather tries to create a safe space where everyone can just move how he or she feels without being judged (also, phones are banned there, so no fear that someone might take a video of you, which is something that definitely happens when dancing in public spaces).


You are onto something here. I myself have overcome this horrible state, and the no-alcohol thing is pretty key, along with a better drugs/scene and the essence of the hacker spirit: curiosity. My story is a bit long, but I am willing to share if it is of interest or possibly even of help to someone. It is a horrible state and not easily solved by someone deep in that hopeless place.


“Get out there” means “just pick something—anything!!—and show up.”

Go to a local gathering place (cafe, bar, church, literally wherever people hang out), look at the pinboard to find an upcoming thing to go to. Ask people what they do for fun. Whatever you do, don’t look for the “perfect” thing. It doesn’t exist, and waiting for/seeking it gets in the way of you actually meeting people.

Be curious.

Isn’t that a fundamental trait of the intellectual? Consider everything, turn over every stone?

The world is crawling with interesting people who would be your friends.

Socialization is a give-and-take; expect to give (maybe listen to some “crap music” with others) before you can take.

One more thing: it sounds like you’ve built a superiority complex. Kill that. It’s a facade you’ve built to insulate yourself. You’ll never meet others with it … or you’ll just meet other snobs.

Consider that there may be someone out at those “inane” events who feels the same as you, but is out there looking for you to show up!

Addendum: this has momentum. Once you start meeting people and feeling more confident, it won’t feel like work anymore. At that point, you may actually find yourself engaging people like your former self.


I am a loner/socially awkward and used to run a bit before 2020. Then in 2020, since we were confined, I started running close to my house. I used to see a bunch of folks regularly running but was apprehensive about approaching them. One day one of them said hi, and now I am thankful and happy to be part of that group. There are still periods when I like to run alone and avoid the group, but they welcome me back whenever I am ready. I am a lot happier since joining this group. More people have joined in the last two and a half years, and I am good friends with some of them. I am not sure, but I think the group has helped some of them with their loneliness, like it did me. There is some sense of satisfaction when you improve your timing, but a bigger source of satisfaction is when you are helping others with their running or just spending time with them. I was lucky to be found by the group, and sometimes I wish we could find more people who would be interested even a little bit but are apprehensive like I was/am.


YES! Wonderful!

Two things you mention that are powerful:

1. Keep it really simple. Just show up. You don’t have to solve what happens next, or make a perfect first impression, or be a perfect person. Give yourself the same grace you would others.

2. The gratitude you feel for having found the group. Imagine how the others you’ve helped feel having found the group.

A great story—thank you for sharing.


A lot of people won't get your sarcasm, but I agree with this. It's not that simple; life isn't fair; some people are more attractive/charismatic/interesting than others. Also, if you are geeky or considered weird by others, you can only find comfort with other people like you.

And the blanket suggestion that the gym somehow solves everything is stupid. Personal experience: I tried it and ended up no better, and with knee issues that doctors can't help with (just rest some, sure - it's been like 8 months now since it began). I started playing computer games again and giving less of a shit about everything, and I feel happy again. Escapism does work.


Life is far from fair indeed - I have a deformed hip from a childhood illness that has caused me a lifetime of pain. I should probably have already had it replaced at an age that would be several decades younger than average for that surgery, but I'm too scared of the recovery and living with parts of my skeleton replaced by metal and plastic.

That said, I started a gym routine last year to try and relieve anxiety. I'm quite limited (you have no idea how much you use your hips until you get 8/10 pain for a few days after aggravating them). So it means low-impact activities and upper-body workouts. I didn't think it would make any difference in my appearance; I just needed to move in any way I could. After a few months I started getting comments about putting on muscle and overall looking better. (I lost a lot of weight, which is more attributable to cleaning up my diet concurrently with the gym routine.) A co-worker I met assumed I must be ex-military, which I found funny because the childhood illness I had is specifically named in their document of medical exclusions. I don't think it's any secret that people treat you better when you appear more fit, and it is something that you have some control over.

But I don't believe that looks are motivation enough to get you to the gym every day (at least for me it is not); it's much better to do it for your well-being. Ideally, you can find exercises that both workaround your injuries and that you enjoy. Also, it's not like you need to go hard at the gym for two hours a day every day of the week; just do what you can and take pride in what you are able to achieve within your limits.

I've spent a lot of my life dwelling on my circumstances in unhealthy ways, which I ultimately regret. I can't say that everything is wonderful now, or even good, but things are better.


"with knee issues with which doctors can't help (just rest some, sure, like 8 months now since it began)."

Usually knee issues do not heal with resting, but with moving, and they rather get worse over time while you remain passive (in a resting position only some blood flows through the knee, and the muscle deteriorates quickly; without a strong muscle, even more stress gets onto the knee).

So unless you have a very special condition where competent doctors specifically said only resting will help, rather get active again with light activity: walking, cycling, dancing, climbing, ...

Every exercise that moves the body, without putting too much stress on it.


A competent doctor told me to rest. A left-knee MRT showed a "stress break" (I don't know the correct terminology in English; a stress fracture, probably). The problem is there has been no stress for the last 6-7 months, apart from the time I was trying to start running (ironic). Either it doesn't heal for half a year, or it's something else. I also had to wear this cloth knee support thing for a couple of weeks after some extra-long walking with friends, as the knee didn't like any load at all.

It's not over yet; there will be exercise, waiting in line for a specialist, but not walking/cycling/dancing/climbing - exercises you do at home when you have an injury.

Anyway, I tried to exercise, I got an injury, and I felt worse than without the exercise, because something was taken away from me.


"I also had to wear this cloth knee support thing for a couole of weeks after some extra long walking with friend"

My experience with bandages is, only use them when you absolutely have to. Otherwise your body gets used to them and sees even less need to make the knee work on its own.

In general, I am not a doctor, but I went through years of working out how to fix bad knees and went from doctor to doctor, had an operation, etc. (most doctors were actually very bad at anything non-routine).

What helped in the end was establishing a habit of doing special knee exercises whenever possible, and lots of adequate sport in moderation. Climbing and trampolining may sound crazy, but they are actually pretty good for the knees if done at a light level, and walking barefoot also helped a lot. In general, try to get the muscles around the knee as strong as possible, as the muscles can then hold your knee and compensate for whatever is broken inside it.

"Anyway, I tried to exercise, I got an injury, and I felt worse than without exercise, cause something was taken away from me."

And I would be careful with your conclusion; in my opinion you just did too much of the wrong exercise. That is bad, yes (and it is also what messed up my knees initially). So maybe try to find activities or exercises that are fun, but not too hard. And yes, it sucks not being able to do what you could do before. At some point I almost thought I would never be able to run again. And I definitely will never get back to the state I was in before - but I have managed to do quite a lot again. Anyway, all the best for your recovery.


