My electricity and water are much more reliable than my Internet service. Then again, I've never called my ISP about an issue that wasn't 100% on them, but the HN crowd is more exceptional in that sense than most people.
My car beeps occasionally, but it certainly doesn't rise to the level you describe. Some of those beeps represent real safety issues, even if you don't think they do in the moment. For example, my TPMS has alerted me to low tire pressure when I didn't know I had a leak, and the fact that it beeps a couple of times every time I start the car is both helpful ("oh right, I need to get that fixed soon") and far from annoying.
Accidentally leaving a kid locked in a car on a hot summer day is beyond horrific. How many kids should die before we think the annoyance of an extra beep would be worth it?
> Accidentally leaving a kid locked in a car on a hot summer day is beyond horrific. How many kids should die before we think the annoyance of an extra beep would be worth it?
It's not about annoyance, it's about whether it's effective at all.
If the car dings every time you turn it off to remind you to "check the back seat", it doesn't matter if the alert is completely unique, obnoxious, and annoying; you will be trained to ignore it, and it will quickly become ineffective.
There's a whole field of study here ("alarm fatigue" or "alert fatigue") that's generally looked at in terms of things like healthcare or aerospace. For example, there's a study in healthcare[0] where they found that when dealing with a system warning about drug interactions (including critical dosing errors, fatal interactions, etc) providers overrode 96% of alerts. Their "high priority drug-drug interaction" alerts were overridden 87% of the time, and on review only 0.5% of those were deemed appropriate. Other studies[1] have directly attributed this to repeated exposure desensitizing people and training them to ignore the alerts. People have died because of this.
I have a kid. I can't imagine the horror of being in that situation. I am certain that it would completely and utterly break me. I am fully on board with a system that prevents this from happening. I would be fully supportive of regulating a system that prevents this from happening. More nonspecific beeps and dings are not that system.
The problem is that the driver might hear so many beeps that they decide to ignore yet another one.
I suppose a beep that sounds very different would get their attention, like for pilots in plane cockpits. A terrible stand-up comedian suggestion would be to reuse the plane's "retard, retard!" for parents who forget their kids...
Prusa. Made in Europe, from quality components (or buy it as a kit from them and build it yourself, which is a really fantastic experience). Hardware is repairable and upgradable and the firmware is open source.
But they cost more than Bambu. Most Chinese things tend to cost less than alternatives, for obvious reasons.
Note that Prusa recently opened a US-based factory according to their blog, so in addition to EU-made they also got US-made going.
As a big fan of the company I'm hoping this will make them price-competitive to Bambu (or even considerably cheaper) while the tariffs rage. I'm not a fan of the tariffs, but if it gives a boost to the Core ONE launch, welp ... good for them.
I think that as a technical term, pauseless (as used here) has a specific meaning, just like real-time has a specific technical meaning (not literally 0 latency, since that's generally physically impossible).
I would be surprised if much of that funding went to constructing those fancy buildings - donors who want their name over the door like to do that. Keeping the lights on, the air heated/conditioned, the stockrooms stocked, etc. doesn't come cheap. Let's also remember that the government doesn't _just_ get all the research output from that grant money; it gets a pipeline of researchers (and undergraduates) that feeds industry (plus world-class research facilities that can do more research, beyond what the government directly funds). And that's a large part of what has made the US so economically successful, and such a desirable place for people to learn and work.
(Also professors and post docs in many areas can make a lot more in industry, so let's not knock them too much if a university wants to look at least a little attractive to them)
Programming languages are complicated. Their standard libraries are extensive. Real-world applications are usually not trivial, because they often model real-world processes, and the real world is messy.
Anything we can do to make writing software easier and more reliable, and reduce cognitive load, is going to benefit the software developers who are involved, and will make the systems better.
I'm sorry, if you can't jump-to-definition then you are wasting the company's time. It's something that all developers need to do and there's no reason we should be wasting time navigating a codebase when what you are looking for can be found instantly.
Same with autocomplete. I can type very fast, but autocomplete can type faster. Plus, with the size of most APIs (even the ones built into most programming languages) I have better things to spend brain space on. Is it list.empty(), clear(), truncate(), or something else? With autocomplete I can find the function I want in 3 seconds (and read the docs inline, so I can tell that empty() doesn't empty the list, it tells me whether or not the list is empty) without lifting my fingers off my keyboard. Should I remember which is which? Maybe, but I don't care, and I jump between languages frequently enough that it's not worth the effort to keep track of silly things like that.
That's a contrived example, but hopefully you get the point.
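For the record, here's that check-vs-empty confusion as a toy Rust sketch (Rust's names happen to be unambiguous, which is kind of the point of reading the docs inline):

    fn main() {
        let mut v = vec![1, 2, 3];
        assert!(!v.is_empty()); // checks for emptiness (C++'s `empty()` is the classic trap)
        v.clear();              // actually empties the container
        assert!(v.is_empty());
    }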
We're SWEs; the temptation to min/max ourselves is quite high. And it's not a bad impulse! I'm a Vim user; I've obviously invested a lot of time in editing efficiency (e.g. I'm great at looking up docs in the terminal). Oftentimes when I need to code I can't dilly dally. I also prefer a style of dumping out an implementation as quickly as possible to prove/disprove the idea, which of course requires a lot of editing efficiency.
But the way I've gotten here is by assiduously removing things that slowed me down. Sure, popping docs up in your editor is faster than switching to a browser, but just remembering is even faster. If your goal is really to not "waste the company's time" then you'd be putting the standard library, your dependencies, your app, etc. into Anki and memorizing them. Since almost nobody is doing this (some people are, and bless them), I think we should admit we're fully in the realm of personal aesthetic preferences here.
And I take a broader view of the whole thing besides. I start from the perspective that engineers are whole people with histories, futures, goals, features, interests, and opinions. For example, Go wasn't built with autocomplete and go-to-definition. You're more or less arguing that Rob Pike should've been forced to set up VSCode. I think that's an express ticket to pissing off and burning out your engineers; just like I'd never micromanage Pike to that degree, I'd never micromanage you to the point where I'd force you to learn Acme.
The value we bring to our companies isn't just the speed with which we crank out code. The value our companies give to us isn't just a salary and benefits. We have more nuanced and complex human needs, and sacrificing those for extreme coding efficiency may provide short-term gains, but the long-term effect is pretty grim (ponder for a moment working at a company that actually cares about efficiency to the degree they'll micromanage your editing workflow).
Something that Rust got _really_ right:
Editions. And not just that they exist, but that they are specified per module, and you can mix and match modules with different Editions within a bigger project. This lets a language make backwards incompatible changes, and projects can adopt the new features piecemeal.
If such a thing came to C++, there would obviously be limitations around module boundaries, when different modules used a different Edition. But perhaps this could be a way forward that could allow both camps to have their cake and eat it too.
Imagine a world where the main difference between Python 2 and 3 was the frontend syntax parser, and each module could specify which syntax ("Edition") it used...
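To make the Rust side concrete - a minimal sketch (array `into_iter()` is one of the real behavior changes between editions 2018 and 2021):

    fn main() {
        let arr = [1, 2, 3];
        // Under edition 2021 this yields owned i32s; the exact same line
        // under edition 2018 resolved to the slice impl and yielded &i32.
        // Which one you get is decided by the crate's declared edition,
        // and crates on different editions still link together in one program.
        for x in arr.into_iter() {
            let _: i32 = x;
        }
    }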
But Editions can exist only because Rust intrinsically has the concept of a package, which naturally defines the boundary. C++ has nothing like that. How do you denote that a.cpp is of edition cpp_2017 while b.cpp is cpp_2026? Some per-file comment line at the top of each file?
C++ is a mess in that it has too much historical baggage while trying to adapt to a fiercely changing landscape. Like the article says, it has to make drastic changes to keep up, but such changes will probably kill 80% of its target audience. I think putting C++ in maintenance mode and keeping it as a "legacy" language is the way to go. It is time to either switch to Rust, or pick one of its successor languages and put effort into it.
Rust doesn't have the concept of package. (Cargo does, but Cargo is a different thing from Rust, and it's entirely possible to use Rust without Cargo).
Rust has the concept of _crate_, which is very close to the concept of compilation unit in C++. You build a crate by invoking `rustc` with a particular set of arguments, just as you build a compilation unit by invoking `g++` or `clang++` with a particular set of arguments.
One of these arguments defines the edition, for Rust, just like it could for C++.
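For instance, `rustc --edition 2018 lib.rs` on the Rust side and `clang++ -std=c++17 -c a.cpp` on the C++ side both set the language version per invocation, i.e. per compilation unit.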
That only works for C++ code using C++20 modules (i.e. for approximately nothing).
With textual includes, you would need to be able to switch the edition back and forth within a single compilation unit.
It's not clear that modules alone will solve One Definition Rule issues that you're describing. It's actually more likely that programs will have different object files building against different Built Module Interfaces for the same module interface. Especially for widely used modules like the standard std one.
But! We'll be able to see all the extra parsing happen so in theory you could track down the incompatibilities and do something about them.
Modules are starting to come out. They have some growing pains, but they are now ready for early adopters and are looking like they will be good. I'm still in wait and see mode (I'm not an early adopter), but so far everything just looks like growing pains that will be solved and then they will take off.
I expect modules to follow an S curve of growth. Starting in about 2 years, projects will start to adopt them en masse, and over the next 5-10 years there will be fast growth; then (in about 12 years!) only a few stragglers will not use modules. They are not free to adopt, but there appear to be a lot of long-term savings from paying the price.
I'll mention that library maintainers/authors can't even _consider_ modules unless they set C++20 as a requirement. Many/most popular libraries will not do that anytime soon. I maintain a moderately-popular library and my requirement is C++11... now, to be fair, I started it back in 2016-2017; but still, I won't even consider requiring C++20 until C++17-and-earlier application code is close to disappearing.
Mixing editions in a file happens in Rust with the macro system. You write a macro to generate code in your edition, and the generation happens in the caller's crate, no matter what edition it is.
> I think putting C++ in maintenance mode and keeping it as a "legacy" language is the way to go
I agree but also understand this is absolutely wishful thinking. There is so much inertia and natural resistance to change that C++ will be around for the next century barring nuclear armageddon.
COBOL's still around. Just because a language exists doesn't mean that we have to keep releasing updated specifications and compiler versions rather than moving all those resources to better languages.
I think the existence of COBOL-2023 actually suggests that it's not merely possible that in effect C++ 26 is the last C++ but that maybe C++ 17 was (in the same sense) already the last C++ and we just didn't know it.
After all doubtless COBOL's proponents did not regard COBOL-85 as the last COBOL - from their point of view COBOL-2002 was just a somewhat delayed further revision of the language that people had previously overlooked, surely now things were back on track. But in practice yeah, by the time of COBOL-2002 that's a dead language.
Fully agree, because for the use cases of being a safer C, and keeping stuff like LLVM and GCC running, that is already good enough.
From my point of view C++26 is going to be the last one that actually matters, because too many are looking forward to whatever reflection support it can provide, otherwise that would be C++23.
There is also the whole issue that past C++17, all compilers look like Swiss cheese in their language support for the two following language revisions.
> I think putting C++ in maintenance mode and keeping it as a "legacy" language is the way to go
That is not possible. Take the following function in C++:

    std::vector<something> doSomething(std::string);

Simple enough, memory safe (at least the interface, who knows what happens inside), performant, but how do you call that function from anything else? If you want to use anything else with C++ it needs to speak C++, and that means vector and string need to interoperate.
You can interoperate via C ABI and just not use the C++ standard types across modules - which is the sane thing to do. Every other language that supports FFI via C linkage does this, only C++ insists on this craziness.
Also, I wouldn't start by rewriting the thing that calls do_something, I'd start by rewriting do_something. Calling into Rust from C++ using something like zngur lets you define Rust types in C++ and then call idiomatic Rust. You can't do it in the opposite direction because you cannot safely represent all C++ types in Rust, because some of them aren't safe.
I have millions of lines of C++. do_something exists, is used by a lot of those lines, and works well. I have a new feature that needs to call do_something. I'm not rewriting any code. My current code base was a rewrite of previous code into C++ (started before Rust existed), and it cost nearly a billion dollars! I cannot go to my bosses and say that the expensive rewrite that is only now starting to pay off because of how much better our code is needs to be scrapped. Maybe in 20 years we can ask for another billion (adjusted for inflation) to rewrite again, but today either I write C++, or I interoperate with existing C++ with minimal effort.
I'm working on interoperation with existing C++. It is a hard problem, and so far every answer I've found means all of our new features still need to be written in C++, but now I'm putting in a framework where that code could be used by non-C++. I hope in 5 years that framework is in place enough that early adopters can write something other than C++ - only time will tell though.
Yeah, that use case is harder, but I'm involved in a similar one. Our approach is to split off new work as a separate process when possible and do it entirely in Rust. You can call into C++ from Rust; it just means more unsafe code in Rust wrapping the C++, which has to change when you or your great-grandchild finally do get around to writing do_something in Rust. I am super aware of how daunting it is, especially if your customer base isn't advocating for the switch. Most don't care until they get pwned, and then they come with lawyers. Autocxx has proven a painful way to go. The Chrome team has had some input and seems to be making it better.
Sure, I can do that - but my example C++ function is fully memory safe (other than going off the end of the vector, which static rules can enforce by banning []). If I make a C wrapper I just lose all the memory safety and now I'm at higher risk. Plus the effort to build that wrapper is not zero (though there are some generators that help).
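To illustrate, here's roughly what that boundary looks like from the Rust side once it's flattened to a C ABI (hypothetical names, just a sketch):

    // Hypothetical C-ABI version of doSomething: the vector/string
    // structure is flattened to raw pointers and lengths, so neither
    // side can check bounds, lifetimes, or ownership for you anymore.
    extern "C" {
        fn do_something(s: *const u8, s_len: usize, out_len: *mut usize) -> *mut u8;
    }

    pub fn call_it(input: &str) -> usize {
        let mut n = 0usize;
        // Every call through the boundary is `unsafe` by construction, and
        // who frees the returned buffer is now an unchecked convention.
        let _buf = unsafe { do_something(input.as_ptr(), input.len(), &mut n) };
        n
    }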
How about going off the end of the vector with an iterator, or modifying the vector while iterating it, or adding to the vector from two different threads or reading from one thread while another is modifying it or [...].
There is nothing memory safe whatsoever about std::vector<something> and std::string. Sure, they give you access to their allocated length, so they're better than something[] and char* (which often also know the size of their allocations, but refuse to tell you).
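For contrast, a minimal Rust sketch - the "modifying while iterating" case from that list is a compile error rather than a silent use-after-free:

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            println!("{x}");
            // v.push(*x); // does not compile: cannot borrow `v` as mutable
            //             // while the loop still holds a shared borrow
        }
    }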
> going off the end of the vector with an iterator,
The point of an iterator is to make it hard to do that. You can, but it is easy to not do that.
> modifying the vector while iterating it
Annoying, but in practice I've not found it hard to avoid.
> adding to the vector from two different threads or reading from one thread while another is modifying it
Rust doesn't help here - it stops you from doing this, but if threads are your answer Rust will just say no (or force you into unsafe). Threads are hard; generally it is best to avoid this in the first place, but in the places where you need to modify data from threads Rust won't help.
This is just not accurate, you can use atomic data types, Mutex<> or RwLock<> to ensure thread-safe access. (Or write your own concurrent data structures, and mark them safe for access from a different thread.) C++ has equivalent solutions but doesn't check that you're doing the right thing.
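A minimal sketch of the "add from two threads" case, using nothing outside std:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared, mutable Vec behind a Mutex: concurrent pushes are fine.
        // Drop the Mutex and this becomes a compile error, not a data race.
        let data = Arc::new(Mutex::new(Vec::new()));
        let handles: Vec<_> = (0..4)
            .map(|i| {
                let data = Arc::clone(&data);
                thread::spawn(move || data.lock().unwrap().push(i))
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(data.lock().unwrap().len(), 4);
    }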
> And not just that they exist, but that they are specified per module
Nitpick: editions are specified per crate, not per module.
---
Also note that editions allow making mostly syntactic changes (adding/removing syntax or changing the meaning of existing syntax); however, they are greatly limited in what can change in the standard library, because ultimately that is a crate dependency shared by all other crates.
My C++ knowledge is pretty weak in this regard but couldn't you link different compilation units together just like you link shared libraries? I mean it sounds like a nightmare from a layout-my-code perspective, but dumb analogy: foo/a/* is compiled as C++11 code and foo/b/ is compiled as C++20 code and foo/bin/ uses both? (Not fun to use.. but possible?)
Is that an ABI thing? I thought all versions up to and including C++23 were ABI compatible.
How does foo/bin use both when foo/a/* and foo/b/ use ABI-incompatible versions of stdlib types, perhaps in their public interfaces? This can easily lead to breakage in interop across foo/a/* and foo/b/ .
libc++ and libstdc++ both break ABI less frequently than new versions of C++ come out. As far as I'm aware, libc++ has never released a breaking change to its ABI.
There's only ever one instance of the standard library when a program is compiled, so a and b cannot depend on different versions of it.
For normal libraries, a and b could depend on different versions, so this could be a problem. The name mangling scheme allows for a "disambiguator" to differentiate the two; I believe that the version is used here, but the documentation for it does not say if there's more than that.
By linking both and not allowing the types to mix, i.e. it treats types from a as totally unrelated to types from b.
Also, Rust compiles the whole world at once, so any ABI breakage from mixing code from different compiler versions doesn't happen. (Editions are different thing from compiler versions, a single version of the compiler supports multiple editions.)
Yeah, I know, but I mean that it's normal to link together crates compiled with the same compiler, unlike with C, where ABIs are stabler and binary dependencies are more common.
What is the point? C++ is mostly ABI compatible (std::string broke between C++98 and C++11 in GNU - but we can ignore something from 13 years ago). There is very little valid C++11 code that won't build as C++23 without changes (I can't think of anything, but if something exists it is probably something really bad where in C++11 you shouldn't have done that).
Now there is the possibility that someone could come up with new breaking syntax and want a C++26 marker. However, nobody really wants that, in part because C++98 code rebuilt as C++11 often saw a significant runtime improvement. Even today, C code built as C++23 probably runs faster than when compiled as C (the exceptions are rare - generally either the code doesn't compile as C++, or it compiles but runs wrong).
Sure. But per your own other posts in this thread, you've got > 10 million lines of "legacy C++". Probably those bad practices are present in that code and not automatically fixable. So switching to compiling everything with a C++23 compiler is every bit as much not an option for you as switching to Rust, no?
If I turn on C++23 and get a handful of errors over those millions of lines of code, I will spend the week or two needed to rewrite just those areas of code. That is much easier than rewriting everything from scratch in Rust. Even if we just wrap all needed C++ APIs in C so we can use Rust, that is a lot of effort before all our frameworks have the needed interfaces (this is, however, my current plan - it will just be a few years before I have enough wrappers in place that someone can think about Rust!)
Note too that we are reasonably good (some of us are experts) at C++ and not Rust. Like any other humans, when we first do Rust we will make a mess because we are trying to do things like C++ - I expect to see too much unsafe just to shut Rust up about things that work in C++, instead of figuring out how to do it in safe Rust (there will also be places where unsafe is really needed). I want to start slow with Rust so that as we learn what works we don't have too much awful code.
> Like any other humans, when we first do Rust we will make a mess because we are trying to do things like C++ - I expect to see too much unsafe just to shut Rust up about things that work in C++
C++ has its own Core Guidelines that are pretty rusty already (to be fair, in more ways than one). There's just no automated compiler enforcement of them.
There is no inherent point, I was just wondering, if it's possible, why people don't use such a homegrown module layout like Rust editions in C++.
I only ever worked in a couple of codebases where we had one standard for everything that was compiled and I suppose that's what 90% of people do, or link static libs, or shared libs, so externalize at an earlier step.
The C++ profiles proposal is something like an "editions lite". It could evolve into more fully featured editions some day, though not without some significant tooling work to support prevention of severe memory and type safety issues across different projects linked into the same program.
That's irrelevant. Look, the C++ committee has decided yet again not to break ABI. That is to say, they have affirmed that they DO NOT want backwards incompatible changes. So suggesting a way to make backwards incompatible changes is of no interest to the C++ committee. They don't want it and they have said so more than once.
I self-host Tiny Tiny RSS (https://tt-rss.org/). I think it will do everything you want (and more). The web UI is fine, and the Android app is great. It's actively developed, has been around for over a decade (I have been using it since Google Reader shut down) and has been super stable.
I guess the only thing it doesn't have that a SaaS offering could do would be some sort of recommendation engine (which I have no interest in).