Yes, many or even most domains where C++ sees a large market share are domains with no other serious alternative. But this is an indictment of C++ and not praise. What it tells us is that when there are other viable options, C++ is rarely chosen.
The number of such domains has gone down over time, and will probably continue to do so.
The number of domains where low-level languages are required, and that includes C, C++, Rust, and Zig, has gone down over time and continues to do so. All of these languages are rarely chosen when there are viable alternatives (and I say "rarely" taking into account total number of lines of code, not necessarily number of projects). Nevertheless, there are still some very important domains where such languages are needed, and Rust's adoption rate is low enough to suggest serious problems with it, too. When language X offers significant advantages over language Y, its adoption compared to Y is usually quite fast (which is why most languages get close to their peak adoption relatively quickly, i.e. within about a decade).
If we ignore external factors like experience and ecosystem size, Rust is a better language than C++, but not better enough to drive faster adoption, which is exactly what we're seeing. It's certainly gained some sort of foothold, but as it's already quite old, it's doubtful it will ever be as popular as C++ is now, let alone in its heyday. To get there, Rust's market share will need to grow by about a factor of 10 compared to what it is now, and while that's possible, if it does it will be the first language ever to do so at such an advanced age.
There's always resistance to change. It's a constant, and as our industry itself ages it gets a bit worse. If you use libc++, did you know your sort didn't have O(n log n) worst-case performance until partway through the Biden administration? A suitable sorting algorithm was invented back in 1997, and those big-O bounds were finally mandated for C++ in 2011, but it still took until a few years ago to actually implement it in Clang's libc++.
Except, as you say, all those factors always exist, so we can compare things against each other. No language to date has grown its market share by a factor of ten at such an advanced age [1]. Despite all the hurdles, successful languages have succeeded faster. Of course, it's possible that Rust will somehow manage to grow a lot, yet significantly more slowly than all other languages, but there's no reason to expect that as the likely outcome. Yes, it certainly has significant adoption, but that adoption is significantly lower than that of every language that ended up where C++ is now or higher.
[1]: In a competitive field, with selection pressure, the speed at which technologies spread is related to their relative advantage, and while slow growth is possible, it's rare because competitive alternatives tend to come up.
This sounds like you're just repeating the same claim again. It reminds me a little bit of https://xkcd.com/1122/
We get it: if you squint hard at the numbers you can imagine you're seeing a pattern, and if you're wrong, well, just squint harder and a new pattern emerges. It's foolproof.
Observing a pattern with a causal explanation - in an environment with selective pressure things spread at a rate proportional to their relative competitive advantage (or relative "fitness") - is nothing at all like retroactively finding arbitrary and unexplained correlations. It's more along the lines of "no candidate has won the US presidential election with an approval of under 30% a month before the election". Of course, even that could still happen, but the causal relationship is clear enough so even though a candidate with 30% in the polls a month before the election could win, you'd hardly say that's the safer bet.
You're basically just re-stating my point. You mistakenly believe the pattern you've seen is predictive and so you've invented an explanation for why that pattern reflects some underlying truth, and that's what pundits do for these presidential patterns too. You can already watch Harry Enten on TV explaining that out-of-cycle races could somehow be predictive for 2026. Are they? Not really but eh, there's 24 hours per day to fill and people would like some of it not to be about Trump causing havoc for no good reason.
Notice that your pattern offers zero examples and yet has multiple entirely arbitrary requirements, much like one of those "No President has been re-elected with double-digit unemployment" predictions. Why double digits? It's arbitrary, and likewise for your "about a decade" prediction: your explanation doesn't somehow justify ten years rather than five or twenty.
> You mistakenly believe the pattern you've seen is predictive
Why mistakenly? I think you're confusing the possibility of breaking a causal trend with the likelihood of doing that. Something is predictive even if it doesn't have a 100% success rate. It just needs to have a higher chance than other predictions. I'm not claiming Rust has a zero chance of achieving C++'s (diminished) popularity, just that it has a less than 50% chance. Not that it can't happen, just that it's not looking like the best bet given available information.
> Notice that your pattern offers zero examples
The "pattern" includes all examples. Name one programming language in the history of software that's grown its market share by a factor of ten after the age of 10-13. Rust is now older than Java was when JDK 6 came out and almost the same age Python was when Python 3 came out (and Python is the most notable example of a late bloomer that we have). Its design began when Java was younger than Rust is now. Look at how Fortran, C, C++, and Go were doing at that age. What you need to explain isn't why it's possible for Rust to achieve the same popularity as C++, but why it is more likely than not that its trend will be different from that of any other programming language in history.
> Why double digits? It is arbitrary, and likewise for your "about a decade" prediction
The precise number is arbitrary, but the rule is that any technology (or anything in a field with selective pressure) spreads at a rate proportional to its competitive advantage. You can ignore the numbers altogether, but the general rule about the rate of adoption of a technology, or any ability that offers a competitive advantage in a competitive environment, remains. The rate of Rust's adoption is lower than that of Fortran, Cobol, C, C++, VB, Java, Python, Ruby, C#, PHP, and Go, and is more-or-less similar to that of Ada. You don't need numbers, just comparisons. Are the causal theory and historical precedent 100% accurate for any future technology? Probably not, as we're talking statistics, but at this point, it's the bet that a particular technology will buck the trend that needs justification.
I certainly accept the possibility that Rust could achieve the popularity C++ has today, but I'm looking for the justification that that is the most likely outcome. Yes, some places are adopting Rust, but the share of those saying nah (among C++ shops) is higher than it was for any programming language that has ever become very popular. The point isn't that bucking a trend with a causal explanation is impossible. Of course it's possible. The question is whether breaking the causal trend is more or less likely than not breaking it.
Your hypothetical "factor of ten" market share growth requirement means it's literally impossible for all the big players to achieve this since they presumably have more than 10% market share and such a "factor of ten" increase would mean they somehow had more than the entire market. When declaring success for a model because it predicted that a literally impossible thing wouldn't happen I'd suggest that model is actually worthless. We all knew that literally impossible things don't happen, confirming that doesn't validate the model.
Let's take your Fortran "example". What market share did Fortran have, according to you, in say 1959? How did you measure this? How about in 1965? Clearly you're confident, unlike Fortran's programmers, users, and standards committee, that it was all over by 1966. Which is weird (after all, that's when Fortran 66 comes into the picture), but I guess once I see how you calculate these outputs it'll make sense, right?
> means it's literally impossible for all the big players to achieve this
Only because they've achieved that 10% in their first decade or so, but what I said is the case for all languages, big and small alike (and Rust doesn't have this problem because it needs a 10x boost to approach C++'s current market share, which is already well below its peak). But the precise numbers don't matter. You can use 5x and it would still be true for most languages. The point is that languages - indeed, all technologies, especially in a competitive market - reach or approach their peak market share relatively quickly.
You make it sound like a novel or strange theory, but it's rather obvious when you look at the history. And the reason is that if a technology offers a big competitive advantage, it's adopted relatively quickly as people don't want to fall behind the competition. And while a small competitive advantage could hypothetically translate to steady, slow growth, what happens is that over that time, new alternatives show up and the language loses the novelty advantage without ever having gained a big-player advantage.
That's why, as much as I like, say, Clojure (and I like it a lot), I don't expect to see much future growth.
Yes, because I have the benefit of hindsight. Also, note that I'm not saying anything about decline (which happens both quickly and slowly), only that technologies in a competitive market reach or approach their peak share quickly. Fortran clearly became the dominant language for its domain in under a decade.
But anyway, if you think that steady slow growth is a likelier or more common scenario than fast growth - fine. I just think that thesis is very hard to support.
The new profiling.sampling module looks very neat, but I don't see any way to enable/disable the profiler from code. This greatly limits its usefulness, as I am often in control of the code itself but not how it is launched.
It is not a cast. std::pair<const std::string, ...> and std::pair<std::string, ...> are different types, although there is an implicit conversion. So a temporary is implicitly created and bound to the const reference. So not only is there a copy, you also have a reference to an object that is destroyed at the end of the scope, when you might expect it to live longer.
I guess this is one of the reasons why I don't use C++. Temporaries are a topic where C++ on one side, and me and C on the other, have had disagreements in the past. Why does changing the type even create another object at all? Why does it allocate? Why doesn't the optimizer use the effective type to optimize that away?
> Why does changing the type even create another object at all?
There's no such thing as "changing the type" in C++.
A function returns an object of type A, your variable is of type B, and the compiler checks whether there is a conversion from a value of type A to a new value of type B.
Each entry in the map will be copied. In C++, const T& is allowed to bind to a temporary object (whose lifetime will be extended). So a new pair is implicitly constructed, and the reference binds to this object.
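A minimal sketch of the difference (the map contents are made up, just to show which loop copies):

    #include <map>
    #include <string>

    int main() {
        std::map<std::string, int> m{{"a", 1}, {"b", 2}};

        // Copies: the map's value_type is std::pair<const std::string, int>, so each
        // element is converted to a temporary std::pair<std::string, int> and the
        // const reference binds to that temporary (lifetime-extended to the loop body).
        for (const std::pair<std::string, int>& p : m) { (void)p; }

        // No copies: the reference binds directly to the stored element.
        for (const std::pair<const std::string, int>& p : m) { (void)p; }

        // Or just let the compiler pick the right type:
        for (const auto& p : m) { (void)p; }
    }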
> Both Microsoft and Google seem to do it just fine
Microsoft sends me DMARC reports saying "yes, everything was accepted 100%, all good". The delivery logs on our end look good as well. However, they silently drop a large portion of messages with a Hotmail destination.
Yeah, there is a whole layer of Rube Goldberg-esque nonsense between the public Microsoft SMTP servers and the actual destinations that not even the highest levels of their support seem to properly understand.
Like: if you truly want to ensure delivery to an Office 365 tenant, EWS is pretty much the only option. Anything else will have random gaps, even after the tenant themselves have begged everyone they could find to let that particular sender, domain, IP, and everything through no matter what...
A nice way to fix bugs is to make the buggy state impossible to represent. In cases where a bug was caused by some fundamental flaw in the original design, a redesign might be the only way to feel reasonably confident about the fix.
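For instance (a made-up sketch, not from any particular codebase): instead of a struct whose fields can disagree, encode only the valid states so the buggy combination cannot even be constructed.

    // A struct with independent fields allows the buggy combination
    // connected == true with addr == None:
    //   struct Conn { connected: bool, addr: Option<String> }
    //
    // Encoding only the valid states makes that bug unrepresentable:
    enum Conn {
        Disconnected,
        Connected { addr: String },
    }

    fn describe(c: &Conn) -> String {
        match c {
            Conn::Disconnected => "not connected".to_string(),
            Conn::Connected { addr } => format!("connected to {addr}"),
        }
    }

    fn main() {
        println!("{}", describe(&Conn::Connected { addr: "10.0.0.1:80".into() }));
    }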
I.e., if I pthread_mutex_init(&some_addr, ...), I cannot then copy the bits from some_addr to some_other_addr and then pthread_mutex_lock(&some_other_addr). Hence not movable.
> Moving a mutex is otherwise non-sensical once the mutex is visible
What does "visible" mean here? In Rust, in any circumstance where a move is possible, there are no other references to that object, hence it is safe to move.
Well, technically if you only have a mutable borrow (it's not your object) then you can't move out of it unless you replace the value somehow. If you have two such borrows you can swap them; if the type implements Default you can take from one borrow, which leaves its default behind; and if you have some other way to make a value you can replace the one you've got a reference to with that one. But if you can't make a new one and don't have one to replace it with, then too bad: no moving the one you've got a reference to.
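A quick sketch of those three options (hypothetical function; String chosen only because it implements Default):

    use std::mem;

    fn demo(a: &mut String, b: &mut String) {
        // With two mutable borrows you can swap the values behind them...
        mem::swap(a, b);

        // ...or take a value out, leaving Default::default() behind (String: Default)...
        let taken = mem::take(a);

        // ...or replace it with another value you can construct.
        let old = mem::replace(b, String::from("replacement"));

        let _ = (taken, old);
        // But with only `&mut T`, no Default, and nothing to put back, you can't
        // move the value out: that would leave the borrowed place uninitialized.
    }

    fn main() {
        let (mut x, mut y) = (String::from("x"), String::from("y"));
        demo(&mut x, &mut y);
        println!("{x} {y}");
    }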
> What does "visible" mean here? In Rust, in any circumstance where a move is possible, there are no other references to that object, hence it is safe to move.
And other than during construction or initialization (of the mutex object, containing object, or related state), how common is it in Rust to pass a mutex by value? If you can pass by value then the mutex isn't (can't) protect anything. I'm struggling to think of a scenario where you'd want to do this, or at least why the inability to do so is a meaningful impediment (outside construction/initialization, that is). I understand Rust is big on pass-by-value, but when the need for a mutex enters the fray, it's because you're sharing or about to share, and thus passing by reference.
Depends on the program, and it can be a very useful tool.
Rust has Mutex::get_mut(&mut self) which allows getting the inner &mut T without locking. Having a &mut Mutex<T> implies you can get &mut T without locks. Being able to treat Mutex<T> like any other value means you can use the whole suite of Rust's ownership tools to pass the value through your program.
Perhaps you temporarily move the Mutex into a shared data structure so it can be used on multiple threads, then take it back out later in a serial part of your program to get mutable access without locks. It's a lot easier to move Mutex<T> around than &mut Mutex<T> if you're going to then share it and un-share it.
Also, it's impossible to construct a Mutex without moving at least once, as Rust doesn't guarantee return value optimization. All moves in Rust are treated as a memcpy that 'destroys' the old value. There's no way to even write 'let v = Mutex::new(...)' without a move, so it's also a hard functional requirement.
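A minimal sketch of that move-in/share/move-back-out pattern (the Vec payload and thread count are just placeholders):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Constructing and binding already involves a move (no guaranteed RVO in Rust).
        let mut m = Mutex::new(vec![1, 2, 3]);

        // Serial section: we hold &mut Mutex<_>, so no other references exist and
        // get_mut hands out &mut Vec<i32> without taking the lock.
        m.get_mut().unwrap().push(4);

        // Move the Mutex into an Arc to share it across threads...
        let shared = Arc::new(m);
        let handles: Vec<_> = (0..2)
            .map(|i| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || shared.lock().unwrap().push(i))
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }

        // ...and take it back out once it is no longer shared, regaining lock-free access.
        let mut m = Arc::try_unwrap(shared).expect("no other Arc clones remain");
        m.get_mut().unwrap().push(99);
        println!("{:?}", m.into_inner().unwrap());
    }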