Const all the things? (quuxplusone.github.io)
129 points by todsacerdoti on Jan 24, 2022 | 158 comments


> Pass out-parameters by pointer: Widget*.

OP says this like this is an industry rule of sorts but I've pretty much always seen non-const references for this. Using pointers is terrible because now you have to add a check for null inside your function, as its API contract says "I can accept null values", while references make it clear that it does not make sense to call if you don't have a correct object to pass it.

> That’s right: the point of making a class with private members is to preserve invariants among those members.

How would OP make a class that loads content on construction which must absolutely not be changed after construction, even within its implementation? "Being careful" is really not sufficient in my experience, and introducing child classes just to enforce the invariant that "const" provides is actively harmful to readability.


I absolutely hate non-const reference parameters. Personally, I never use them except for operator overloading or stl functions.

The reason is, as lelanthran said, that when I see func(param), I don't expect param to be modified.

The time I lost because of non-const reference parameters adds up to days, maybe weeks. In fact, bugs caused by stealthy references are often way harder to find than null pointer crashes. Null pointer crashes are among the bugs that concern me the least: sure, crashes are bad, but the core dump often tells you exactly what's wrong, and they rarely lead to non-DoS exploits. See: offensive programming.

Now that I think of it, maybe IDEs should highlight non-const references in function calls.

I think it is unfortunate because I like the extra guarantees that references offer over pointers, but the lack of explicit (de)referencing is enough for me to avoid them.


> The time I lost because of non-const reference parameters adds up to days, maybe weeks.

Well, we have wildly different experiences, I can't remember a single instance where this was an issue in the projects I've been involved with.


And I believe you, here are the possible explanations:

- Personal experience: it is not a construct I use, and most of the code I work with doesn't use it either, so when it happens, it is a surprise. People who come from a pointerless language like Java may expect it.

- I work with a lot of messy code: large code bases I barely know, written by many people from different companies, without a style guide. So one function may use references and the next one may use pointers, adding to the confusion.

- Of course, documentation is misleading (to the point that I make a conscious effort not to read it), and so are function names. Things like a getChild(n) that increments n, when pretty much every getChild(n) function everywhere passes n by value. Terrible code, but that's why there is a bug in the first place; bugs are usually not in the best-written parts.

- I actually like jumping into other people's terrible code and fixing bugs; it is an interesting challenge and I became rather good at it where I work. As a result, I am often given these kinds of jobs. A few days is not that much compared to my entire experience of debugging terrible code, but enough to be significant.

- Of course I spend a lot more time fixing memory corruption (and the problem with stealthy references is that they can look like memory corruption). But in my experience, references over pointers don't help that much with it. It helps against null pointer dereferencing, but as I said before, these are usually easy bugs to fix.


> Now that I think of it, maybe IDEs should highlight non-const references in function calls.

CLion highlights this with an off-color "&:" in front of the parameter.


C# requires explicit "out" on call site, I wonder if you could make a macro and lint rules for C++ to require this :) Haven't touched C++ in almost 10 years so not up-to-date on how good/extensible linters are.


You can create a trivial wrapper:

   https://godbolt.org/z/c5YeP6feG
The usual issue with these sorts of things is that they are not idiomatic (but then again, there is no single idiomatic C++).

edit: std::reference_wrapper is of course the standard blessed way to do this, but still not idiomatic.
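
For illustration, a minimal sketch of what such a wrapper could look like (the linked snippet may differ; the names here are made up):

    #include <iostream>

    template <typename T>
    class out {
    public:
        explicit out(T& ref) : ref_(ref) {}
        T& get() { return ref_; }
    private:
        T& ref_;
    };

    void get_answer(out<int> result) { result.get() = 42; }

    int main() {
        int x = 0;
        get_answer(out(x));       // the "out-ness" is visible at the call site (CTAD, C++17)
        std::cout << x << '\n';   // prints 42
    }

A lint rule could then flag plain non-const reference parameters to force use of the wrapper.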


> Now that I think of it, maybe IDEs should highlight non-const references in function calls.

This is a great idea. When I type `ls` in a shell, I get important information from the colors of the output: one color for normal files, another for executables, another for directories, etc.

This would work well for parameter types, even if it were as simple as "white means it cannot be modified (pass by value, const reference, etc.) and yellow means it can (non-const reference, pointer, etc.)"

I wonder if this exists?


I believe it's called "semantic highlighting".


It would be really nice for C++ to add a mandatory “out” keyword for these parameters, just like C#. In that way you are forced to specify that this is an output parameter both at the caller and the callee, which makes things much more explicit and readable.


This is a great talk by Herb Sutter that goes into this in detail https://www.youtube.com/watch?v=6lurOCdaj0Y


Or, don't use out-parameters unless you really have to.

If a function returns a value, have it return that value, instead of using an out-parameter. Passing in-parameters by const reference rather than by value is a nice win because the semantics of the function remain clear. Input parameters still look like input parameters, you're just preventing unnecessary copies where it's easy to do so. Returning a value as an out-parameter rather than just returning it is a performance hack that prevents a copy, but at the expense of muddying the semantics of the function.

As the prophet says: we should forget about small efficiencies, say about 97% of the time. Write code that clearly expresses what it does. Return your return values. If you run a profiler over your code and you notice that some returns are causing a performance problem - sure, at that point, add an overload that takes an out-parameter and use it at the frequently-executed hot spot.
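
A rough sketch of that progression (the names are invented for illustration):

    #include <string>

    // Default: clear semantics - just return the value.
    std::string render_greeting(const std::string& name) {
        return "Hello, " + name + "!";
    }

    // Overload added only after profiling showed the copy/allocation mattered:
    // the hot loop can keep reusing one caller-owned buffer.
    void render_greeting(const std::string& name, std::string& out) {
        out.clear();
        out += "Hello, ";
        out += name;
        out += '!';
    }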


The most common usage of out-parameters is when you want to return more than one thing from a function, in which case they can't be trivially converted to return values.


Surely in C++ you can return a tuple. It's only in C you can't.


Sure, but that doesn't always help readability/clarity. Compare this:

    SomeType result;
    if (!do_something(&result))
      return;
    process(result);
with this:

     std::tuple<bool, SomeType> ret = do_something();
     if (!std::get<0>(ret))
       return;
     process(std::get<1>(ret));
You could improve it a bit by using `auto` or returning a struct with named members, but quite often I still find it less readable. Especially in situations that are more complex than this example, where e.g. you fall back to another function to fetch `result` if `do_something` failed.
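
For comparison, the struct-with-named-members variant might look something like this (SomeType, do_something and process are the names from the example above; DoSomethingResult is invented):

    struct DoSomethingResult {
        bool ok;
        SomeType value;
    };

    DoSomethingResult do_something();

    // at the call site:
    const auto ret = do_something();
    if (!ret.ok)
        return;
    process(ret.value);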


If there are only two status codes (success/failure) then it seems like optional would be more appropriate than a tuple of bool and SomeType.

    const std::optional<SomeType> result = do_something();
    if (!result.has_value()) {
        return;
    }
    process(result.value());
This is nicer than the output parameter in my opinion, because it may be unclear what the meaning of a default constructed SomeType is. Default values are sometimes dangerous if the default is a real, valid input - errors and invalid states can end up being passed through.

C++ doesn't have great support for sum types, so this is all always going to be pretty non-ergonomic, and the compiler doesn't always warn if you forget to do things. I might even say that the main advantage of using sum types is gone in C++ - you actually can forget to check has_value and call operator* on an empty optional.

It would be so easy for someone to make a mistake and write:

    process(*do_something());
And invoke undefined behaviour. Whereas processing a default-constructed SomeType is at least not undefined behaviour (depending on what SomeType is, and what process does...).

This whole thing is a mess, I hate C++, thinking about UB all the time...


for performance-oriented code, the original example, where the argument is stack-allocated in the caller, passed by reference, and used based on the return value, has benefits that can't be replicated in the std::optional<> version.

I like the std::optional<> version and use that pattern some of the time. But if the object being passed/returned to/from do_something() is non-trivial, i don't want the copy overhead.


> for performance-oriented code, the original example, where the argument is stack-allocated in the caller, passed by reference, and used based on the return value, has benefits that can't be replicated in the std::optional<> version.

Can you go into detail? What are the benefits that can't be replicated?

If anything, the std::optional version seems better, since we can avoid an invocation of the SomeType constructor. Otherwise, everything is the same - everything's allocated on the stack, there is no dynamic allocation.

> If an optional<T> contains a value, the value is guaranteed to be allocated as part of the optional object footprint, i.e. no dynamic memory allocation ever takes place. Thus, an optional object models an object, not a pointer, even though operator*() and operator->() are defined.

That's from here: https://en.cppreference.com/w/cpp/utility/optional

Since C++17, RVO is required by the standard too:

> Return value optimization is mandatory and no longer considered as copy elision; see above.

From here: https://en.cppreference.com/w/cpp/language/copy_elision

I admit I'm not really a C++ expert, so maybe I'm missing something.


Yes, in C++17, RVO would fix the issue. Alas, some of us are stuck with project conventions that have not reached "use C++17". Without RVO, there's an extra copy.


RVO is not new in C++17 though, only the standard requiring it in some cases.

Unless your compiler is truly ancient odds are good it RVOs just fine no?


There is no copy involved with the std::optional version.


The thing is, are we sure?

Probably many C++ aficionados will run to godbolt.org to prove this in a simple testcase, and they will probably be right. But then maybe it’s because you’ve only tested this for primitive or POD types. Maybe this might not be guaranteed for non-POD types with custom constructors? Or maybe this might be okay because of RVO or something? Hmm, let us read the specs again…

The problem is, C++ is a language that is very unintuitive about how your code is going to get mapped to actual hardware (assembly code). For many systems developers, it isn't enough that the compiler might optimize this smartly; we instead want predictable compiler behavior. So if you want to make sure, and you're not confident about the gnarly details of the C++ specification, it will be better for you to just use out parameters instead of std::optional, if you are in the situation where you really need to squeeze out performance (which happens a lot for low-level systems programming).


Yes we are sure and it has nothing to do with optimizations, RVO, Godbolt or any of the completely irrelevant details you are talking about.

The implementation of std::optional is not permitted by the standard to allocate memory.


Note that I’m not talking about if std::optional itself allocates more than it needs to at construction (which I know it doesn’t, as cppreference.com says so), but if the compiler might generate an additional copy constructor call if it is value-returned from a function (which I’m worried about since NRVO isn’t in the spec and you can’t rely on it).

It’s more specific to the example code the original commenter posted (and about general out-parameter usage) than the flaws of std::optional itself. Maybe this didn’t get communicated properly from my comment.


For C++17, true. Not so if you're using an earlier compiler and/or language standard. Yes, yes, I know we should all get with the program, but that's alas not how the world operates.


There is no std::optional in earlier language standards. std::optional is a C++17 library feature.


Even for C++ 17, this is only guaranteed if you don't name the variable in your function. NRVO is not guaranteed at all. There is even an example downthread where gcc can't do it even at -O3.


boost::optional has existed for many, many years and does not require compiler/language support. AFAIK, the behavior is substantially identical to std::optional.


There are numerous ways of handling this that are significantly better, for example using an std::optional yields this code:

    do_something().and_then(process);
Or structured bindings:

    auto [error_code, result] = do_something();
    if(error_code == 0) {
      process(result);
    }


In the presence of types which are truthy (ie, contextually convertible to bool), it's too easy to accidentally swap the unpacking:

    // Oops, backwards!
    auto [result, error_code] = do_something();
    if(error_code) {
      process(result);
    }


To me that's not easy...

It's a very well established pattern to have (error, result); I'd immediately see something off with (result, error).

And clang-tidy can catch this: https://clang.llvm.org/extra/clang-tidy/checks/readability-i...


Fair enough, in my codebase we always order data in terms of dependencies so that independent data comes before dependent data. In this case the result depends on the error_code, and the error_code is independent, so the error_code must be listed first.


    auto [error, value] = do_something()
    if(!error)
         process(value)
Is there something wrong with this?


Hmm, is it error, value or value, error? In this situation, I'd prefer something like std::optional or std::variant, which you can't get wrong as easily as swapping two variables in a structured bind.

Those have their problems too, though, and I'd never claim std::variant is "nice" to use.


Tuples are unfortunately terrible. It's almost always better to make another type with the fields you need than use a tuple.


You can return a struct in C, but that's as unconventional as it gets.


I still don't really grok move semantics in C++ but is there a difference between assigning output to an out parameter and returning a value using std::move? Conceptually they seem the same to me but from what little I understand std::move doesn't quite do what I think it does.


You must not use std::move when returning a value, just let RVO do its job. The compiler will make it so that the variable you are returning from your function will be constructed in-place at the variable you are assigning the function call to.

However, if you are compiling at -O0 without any optimizations this may not happen.
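
A minimal sketch of the two forms (Widget stands in for any movable type; make_widget is an invented name):

    Widget make_widget() {
        Widget w;
        // ... fill in w ...
        return w;                // NRVO may elide the move/copy entirely; at worst w is moved
        // return std::move(w);  // pessimization: disables NRVO and always forces a move
    }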


Since C++17, compilers are now required to elide copies and use RVO: https://en.cppreference.com/w/cpp/language/copy_elision

There are probably exceptions and loopholes just like with literally everything in C++, but there's this line:

> Return value optimization is mandatory and no longer considered as copy elision; see above.


That's only in the unnamed case. Here's a simple case that GCC does not optimize even in -std=c++20 (and even at -O3 LOL): https://gcc.godbolt.org/z/jKPr98Mhx


Isn't it kind of confusing that RVO is mandatory, but it turns out RVO only refers to unnamed RVO and not NRVO? I would've thought that since NRVO is a type of RVO...

Thanks for the clarification. Clang does do NRVO in that case but I guess it's obviously not guaranteed by the standard.


Yes, I believe that explicitly qualifying unnamed and named rvo should be the norm and would clarify a lot of confusion on these topics.

I guess it makes sense that an optimization that GCC is not able to do at all should not be part of the standard; I'm pretty sure it should be possible to write cases that clang wouldn't be able to optimize either.


Oof, that’s a bit painful. I thought you could rely on NRVO for recent versions of gcc and clang, but feel I should be a lot more careful after seeing that example.


From what I could see through hundreds of godbolt snippets, if you declare the variable you'll return at the very top of the function before any control flow, NRVO will pretty much always happen.
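
For illustration, the shape being described (not a guarantee, just the pattern compilers tend to handle; make_values is an invented example):

    #include <vector>

    std::vector<int> make_values(bool flag) {
        std::vector<int> result;     // declared first, before any control flow
        if (flag)
            result.assign({1, 2, 3});
        else
            result.push_back(42);
        return result;               // a single named return object: NRVO very likely
    }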


That only applies when you're returning a prvalue. NRVO isn't guaranteed: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58055


Since C++17 I believe this is guaranteed.


Just like you want to avoid a copy of the input arguments, you may also want to avoid a copy of the output/return value.


I really like the way C# did this:

    void foo<T>(T by_value, ref T by_ref, in T by_immutable_ref, out T out_only);
Append `?` to any type to make it optional. (Though that is more of a lint than truly part of the type system).

`out` parameters pass no value in and must pass a value out. Isn't that just a return value, semantically? Yes, but we didn't used to have value tuples and destructuring so we had to make do. It can still be nicer for some things:

    bool TryParse(string input, out int value)

    if (int.TryParse("123", out int value))
        // Use value


Then async comes along and you can't use `out` in async methods :)

Thus (anonymous) tuples were introduced so we can do it Go/Rust-style:

    async Task<(bool, T)> Foo<T>(T value) {
        /* do something async like a web request or file read etc. */
        var aThing = await GetAThing<T>(value);
        return (aThing != null, aThing);
    }


> OP says this like this is an industry rule of sorts but I've pretty much always seen non-const references for this. Using pointers is terrible because now you have to add a check for null inside your function, as its API contract says "I can accept null values", while references make it clear that it does not make sense to call if you don't have a correct object to pass it.

The other side of the coin is that it hurts readability if you are passing non-const references. When a function takes a non-const reference, the caller looks like this:

    foo (bar); // Have to read the header to know if bar is modified.
When a function takes a pointer, the caller looks like this:

    foo (&bar); // Immediately clear that we cannot gloss over this line in code-review.
Since I read code much more than I write it, it's easier for me to determine that local non-const variables that are used as arguments to a function are not getting modified.

A better (but not best) solution might be to have the language recognise and enforce nullability on arguments. Still not perfect (pointer may be non-null but invalid), but better.

One other benefit pointer parameters have over reference parameters is that the API contract is more flexible. For example, I have a function that accepts a network connection,

    mynet_accept(int listener_fd, char **remote_ip, uint16_t *remote_port);
Having the caller specify NULL for `remote_ip` means that the callee just ignores the parameter. Same for `remote_port`.

If that was changed to use references, the caller will have to declare variables just to satisfy the parameter list, even if they never intend to use them.


> The other side of the coin is that it hurts readability if you are passing non-const references. When a function takes a non-const reference, the caller looks like this:

This has honestly never ever been an issue for me. I don't remember the last time where I passed an argument to a C++ function and actually had to wonder if the argument was modified or not. Like, so what if it is being modified, if that's what the called function has to do anyway? And even if it's const, the implementation can just remove the const, which, while not a very nice thing to do, is legal in many cases.

    $ cd
    $ rg "const_cast" -g '*.cpp' | wc -l 
    2004
(and that does not even cover the c-style casts)

> mynet_accept(int listener_fd, char **remote_ip, uint16_t *remote_port);

For this exact case my go-to solution nowadays is

    struct connection_info { 
      int fd;
      std::string ip;
      uint16_t port;
    };  

    mynet_accept({ .fd = the_socket, .port = 4556 });
which is I think much more readable than any of the alternatives ( actual example for pretty much this exact case: https://github.com/ossia/libossia/blob/master/examples/Netwo... )


> I don't remember the last time where I passed an argument to a C++ function and actually had to wonder if the argument was modified or not. Like, so what if it is being modified, if that's what the called function has to do anyway?

That's not what I was complaining about. I am saying that, as a reader of the code, it is easier to reason about what happens in this situation:

    foo (&bar);
    ...
    if (bar) { do something }
than in this situation:

    foo (bar);
    ...
    if (bar) { do something }


> mynet_accept({ .fd = the_socket, .port = 4556 });

That's not functionally the same as the example I posted. In any case, I was using the example to point out that using pointers provides a more flexible API contract.

When using pointers, the contract can state that any parameters that are NULL are ignored, whether they are out parameters or not.


> Like, so what if it is being modified, if that's what the called function has to do anyway?

The idea is that you have some code like this:

  x = some_func();
  do_something(x);
  if x == 0 {
    do_something_else();
  }
You're trying to track down why do_something_else() wasn't called, let's say. Do you have to look at the definition of do_something()? If your coding standards say "no non-const references", then no. If do_something() was modifying x, the call would have looked different, so you can ignore it.


To stay on topic, if x was declared const, you wouldn't have to guess. The compiler would tell you.


> To stay on topic, if x was declared const, you wouldn't have to guess. The compiler would tell you.

Frequently it can't be declared constant, because it is being changed in that scope as it runs.


This was a discussion about passing out (or inout) parameters as `T&` vs `T*`. By definition, these parameters can't be const, so const is irrelevant to this thread.


Good point.


> You're trying to track down why do_something_else() wasn't called, let's say

Then I put a breakpoint at that line and hit f5 and don't try to guess what may or may not happen because in the end that's the only way to know what actually happens


> Then I put a breakpoint at that line and hit f5 and don't try to guess what may or may not happen because in the end that's the only way to know what actually happens

There's no need to guess if the parameter is passed by value only. When pointers are used, visual inspection alone is enough to ensure that the reader knows what the value of the argument is after the callee returns.

I said that it's easier to read when pointers are used, and you propose stepping through the code as an alternative. Surely you see my point now?


You can't always reproduce a bug. Also, do you Step Over do_something(x) or Step Into? You definitely step into if it's do_something(&x).


> $ rg "const_cast" -g '*.cpp' | wc -l

not really a fair count though, since those casts may also be adding const


I am missing your readability point here: If I am reading you correctly, you seem to be saying that if you come across foo(&bar) then you need to look at what foo is doing in order to know if bar might be changed, while in the case of foo(bar), looking at foo's declaration might be sufficient. Is the latter not sometimes more readable than the former, and never less?


> I am missing your readability point here: If I am reading you correctly, you seem to be saying that if you come across foo(&bar) then you need to look at what foo is doing in order to know if bar might be changed, while in the case of foo(bar), looking at foo's declaration might be sufficient. Is the latter not sometimes more readable than the former, and never less?

Almost never, in my experience. During code review, when I see this:

    foo(&bar);
I know quite well the author of that code knows that bar might be changed. There are no questions to ask him here, and no feedback to be given.

When I see this:

    foo(bar);
I've no idea if the author realises that `bar` might have a different value after `foo()` returns. I have to ask the question "Do you know that `bar` might be changed by `foo`? Did you test with the different program states that result in the value that `foo` may assign to `bar`?"

In a place where the convention is to use pointers instead of mutable references I never have to ask the question - the code review of the implementation of `foo` would have changed the mutable reference parameter to a pointer parameter.

I read code much more often than I write it. References hurt readability.


That is an interesting point of view. I am usually reading code in order to understand what it does (or where it goes wrong) without the author being present, where the proper use of const in declarations can save a lot of time.


In our corner of said industry it's

   void foo(const T & in, const T * optional_in, T & out, T * optional_out);


> How would OP make a class that loads content on construction which must absolutely not be changed after construction, even within its implementation? "Being careful" is really not sufficient in my experience, and introducing child classes just to enforce the invariant that "const" provides is actively harmful to readability.

Agreed. Also what did you mean by introducing child classes? If you move const fields from your class to a field which is an inner class, either it has an assignment operator (so you can overwrite the entire class by mistake) or not (so the outer class can't be assigned over). If you move all fields from your class to a field, it becomes substantially more tedious to access them. And I suppose you could move immutable fields to be private in a base class supplying a getter, but this feels like a hacky use of inheritance (you lose aggregate initialization, you expose an implementation detail in the header, etc).

Personally I want C++'s const fields to allow moving and swapping to replace them, but not regular methods. Is it possible for the compiler to distinguish moves/swaps and regular methods, or provide an escape hatch to allow `operator=() = default` or manually defining moves/swaps? And would the standards committee accept that type of change?


I can't see how that would make it better than simply not marking them const. Now you still have to be careful and keep edge cases in mind and deal with "const" fields actually changing.

There's one exception, though. I'd support const fields being non-const in the constructor and destructor. But then that should be coupled with the constructor being able to run code before constructing members (in a non-hacky way). The constructor's task is to establish class invariants, so while the constructor is running, it's reasonable to assume all invariants haven't been established, yet, including the immutability of some fields.

This isn't nearly as clear-cut for move construction/assignment, which would need to modify a different object's const members, which at the point of invocation is not at any special stage of its lifetime.


> Using pointers is terrible because now you have to add a check for null inside your function, as its API contract says "I can accept null values", while references make it clear that it does not make sense to call if you don't have a correct object to pass it.

You need no such NULL check. Specify as part of the function's contract that a pointer be non-NULL: if a caller breaks the contract, behavior is undefined. An unexpected NULL is just like any other violation of the function's contract: a programming error.

Spell out the contract in the function's documentation. A contract is more than the signature. There is absolutely no need for a runtime check (certain special cases aside) to enforce function contracts.

In some cases, assert()ing that a parameter that is expected to be non-NULL is in fact non-NULL can be useful --- but you don't need such checks for correctness of the program, and use of assertions in general ought to be a matter of taste and discretion.
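
For instance, a sketch of what that looks like in practice (Widget comes from the article; resize and set_size are placeholders):

    #include <cassert>

    // Contract: `w` must not be null - passing null is a bug in the caller.
    void resize(Widget* w, int width, int height) {
        assert(w != nullptr);    // documents and (in debug builds) checks the precondition
        w->set_size(width, height);
    }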

Knee-jerk, automatic checking of pointer parameters for NULL by random internal functions of a module, in every single place, suggests that the author doesn't fully comprehend the concept of interface contracts generally.


It's wildly useful to have interface contracts that can be verified by the compiler. In my experience this benefit of using pointers (vs references) only when the argument is optional way outweighs the drawback of not seeing an '&' at the call site.


> In my experience this benefit of using pointers (vs references) only when the argument is optional way outweighs the drawback of not seeing an '&' at the call site.

Maybe so. Reasonable people differ on this subject.

My point is different: it's just not true that if a parameter is a pointer, then as a general requirement you have to check at runtime whether that pointer is non-NULL. It drives me nuts when I see these checks sprinkled throughout a codebase.


> My point is different: it's just not true that if a parameter is a pointer, then as a general requirement you have to check at runtime whether that pointer is non-NULL. It drives me nuts when I see these checks sprinkled throughout a codebase.

My point is that if you only ever use pointer arguments to indicate that the argument is optional, then the checks are required.


But it is a reasonable rule, hence it being a general requirement in a lot of places.

More nuanced, it's more important to have these null checks on a library's public interfaces than a library's private interfaces. Private interfaces can do with asserts for null check.


> it's more important to have these null checks on a library's public interfaces

No it isn't.


Library authors and users would disagree with you.


> Spell out the contract in the function's documentation

The sibling subthread is all about people not wanting to read the function's documentation.


I would love to have compiler enforced contracts. However I'm not sure how to make that happen or even if it is technically possible. The current contracts proposal is a step in the right direction, but just barely a step and I'm not sure if it can go far enough.

Until then I avoid adding undefined behavior to my code for good reasons.


> now you have to add a check for null inside your function

While I generally agree that non-const references make more sense for out-parameters (although I actually prefer return values), don't forget that it's possible for a reference to become "null" too. It's quite unlikely to happen, but if it happens it's much worse to debug than a null pointer.

Oversimplified example:

    const int& bla = *((int*)nullptr);


Dereferencing a null pointer is Undefined Behavior.


The case against const local variables basically boils down to:

1. "I don't need the compiler to avoid writing bugs"

> I don’t need a keyword just to tell me that a variable isn’t modified during its lifetime. I can see that fact at a glance [...]

Not terribly convincing.

2. might inhibit automatic move of return value

This is an edge case that does not affect correctness, only performance, and rarely pops up unless the code has other, worse problems.

(also, when it does happen, it is trivially found by static analysis tools, and clang-tidy will point it out to you)

> But suppose there were many more lines of code interspersed between the [definition of fullName] and the final call to e->setName(fullName). That call makes an unnecessary copy. The programmer might try to fix it by writing e->setName(std::move(fullName))… but guess what? fullName was const-qualified, so the move does nothing!

If there are "many more lines of code interspersed between the variable definitions and the final call", this implies that fullName is used somewhere between its definition and the final call (it makes no sense to declare fullName before it is required).

But after fullName is moved from, it should not be referenced again. This means that rearranging the function (so that fullName is used after the move) would introduce a correctness bug.

This is in itself a bad situation which should be avoided by refactoring, so it's not a great example (it's also not great that the actual example is not written out as code, merely handwaved). But this is actually an example where defaulting to const could be at least a bit helpful, because doing so makes all non-const variables stand out and warrant extra attention.
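
To make the pitfall concrete (following the article's setName example; the surrounding names are assumed):

    const std::string fullName = firstName + " " + lastName;
    // ... many lines later ...
    e->setName(std::move(fullName));  // moving a const object silently degrades to a copy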


> I don’t need a keyword just to tell me that a variable isn’t modified during its lifetime.

I agree that this isn't very well argued, and variable re-use is a common source of bugs – especially variables that are used several times in a single function for different purposes.

On a similar point, Java has `final` instead of `const` and it does apply to local variables, including to guarantee that a variable is not only not modified, but set exactly once. Something like this:

    final String foo;
    if (condition) {
        foo = someValue();
    } else {
        foo = otherValue();
    }
Omitting any of the assignments leads to a compilation error. It works with complex if conditions with multiple branches, and even with `switch`. If a branch throws an exception, that's also acceptable as the variable is guaranteed not to be used below.

I find it pretty useful; doesn't C++ provide a similar construct?


> I find it pretty useful; doesn't C++ provide a similar construct?

static const?
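
A closer analogue of the Java pattern, assuming you want a single-assignment local initialized from branches, is a const local initialized by a ternary or an immediately invoked lambda (`condition`, `someValue` and `otherValue` mirror the Java example above):

    const std::string foo = condition ? someValue() : otherValue();

    // or, for more complex branching:
    const std::string foo2 = [&] {
        if (condition)
            return someValue();
        return otherValue();
    }();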


For the most part I agree with the article, but yes, for long functions I will often const locals that I want to make sure stay constant (i.e. there is some cross-variable invariant that I want to make sure always holds). I won't bother for functions of fewer than 10 lines (but it is not a rule and I wouldn't want it to be enforced).

I also see no issues in consting members of classes that don't have value semantics.


Interestingly that’s a pretty rust-ic view of things: Rust parameters can be passed either by value (`T`), by reference (`&T`) or by mutable reference (`&mut T`). This differs slightly from TFA because a difference is made (at callsite) between by-val and by-ref, but since Rust uses destructive moves that makes a lot of sense.

As TFA recommends, rust data members can’t be made readonly, that’s just not part of their properties.

Diverging from TFA, local bindings are immutable (by default), however there have been rumblings about dropping this, and essentially always making them mutable because the signal is pretty low.

While it’s possible to make parameters mutable e.g.

    pub fn foo(mut self)
AFAIK that is not exposed in any way to the caller, as far as Rust is concerned, a move (pass-by-value) is a move, whoever the owner is can do whatever they want e.g. this is entirely valid:

    pub fn foo(self) {
        let mut self = self;
And the previous is just a shortcut for that.


> Diverging from TFA, local bindings are immutable (by default), however there have been rumblings about dropping this, and essentially always making them mutable because the signal is pretty low.

I hope that never happens; one of the best things in Rust is that it's easier (less typing) to make things non-mutable, which in the case of local bindings tends to encourage a more functional programming style (for instance, it encourages "let x = if foo { bar } else { baz };" instead of "let mut x; if foo { x = bar; } else { x = baz; }").


Agreed, but for that particular example, the compiler is smart enough to see that x is assigned exactly once, so you can have x be immutable in both examples.

https://play.rust-lang.org/?version=stable&mode=debug&editio...


> there have been rumblings about dropping this

Really? I don’t pay attention to everything that goes on, but keep an ear to the general pulse, and I haven’t heard anything even vaguely serious about this since the mutpocalypse back in ¿2014?, which failed (sadly, in my opinion—though I was undecided back then). But I wouldn’t mind dropping immutable local bindings; as you say, it’s low signal and mostly just an annoyance. I’m interested if you have any citation, however loose; I’m curious.

—⁂—

I think the interesting difference between Rust and C/C++ in this space is the spelling, and just how significant its impact is. There are two parts to this:

① The reading order. C/C++ reading order is a hodge-podge of right to left and left to right. And what’s the difference between `T const * name` and `T * const name`? There’s a fair chance one who doesn’t know and hasn’t developed the appropriate intuition will guess incorrectly. (In pseudo-Rust terms: they’re `mut name: const T` and `const name: mut T`, and I think I got that the right way around.) See the worked example in https://en.wikipedia.org/wiki/Const_(computer_programming)#C... for a particularly painful example: `double (*const (fun(int))(double))[10]`. I believe that’s equivalent to `fun: fn(int) -> mut fn(double) -> const mut [double; 10]` in Rust (keeping raw pointers and C++ type names), which is much clearer, reading purely left to right.

② The default for constness. C/C++ default to mutable and require that you spell out any qualifiers (const in one direction, restrict in the opposite, dunno if there are others). Rust goes with a more sound hierarchical model instead, defaulting to immutable (the more restrictive) and requiring that you spell out any functional additions you want (mutability, which actually corresponds more to restrict), and I'm glossing over references versus raw pointers but because of having taken a different foundational design it's all more sound and sensible to my eyes, with the simplest spelling providing the least privilege, and more keywords adding more privilege. (Raw pointers are spelled `*const`/`*mut` rather than just `*`/`*mut` as with `&`/`&mut`, but that's to avoid confusion or error, given their role in FFI.) It's an important philosophical difference; C++ templates versus Rust generics exhibit much the same difference. Well, is it any wonder that Rust has kept on surfacing LLVM bugs because noalias wasn't exercised much in C/C++ code? When you have to add a restrict keyword, especially when the compiler doesn't help you…

Well, the glib summary is that Rust is better in this area, with a better theoretical foundation and lessons learned from the likes of C/C++, but of course it’s more nuanced, and there are some types of architectures that don’t translate well to Rust. But it’s interesting to reflect on the differences. Because Rust basically does const all the things, and makes it easier and safer to do so. Kinda funny that I started the second half of this comment saying the spelling had a particularly significant impact, and ended up back at the memory/pointer/reference model differences. That’s definitely critical, but the spelling is a major part of it too.


Most of the author's points are well-taken. But not all of them... for example:

  auto plus(std::string s, std::string t) { return s + t; }
> The above code is bad because it makes unnecessary copies

Maybe, maybe not. A compiler may inline it (although, TBH, even regardless of parameter constness, std::string is very copy-prone in general.)

> What we mean to write is:

  auto plus(const std::string& s, const std::string& t)
Not quite. That is, this implies we don't want to take rvalues (e.g. temporary strings). It will work with rvalues, but we might be making copies for them.

When you think about it, we don't even need the actual string class; we only need access to the characters. So...

  auto plus( std::string_view s, std::string_view t)
(this is C++17; before that, try: https://github.com/martinmoene/string-view-lite )

Also, why use the `auto` return type? I avoid it unless it hides some complex monstrosity, or it's a template where it's difficult to figure out. Here I would write:

  std::string plus( std::string_view s, std::string_view t)
> In function signatures: the ugly is the bad

This is... a good principle, in theory, but in practice, not always :-)

But this principle is part of why we don't want to talk about const &'s and just pass things by value if we can.


Unfortunately there are a lot of overly-strong claims in this blog post; the choices are not as clear-cut as the author makes them seem. I'll list a few examples to illustrate (these are not exhaustive):

> What we mean to write is: auto plus(const std::string& s, const std::string& t) { return s + t; }

To a first-order approximation, sure. But arguably it's better to pass 's' by value and 't' by const reference, then do s += t and return s. That lets you reuse any existing buffer if you receive an rvalue, and lets the caller preallocate the buffer if it so desires. This can help you avoid a heap allocation.
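
I.e. something like:

    std::string plus(std::string s, const std::string& t) {
        s += t;    // appends in place; can reuse s's existing capacity
        return s;  // returning a by-value parameter is at worst a move
    }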

> This function is wrong: auto plus(const std::string s, const std::string& t). The programmer meant to type const std::string& s.

Not necessarily. The const tells the programmer that s will never be modified. That's an excellent constraint that can make a nontrivial function easy to reason about. (Obviously that doesn't imply you should do this everywhere. I'm just pointing out it's not "wrong".) P.S. I hear optimizers can sometimes optimize const objects, though I don't have proof handy for that example, so someone else can fact-check that part.

> or maybe they meant const std::shared_ptr<Connection>& for efficiency (but in that case why aren’t they simply passing const Connection& from the caller?)

That "why" actually has an answer: "because the function might only copy the shared_ptr conditionally, and desire to remain efficient in the no-copy case".

> Const data members are never a good idea.

This one is correct!


> That "why" actually has an answer: "because the function might only copy the shared_ptr conditionally, and desire to remain efficient in the no-copy case".

Yeah. This blog post has a few well argued guidelines about passing smart pointers to functions, which covers this case: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-poi...

> Use a const shared_ptr& as a parameter only if you’re not sure whether or not you’ll take a copy and share ownership; otherwise use widget* instead (or if not nullable, a widget&).


> Obviously that doesn't imply you should do this everywhere. I'm just pointing out it's not "wrong".

The article makes a point about contracts. In their code base, it's always wrong, since function parameters, by convention, are only allowed to be (a) non-const values, (b) const references, or (c) non-const pointers. Anything else, in their codebase, indicates an error. Their point is that these three are all you need, so that getting in the habit of only using them helps maintenance, since during code review or something you know for sure that any other parameter is a mistake someone made, not an intentional choice (because of the coding standard).

> P.S. I hear optimizers can sometimes optimize const objects, though I don't have proof handy for that example, so someone else can fact-check that part.

That can happen, if a reference to your local variable is passed to a function.

For example, tried this in Godbolt [0]

  void some_func(const int& x);
  void some_other_func(int x);

  void bar() {
      int x = 123;
      some_func(x);
      volatile int y = x;
  }
The final line directly writes 123 to y if x is declared const (or if some_other_func() is called instead), but as the code stands it reads x again from memory. This happens because some_func could do this:

  void some_func(const int& x) {
    const_cast<int&>(x) = 7;
  }
If x was `const int x`, this is UB, but if x is `int x`, the program MUST write 7 to y.

Note however that this optimization only happens (with clang) IF the value of x is a compile-time constant. If instead the value is only known at runtime, there is 0 difference - the value of x is still read from the stack, not kept in a register for example.

[0] https://godbolt.org/z/YKf4ndq1v


Passing by 'const value' is not a real thing. In the implementation you can remove the const and the compiler will not complain.

You can put them in the definition and the compiler will prevent you from mutating that argument in the body of the function however.

I usually remove these from declarations if I see them. I see it as noisy and a little distracting: the caller can already see that there are going to be no side effects on that variable; whether or not the function changes its copy of it is an implementation detail.
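
For example (a sketch; count_spaces is invented): the top-level const is not part of the function's signature, so the declaration and the definition can differ:

    #include <algorithm>
    #include <string>

    // header: declared without the top-level const
    int count_spaces(std::string text);

    // source file: the same function; the const only protects the local copy
    int count_spaces(const std::string text) {
        return static_cast<int>(std::count(text.begin(), text.end(), ' '));
    }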


> Obviously that doesn't imply you should do this everywhere. I'm just pointing out it's not "wrong".)

I have to disagree here. const-ing everything except stuff that truly needs to be optimized significantly helps with readability, and when someone is updating their code they can be warned about invariants if they inadvertently try to modify something that is const. Yes, it's a little verbose but worth it IMO. It also helps the compiler :)


I held the same position as you until quite recently. Unfortunately I've changed my mind on this somewhat. I've found it introduces too much friction in practice to be generous with const everywhere. A few reasons for this that I can remember off the top of my head are the following (there are probably more that I'm forgetting):

(1) Constifying variables hampers your ability to move from them (including RVO), which you still frequently want to do even in cases where you don't otherwise mutate the object. Which is obviously bad for performance. Now your solution might be "ok, so don't put const if you plan to move", and sure, you can do that, but then your variables and parameters start becoming an inconsistent mixture of const- and non-const types. Not the end of the world, but it's kind of jarring for a reader (read: increased cognitive load, confusion, etc. during maintenance) when they wonder why X is const and Y is mutable despite the fact that the problem should treat both of them as const. Now, if this was the sole downside, I would say might still be worth it on the balance, but it doesn't help the situation when there are other downsides.

(2) Conceptually, constness of object types in a function signature is an implementation detail callers should be unconcerned with, but language-wise, it's also encodable in the prototype. The practical implication of this is that the prototype ends up going out of sync with your implementation, and you'll want to fix them up so they match. (Not just for aesthetics, but because it also helps with tools, say grep.) The compiler doesn't necessarily complain for object parameters, but if you don't fix them then you end up with inconsistent signatures that are further confusing at best. Not only does this get quite annoying, but fixing it also forces the recompilation of all callers, which is yet another thing that slows down build times unnecessarily.

(3) It's practically impossible to get other developers on board with this even if you ignore all the other issues. Too many people just don't care enough for the advantages to change their habits on this issue. If anything, you'll be forced to remove "excessive" const for consistency with the existing codebase.

Now there are obvious advantages, and I could honestly argue for either side, but the friction starts getting old. It's just not a clear-cut case in practice, as much as in theory I want to agree with you (and have in the past).


Agree about 1. About 2, I think there isn't anything wrong with the prototype differing from the implementation in top-level qualification, which is perfectly legal. I normally grep function names; if I want semantic search I use an LSP.

Regarding 3, eh, leading programmers is like herding cats.


For (2) the obvious solution is to never add the const to the prototypes, since it is meaningless there anyway.


It is a sign of great design of a programming language when people tell you "there's dozens ways to pass parameters but all except these 3 are always wrong".


i mean it's C++, there's like 4 ways to initialize variables that are all subtly different.



This is truly terrifying.


Today I learned that using const prevents the compiler from using move semantics. I am sad that I didn't already know this and also annoyed that C++ is so fiddly. This is less obscure than the thread about std::launder on HN a few days ago but still counts as obscure for me, and you don't even get a friendly compiler warning when you do this. It just underlines for me the amount of knowledge required to truly become an expert on C++. It's such a powerful language but I sometimes wonder if all this complexity is a necessary consequence of that power.


> I sometimes wonder if all this complexity is a necessary consequence of that power.

For many use cases you do not need all that knowledge. C++ is fast enough to not have to care. It's when you hit a wall that you need to get the extra tools to make things work.

I have seen developers discussing obscure optimizations for a piece of code that will only be called once every several minutes, and only after unparsing a JSON string. That one less instruction executed by the CPU makes no difference whatsoever.


I would say this is pretty obvious. Moving from an object will leave it in a different state than it was before. Obviously it can't be done for const objects.


It's not obvious at all, the issue is that move semantics in C++ don't really have much to do with the concept of moving. Most languages that have move semantics treat moves as a kind of relocation of a value from one place to another, hence the name. You're not changing the value, you're relocating it and in fact once the relocation is complete the original location is destroyed. So for example in Rust, you are absolutely welcome to move an immutable object.

In C++, a move is an awkward kind of mutable copy operation with a special overloading rule that allows binding to rvalue references. There is nothing obvious about how it works, people get tripped up by it all the time along with all the additional complexity about when moves happen vs. when an elision happens vs. when a copy happens, the fact that T&& means wildly different things depending on whether T is a template parameter, and the bloated set of value categories introduced by it: glvalues, xvalues, rvalues, prvalues.

I absolutely sympathize with anyone who doesn't feel like knowing all these C++isms.


> So for example in Rust, you are absolutely welcome to move an immutable object.

In Rust, your code won't even compile if you try to use a moved-from value. In C++ what happens depends on what the moved-from state looks like.

In C++, if you do something like use a moved-from value, it's really not obvious what "should" happen at all. It's not even obvious whether that should compile or be undefined behaviour.

> I absolutely sympathize with anyone who doesn't feel like knowing all these C++isms.

Any C++ programmer should agree with this. We don't want to know all of this bullshit and in fact would largely prefer if the compiler would know it for us...


Most languages? I only know of two languages with move semantics: C++ and Rust. The latter has the benefit of a clean slate and builds upon C++'s experience. C++ had to be backward compatible with pre-C++11 and couldn't make destructive moves work.


I did not say most languages, I said most languages that have move semantics. Neither C++ or Rust invented the concept which dates back to the 1980s.

>C++ had to be backward compatible with pre C++11 and couldn't make destructive moves work.

This is simply untrue as per the original proposal which left the door open to destructive moves (that door is now closed). There is nothing about destructive moves that make it incompatible with C++98/03. The main concern for destructive moves was how to deal with moves across an inheritance hierarchy, certainly a reasonable concern to have but nothing insurmountable by any means.

The issue is that, as with most things involving C++, decisions are pondered upon endlessly by a very small group of people, not on the basis of their technical qualifications but on the basis of being able to participate in the C++ standardization process, which requires physically travelling to numerous locations around the world at one's own expense. If C++ had a more open standardization process that embraced research from other languages and communities, this issue would have been solved.

You can read the original proposal here:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n137...

Note the section on destructive moves which doesn't mention anything to do with compatibility, and finally note why it wasn't adopted:

"In the end, we simply gave up on this as too much pain for not enough gain."

By "we" the author means three people gave up on it. Three people who couldn't find a solution to a problem (which has a solution) is why C++ is forever stuck without destructive move semantics.


You could equally say that destroying a const object changes its state, so you can't destroy const objects, but that would not be true! So I don't think this is exactly obvious, although I do like it as a way to explain the decision.


But destroying an object, by definition, does not leave an object around, so no state change can be observed. If you had destructive moves, then definitely moving from const objects could be a thing.


A large amount of knowledge is required to be a great programmer. The choices of what to learn are far more than anyone can learn in a lifetime.

C++ gives a good abstraction for very low level operations, and has a great long term backward compatibility story. As such learning the details can be useful for anyone doing low level work over the long term. Rust is aiming at this space but doesn't have the long term track record (yet?). There are hundreds of other languages that you can also choose from with pros and cons. No matter what language you choose though, because low level details are important to you, you must learn fiddly low level bits to some extent as when move vs copy is used is by nature important.

Of course part of this is about age. C++ goes back to the early 1970s (the C roots). In that time a lot of things have changed. We have learned not to do things that seemed good. We have found better ways. The way hardware works has changed. What is important to speed has changed. Compilers now do optimizations. Whatever language you choose today will look dated in 50 years.


> Pass out-parameters by pointer: Widget*.

I’d like to understand the rationale here better. It seems much better to pass out parameters as reference (Widget&), as it avoids having to check for nullptr. As of C++11, I was taught that raw pointers are now code smell in all but very rare cases, and you usually want to pass out parameters by reference (or, rarely, by smart ptr).


> as it avoids having to check for nullptr

Which is nice, but the real problem is always invalid references/pointers, which you can't check with a simple `== nullptr`. So, if an error occurred, the reference could be in any state, instead of a `0`, which can easily be checked.


> raw pointers are now code smell

The way I look at it is that something has to own the pointer. That ownership should be expressed via std::unique_ptr. If ownership must be shared (which is relatively rare) or you need weak pointers (even more rare), then use std::shared_ptr.

It's not a code smell to receive a pointer parameter because the function or method doesn't own the pointer. I would generally pass const references as parameters except for those cases where you may want to allow a nullptr parameter.


> avoids having to check for nullptr

You don't have to check for nullptr. It's the caller's job to adhere to the function contract. A function's caller can screw up in a thousand different ways: why check at runtime for one of those ways?

> by smart ptr

Unnecessary heap allocation (which is what you get with a scheme that forces passing of out parameters via smart pointer) is a bad thing.


Maybe the visibility at the call site? Taking a pointer makes it clear what the parameter is used for, and since input parameters can only be passed by value or by const reference, the only reason to hand out a pointer is an out parameter.


A reference is just a pointer in disguise, and so can still be null if created from a pointer.

The rationale behind passing by ptr is at the call site you have to use the & operator, which makes it immediately obvious to the reader that it's possible the argument is being mutated (because it's not passed by value).
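Something like this, with made-up names:

    struct Widget { int size = 0; };

    void scale_ref(Widget& w) { w.size *= 2; }            // out/in-out by reference
    void scale_ptr(Widget* w) { if (w) w->size *= 2; }    // out/in-out by pointer

    int main()
    {
        Widget w;
        scale_ref(w);    // call site looks just like pass-by-value
        scale_ptr(&w);   // the & tells the reader that w may be mutated
    }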

In general it's even better to avoid out parameters entirely, although there are cases where you want to reuse a container's memory and it's a useful technique.


> A reference is just a pointer in disguise, and so can still be null if created from a pointer.

References cannot be null legally.


True, but I've seen people exploit undefined behavior to make a null reference. Please don't ask me to explain it, someone might think they understand it and try it.


> A reference is just a pointer in disguise, and so can still be null if created from a pointer.

That’s immediate UB right there.


I wonder what C++ would feel like if instead of making it a burden to add const everywhere, you made it a burden to remove it? E.g. everything was const unless you added a keyword (e.g. mut).


You'd get a language that is wildly incompatible with C and hence you may as well use a different language.


I feel like I should bookmark this page, to use as a reply to every person who says "Why use C when C++ is available?"

The cognitive overhead of figuring out when "bits are moved implicitly" vs "bits are copied implicitly" vs "bits are used after free" can be quite high in a mature codebase.

After all, all those problems above disappear when using C.


I don't have a lot of opinions about C++ (which is good, since the number of cases covered here was quite large).

For C, I really do think that const-ing all the things when possible helps, since doing so should (I don't have evidence, I'm not a CS researcher) help protect against accidental modification, which can lead to dangerous bugs. More than that, I really think it makes the programmer's intent much clearer, which I find important especially in large and perhaps less disciplined codebases than the article's author seems to assume. Basically, in a language where all variables are modifiable by default, not const-ing (to me) means "I'm going to be modifying this later, watch out!".

The same goes for code still using the C89-ish style of declaring all variables at the beginning of the scope. If you have a 150-line function with 13 lines of variable declarations at the top, it is much easier to know which variables to focus on understanding if some are const and some are not.

I also think it just makes the code more self-explanatory and "confident"-looking when things, even local variables that are only referenced a couple of times in the next few lines, are const.

Something like:

    // Make the string one character shorter.
    void truncate(char *s)
    {
      if (s != NULL)
      {
        const size_t len = strlen(s);
        if (len > 0)
        {
          s[len - 1] = '\0';
        }
      }
    }
This is ultra-trivial, but to me it reads much better when 'len' is const. Sure, I can see that there are only two further references to that variable, but it's just ... simpler. You could write it in a more "elite" old-sk00l C style like this:

    // Make the string one character shorter.
    void truncate(char *s)
    {
      if (s != NULL)
      {
        size_t len = strlen(s);
        if (len-- > 0)
        {
          s[len] = '\0';
        }
      }
    }
But to me that is just needless complexity. I did not Godbolt the above to see if there's any difference in the generated code, but would naively expect there not to be.

Addendum/edit: I'm still really opposed (like the article) to const-ing scalar function arguments, since I feel that leaks internal information (whether or not the code inside a function is going to change the value of an argument) to the outside, which is bad. I would probably treat the value of an argument as const anyway though; it's just simpler.


Not to argue your point but I think the "elite" old-sk00l C style implementation would be this:

    void truncate(char *s) { if (s && *s) s[strlen(s)-1] = 0; }
I don't have a hard and fast rule on when to use const for locals. In bigger functions, sometimes adding a const is just a nice quick way to ask the compiler whether that variable gets modified later, so it can help readability. With tiny functions it rarely matters much; you see everything at a glance anyway. That's bikeshedding territory.

I tend to prefer to elide unnecessary temporary variables (then you don't need to worry whether it's const or how it should be named.. problem eliminated) and just use expressions like above. But this is also not a hard rule. In general, any hard rule smells like cults and religion.


With the const-everywhere approach, the code reads like it's in an SSA format and is dataflow-oriented. As an extreme example, the "len - 1" expression can be substituted with yet another const variable:

    const size_t len = strlen(s);
    if (len > 0)
    {
        const size_t len_minus_one = len - 1;
        s[len_minus_one] = '\0';
    }
The additional const "variables" also add another layer of self-documentation by virtue of their names.


Two more instructions on Godbolt for the second version. In the end, the urge to look cool and "elite" more often than not causes trouble. Unless there really is a need for complexity, it's almost always better to follow the KISS principle.


I saw no difference between unwind's two versions with current GCC, clang, or icc with -O3: https://godbolt.org/z/n6avo5ovq. Did you possibly forget to use optimization? I was surprised, though, to see that icc inlined the call to strlen!


Okay, thanks.

Obviously, using post-decrement in the if like I did is a readability/complexity red flag in many cases, and I could just as well have kept the subtraction on the indexing line.


Funny. I felt attacked by the intro, as I am a very big fan of locking things down with const and getting yelled at by the compiler for attempting to modify a const. But, the article goes on to attack stupid shit that I'd never do with const. (Though, I do think that there's marginal value in non-movable classes with public const data members: you don't need to implement a getter)

We're left with three good uses:

  1. returning const references to private data
  2. passing const references as arguments
  3. const class methods
Ho-hum. I came expecting a fight, not validation.


> The above code is bad because it makes unnecessary copies. plus doesn’t need its own copies of s and t; it can get by with just references to its caller’s strings.

I was under the impression the compiler was smart enough to 'move' rather than 'copy' a value if it can prove it's not reused. And also to infer a 'const' reference if it can prove the reference is not modified.


The compiler isn't allowed to move instead of copy if the code implies a copy. What it can do, however, is completely elide the copy and not call the copy constructor.

Const isn't really useful to the optimiser in any way; roughly all it does is affect the overload resolution rules and how the compiler maps statically initialized data into your final binary.
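For example (make_name is made up; since C++17, elision is even guaranteed for this particular pattern):

    #include <string>

    std::string make_name()
    {
        return std::string("a fairly long name");  // prvalue result
    }

    int main()
    {
        // The string is constructed directly in n; no copy or move constructor runs.
        std::string n = make_name();
    }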


> Const isn't really useful to the optimiser in any way; roughly all it does is affect the overload resolution rules and how the compiler maps statically initialized data into your final binary.

const on pointers and references is not useful at all, but const on variables is useful as it means the value is never allowed to change. For example:

  const int x = 0;
  do_something(&x);
  return x; // Optimizer can assume that x == 0 here even if it knows nothing about do_something
See: https://godbolt.org/z/4T8M3eK1x


> Data members: Never const

> What good is a value-semantic Employee class if you can’t assign or swap values of that type?

Question from a Java guy: Is immutability not a thing in C(++) land?

Seems to me most modern high-level languages strive to make structs with "value" semantics immutable, i.e. members are set once on creation and then never changed. You're supposed to interact with those structs through pointers, so most use cases for copying or overwriting a struct don't apply.

I get that this doesn't translate fully well to C's programming model where you seem to interact a lot more with the actual data structures (and not just pointers). But wouldn't the ability to restrict arbitrary changes to a struct still be important?


> But wouldn't the ability to restrict arbitrary changes to a struct still be important?

Typically immutability in C++ is at the interface scope rather than structural, e.g. prefer this

    struct MyType {
        int x_;

        MyType( int x ) : x_( x ) {}
    };
   
    void do_the_thing( const MyType& mt ); // mt.x_ = 1; doesn't compile, mt is const
over this

    struct MyType {
        const int x_;

        MyType( int x ) : x_( x ) {}
    };
   
    void do_the_thing( MyType mt ); // mt.x_ = 1; doesn't compile, x_ is const


In C++ you normally interact with value types by copying them around, the same way you would for an int, for example. Immutability is enforced with a top-level const.
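Something like this, where Employee stands in for any value type:

    #include <string>

    struct Employee {
        std::string name;
        int id;
    };

    int main()
    {
        const Employee e{"Ada", 1};  // top-level const: the whole value is frozen
        // e.id = 2;                 // error: e is const

        Employee copy = e;           // copying into a fresh, mutable value is fine
        copy.id = 2;
    }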


Ok HN I am proud of you. I read the article and had a sinking feeling that the top comment would be:

"Well actually... you should really be using string_view instead of..."

Perhaps that is true but that would be intentionally missing the point of his arguments.


The const infection is real in a codebase, and it's the very definition of a bureaucratic practice that saps the fun out of coding. Hopefully people use locality of variables more.


I wouldn't choose C++ if I wanted to have fun. Practically any language (maybe not XML) is more fun to get things done in.

If using C++ is the right business decision, I'll use it, but I won't expect to enjoy it.


I don't code much in C++, but I have to imagine the verbosity of its 'const' syntax is the real problem. Syntax aside, if a variable is not actually variable, it's arguably a lie to declare it as something other than constant. In languages with better 'const' syntax, it's not just a matter of performance, it's a legibility issue.


I do admire all the different ways you can shoot yourself in the foot using C++. The relentless pursuit of 'what would the compiler do, and does it cost us cycles?' is a great mindset to have - if you can afford it.

Thanks for the interesting analysis of const.


Applies to JavaScript as well. There's almost no legitimate reason to use `let` anywhere.


> “Please give me references to two of your strings, and by the way, I might modify them.”

Since C++ const is not transitive, one can still modify what the const reference indirectly refers to.
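For example (Node here is a made-up type with a pointer member):

    struct Node {
        int* data;
    };

    void poke(const Node& n)
    {
        // n.data = nullptr;   // error: the pointer member itself is const here
        *n.data = 42;          // fine: const does not propagate through the pointer
    }

    int main()
    {
        int x = 0;
        Node n{&x};
        poke(n);   // x is now 42, despite only handing out a const reference
    }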



> Do pass expensive types by const reference.

Consider `foo(T t)` where `foo` takes ownership of the `T` and allows move semantics, but a copy is made if the argument is not an rvalue.
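A sketch of that pattern (foo and its stored string are made up):

    #include <string>
    #include <utility>

    std::string stored;  // stand-in for wherever foo keeps its copy

    // Sink parameter: foo takes ownership of its argument by value.
    void foo(std::string t)
    {
        stored = std::move(t);  // moving from the parameter is always cheap
    }

    int main()
    {
        std::string s = "an expensive-to-copy string ...";
        foo(s);             // lvalue argument: one copy into the parameter
        foo(std::move(s));  // rvalue argument: moved through, no copy at all
    }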


It's very rare where doing that is a good idea. A constructor is about the only time when you get much of a benefit.


The problem is C++ has the wrong default. You should have to opt-in to mutability. Rust gets this right.


I feel like every programmer should take a step back and see if const spamming actually helps them. I never have problems writing to memory I shouldn't, but that could just be my programming style. So if you know you don't have that problem, you can just stop guarding against it. You have a limited amount of life to spend on projects, after all.


Same in Java. People final the hell out of _local_ variables.


Before the concept of effective finality had compiler support, it was often necessary.


In JS I prefer let over const because it is fewer characters to type.


Fewer than var or let?


What? I said I prefer let.


Author makes no mention of copy elision. Huh.


Yes he does.


Where?


Use your browser's "Find on page" feature and search for the term "copy elision". There's an entire section on it.


Wasn't there when I searched the first time. Maybe I typo'd.


TL;DR Don’t overuse const. Agree.

  - How did you manage to refactor that variable?
  - Ah, well, it was no const!

(A pun on the Swedish idiom "det var väl ingen konst", roughly "nothing to it".)


That was cool!



