Many languages have made this mistake, despite having engineers and teams with many decades or centuries of combined experience working on programming languages. Almost all languages have the loop variable semantics Go chose: C/C++, Java, C# (until 5.0), JavaScript (when using `var`), Python. Honestly: are there any C-like, imperative languages with for loops that _don't_ behave like this?
That decision only becomes painful when capturing variables by reference becomes cheap and common; that is, when languages introduce lightweight closures (aka lambdas, anonymous functions, ...). Then the semantics of a for loop subtly change. Language designers have frequently implemented lightweight closures before realizing the risk, and then must make a difficult choice of whether to take a painful breaking change.
The Go team can be persuaded, it's just a tall order. And give them credit where credit is due: this is genuinely a significant, breaking change. It's the right change, but it's not an easy decision to make more than a decade into a language's usage.
That said, there may be a kernel of truth to what you're alluding to: that the Go team can be hard to persuade and has taken some principled (I would argue, wrong) positions. I'm tracking several Go bugs myself where I believe the Go standard library behaves incorrectly. But I don't think this situation is the right one to make this argument.
This isn't a bug in Java. Java has the notion of "effectively final" variables, and only final or effectively final local variables may be captured by lambdas, seemingly to avoid this specific defect. Ironically, I just had a review the other day that touched on this Go "interaction".
The outcome of this Go code in Java would be as you'd expect: each lambda captures a unique copy of the loop variable's value.
Oh, today I learned. I think this was an issue in Scala (with `var`), but this seems like a great compromise for Java core.
I suppose Java had many years after C#'s introduction of closures to reflect on what went well and what did not. Go, created in 2007, predates both languages having lightweight closures. Not surprising that they made the decision they did.
Your comment inspired me to ask what Rust does in this situation. Of course, they've opted for a different `for` loop construct, but even if they hadn't, the borrow checker enforces a requirement similar to Java's effectively-final lambda restriction.
Newcomers to Java usually dislike the "Variable used in lambda expression should be final or effectively final" compiler error, but once you understand why that restriction is in place and what happens in other languages when there's no such restriction, you start to love the subtle genius in how Java did it this way.
Go, designed between 2007 and 2009, certainly had the opportunity to look at C# 2.0's introduction of closures, released in 2005, or the syntactic sugar added in C# 3.0, released in 2007.
I think that's an ahistorical reading of events. They did have the opportunity, but there were very few languages doing what Go was at the time it was designed. My recollection of the C# 3-to-5 and .NET 3-to-4.5 era is a bit muddled, but it looks like the spec supports a different reading:
C# 3.0 in 2007 introduced arrow syntax. I believe this was primarily to support LINQ, and so users were typically creating closures as arguments to IEnumerable methods, not in a loop.
C# 4.0 in 2010 introduced Task<T> (by virtue of .NET 4), and with it, creating a closure in a loop became much more likely: adding tasks to the task pool from a for loop is exactly how users would do it, after all.
C# 5.0 in 2012 fixes loop variable behavior.
I think the thesis I have is sound: language designers did not predict how loops and lightweight closures would interact to create error-prone code until (by and large) users encountered these issues.
This bug appears to arise because Go captures loop variables by reference, whereas C++ lambdas capture by copy[1] unless the user explicitly asks for capture by reference (`&variable`). The same bug would be visually more obvious in C++.
The change in JavaScript doesn’t have anything to do with for…of; it’s the difference between `var` and `let`. And JS made the decision to move to `let` because the semantics made more sense before Go was even created (although code and browsers didn’t update for another several years). That’s why Go is being held to a higher standard: it’s 10+ years newer than the other languages you mentioned.
This places it nearly 10 years after the creation of Go. And with the exception of Safari, arrow functions were available for months to years before `let` and `const`.
This is somewhat weak evidence for the thesis though; these features were part of the same specification (ES6/ES2015), but to understand the origin of "let" we also need to look at the proliferation of alternative languages such as CoffeeScript. A fuller history of the JavaScript feature, and maybe some of the TC39 meeting minutes, might help us understand the order of operations here.
(I'd be remiss not to observe that this is almost an accident of "let" as well, there's no intrinsic reason it must behave like this in a loop, and some browsers chose to make "var" behave like "let". Let and const were originally introduced, I believe, to implement lexical scoping, not to change loop variable hoisting.)
C# made the mistake not when they introduced loops, but when they introduced closures, and it didn't become evident until other features came along that propelled adoption of closures. Go had closures from the beginning and they were always central to the language design. C# fixed that mistake before the 1.0 release of Go. But the Go team didn't learn from it.
I hate to be that guy, but this would not be possible in Rust, as the captured reference could not escape the loop scope. Either copy the value, or get yelled at about the lifetime of the reference.
This is one of the things the language was designed to fix, by people that looked at the past 50 years or so of programming languages, and decided to fix the sorest pain points.
I would argue that var is an entirely different issue. If variables last the entire function then it's far less confusing to see closures using the final value. After exiting the loop the final value is right there, still assigned to the variable. You can print it directly with no closures needed.