
One benefit of radical go-style simplicity that I haven't seen discussed much is that it forces you to focus on the task at hand. Like, you know that your code will look like shit anyway so it is pointless to fuss too much over it. Whereas many programmers using a more "clever" language like haskell will spend a lot of time trying to make code more readable, searching for the right abstraction, arguing whether something is a monad or some other gizmo. Most of it is wasted intellectual effort.

Everything in moderation, of course. For me personally the simplicity of Go is too much and I don't feel comfortable writing it.



I agree that Go forces you to focus on the mechanics of your code more than some "fancier" languages do.

However, Go's poor type system also forces you to write worse and less safe code than you want.

For example, the lack of enums and sum types means it's hard to represent, safely and efficiently, anything that should be a single type but take its values from a strict set of allowed values. The nearest you get to an enum in Go is:

  type Color int
  const (
    Red Color = iota
    Blue
    Green
  )
Alternatively, you can do the same with a string:

  type Color string
  const Red Color = "red"
  // etc.
This gives you some type safety, since turning a plain int or string into a Color requires an explicit conversion. But it doesn't give you real safety, since you can still do:

  var c Color = Color("fnord")
This comes up whenever you're handling foreign input. JSON unmarshalling, for example. You can override the unmarshalling to add some validation, but it won't solve your other cases; it will always be possible to accidentally accept something bad. Not to mention that the "zero value" of such a type is usually wrong:

  type Color string
  var c Color // Empty string, not valid!
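For reference, the unmarshalling override mentioned above might look roughly like this (a sketch, assuming the string-based Color with Red/Blue/Green constants defined as above, and encoding/json and fmt imported):

  func (c *Color) UnmarshalJSON(data []byte) error {
    var s string
    if err := json.Unmarshal(data, &s); err != nil {
      return err
    }
    switch Color(s) {
    case Red, Blue, Green:
      *c = Color(s)
      return nil
    default:
      return fmt.Errorf("invalid color %q", s)
    }
  }
But that only guards one entry point; plain conversions and the zero value still slip through.
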
A safer way is to hide the value behind an interface:

  type Color interface {
    isColor()
  }
  type colorValue string
  func (colorValue) isColor() {}
  var Red Color = colorValue("red") // etc. 
Now nobody can create an invalid value, and the zero value is nil, not an invalid enum value. But this is unnecessarily complicated and shouldn't be necessary in a modern language in 2019.

The case of genuine sum types is worse. Your best bet is to use sealed interfaces:

  type Action interface {
    isAction()
  }

  type TurnLeft struct{}
  func (TurnLeft) isAction() {}

  type MoveForward struct{
    Steps int
  }
  func (MoveForward) isAction() {}
There are some downsides. You have to use static analysis tools (go-sumtype is good) to make sure your type switches are exhaustive. You get a performance penalty from having to wrap all values in an interface. And if you're going to serialize or deserialize this (e.g. JSON), you will be writing a whole bunch of logic to read and write such "polymorphic" values.
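
For illustration, consuming the sealed interface looks roughly like this; nothing in the compiler forces the switch to cover every variant, which is why the static analysis is needed (a sketch, assuming fmt is imported):

  func describe(a Action) string {
    switch a := a.(type) {
    case TurnLeft:
      return "turn left"
    case MoveForward:
      return fmt.Sprintf("move forward %d steps", a.Steps)
    default:
      // a newly added Action variant silently ends up here at runtime
      return fmt.Sprintf("unknown action %T", a)
    }
  }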


Agree with this. There are a couple of observations I'd make, though.

Firstly, "enums" using iota should always be defined as either

    const (
        Red Colour = iota + 1
        Blue
        Green
    )
or

    const (
        Unknown Colour = iota
        Red
        Green
        Blue
    )
to avoid the zero value problem.

Secondly, and this is a personal preference, I've really enjoyed not having to work with sum types. In practice other programmers seem to use them when a public interface would have been sufficient, and it's convenient to be able to do:

    // original package 
    type Position struct {
        X, Y int
        Rot Rotation
    } 

    type Action interface {
       Apply(*Position)
    }

    type MoveForward struct {
       Steps int
    }
    func (m MoveForward) Apply(p *Position) {
        switch p.Rot {
        case Up:
            p.Y += m.Steps
        // other rotations omitted
        }
    }

    // second package wrapping the first
    type WarpToPoint struct {
        X, Y int
    }
    func (w WarpToPoint) Apply(p *movement.Position) {
        p.X, p.Y = w.X, w.Y
    }


> I've really enjoyed not having to work with sum types

Your example of what you prefer uses (a shitty approximation of) a sum type in p.Rot.

(This is also the most basic possible use of a sum type; they are not only useful for enums, it's just to point out that even a large amount of "simple" Go code would benefit from them.)


I understand that p.Rot is a shitty approximation of a sum type, but it still works and almost certainly won't break anything since Go forces you to explicitly typecast. The important thing is that the list of possible actions wasn't sealed by the type system, which in the original example it was.

I want to reiterate that I am aware sum types can be useful. I just don't think they're useful _enough_ to outweigh being a footgun for calling code.


I would argue that this misses the use case of sum types, which typically don't have behaviour (or they'd just be interfaces!).

For example, consider an AST for a programming language. AST nodes don't have any behaviour (they might have some convenience methods, but nothing that implements real behaviour). You want to do optimizations and printing and compilation and so on, but on top of the pure AST.
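
A hedged sketch of how that plays out with Go's sealed-interface encoding: the nodes are plain data, and each operation is its own type switch on top.

    // The "sum type": one sealed interface, one struct per node kind.
    type Expr interface{ isExpr() }

    type Lit struct{ Value int }
    type Add struct{ Left, Right Expr }

    func (Lit) isExpr() {}
    func (Add) isExpr() {}

    // Operations live outside the nodes: evaluation, printing, optimization
    // are each their own type switch over Expr.
    func eval(e Expr) int {
        switch e := e.(type) {
        case Lit:
            return e.Value
        case Add:
            return eval(e.Left) + eval(e.Right)
        default:
            panic("unhandled Expr node")
        }
    }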


Enums exist specifically to be compared with one of their possible values, how can you have a zero value problem?


If the caller of your interface does not specify a value for your enum, they have implicitly specified the zero value. Whether that’s desirable behavior or not is up to you. For most clients, this behavior can be surprising if the zero value is a meaningful one (i.e. one that implies an intentional choice).

IME it’s useful to explicitly define the zero value’s meaning as “UNSPECIFIED”, to simplify the problem of trying to guess if the client intended to pass a zero value.
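
E.g., a sketch of that convention:

    type Color int

    const (
        ColorUnspecified Color = iota // zero value: the client never set it
        ColorRed
        ColorGreen
        ColorBlue
    )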


> Enums exist specifically to be compared with one of their possible values, how can you have a zero value problem?

Because there's not actually support for enums in go.

There's support for constant values, and automatically assigning sequential values to them. That happens to be useful for solving the same kinds of problems that enums solve, but they're not equivalent.


It probably amounts to the same thing, but I think there's a more pragmatic approach to thinking about safety. A type is a way to remember that validation has already been done. This is true for constants by inspection (code review). For dynamic code, have a validator function that takes unvalidated input and returns a Color or an error, and always use that to create a Color.
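
A minimal sketch of that validator, assuming the string-based Color with Red/Blue/Green constants from upthread (the name ParseColor is just illustrative):

    // ParseColor is the one sanctioned way to turn untrusted input into a Color.
    func ParseColor(s string) (Color, error) {
        switch c := Color(s); c {
        case Red, Blue, Green:
            return c, nil
        default:
            return "", fmt.Errorf("invalid color %q", s)
        }
    }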

That's usually sufficient. Any "cheating" should come up in code review as a suspicious cast to Color. In an audit, you could search for casts to Color.

Safe languages often have unsafe constructs. It's the same principle. The unsafe code is signposted, and you review it.

If you want further encapsulation, another useful trick is to make Color a struct with a private field. It's not usually necessary, though.
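
A sketch of that trick:

    type Color struct{ name string }

    // Outside the package, only these exported values (and the zero Color) can exist.
    var (
        Red  = Color{"red"}
        Blue = Color{"blue"}
    )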

Go does have an unfortunate quirk that you can always create a zero value without calling a constructor, so you'll need to make sure a zero Color has meaning. (An interface doesn't really change this, because then the zero value is nil. That's not an improvement over making the zero value mean "black" or "transparent" or "invalid".)


I know it’s not a proper part of the language, so it might qualify as a “hack” solution, but one can get around some of the issues you describe by defining your enums in protobufs, and using/trusting the generated code, no?

It’s not a pretty solution from a language design point of view, but it’s been more than effective for us from an engineering point of view: since we’d need those proto definitions anyway, why bother writing our own?


Metaprogramming fixes everything. Why didn't they put sum types in the language in the first place? I think the designers probably didn't know about them at the time.


Given the lineage of the designers, they were pretty much aware of it.


The "linage" implies that they aren't aware of it, or at least they weren't when designing the language. Rob Pike is more of a systems guy than a type theorist.

Like most systems guys, they know how C handles types and how C++ handles types. Parametric polymorphism or sum types outside of OOP is most likely something Rob Pike was not familiar with; otherwise it's hard to see why sum types were not included in the language.

Having IO functions return a tuple of error and value rather than an Option type is not simplicity, it's complexity arising from the lack of a type primitive. The feature is ugly and very much looks like it was implemented by someone who wasn't aware of a type that can be a Value OR an Error. So instead he implemented a type that can be an Error AND a Value, and left it to manual run-time checks for the programmer to figure out whether an error even occurred.

The other thing is that this "tuple" of error AND value looks like a hack. Tuples can't be saved to a variable in Go; they can only be returned by functions and immediately unrolled into their separate values. It's like Rob knew something was off in this area, so he created a temporary concept of a tuple in the return value of a function to make up for it. A consistently designed language wouldn't have a tuple that is only returnable from a function call. It seems strange and inconsistent.
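
To illustrate, with the standard library's os.Open (the commented-out line is the part that won't compile; assumes os and log are imported):

    // pair := os.Open("config.json")  // won't compile: the (file, error) pair isn't a value you can store
    f, err := os.Open("config.json")   // the only option: unpack both results on the spot
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()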

Additionally, the fact that, in Go, some types can be nil while other types default to a zero value implies that Rob knew nulls were bad but didn't know how to completely get rid of them.

I'm thinking that Rob's initial notion of sum types and parametric polymorphism was that they can only be implemented via hierarchies of inheritance, which itself has many problems. That makes sense, because this is what systems programmers are exposed to (C, C++), as opposed to typed lambda calculus or Haskell. So it's easy to see Go as the result of Rob's awareness of the problems with OOP combined with a lack of awareness of the theoretical alternative.


He was at the premiere computer science research organization for decades. And you presume that he's not familiar with sum types, because if he was, he would have included it, and therefore he can't have been familiar with it, because he didn't include it? That's the most amazingly arrogant thing I've heard in a while.

The fact is that Rob almost certainly knows more than you, rather than less. And he still made the choices he made. That should make you ask questions, not about Rob's knowledge, but about yours.


https://news.ycombinator.com/item?id=6821389

Read that quote by Rob and the responses. In the quote Rob describes what he believes generic types are at face value... he takes it into a tirade about inheritance and hierarchies of classes, something that is not part of type theory at all.

Honestly, it feels like he didn't know about Algebraic Data types. I'm not the only one who thinks this as shown from the responses to his quotation.

One of the responses:

"Or perhaps Rob Pike just hasn't explored the relevant literature in enough depth. At one point he admitted that he didn't know that structural typing had already been invented previously! This isn't to criticize Rob, I find his talks fascinating, I think he's awesome, he's a friend of my boss, etc. But he's hardly the first hard-core hacker to be ignorant of the degree to which type theory has seen dramatic advances since the 1980s."


Honestly this is really compelling evidence that Pike doesn't know much about type theory. That isn't terribly surprising, and the other early collaborators on the language that I know of also came from more of a systems background. I think it's entirely likely that Go's crippled type system is partly an accident, and not entirely a design choice. It would be helpful if - with the benefit of hindsight - they would admit it, rather than invent post-hoc justifications for the way things are.


I don't know, man. I feel like there are places where I almost certainly know more than Rob Pike. Like, I don't know, most of functional programming. I seriously doubt he knows what indexed monads are better than I do.

So, at the point they were creating Go, I think it's perfectly reasonable they had even less exposure to fp, and didn't actually know about these better solutions.


This feels like the opposite of an ad hominem fallacy


Well... those Bell Labs types were polyglots. They tried a lot of things in a lot of languages. Does that mean that Rob Pike knew about sum types? Not necessarily, no. But it gives you two possibilities.

1. Rob Pike spent all that time at Bell Labs, with all these CS experts, read and wrote all those papers, and never heard about sum types. That's... possible. It's not the way I would bet, but it's possible.

2. Rob Pike knew perfectly well what sum types were, and left them out of Go, because he thought they didn't fit with what he was trying to do.

To me, the second is both more charitable, and more in line with what I think Rob Pike's background and experience would have exposed him to. crimsonalucard obviously disagrees. He seems to think that sum types are so obviously the right thing that Pike could not have possibly not put them in Go had he known about them, and therefore he could not have known. And that is in fact possible.

But it seems to me to better fit with Pike's background, as well as with the principle of charity, to assume that he knew. And still he chose to leave them out.

Now, he could still be wrong. And we can discuss whether sum types are really a good fit for what Go is trying to do. But the assumption that he couldn't have known, or he would have done it the way someone else thinks he should have, is what grates on me.

For what it's worth, the Go FAQ (at https://golang.org/doc/faq#variant_types) says that they considered sum types, and didn't think they fit.


> But the assumption that he couldn't have known

Please read my initial assumption. In no place did I say he COULDN'T have known. Read it. I literally started the statement with "I think the designers probably didn't know" rather than "I know they COULDN'T have known." There is nothing to "grate" you here. I simply had an opinion and a guess, and you disagreed with it and decided to insult me.

What grates me is the assumption that I said it's 100% true that Rob Pike didn't know what a sum type was. I think it's very likely he didn't know. If he did know then I am wrong. That's all.

>For what it's worth, the Go FAQ (at https://golang.org/doc/faq#variant_types) says that they considered sum types, and didn't think they fit.

That FAQ should have been presented earlier, in a cordial and civil way. If you had done that, I would have admitted that my hypothesis was incorrect. Science, logic and evidence rule the day, and I try not to invest any emotion into any of my opinions. It's hard but I follow this rule. If the FAQ says he knows about it then he does and I am wrong. Instead you chose not to present this evidence and to call me arrogant.

There was no need to call me "arrogant." It disgusts me to hear people talk like this. Either way, Go the language feels awkward in the way it uses product types, and it does indeed feel like Rob didn't know about sum types, because they certainly feel more fitting than having a function return a tuple out of nowhere.

I also disagree with the FAQ. Plenty of languages have constraint types that are placed on the subtypes of the sum type. There's no confusion imo. Also note that the previous sentence was just an opinion. Please don't call me arrogant because I have one.


Well, I didn't have the FAQ earlier. I was guessing then.

And I don't see the FAQ as necessarily total vindication of my position. The language team considered sum types; it doesn't mean that Rob Pike did in the initial design. It could be that, after it was kind of mostly formed, they thought about sum types and couldn't find a sensible way to make them fit. Or it could mean that he considered them and rejected them from the beginning. The FAQ isn't specific enough to say.

As for calling you arrogant: You are not the first person who has said, here on HN, that Pike "looked like he didn't know"/"must not have known"/"couldn't have known". Those conversations kind of run together in my mind. As a result, I was hard on you at least in part because others went too far. That's not fair to you, and I apologize.

I also cannot call you arrogant for having an opinion. I also have one - you may have noticed this. ;-)

However, I feel that I should say (and say as gently as I can) that you often sound very harsh on HN. A harsh tone causes many to read your content with less charity than the ideas might deserve. (I am not here trying to defend my interaction in this thread.) And this is not very helpful of me, because if you ask for advice on what, specifically, to change, I'm not sure I can give any. I mention it because you may be unaware of it, and awareness may help.

I can easily see how the previous paragraph could offend you. I am not trying to do so. Forgive me if it causes offense.


> To me, the second is both more charitable, and more in line with what I think Rob Pike's background and experience would have exposed him to.

More charitable to Rob Pike, rather than to the person you're in the middle of a conversation with.

Anyway "appeal to authority" is not an argument, it's a religion. Our lord and savior, Rob, knows so much that his design decisions are beyond question.


Not only is it likely that, as you say, Rob Pike was aware of sum types; he also did not create Go on his own, it was created by a small team. Someone like Robert Griesemer, who studied under Wirth, would have known about them if the others hadn't.

I have been using the Wirth languages a lot (Pascal, Modula), and one of the big appeals of Go to me is that it brings back a lot from those languages into modern times. The Wirth languages are far too underrated in programming today.


The irony being that most Wirth languages are more expressive than Go will ever be, with the exception of the first releases of Pascal and Oberon, and the follow-up to that minimalist design approach, Oberon-07.

When Go came out, I thought it could follow Oberon, starting small and eventually reaching Active Oberon/Zonnon expressiveness, but alas that is not how they see it.

Even Limbo has features that Go still lacks.


It’s called “appeal to authority”: https://en.wikipedia.org/wiki/Argument_from_authority


The name of this fallacy is appeal to authority.


Having the value returned together with the error is convenient for a couple of reasons.

First, it's often possible for the function to return a meaningful value even in an error case (e.g., number of bytes read before the error occurred).

Second, it's often possible to return a sensible 'null' value together with an error which can be handled correctly without checking the error value. (A map lookup is the obvious example of this.) This simplifies logic in some places.
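
For instance, a sketch of the first point, which is essentially what the standard ReadAll helpers do: the byte count is usable whether or not err is nil (assumes io is imported).

    func readAll(r io.Reader) ([]byte, error) {
        var out []byte
        buf := make([]byte, 4096)
        for {
            n, err := r.Read(buf)
            out = append(out, buf[:n]...) // use the partial result even when err != nil
            if err == io.EOF {
                return out, nil
            }
            if err != nil {
                return out, err // hand back the partial data alongside the error
            }
        }
    }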

Using sum types for errors in Go wouldn't actually work very well unless you fundamentally changed other aspects of the language. You'd need pattern matching, a whole bunch of generic higher order functions for manipulating option/result types, etc. etc.


>First, it's often possible for the function to return a meaningful value even in an error case

Create a type that explicitly stores this information. The return type can hold (an error message and a value) OR (just a value). This type expression is a more accurate description of what's really going on. Whichever way you want to represent the product type, it's not isomorphic to the actual intended result that the function should return. A sum type can represent the return value described by the sentence below, while Go cannot:

"A function that returns (a value) OR an (error with a value)"

This is the true intention of the function you described.

>Second, it's often possible to return a sensible 'null' value together with an error which can be handled correctly without checking the error value. (A map lookup is the obvious example of this.) This simplifies logic in some places.

But it opens up the possibility of a runtime error if you fail to check for it. Historically there are tons of functions in C, C++ or JavaScript that use null to represent an error, and the creator of null himself has called it his greatest mistake. No language needs a null.

>Using sum types for errors in Go wouldn't actually work very well unless you fundamentally changed other aspects of the language. You'd need pattern matching....

Using product types to represent errors has already changed the nature of Go in a very hacky way.

Only functions in Go can return tuples, and the concept of a tuple can never be used anywhere else. You cannot save a tuple to a variable, you cannot pass a tuple as an argument. You can only return a tuple and then instantly unroll it. It's an arbitrary, hacky feature obviously made to support error values.

It would be better to have a general pattern matching feature... that makes more sense than tuples that can only be returned from functions.

>a whole bunch of generic higher order functions for manipulating option/result types, etc. etc.

Actually, no you don't. Go functions return tuples with errors; does that mean higher order functions need to handle tuples? No, not at all. In fact Go explicitly eliminates support for this: tuples in Go need to be unrolled into their constituent types before they can be used in any other function. The same concept can be applied to option types. You have to unroll the value and explicitly handle each individual type. You never need a higher order function that accepts the option type as a parameter.

In languages that have the Option type/Maybe monad etc., any function that returns this type is an impure function whose result needs to be unrolled before the value is passed down to functions that do closed, pure calculations. A function that takes an Option type as a parameter is a function that says "I can only take values from IO." It's very rare for functions to be implemented like this even in languages that have first-class support for sum types and monads. In Haskell I can't recall ever seeing a function that takes the IO monad as a parameter. In Haskell and in Rust these values need to be unrolled into their constituent types before they can be used.

Please note I am not advocating the inclusion of monads into Go. Just talking about sum types.


To me, Go is a "masturbation prevention" language. Meaning that certain deliberate design choices were made to prevent precisely the kinds of endless unproductive masturbation you see in some other languages: type masturbation in Haskell, OOP/IoC masturbation in Java (particularly egregious; Java is not a bad language otherwise), or metaprogramming masturbation in C++. The omission of these features is not a bug. It's a feature in itself.


Indeed. More than once I have seen programming projects ruined by creating an overly elaborate class hierarchy, sometimes 10 layers deep, just to express every theoretical aspect of the domain in the structure of the class hierarchy. Java programs often suffer from this, which is especially sad, as Java has interfaces, which I think are the right way of representing abstract types for APIs, for example. Interfaces don't force you into type inheritance just to fulfil a contract. But unfortunately, they are way too rarely used.


If anything in Java they're overused. You often see only one class implementing an interface where the programmer can reasonably expect there will never be another implementation, and where it's not exposed outside the API boundary, so the interface is gratuitous.


For json unmarshalling of structs at least, I've become a pretty big fan of the validator library from the go playground:

github.com/go-playground/validator/v10

It uses struct tags to validate the struct and is quite extensive.

https://godoc.org/gopkg.in/go-playground/validator.v10
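
If I remember the v10 API right, usage is roughly like this (a sketch; the struct and tags are just illustrative):

    type CreateUser struct {
        Name  string `json:"name" validate:"required"`
        Email string `json:"email" validate:"required,email"`
        Color string `json:"color" validate:"oneof=red green blue"`
    }

    var validate = validator.New()

    func decode(r io.Reader) (CreateUser, error) {
        var req CreateUser
        if err := json.NewDecoder(r).Decode(&req); err != nil {
            return req, err
        }
        return req, validate.Struct(req)
    }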


  type Color struct {
    R, G, B, A byte // IDK
  }
  func (o OtherType) Color() *Color {
    return &Color{R: o.r, B: o.b, G: o.g}
  }
  type Colorer interface {
    Color() *Color
  }
A Colorer would return a color regardless of its concrete type. This is a behavior-based interface: I just need a thing where, when I call Color() on it, I get a *Color.


That's not an enum or sum type, though, and misses the point of my example. For colors, sure, you can use a structural type to represent RGBA, but that wasn't what I was trying to get across. What if the set of possible values cannot be described as scalars? The other example with "actions" demonstrated this problem.


I want to see generics and real enums (I.e., Rust enums, not C/Java enums) added to Go, but as far as it being unsafe or inefficient, these concerns are overblown for many apps. People who levy this criticism are often fine with Python and/or JS for similar categories of applications, even though they are far less safe and less performant than Go. We should be clear when we criticize Go that we’re talking about addressing the last 1% or so of type-related bugs and/or extending the performance ceiling a bit higher. We should also give Go credit for permitting a high degree of safety, performance, and productivity when other languages make you choose one.


Java enums are much more powerful than plain old C enums actually.


I’m aware, but it’s irrelevant.


I find the opposite is true. It too often means the focus is on fiddly and tedious book-keeping - instead of writing some code that says “please do a thing”, I write some code that says “please perform these 20 steps to do a thing in excruciating detail even though you are much better at deciding how to do this than I am”. It’s noise that detracts from the readability of the code far too often for my taste.


I agree with this 100%. Go is great for micro-readability: "what do these ten lines of code do." Go is horrible for macro-readability: what does this module do, what does this service call do. If you compare a fixed number of lines of code, I wouldn't be surprised if Go always wins out for readability. But if someone says, "Figure out the business logic behind this feature implemented in Go," get ready to spend a lot of time scrolling through low-level code.

I always thought code written as page after page of low-level details was bad code. I thought the same thing about code written as class after class after class of OO hierarchy. But people talk about Java and Go as if it's impossible to write unreadable code in them. I don't think code has to contain a single hard-to-understand statement to be "unreadable." After all, code that is unreadable because of abuse of powerful language constructs isn't literally "unreadable." You call it that because reading it requires an unreasonable amount of time and effort. The same thing can (and should) be said about code that requires an unreasonable amount of effort for any reason.

To me, it's just different ways that programmers can waste your time. One programmer might waste your time by combining powerful language features in cryptic ways; another might waste your time by hiding crucial structure in a vast sea of details. What's the difference?


>I always thought code written as page after page of low-level details was bad code.

I completely agree with this. The best code is code you don't have to read because the structure of the code makes navigation easy and functional boundaries obvious. A language that doesn't provide strong support for declaring functional boundaries results in code that is much harder to read because you have to comprehend a lot more of it to know what's going on.


Oh my, I didn't even think of it, but you are right.

A few months ago I started working on a Go codebase. While the language is simple, you can absolutely make the code confusing. If you wrote the code yourself it is obviously simple to you, because you know the structure. But it can be a nightmare for someone else who needs to learn the structure and only has the code.


> But if someone says, "Figure out the business logic behind this feature implemented in Go," get ready to spend a lot of time scrolling through low-level code.

I've found "go doc" amazing for this use-case; I only ever trawl through the source-code for a high-level understanding as a last resort - usually because the code is undocumented or under-documented.


Bad Go code can easily have this property. But good, elegant, well-structured Go code absolutely does not.


This has absolutely been my experience as well. Golang is good at being fast, but to say that it helps write better code because of its missing batteries/features is just silly I think.

"I want to interact with a REST API and pull a field out of its response JSON" is an incredibly common workflow, and yet to do that in golang is far from trivial. You need to define serializer types and all sorts of stuff (or you can take a route I've seen encouraged where people to use empty interfaces, which can cause runtime exceptions).

Same deal with a worker pool. Concurrency is great, but instead of providing a robust, well-written solution as part of the language itself, it gives you a toy like this https://gobyexample.com/worker-pools (still the most common result on Google) that is only 80% of the way there. Then you find yourself bolting things onto it to cover your features (we need to know if things fail, so let's just add another channel; we also need finality, whelp, another channel it is), and before you know it you have an incomprehensible mess.


> "I want to interact with a REST API and pull a field out of its response JSON" is an incredibly common workflow, and yet to do that in golang is far from trivial

    // Interact with a REST API and pull a field out of its response JSON.
    func interact(url string) (field string, err error) {
     resp, err := http.Get(url)
     if err != nil {
      return "", fmt.Errorf("error making HTTP request: %w", err)
     }
     defer resp.Body.Close()
    
     if resp.StatusCode != http.StatusOK {
      return "", fmt.Errorf("error querying API: %d %s", resp.StatusCode, resp.Status)
     }
    
     var response struct {
      Field string `json:"the_field"`
     }
    
     if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
      return "", fmt.Errorf("error parsing API response: %w", err)
     }
    
     return response.Field, nil
    }
IMO this is a good level of abstraction for a language's standard library. Each of the concrete steps required for the process is simply and adequately represented. Each can fail, and that failure is idiomatically managed directly and inline, which, when applied to an entire program, significantly improves reliability. If you find yourself doing this often, you can easily write a function to do the grunt work. Probably

    func getJSON(url string, response interface{}) error
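A hedged sketch of that helper, which is just the same steps as above with the decode target passed in:

    func getJSON(url string, response interface{}) error {
     resp, err := http.Get(url)
     if err != nil {
      return fmt.Errorf("error making HTTP request: %w", err)
     }
     defer resp.Body.Close()
     if resp.StatusCode != http.StatusOK {
      return fmt.Errorf("error querying API: %d %s", resp.StatusCode, resp.Status)
     }
     return json.NewDecoder(resp.Body).Decode(response)
    }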
> Same deal with a worker pool. Concurrency is great, but instead of providing a robust, well written solution as part of the language itself, it gives you a toy like this

Go's concurrency primitives are very low level. This can be bad, but it can also be good: not all worker pools, for example, need the same set of features.
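
For instance, a pool that runs N workers, collects failures, and waits for everything to finish isn't much code with just channels and a WaitGroup. A sketch, with plain strings as stand-in work items (assumes sync is imported):

    func runPool(jobs []string, workers int, do func(string) error) []error {
        var wg sync.WaitGroup
        jobc := make(chan string)
        errc := make(chan error, len(jobs)) // buffered so workers never block when reporting failures

        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobc {
                    if err := do(j); err != nil {
                        errc <- err
                    }
                }
            }()
        }

        for _, j := range jobs {
            jobc <- j
        }
        close(jobc) // no more work: workers drain the channel and exit
        wg.Wait()   // finality: every job has been attempted
        close(errc)

        var errs []error
        for err := range errc {
            errs = append(errs, err)
        }
        return errs
    }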


Some of us need the intellectual delight to make the work bearable. If I can view my code as some kind of art, constructing great abstractions, it helps me forget that I’m spending a huge portion of my life logging, aggregating, and analyzing internet clicks.


The problem is you’re delighted by the code but everyone who comes after you can’t stand it.


You don't know that the word "great" in this case doesn't mean "simple, elegant, and as minimally abstracted as necessary"


It might not have been the intention of the original poster, but "great" to me implies pretty much the opposite of minimal abstractions. But perhaps I am just burnt by experience :)


Consider trying to get a job in a part of the tech sector doing meaningful work. Just because the best minds of our generation are squandering themselves on adtech doesn't mean you have to do the same.


You should view this condition as something to fix in yourself. Take joy in doing these things with precision and efficiency, and making your work easy to understand and explain to others. It's hubris and, frankly, rude to subject your professional colleagues to your artistic expression.


"making your work easy to understand and explain to others"

This is the hallmark of a professional (very likely mature and senior) team member. Nothing to prove and interested in a maintainable project.

There are times for clever, no doubt. Every rule has an exception.

Bill (William) Kennedy at Ardan Labs has a line he uses in his talks: the bottom level developers need to grow and come up and the top level developers need to avoid cleverness and come down and everyone meet in the middle.


Look bud, not all of us want to take delight in being an easily replaceable automaton. It ain't a personality flaw.


So you’d take joy in making your coworkers’ lives hell instead?

I understand the kind of joy-of-expression that lives in things like https://poignant.guide/dwemthy/, but if you attempt to put that in a production codebase, I’m not gonna be at-all positive in code review.


Great, I'll let every reviewer of my code know that they've been doing it wrong.


Are you saying that you actually write the kind of code that I linked? Because my expectation was that you’d look at it and say “well, my code certainly doesn’t look like that. It’s quite reasonable in comparison, in fact.” I don’t think I’ve ever seen anyone write code like Dwemthy’s Array in a project that has to “do something productive”, even if they’re the only one working on it.


Maybe the sort of code you linked isn't the sort of code we're talking about? Nobody is running around bigco writing demoscene stuff or whatever.


Doing the things I said well makes you an irreplaceable member of a team, worth your weight in gold.

Save art for the canvas, for your weekends, for your loved ones. Bring a professional self to your job.


Not sure why you seem to think code written with feeling behind it is some unmaintainable mess. That's been the opposite of my experience.

The way people fail in this line of work, if they have skill, is burnout. Burnout is the thing that'll get you. So you do whatever you can to stave it off - and that requires working on something you actually care about in some way.


We aren't talking about "code with feeling behind it" but rather "code as some kind of art".


You might have a different idea of what art looks like. Well designed abstractions are elegant, conceptually simple, and not-leaky. These tend to make code more maintainable and easier to comprehend.


One of the things you learn after writing code long enough, is that there is no such thing as a perfect abstraction, or even a non-leaky one. Eventually you run into edge cases, either in performance or functionality, that causes you to add warts to your abstraction.


Stipulating that no abstractions are perfect shouldn't be an excuse to abandon the entire notion. There's still a gradient of more or less elegant and flexible abstractions.


This definition of art, though not wrong, is so expansive as to be meaningless, especially in the context of this discussion.


> Take joy in doing these things with precision and efficiency, and making your work easy to understand and explain to others.

If you think this is different than what the parent is describing then you’re doing it wrong


"Code as art" implies a strictly different set of criteria than the ones I listed. If the Venn diagrams overlap a lot for you, that's great, but it's rare.


I think you have a very specific, and not widely shared, definition of “code as art.” Code as art does not mean code full of pointless Rube Goldberg mechanisms or following some esoteric golden ratio whatever. For me, “code as art” means code which is well-abstracted, readable, correct, concise, maintainable, extensible, well-documented, performant, etc — I.e. reflecting the things that matter to me as a developer. The process of getting to the point where the code has all of those things, or as many as possible, is indeed the “art” of coding. To assume that the result is some horrible morass of spaghetti that no coworker wants to read is a strange one for sure.


The thing I detest about discussions of code aesthetics is the idea that the quality metrics you speak of have such a direct relationship to the "product features" of the language that we can simply know code is good by looking at it, and that we are hapless simpletons unable to write this so-called "beautiful" or "clean" code if we do not have the feature available. That is all bullshit. Most of the features are shiny baubles for raccoons and magpies, I do NOT know what good code looks like (I can only state whether a coding style eliminates some class of errors), and what matters most is the overall shape of the tooling.

Some languages have a big bag of tricks, other languages let you extend them to the moon, and still others make you work at it a little. In the end it's all just computation, and the tool choice can be reduced to a list of "must haves" and "cannots". If you need more expressive power -- make your build a little more complex and start generating code, give it a small notion of types or static invariants. It only has to generalize as much as your problem does, and that leads you to build the right abstraction instead of dumping an untried language feature on the problem in the hope that it is a solution.


Your definition of art is essentially synonymous with good or elegant, and therefore not really useful in this discussion.


What is your useful definition of art that is useful in this discussion?


As for writing code, I write it cleanly and with proper, clear language in-code commenting for ME. Because I need to go look back at what I've done and why often.


I used to be a Haskell type and now enjoy Go greatly for this reason.

There was a thread on the Rust reddit where someone was asking how to do something relatively simple using some elaborate combination of map/reduce/filter/continuations/who-knows-what, and someone said "just use a for loop", and the OP was enlightened.

People don't know how great the burden of trying to model their problem to fit a fancy language is until it's gone. I didn't.

I want generics and sum types, but I miss them less than I would have predicted.


This topic is more complicated than “for loops good, iterators bad.” I absolutely agree there’s a time and place for both; that’s why we included both in the language. But sometimes iterators have fewer bounds checks than for loops do, so they can be more performant than a loop. Sometimes they’re the same. Depending on what you’re looking for, the details of what you’re doing, and your literacy with various combinations, different ways of expressing the same idea can be good. It all just depends.

(Also, Rust’s for loops are implemented in terms of iterators; iterators are actually the more primitive construct in a language sense, and a for loop is essentially a while loop over the Iterator library API.)


Sure, and then you ask how to fold over a tree or an infinite stream, and the answer is to reimplement all the HOFs from the “fancy languages” in a type-specific way, because otherwise every user of your ADT has to write not just a for loop but an entire push-down automaton.
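
The kind of type-specific stand-in you end up writing instead looks roughly like this (a sketch):

    type Tree struct {
        Value       int
        Left, Right *Tree
    }

    // Walk is a hand-rolled, Tree-only "fold": without generics every
    // container type grows its own private copy of this kind of helper.
    func (t *Tree) Walk(visit func(int)) {
        if t == nil {
            return
        }
        t.Left.Walk(visit)
        visit(t.Value)
        t.Right.Walk(visit)
    }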

I also write Go code without missing generics, but that’s because I’m also fluent in other languages, and tend to use those when I want something ill-suited to Go, rather than trying to force Go into that shape.


> I also write Go code without missing generics, but that’s because I’m also fluent in other languages, and tend to use those when I want something ill-suited to Go, rather than trying to force Go into that shape.

I think this should be the main takeaway for people learning Go - it's not suited for everything. Technically you can write "World of Warcraft" in pure assembly, but it doesn't make sense to do so - you're using the wrong tool for the job. My problem is I hear a lot of people advocating for golang with a one-size-fits-all, there's-nothing-better sort of mantra.

I have things I absolutely reach to golang for, but the sweet spot I've found is to re-implement a prototype I've built in some other language (like Python) when I need the speed. Trying to actually create new things in golang is tedious and I end up fighting the tooling more than most other languages (sans maybe C++ or Java).


> There was a thread on the Rust reddit where someone was asking how to do something relatively simple using some elaborate combination of map/reduce/filter/continuations/who-knows-what, and someone said "just use a for loop", and the OP was enlightened.

I think it's hard to discern between "that overly-complex functional and declarative definition is unfamiliar to me" and "that is way over-complicated and should just use a for loop".

Any chance you can track down that example so others can compare the two examples as well?


I’ve seen a similar effect in myself and others at work, but I don’t think it’s only a symptom of the language. After we switched from Java 6 to 8, a handful of devs including myself went overboard modeling problems to be solved with the streams API when it wasn’t necessary. These days it’s leveled out and use of the API is at a much more appropriate level.

I think this is a process of learning. While learning a new tool you start to model problems around it so you can practice, even when it isn’t necessary. Once comfortable, you realize when to use the tool and when not to.


“Some internet person did it wrong once [in my view]” isn’t really an indictment of the whole thing


> I used to be a Haskell type...

Pun intended? If so, nicely done.


I'm just going to leave a direct quote from Rob Pike:

""" The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. """

I read that as Rob Pike saying he designed Go for the idiots Google hires.

https://bravenewgeek.com/go-is-unapologetically-flawed-heres...


Not idiots, but young coders who don't have 10-20 years of experience and who are required to write good code pretty quickly. So you want a language which is not only quick to pick up but also quick to learn to the point where you are writing good programs.


Is the much touted "Make invalid states unrepresentable" consequence of garden-variety Sum Types wasted effort? Seems very good bang-for-mental-buck to me.


Joe Duffy argued, in my view persuasively, that Go missed a crucial opportunity by not requiring that users actually do something with returned errors.

http://joeduffyblog.com/2016/02/07/the-error-model/#forgetti...

> It’s surprising to me that Go made unused imports an error, and yet missed this far more critical one. So close!

Result/Option/Maybe types force unwrapping, which makes ignored return codes auditable and allows you to manage technical debt.
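
For contrast, a sketch of Go code that compiles and runs without a single complaint (assumes os is imported):

    func save(path string, data []byte) {
        f, _ := os.Create(path) // error explicitly discarded; nothing flags it
        f.Write(data)           // both return values silently dropped
        f.Close()               // ditto, even though Close errors matter after writes
    }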

This doesn't speak to Go's simplicity so much as it does to Go's conservatism. Having the Go standard library use sum types, establishing a precedent and a culture, would be no more complex in the absolute, but would have been more of a stretch for its initial target user base.


> It’s surprising to me that Go made unused imports an error, and yet missed this far more critical one. So close!

The most egregious to me has always been that unused imports are an error but variable shadowing is not.

Even in languages with dynamic side-effecting imports (like Ruby or Python) I've never seen a bug caused by an unused import. Not so for shadowing (don't get me wrong, it's a convenient feature, but if you're going to remove this sort of thing because reasons, it's a much bigger pitfall than unused imports).
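
For anyone who hasn't hit it yet, the classic Go shadowing pitfall looks roughly like this (a self-contained sketch, assuming fmt and strconv are imported):

    func demo() {
        n := 1
        if true {
            n, err := strconv.Atoi("2") // ":=" declares a *new* n (and err), scoped to this block
            if err != nil {
                return
            }
            fmt.Println("parsed", n) // uses the inner n
        }
        fmt.Println(n) // prints 1: the outer n was never touched, and the compiler never complained
    }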


Variable shadowing is actually a pretty clever thing that I'd like to see in other languages.

For example I often write code like this in Java:

    String ageString = request.getParameter("age");
    int ageInt = parseInt(ageString);
because I can't re-use the name `age` twice and am forced to distinguish between those names.

Now, I agree with you about imports. I often want to comment out a line and run the program. Instead I have to comment out the line, run the program, hit a compilation error, find the import, comment out that import, and run again. And uncomment two lines later. With Java my IDE optimizes imports and removes all unused ones when I'm committing my code. While I'm working on my code, I'm absolutely fine with warnings. I would say even more: back when I used Eclipse, it had the awesome ability to compile even code with errors; that code just throws an exception at runtime. If I'm not really interested in that snippet and am working on another part, I can run the program just fine. That feature is probably the only thing I miss in IDEA.


> Variable shadowing is actually a pretty clever thing that I'd like to see in other languages.

Shadowing exists in most languages, the biggest difference being the allowed scope relationships between the shadower and the shadowee: most languages allow inter-function and inter-block shadowing (if they have actual block-level scope so e.g. not Python).

Intra-block shadowing is a much rarer feature, and one which Go doesn't have.


I agree with you. And it would even be fine to allow shadowing but require an explicit declaration to allow it to happen in a particular case. e.g.

    import "foo"
    func other() {
        shadow var foo string = "bar";
    }


Or only allow shadowing for `var` and forbid it for `:=`[0]. Though forbidding it entirely would work just as well.

[0] and go is actually weirder than that — and the opposite way 'round — as `var` doesn't allow any shadowing in the same scope:

    var a, b int
    var a, c int // fails because it redeclares a in the same block
while `:=` allows same-scope shadowing as long as the overlap is not complete:

    a, b := foo()
    a, b := foo() // fails because no new variable on the left side
    a, c := foo() // succeeds
both allow arbitrary shadowing in sub-scopes.


There's no shadowing in the latter case. The second case is the same thing as

    a := 2
    a = 3
By definition you must have nested scopes to have shadowing. Within the same scope, it's only ever assignment.


Well, in Rust you could do:

    let a = 2;
    let a = 3;
I think you would say the latter shadows the former...


Indeed, because semantically there is a syntactically implicit scope for every let binding. For example, in that case, the outer a is dropped after the inner a, just as if the second a had been inside of a block. There may be multiple syntactic ways to introduce a new scope.


That last one isn’t shadowing. := reuses a variable of the same name in the same scope.


I'm honestly of the opinion that this shouldn't be allowed either. Accept that you need to name the variable 'fooErr' and just ban all shadowing


I really like this syntax. I wonder if it has ever been proposed for Rust? It should be compatible with the existing semantics and could be phased in and then made mandatory in a new "edition".


It has, but hasn’t gained much traction.

There is already a “let” to show you that a variable is being created; adding more verbosity to a feature that, in some sense, is about removing verbosity kinda misses the point, in my opinion.

That said, never say never, but if I was a betting kind of person, I’d bet against it ever being accepted.


> This doesn't speak to Go's simplicity so much as it does to Go's conservatism.

I think this really hits the nail on the head. There are benefits to a conservative approach, but it's not the same as simplicity.


The errcheck linter is very popular for checking that you looked at every error return: https://github.com/kisielk/errcheck

I use it in most of my open source projects.


An indication that something is missing.


Not necessarily, insofar as “Go where you always use the (result T, err error) return type” is a dialect of Go rather than Go itself.

You could give this dialect a name and then maybe the compiler could enforce rules on projects that declare that they’re using that dialect (like C compilers do with dialects like “c99” vs “gnu99”), but it’s not strictly necessary; you can also just create “dialect tooling” that wraps the language’s tooling and adheres to those rules (like Elixir’s compiler wraps Erlang’s compiler).

And a CI shell-script, or git pre-commit hook, that runs a linter before/after running the compiler, is an example of just such “wrapped tooling.”


> Most of it is wasted intellectual effort.

I would challenge this. I would say _some_ of it is wasted, but most of it works towards making the code more understandable. And code being understandable to the next person to read it (both in the small "what is this block doing" sense and in the larger "what is this algorithm doing" sense) is very important.


I find other people's go code is generally much more readable than other people's code in most other languages.

Go's simplicity and heavy idiomatic culture means all the code more or less looks identical. This is great for team projects.


Some of it is wasted, some is genuinely useful and to know which is which you need experience and good judgment. My point is that it is incredibly easy to fall into the trap of pursuing maximum code beauty and abstraction and wasting a lot of time in trying to attain some shining ideal, especially if the language is conducive to it.


I can't speak for the GP, but my interpretation is that the wasted effort comes from doing what you describe when, instead, one could have chosen a simpler language and produced understandable (albeit "ugly", "not eloquent") code the first time round. Then one can turn those intellectual wheels on a more interesting problem to solve.


The assumption that simplicity in a language naturally encourages understandable programs is a mistake IMO. Language complexity generally exists because the language is trying to shoulder some of the complexity that would otherwise go into your program. For example, Brainfuck is nearly the simplest language possible — you could write a full rundown of its features on a sticky note — but programs written in it are not very readable.

Even Go did this by adding async features into the core language. This is a complication that doesn't exist in the older languages it was intended to build on, but by building that complexity into the language, they reduced the burden of using it in your code.


> Most of it is wasted intellectual effort.

It is highly stimulating intellectual effort though. Sometimes I sit down and spend hours just thinking about the best way to do something. It's some kind of philosophy: the abstractions we create reflect the way we understand things. To write good code, we must study the computer science and the problem domain itself.

Without this, it's just boring mechanical work. Once the project has been figured out it ceases to be interesting. Some of my projects are unfinished because I can't justify spending more time on them even though I know exactly what must be done.


As someone who thinks readability is the¹ most important quality of source code, that makes me less interested in learning Go.

¹ Yes, even over correctness.


That's really strange to me because I find go to be very readable. There are so few approaches to each of the basic programming building blocks in go that once you have read a moderate amount of idiomatic go, everything else just feels easy.


There are languages far more readable than Go. Go also encourages nested conditionals, which can become a nightmare to trace when you're under duress.


Maybe I'm missing something, but Go _discourages_ nested conditionals. There are even static analysis tools in the idiomatic toolkit which tell you when your conditionals can be simplified.


No it doesn't, quite the opposite actually. Go favours "exit early" strategies


I'm only responding to my parent post's arguments. I don't know much about Go myself.


This philosophy is about as far from what I value as it's possible to get. An easily understandable program that doesn't solve the problem is worth absolutely nothing. In fact, if it's code someone is depending on it's probably worse than no code at all. In some applications it could even cost people their lives.

Correctness is a basic starting point. It's the minimum viable offering.


That's the most common opinion, and the one I used to hold.

Here is my counter argument.

Readable code that has some bugs is fixable. Because you can understand what it's doing and how to change it.

Working code that is unreadable is basically dead. No one can make changes to code they don't comprehend. The only thing it's good for is running it as is, much like a compiled binary.


That old adage, the best code is the code not written. Programs have to solve a problem to be worth anything.


I'm firmly in this camp as well. A strong code quality metric is how easily understood something is - we write code in all sorts of states of mind at all hours of the day. If others can read your code and understand it, it means you can too when it's time to extend/modify it. This is also why things like Ruby's over-reliance on metaprogramming bug me - sometimes duplication is fine, and I'd much rather have some duplication than a wrong (or hard to discover) abstraction.

The argument that "all code gets sloppy so let's just have very verbose code from the get-go" is pretty insane to me.


aesthetics are relative, not absolute.


Readability to me is about clarity, not aesthetics.

How easily can people understand what this code does and how?

This is at least conceptually objectively measurable, though I don't know of any actual attempts to do so.


That's important. But the reason I say relative is that the structure and language of readability depend on the code culture one is working within; it's not an outside measure.

That's why something like C coding style is so variable, sometimes within the same body of code (see net-snmp...).

I'd rather have consistent coding standards but I daily deal with different team projects with different conventions, so I'm used to adapting my own reading conventions as I switch contexts.

Keep a common aesthetic within a project. It's worth it, as measured by the success of the project.


This might be bad for maintaining a lively argument in this thread, but I fully agree with that.


Many programmers using a more "clever" language like haskell will spend a lot of time trying to make code more readable, searching for the right abstraction. Most of it is wasted intellectual effort.

I rather disagree that making your code readable and maintainable is a waste of effort.


When I used to work on PHP, we had a similar appreciation for this, and called it “the joyless programming language” (as a compliment).


Code is helped by being readable.

Saying your code is going to "look like sh£t anyway" seems rather defeatist, and an _excuse_ to write unreadable code.


Strong agree. I feel like the opposite is true of the Rust community. I follow a lot of prominent people in Rust and all they tweet about is intricacies of the language and new features/libraries. I'm not sure these people are even building anything, they seem to be "snacking" on the language only.

You don't see people in Go doing this, and it gives the impression that the community is small, but I think they're just building stuff.


People build libraries to solve real problems. Language features need to have proper motivations in order to be accepted; we have rejected more academic features that don’t have direct uses. For example, async/await solved a real pain point for our largest production users, and that’s why everyone has been talking about it.

Sometimes, these connections can be unclear from the outside. For example, there’s a lot of talk about “generic type constructors” and “generic associated types”, which sounds academic. However, it’s something the compiler needs to understand in order to implement a very simple user-facing feature: async functions in traits. From the outside it may look like “oh those folks are out of touch” but it’s directly connected to real user needs.

(As a further aside, these two features are identical, but “generic type constructors” focuses on the academic side, and “generic associated types” focuses on the end-user benefit; we changed our terminology here specifically to focus on what users get out of the feature rather than the type theory implications.)

Furthermore, some people tweeting about things they’re excited about also does not preclude others who are heads down all the time. You wouldn’t see them, for the same reason: they’re not tweeting.

These kinds of swipes against other languages lower the discourse and promote animosity when there really should be none. I’d encourage you to consider if these kinds of attitudes help bring about more people who are interested in building cool things, or fan flame wars that distract folks from doing exactly that. Every minute spent arguing over whose language is better is also a distraction from building cool things as well.


What are some examples of features Rust has rejected for having no direct use? (I’m considering doing some language hacking just to learn the more arcane aspects of compiler theory, and it’d be nice to have a list of “exotic features you won’t usually find in a language because they don’t do much to help people” to explore.)


There was a contingent of folks who argued that we should not build async/await, but that we should instead build a generalized effect system, or figure out monads and do notation, because async/await is a specific form of those things and we should wait until we can get the more general feature first. Higher-kinded types are sort of in this space; GATs will provide equivalent power someday...

We rejected a proposal for dependent/pi types; we’re still adding a limited form, and may get there someday, but we didn’t want to go fully into them at first because the difficulty was high, and the benefit less clear, than just the simple version. (Const generics)

There are a few other features that we had and removed, too, that I can think of off the top of my head. We used to use conditions for error handling. We had typestate.

There was a battle over type classes vs ML-style modules; type classes (traits) won in the end. That doesn’t mean modules are useless...

I think the answers to these questions are very relative to your language’s values and goals. All of these features have good uses in other languages, but couldn’t find a place in Rust for a variety of reasons. Your language should be different from Rust, so you may find some of these features useful, and may not find some of ours useful.

I would encourage you to read TAPL, I think it would help with what you’re trying to do. Oh and check out https://plzoo.andrej.com/


> People build libraries to solve real problems.

Not always; some Rust community members in fact don't build any sort of application with Rust at all, and only build libraries. I think it's reasonable to be skeptical of this.

I think the community can do better by promoting more talks about applications written in Rust. As an outsider this is a puzzling omission to me, as there seem to be more people using Rust than there are Firefox and Cloudflare engineers, so I'd like to learn more about where it's actually used.


> I follow a lot of prominent people in Rust and all they tweet about is intricacies of the language and new features/libraries. I'm not sure these people are even building anything, they seem to be "snacking" on the language only.

This is obviously false. How do you square that with all the Rust code we've shipped in Firefox, for example? I build things in Rust every day.


I didn't state that no one builds anything in Rust so I have no need to square it.


>>> I follow a lot of prominent people in Rust ... I'm not sure these people are even building anything, they seem to be "snacking" on the language only. - Touche

>> This is obviously false. How do you square that with all the Rust code we've shipped in Firefox, for example? I build things in Rust every day. - pcwalton

> I didn't state that no one builds anything in Rust so I have no need to square it. - Touche

Touche!

You are technically correct I suppose. You didn't state no one builds anything, but arousing suspicion that "a lot" of prominent Rust people aren't building anything and are just "snacking" on the language is pretty pointed rhetoric with an obvious purpose.

You could start up a Programming Language tabloid with a headline like:

EXPOSED: PROMINENT RUST PROGRAMMERS CAN'T EVEN WRITE IN RUST!

And really that's all before analyzing the line of logic of "people tweeting only about interesting language features and not their personal projects, public work projects, or private work projects implies they might just not be building anything at all" which seems pretty flimsy at best.


A lot of prominent Rust people, in fact, aren't using it in production. The Rust community is large and not everyone works for Mozilla or Cloudflare. I'm not going to call these people out by name because that would be a mean and pointless thing to do. I'll just point out that the ratio of community size to known production uses is not encouraging, to me at least.


Yes, this! If you walk slow, then you walk in a straight line to the destination.



