Hacker News | joeshaw's comments

OP here. I appreciate the comments I've read here, and it might inspire a new blog post: "Defer is for resource cleanup, not error handling."


OP here. Another commenter pointed out that `errors.Join` didn't exist when I wrote this, but I wouldn't have changed my guidance if it had.

The core issue here is that you want to deal with errors as soon as possible. The async nature of writes to files makes this more challenging. Part of the purpose of this post was to inform people that you don't necessarily always see write errors at write-time, sometimes you see them at close-time and should handle them there. (And sometimes you don't see them at all without fsync.)

Even if you do handle errors in your deferred close, you may be taking other actions that you'd rather not happen until the file I/O is completed, and this could leave your program in an inconsistent state. Side effects in Go are common, so this is a practical concern, and it is hard to spot and debug.


I caution against this approach, as you are not really dealing with the error when it occurs. If the work you do after the defer has other side effects, you may have just gotten your application into an inconsistent state and it's very hard to see in code why this might be.

`defer` is really not well-suited for error handling; its benefit is mainly in resource cleanup where failure is impossible or doesn't matter. (This makes it fine for `Close` on read-only file I/O operations, and not so great for writes.)


> If the work you do after the defer has other side effects, you may have just gotten your application into an inconsistent state and it's very hard to see in code why this might be.

Can you give an example case of how this could happen?


This is a contrived example, but imagine a situation where I have a file I want to write on disk and then have a reference to it in a database. If I have a flow like:

    func UpdateUser(user *User) (err error) {
        f, err := os.Create("/some/file.txt")
        if err != nil {
            return err
        }
        defer CheckClose(f, &err)

        if _, err := f.Write(somedata); err != nil {
            return err
        }

        if err := db.UpdateUser(user, "/some/file.txt"); err != nil {
            return err
        }

        return
    }

This function might have updated the user in the database with a new file even though the file was never fully written: `CheckClose` (defined up-thread) does check whether the `Close` failed and returns an error, but only after `db.UpdateUser` has already run. The calling code has no way to know this happened.

The core problem is that the error checking is not done soon enough, either because Go programmers are conditioned to `defer f.Close()` from nearly all example code -- most of it demonstrating reads, not writes -- or because they are handling the error, but only in a deferred function, not earlier.

A more correct way to do this would be:

    func UpdateUser(user *User) error {
        f, err := os.Create("/some/file.txt")
        if err != nil {
            return err
        }
        defer f.Close()

        if _, err := f.Write(somedata); err != nil {
            return err
        }

        if err := f.Sync(); err != nil {
            return err
        }

        if err := f.Close(); err != nil {
            return err
        }

        if err := db.UpdateUser(user, "/some/file.txt"); err != nil {
            return err
        }

        return nil
    }
`Sync()` flushes the data to disk, and `Close()` gives a "last-chance" opportunity to return an error. The `defer f.Close()` exists as a way to ensure resource cleanup if an error occurs before the explicit `f.Close()` toward the end of the function. As I mentioned in an update to the post, double `Close()` is fine.


I think a better solution is to write smaller, single-purpose functions. To refer to your example downthread, you should have one function that only writes the file, and another that does the "whole" operation -- calling the function to write the file, checking for errors, and then updating the database.

Then you can use defer in the file-writing function if you so please, and not bother to close explicitly at the end, without issue. A more robust example might even include the sync call in the deferred function (and even clean up the file itself on error). To re-use your example from your blog post:

    func helloNotes() (err error) {
        var f *os.File
        f, err = os.Create("/home/joeshaw/notes.txt")
        if err != nil {
            return
        }

        defer func() {
            if err == nil {
                err = f.Sync()
            }

            cerr := f.Close()
            if err == nil {
                err = cerr
            }

            if err != nil {
                os.Remove("/home/joeshaw/notes.txt")
            }
        }()

        _, err = io.WriteString(f, "hello world")
        return
    }
I would probably move that out into a helper, though, so I could do something like

    defer SafeClose(f, &err)
instead, and be able to use it elsewhere. Hell, even without defer, it's nice to have a helper that will sync and close for you so you can avoid the boilerplate, if you have lots of different bits of code that writes files.

FWIW, I'm not sure why you are so negative on named return values. I'm at best a novice Go programmer, so perhaps I don't fully understand why they aren't great (I guess it does look weird to me to have bare `return` statements that do actually return a value even though it doesn't look like it). Your argument that the return value can be modified after the core function body finishes, which is unintuitive, doesn't really strike me as a big deal either.


> If the work you do after the defer has other side effects

Defer is by definition the last work you do in a function; there won't be more work, except by the caller, who will get the error returned to them.

If you are structuring a function that writes a file, and then does something with it, defer isn't appropriate, since you should close it before you do any more work.


It's possible to have multiple defers in a function, though (so you have multiple pieces of "last work in a function"); nowhere is it dictated that a function should only have one operation that needs to clean something up at the end. Think, for example, of copying one file to another.


I'm fine with vendoring going away as long as we can check these new modules into version control and have vgo use them. Adding a caching proxy adds too much infrastructure for most folks, and without it you can't have reproducible and available builds.

I like Go's distributed package imports (even if everyone just uses Github), but it means you're distributing single points of failure all over the place. Vendoring (whether source or modules) solves this problem.


In net/http, if a handler panics you don't want it to bring down your entire application. For more info, see https://golang.org/pkg/net/http/#Handler

The encoding/json package is interesting because it uses both panic() and recover() internally in a way more reminiscent of how exceptions are used in other languages (that is, for control flow).

For example, https://github.com/golang/go/blob/2596a0c075aeddec571cd658f7... panics, and https://github.com/golang/go/blob/2596a0c075aeddec571cd658f7... recovers it.

Both uses are quite uncommon, though.
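A sketch of that internal style outside of encoding/json itself (the type and function names here are made up for illustration, not the real implementation): an inner function panics with a sentinel type, and the exported function recovers it back into an ordinary error at the API boundary.

```go
package main

import (
	"errors"
	"fmt"
)

// parseError is a sentinel type so recover() can distinguish our own
// control-flow panics from genuine bugs.
type parseError struct{ err error }

func mustDigit(c byte) int {
	if c < '0' || c > '9' {
		panic(parseError{fmt.Errorf("not a digit: %q", c)})
	}
	return int(c - '0')
}

// ParseDigits converts deep-in-the-call-stack panics back into an
// ordinary error return, re-panicking on anything it doesn't own.
func ParseDigits(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			pe, ok := r.(parseError)
			if !ok {
				panic(r) // a real bug: re-panic
			}
			err = pe.err
		}
	}()
	if s == "" {
		return 0, errors.New("empty input")
	}
	for i := 0; i < len(s); i++ {
		n = n*10 + mustDigit(s[i])
	}
	return n, nil
}
```

The payoff is that the inner parsing code doesn't have to thread an `error` return through every level of recursion.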


Interesting! Yeah, this is definitely changing my idea of how panic should be used.


> I think there are plans to make an official debugger it just hasn't been prioritized.

There was a nascent project inside the Go team a couple of years ago (called ergo I think?) but I believe it was shelved in favor of Delve, which was much further along.

It's probably safe at this point to consider Delve the de facto standard Go debugger.


My explanation for why is the following sentence:

> If a function could fail, the function must return an error as its last return value. The caller should immediately check for errors from that function.

In practice, in idiomatic code it is quite rare for a function that returns a `(*Something, error)` to return `(nil, nil)`, which is the case where a nil pointer dereference would most often happen. The other case is not _immediately_ checking errors. I would say both of these cases are not idiomatic Go. (Yes, there are exceptions.)

The other cases where I've seen nil dereferences pop up most often:

(1) the zero value for maps is nil, and a nil map is mostly not usable. `m[50] = "foo"` will panic with a nil dereference if you've not initialized `m`. This is one of the most annoying things about Go.

(2) not checking for presence when pulling out of a map. If you have `map[string]*Something` and do `x := m["nonexistent"]` and try to use `x`, it'll blow up. Always use the `x, ok := m["nonexistent"]` form instead, and deal with stuff when `ok == false`.

(3) nil receivers. `var x *Something`, then `x.Method()` -- the `Method()` method _can_ deal when `x == nil`, but most code does not.
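All three cases can be demonstrated in a few lines (a self-contained sketch; `Something` is a made-up type):

```go
package main

import "fmt"

type Something struct{ name string }

// Method handles a nil receiver explicitly, which most code doesn't do.
func (s *Something) Method() string {
	if s == nil {
		return "<nil>"
	}
	return s.name
}

func demo() []string {
	var out []string

	// (1) a nil map: reads are fine, but writes panic at runtime.
	var m map[int]string
	func() {
		defer func() {
			if recover() != nil {
				out = append(out, "nil map write panics")
			}
		}()
		m[50] = "foo" // panics: assignment to entry in nil map
	}()

	// (2) the comma-ok form avoids accidentally using a nil pointer.
	ptrs := map[string]*Something{}
	if x, ok := ptrs["nonexistent"]; !ok {
		out = append(out, fmt.Sprintf("missing key, x == nil: %v", x == nil))
	}

	// (3) a nil receiver is only safe because Method checks for it.
	var s *Something
	out = append(out, s.Method())

	return out
}
```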

My reasons for citing C and Java:

C doesn't have multiple return values, and different kinds of C code deal with errors differently. Lots of stuff just returns `NULL`, and all too often callers don't check for it. (`malloc()` is a perfect example of this -- it rarely fails, but a _lot_ of code doesn't check for errors anyway. Now extrapolate that to everything :)

For Java, it's because the try-catch control structure complicates program flow. If you have a large block of code in a try-catch, it is very easy for a different line of code to throw an exception than you expected, and as a result an object you attempt to dereference later is null and you get an NPE. There are ways to deal with this, of course, but in my experience most people are pretty lazy about the non-happy path.

EDIT: formatting. How can a site about programming have such terrible support for typing in code?


My 7 years of C++, 7 years of C#, and 5 years of Go agree with you. Many more nil pointer exceptions in the former two and almost none in the latter, in similarly sized (large) codebases.


Most of my experience is in C#, and NullReferenceException is, by far, the most frequent exception that ever happens. Curious about if your experience with it is different, and if it is, how.


Not sure if you're talking to me. I was saying that NullReferenceExceptions happen a lot more in C# than nil pointer panics in Go.


My 6 years of C++ and 8 years of C# and 10 years of Java and 5 years of Go agree with you both.


All Fastly customers that use it to terminate SSL/TLS are also affected. Among them are Wired and Buzzfeed (ones I've run into this morning).


Mono was started by Miguel at Ximian, and most of the Mono code was owned by Ximian. Ximian was bought by Novell and with it, all the Ximian-owned Mono code. (I don't remember if a CLA was required before or after Novell bought Ximian.)

The Novell-owned IP was either sold or licensed to Xamarin later.


Joe is correct, the code was always owned by Ximian.

And we did require copyright assignment or relicensing rights from the start to support a dual-licensing business. It was the One Blessed Business Model endorsed by Richard Stallman at the time.


On a related note, there are an alarmingly large number of hosts listening on port 23 (unencrypted telnet) on the internet: https://www.shodan.io/search?query=port%3A23

Most of them seem to be interfaces to network switches.

