I don't know about that. Every programmer's first Go program seems to like to go to channel city. Perhaps more accurately: Over-engineering your Go program is going to quickly lead to pain. It doesn't have the escape hatches that help you paper over bad design decisions like some other languages do.
Also: interfaceitis. Someone saw "accept interfaces, return structs" somewhere and now EVERYTHING accepts an interface, whether it makes sense or not. Many (sometimes even all) of these interfaces have just one implementation.
A lot of times you want to be able to cmd+click on something and see what the hell the code actually does, and not get dead-ended at an interface declaration.
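To make that concrete, a hypothetical sketch of the pattern I mean (all names invented): an interface with exactly one implementation, so go-to-definition lands on the interface declaration instead of the code that does the work.

package store

import (
    "os"
    "path/filepath"
)

// Store has exactly one implementation. Jump-to-definition on Save lands
// on this declaration rather than on the code that does anything.
type Store interface {
    Save(key string, value []byte) error
}

// diskStore is that single implementation, one indirection away.
type diskStore struct{ dir string }

func NewStore(dir string) Store { return &diskStore{dir: dir} }

func (s *diskStore) Save(key string, value []byte) error {
    return os.WriteFile(filepath.Join(s.dir, key), value, 0o644)
}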
What are you using that can cmd+click to take you to a definition, but can't also take you to an interface implementation? I develop Go in Emacs with the built-in eglot + gopls, and M-. takes me to the definition, C-M-. takes me to the implementation(s). It's a native feature of gopls. Sure, it's one extra button, but hardly impossible.
The compiler certainly knows how to determine if there is only one implementation of an interface and remove the interface indirection when so. There is nothing really stopping the cmd+click tooling from doing the same.
Does the compiler do that? That sounds extremely unlikely, especially because an interface with only one implementation can store the nil type tag or a tagged pointer to an instance of that implementation.
The nil interface is another implementation. I mean, unless it is being used as the sole implementation, but I think we can assume that isn't the implementation being talked about given that it isn't a practical implementation. We're talking about where there is one implementation.
Right. Can you cite anything that says that the go compiler does this sort of whole-program analysis to try to prove that a certain argument to a function is always non-nil, so that it can change the signature of that function and the types of variables declared in other functions?
Uh. No. Why would I ever waste my time proving something I said? If I'm right, I'm right. If I'm wrong, you'll be sure to tell me. No reason for my involvement.
Nil is built-in. You just have to write the code to instantiate it and the compiler gives you one. The coder does not need to create an implementation, it's there for free.
I would not have called it a "second implementation" myself, but that's your claim to defend, not mine.
map is also built-in. Where do you find the hash map in the given program?
By your logic some nebulous package in a random GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.
> map is also built-in. Where do you find the hash map in the given program?
If you told me a type can be optimized because the compiler knows it can only have non-hash-map uses, but I could put that type into a hash map with a single line, I think I would be right to be skeptical.
> By your logic some nebulous package in a GitHub repository that happens to satisfy an interface is also another implementation, but you would have to be completely out to lunch to think that fits with the topic of discussion.
I expect the compiler to have a list of implementations somewhere. I don't know if I can expect it to track if nil is ever used with an interface. I could believe the optimization exists with the right analysis setup but you called the idea of finding a citation a "waste of time" so that's not very convincing.
> but you called the idea of finding a citation a "waste of time" so that's not very convincing.
Not only a waste of time, but straight up illogical. If one wants to have a discussion with someone else, they can go to that someone else. There is no logical reason for me to be a pointless middleman even if time were infinite.
Now, as fun as that tangent was, where is the nil implementation and hash map found in the given program?
You can head over to godbolt.org and see for yourself that changing the value to nil doesn't change the implementation of `bar`, though it does cause `main` to gain a body rather than returning immediately.
The implementation is preexisting. Even if it was directly used, there would not be an implementation in the snippet. So it not being implemented in the snippet proves nothing.
And what do you mean "someone else"? You're the one that said the compiler "certainly knows" how to do that.
> So it not being implemented in the snippet proves nothing.
It doesn't prove anything, but is what we've been talking about. Indeed, there is nothing to prove. Never was. What is it with this weird obsession you have with being convinced by something? Nobody was ever trying to convince you of anything, nor would there be any reason to ever try to. That would be a pointless endeavour.
What was the point of your question, if not to prove something?
If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.
If you were asking to waste time, then it worked.
If you had another motive, what was it?
Are we having a 5d chess game? I thought it was a normal conversation.
> He who wrote the "citation".
Nobody? Nobody wrote a citation.
Do you mean the person that asked for a citation? If so, you're wrong. Finding evidence for your own claims would not make you a middleman. They didn't want to have a discussion with someone else, they wanted a discussion with you, and for that discussion to have evidence. Citing evidence is not passing the buck to someone else, it's an important part of making claims.
> What was the point of your question, if not to prove something?
My enjoyment. For what other reason would you spend your free time doing something?
> If you were trying to imply that the implementation doesn't exist, that implication was fatally flawed.
And if I weren't trying?
> If you were asking to waste time, then it worked.
I ask nothing, but if you feel you wasted your time, why? Why would you do such a thing?
> If you had another motive, what was it?
As before, my enjoyment. Same as you, I'm sure. What other possible reason could there be?
> Nobody? Nobody wrote a citation.
There was a request for me to refer another party who was willing to talk about the subject that was at hand – one that you made reference to ('you called the idea of finding a citation a "waste of time"'). Short memory?
> Finding evidence for your own claims would not make you a middleman.
There wasn't a request for evidence, there was a request for a citation. Those are very different things. A citation might provide some kind of pointer to help find evidence, which is what I suspect you mean, but, again, if that's what you seek then you're back to wanting to talk to someone else. If you want to talk to someone else, just go talk to them. There is no reason for me to serve as the middleman.
> it's an important part of making claims.
Nonsense. If my claim does not hold merit on its own, it doesn't merit further discovery. It's just not valuable at all. It can be left at that, or, if still enjoyable, can be talked about to the extent that remains enjoyable.
Perhaps you are under the mistaken impression that we are writing an academic research paper here? I can assure you that is not the case.
It's great that in your reply upthread you actually understood that it was a request for any kind of evidence, including evidence you just created on the spot, but now you pretend not to understand that.
Whatever do you mean? There was no change in understanding. You spoke to seeking a proof in addition to a citation; the parent did not originally speak to the proof bit, only to a citation. Entirely different contexts.
In fact, you would have noticed, if you read it, that the "upstream" comment doesn't even touch on the citation at all. It is focused entirely on the proof aspect. While the parent wanted to talk about citations exclusively, at least at the onset. Very different things, very different topics.
You could just go to godbolt.org, as others have already said, and as any normal person would do. Evidence is neither here nor there, though. We're talking about citations, which nobody of sound mind does. Why on earth would you have a conversation using someone else's words? That's the stupidest thing I have ever heard of.
> Many (sometimes even all) of these interfaces have just one implementation.
They are missing that mocks are the second implementation. (It took me years to see this point.) I would say that in most of my code at work, 95+% of my interfaces only have a single implementation for the production code, but any/all of them can have a second implementation when mocking for unit tests.
The point of using a mockable interface, even if there's only one real implementation, is to test the behavior of the caller in isolation without also testing the behavior of the callee.
This can be overdone of course, not everything needs this level of separation, but if it makes testing one or both sides easier, then it's usually worth it. It's especially useful for testing nontrivial interactions with other people's code, such as libraries for services that connect to real infrastructure.
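A minimal sketch of what I mean, with a made-up Mailer dependency: production code has one real implementation, and the test hand-rolls a second one so the caller can be exercised without touching the real callee.

// greeter.go
package greeter

// Mailer is the dependency hidden behind an interface. In production it has
// a single real implementation (say, an SMTP client).
type Mailer interface {
    Send(to, body string) error
}

// Greet is the caller under test; it only needs something satisfying Mailer.
func Greet(m Mailer, to string) error {
    return m.Send(to, "hello, "+to)
}

// greeter_test.go (same package; imports "testing")
// A hand-rolled mock becomes the second implementation.
type mailerMock struct {
    sentTo, sentBody string
}

func (m *mailerMock) Send(to, body string) error {
    m.sentTo, m.sentBody = to, body
    return nil
}

func TestGreet(t *testing.T) {
    m := &mailerMock{}
    if err := Greet(m, "gopher"); err != nil {
        t.Fatal(err)
    }
    if m.sentTo != "gopher" {
        t.Fatalf("sent to %q, want %q", m.sentTo, "gopher")
    }
}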
Did you miss "just one implementation"? A mock is literally defined by being another implementation. If the 'mock' is your sole implementation, we don't call it a mock, that's just a plain old regular implementation.
I think my comment was clear on the distinction between real and mock implementations. If the code was testable with no need for mocks then certainly remove the interface and devirtualize the method calls.
Your comment was clear about mocks, but not why mocks are relevant to the topic at hand. The original comment was equally clear that it was in reference to where there is only one implementation. In fact, just to make sure you didn't overlook that bit amid the other words, the author extracted that segment out into a secondary comment about that and that alone.
Mocks, by definition, are always a supplemental implementation – in other words, where there is two or more implementations. What you failed to make clear is why you would bring up mocks at all. Where is the relevance in a discussion about single implementations the other commenter has observed? I wondered if you had missed (twice!) the "one implementation" part, but it seems you deny that, making this ordeal even stranger.
It is easy to generate mock implementation code (GoMock has mockgen, testify has mockery, etc.). The lack of a hand-rolled mock implementation doesn't mean that much. For example, many people do not like to put generated code under source control, so just because you don't see a mock implementation right away doesn't mean one isn't meant to be there. Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it, but not had the time to write the tests or generate the mocks.

If we are going to be this pedantic, I did say "mockable" interface, implying the usefulness and possibility, but not necessarily the existence, of a mock implementation.
Since we are examining code we can't see, we can only speak about it in the abstract. That means the discussion may be broader than just what one person contributes to it. If this offends you or the OP, that was not the intent, but in the spirit of constructive discussion, if you find my response so unhelpful, it is better to disregard it and move along than to repeat the same point over and over again.
Not in any reusable way. Take a look at mockgen and testify: all they do is provide a mechanism to push the implementation into being defined at runtime by user code. So, if they, or something like it, is in use, the implementation is still necessarily there for all to see.
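To be concrete about what "defined at runtime by user code" means, here is roughly the shape of a testify-style mock (a sketch with made-up names, reusing the hypothetical Mailer/Greet idea from above): the struct is a trivial shell, and the test itself supplies the behavior through On/Return.

package greeter

import (
    "testing"

    "github.com/stretchr/testify/mock"
)

// MailerMock is the shell (hand-written here, typically generated); the
// behavior is supplied at runtime by the test below.
type MailerMock struct {
    mock.Mock
}

func (m *MailerMock) Send(to, body string) error {
    args := m.Called(to, body)
    return args.Error(0)
}

func TestGreetWithTestify(t *testing.T) {
    m := &MailerMock{}
    // The "implementation" is defined here, at runtime, by user code.
    m.On("Send", "gopher", mock.Anything).Return(nil)

    if err := Greet(m, "gopher"); err != nil {
        t.Fatal(err)
    }
    m.AssertExpectations(t)
}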
> Also, the original author of the function that consumed the apparently unnecessary interface type may have intended to test it
Okay, sure, but this is exactly what the commenter being replied to was talking about initially. What is a repetition of what he said meant to convey?
> That means the discussion may be broader than just what one person contributes to it.
Hence why we're asking where the relevance is. There very well may be something broader to consider here, but what that is remains unclear. Mocking in and of itself is not in any way interesting. Especially when you could say all the very same things about stubs, spies, fakes, etc. yet nobody is talking about those, and for good reason.
> If this offends you
For what logical reason would an internet comment offend?
Can't the Go compiler statically prove that such single-implementation interfaces are indeed that, and devirtualize the call sites referring to them?
Either way, the problem seems to happen in most languages of today, if they (or their community) ever happen to accidentally encourage passing an opaque type abstraction over a concrete one.
I think it actually does that, but in local contexts, where this analysis is somewhat easy.
I also believe you don't actually have to prove it statically: PGO can collect enough data to, e.g., add a check that a certain type is usually X, and follow a slow path otherwise.
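As a sketch of what that guarded, profile-guided devirtualization amounts to at the source level (the real transformation happens on the compiler's IR, and all names here are made up):

package main

// Hypothetical interface with one hot implementation.
type Foo interface{ Number() int }

type Bar struct{}

func (*Bar) Number() int { return 42 }

// What the source says: a dynamic interface call.
func callFoo(foo Foo) int {
    return foo.Number()
}

// Roughly what guarded devirtualization produces when the profile says foo is
// almost always a *Bar: a cheap type check guarding a direct (and now
// inlinable) call, with the dynamic call left as the slow path.
func callFooGuarded(foo Foo) int {
    if b, ok := foo.(*Bar); ok {
        return b.Number()
    }
    return foo.Number()
}

func main() {
    println(callFoo(&Bar{}), callFooGuarded(&Bar{}))
}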
I understand that it does so when the exact type is observed - a direct call on a concrete type. But I was wondering if it performs whole-program-view optimization for interface calls. E.g. given a simple AOT-compiled C# program:
using System.Runtime.CompilerServices;

var bar = new Bar();
var number = CallFoo(bar);
Console.WriteLine(number);

// Do not inline to prevent observing exact type
[MethodImpl(MethodImplOptions.NoInlining)]
static int CallFoo(Foo foo) {
    return foo.Number();
}

interface Foo {
    int Number();
}

class Bar : Foo {
    public int Number() => 42;
}
On x86_64, 'CallFoo' compiles to:
CMP byte ptr [RDI],DIL ;; null-check foo[0]
MOV EAX,0x2a ;; set 42 to return value register
RET
There is no interface call. In the above case, the linker reasons that throughout the whole program only `Bar` implements `Foo`, and therefore all calls on `Foo` can be replaced with direct calls on `Bar`, which are then subject to other optimizations like inlining.
In fact, if we add and reference a second implementation of `Foo`, `Baz`, which returns 8, `CallFoo` becomes:
;; calculate the addr. of Bar's methodtable pointer
LEA RAX,[DevirtExample_Bar::vtable]
MOV ECX,0x8 ;; set ECX to 8
MOV EDX,0x2a ;; set EDX to 42
;; compare methodtable pointer of foo instance with Bar's
CMP qword ptr [RDI],RAX
;; set return register EAX to value of EDX, containing 42
MOV EAX,EDX
;; if comparison is false, set EAX to value of ECX containing 8 instead
CMOVNZ EAX,ECX
RET
Which is effectively 'return foo is Bar ? 42 : 8;'.
Despite my criticism of Go's capabilities, I am interested in how its implementation is evolving. I know it has the feature to manually gather a static PGO profile and then apply it to compilation, which will insert guarded devirtualization fast paths on interface calls, like what OpenJDK's HotSpot and .NET's JIT do automatically. But I was wondering whether it does any whole-program-view or inter-procedural optimizations, which can be very effective in the "frozen world, single static module" setting that both Go and .NET AOT compilations operate in.
EDIT: To answer my own question, I verified the same for Go. Given a simple Go program:
package main

import (
    "fmt"
)

func main() {
    bar := &Bar{}
    num1 := callFoo(bar)
    fmt.Println(num1)
}

//go:noinline
func callFoo(foo Foo) int {
    return foo.Number()
}

type Foo interface {
    Number() int
}

type Bar struct{}

func (b *Bar) Number() int {
    return 42
}
It appears that no devirtualization of this kind takes place. Writing about this, it makes for an interesting thought experiment: what would it take to introduce a CIL back-end for Go (including proper export of types, and what about structurally matched interfaces?) and AOT-compile it with .NET?
[0]: VMs like OpenJDK and .NET perform hardware-exception-based null checks. That is, a SIGSEGV handler is registered, and pointers that need to throw NRE or NPE either do so via induced loads from memory like the one above, or simply by virtue of dereferencing a field of an object reference. If a pointer is null, this causes a SIGSEGV, at which point the handler checks whether the address of the invalid access is within the first, say, 64KiB of the address space. If it is, VM logic kicks in that recovers the execution state and performs managed exception handling, such as running `finally` blocks and resuming execution from the corresponding `catch` handler.