
I have written hundreds of thousands of lines of plain old C, as well as hundreds of thousands of lines of Go, over a career spanning three decades. I've used Go in earnest since 2013.

I would pick Go for any large project in a heartbeat.

I have to concur that the haters who post to HN probably have never used this language in earnest.

Here's what you get out of the box:

* Extensive standard library
* Cheap concurrency
* Fast compilation
* Built-in test/benchmark framework, including coverage and race detection
* Built-in heap and CPU profiling
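
To make the test/benchmark point concrete, here's a minimal sketch (package and function names are made up): drop a _test.go file next to your code and `go test -bench=. -cover -race` gives you tests, benchmarks, coverage and the race detector with no extra tooling.

  // adder_test.go -- hypothetical example; any package works the same way.
  package adder

  import "testing"

  func add(a, b int) int { return a + b }

  func TestAdd(t *testing.T) {
      if got := add(2, 3); got != 5 {
          t.Fatalf("add(2, 3) = %d, want 5", got)
      }
  }

  func BenchmarkAdd(b *testing.B) {
      for i := 0; i < b.N; i++ {
          add(2, 3)
      }
  }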

Of all the complaints the one I understand the least is the reservations about performance. It is within 10-20% of C, and callouts to C and assembly are possible. (My 8-way multiplexed MD5 routine was written in AVX assembler and lives on in minio.) Extracting that last 10-20% is two or three sigmas away from the norm of how programming languages are used these days.
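
(For the curious, here's a minimal sketch of what a C callout looks like via cgo; the add function is purely illustrative, not the minio code. Assembly callouts follow a similar shape: a Go declaration backed by a .s file.)

  package main

  /*
  // Real projects would call optimized C or assembly here;
  // this trivial function is only an illustration.
  static int add(int a, int b) { return a + b; }
  */
  import "C"

  import "fmt"

  func main() {
      // cgo exposes the C function under the C pseudo-package.
      fmt.Println(C.add(2, 3)) // prints 5
  }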

The objection to generics is similar. The lack of generics shows up -- once in a long while. It doesn't prevent me from being immensely productive with this language.

Looking back at the last few years I've wondered if I could have accomplished what I did in Go by using a different language (C in particular) and the answer, for me, is "not a chance".


I recently spearheaded the creation of a new Go service at my company. If it was Node or Python I could easily imagine a team getting sucked into weeks of bikeshedding the API framework, the test framework, linting tools, formatting tools, the TLS stack, research spikes on packaging and distribution, etc.

With Go, all of these problems were basically solved by the strength of the standard library. We could not be happier.


> the TLS stack

Go's networking prowess is often underrated in these discussions. Building a web application, with server, router, mux, auth, etc., all from the standard library, and just deploying the binary on the cloud (irrespective of the architecture) to get your application running with less effort than competing languages/frameworks is itself enough for most use cases.
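
A minimal sketch of that, using nothing but the standard library (the handler and port are made up):

  package main

  import (
      "log"
      "net/http"
  )

  func main() {
      mux := http.NewServeMux()
      mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
          w.Write([]byte("hello\n"))
      })
      // Swap in ListenAndServeTLS to get the stdlib TLS stack as well.
      log.Fatal(http.ListenAndServe(":8080", mux))
  }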


Did the programming on macOS, build machines were Linux and the customer ran the code on Windows.

Zero Go-related issues during the whole project. Just drop in one executable and one config file and it just works.


I think having options is just a sign of a more mature language.

In Python you could also implement the API framework, do unit testing, TLS, and packaging using the standard library. And similarly, there are 3rd party frameworks for Go as well.


There are always tradeoffs, to be sure. What many are trying to convey is that Go's defaults tend to be the stronger and simpler options.

However, for a lone-wolf project, I'd lean more on something like gorm than hand-crafting and maintaining lots of SQL.

Another misconception is that it's all for inexperienced programmers. But you need decades of experience to appreciate all the opinionated tradeoffs already made in Go.

It's just one more tool though, and nowhere near FP, which is a different beast altogether. Lots of features in Go and its ecosystem are clearly inspired by Haskell. That might be another good tool from the other end of the spectrum.


Go is just new, so it had the opportunity to build on what we already know. Python is older than Java. When it was first released there wasn't even a World Wide Web, never mind REST.


Agreed.

I used to use Java because it was fast enough, had good libraries, and was productive enough. Go is almost as fast, the standard library has most of what you need, and its ability to compile to a single binary makes deployments simple.

I also look back and think, could I have done all that in Java? Probably... but I also feel it would have required more effort.


Java has come a long way. It's pretty easy to program in a functional style in Java these days. Go seems to go out of its way to make that hard. Maybe generics will help.


Go is more about an explicit imperative style, defaulting to good/known performance characteristics. I would advise against trying FP with it, unless just to test it out.


Go and Java are close cousins.

The difference is of course the JIT -- with Go the single binary means you can disassemble it to find out what it's trying to do. JIT adds a layer of mystery.

Also, with Go it's largely possible to avoid heap allocations in performance hotspots. With Java I'm not so sure.
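
You can also ask the Go compiler to explain its decisions: `go build -gcflags='-m'` prints the escape analysis results. A toy sketch (the type and functions are made up):

  // escape.go
  package main

  import "fmt"

  type point struct{ x, y int }

  func sumValue(p point) int { return p.x + p.y } // p can stay on the stack

  func newPoint() *point { return &point{1, 2} } // &point{...} escapes to the heap

  func main() {
      fmt.Println(sumValue(point{1, 2}), newPoint().x)
  }

Running `go build -gcflags='-m' escape.go` reports which values escape, so you can check that your hotspots stay allocation-free.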


But the tools that are available to profile performance, memory etc on the JDK are wonderful.


It is much easier to reverse engineer compiled Java than Go.

Later

But see below, I think I misconstrued this comment.


I agree in general, but in this particular case (escape analysis [1]), you won't see what's happening in the bytecode output. That being said, it's still very easy to see how the code will be JITed by running it through JMH (or just making sure it runs enough times to be optimized). The JVM and IDEs like IntelliJ contain tons of tools to review that.

To get back to the issue at hand, from my personal experience, if you have many small variables, your app will perform better in Java than Go. The escape analysis in Java is probably more advanced than the one in Go, and in Java you could also use a generational GC if you prefer throughput to latency.

[1] To be more accurate, the actual optimization the JVM does is called Scalar Replacement, and it's not even about allocating variables on the stack, but rather about inlining primitives: https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacemen...


The argument above is being able to reverse engineer the machine code that will be executed by the processor. Java bytecode isn't that. That's why he said "the JIT adds a layer of mystery".


Ah, that makes sense.


I suspect parent may be better able to divine what the Go compiler intended by looking at the produced assembly, vs. knowing or predicting how the JVM will behave by looking at bytecode/decompiled Java code. Not that it is impossible; it's an additional skill that's borne of familiarity with Hotspot/Graal/$JVM.


It's awesome to see all this Java expertise on this thread but at its essence -- a jar file doesn't define the program that's going to run. It's the union of a jar and the JVM it runs on. With Go, the binary is the whole story.

So even if I can predict how a particular JVM will treat a given jar, I'm still stuck if my code might be deployed across a range of JVMs.

I say all this having written orders of magnitude more Go, with apologies to the Java programmers.


>So even if I can predict how a particular JVM will treat a given jar, I'm still stuck if my code might be deployed across a range of JVMs.

This seems like an odd objection to me. I'm not the biggest Java expert, and I don't currently use it professionally, but I've never written Java that was meant to run on a runtime I not only didn't control but didn't even know the version of. In theory this was supposed to be a big feature of Java; in practice the community is moving towards bundling the JVM when distributing apps, precisely because versioning is such a fiasco.


Why not just ask the current JVM what version it is and act accordingly? I mean, surely you can advise the user / customer what JVMs your code is compatible with, which really translates to what JVMs you have tested your program on, right?

I'm obviously surrounded by folks far smarter than me, so I'm posting something probably grossly uninformed to see what the experts think!


>Why not just ask the current jvm what version it is and act accordingly?

I would! That's more or less what I was saying. But earlier on in the life of Java, there were technologies like Web Start and applets that could be embedded in webpages and the idea was that they would run on the client much like JavaScript does today, and you would have little idea of what the client environment was like, other than "it's a JVM." That turned out to be a bad idea for a number of reasons, most famously security, and the situations in which you can't just bundle your JVM are dwindling.


Fair enough! Still, I'm thinking many javascript pages query to find out the browser environment and also act accordingly. I do agree with the security complaint, however I also like web apps.

I do think that you are sort of implying that web applets can't determine which JVM they're running under; is this really the case? Was there any reason that the Java webapp folks prevented the applet from inquiring about the Java version?

This page chats about the version stuff for applets; it seems even with applets there were some options for picking the Java version:

https://docs.oracle.com/javase/8/docs/technotes/guides/deplo...

You've made a nice argument for bundling the JVM, so maybe I should change my inclination there, especially since I do like avoiding dependencies.


> I do agree with the security complaint, however I also like web apps.

Java applets are basically dead at this point, it’s all JS.

> I do think that you are sort of implying that web applets can't determine which jvm they're running under, is this really the case? Was there any reason that the java webapp folks prevented the applet from inquiring of the java version?

I never wrote applets but the point I was trying to make is that your environment is fundamentally unknown in that context. You could query for different versions and so on, but you would probably never know for sure because you don’t control the runtime environment (everything from the JVM down to the OS). This was supposed to be a big selling point of Java and it has for the most part turned out to be more trouble than it’s worth. That’s all.


>... a jar file doesn't define the program that's going to run. It's the union of a jar and the JVM it runs on. With Go, the binary is the whole story.

I was saying the same thing - albeit poorly perhaps. I'm no Java expert either, but once had to download 'jad' off sourceforge to decompile a buggy 3rd party lib :-).


> I'm still stuck if my code might be deployed across a range of JVMs.

Modern Java is meant to be packaged with the JVM via jpackage. You will know exactly what you're running on.


> Go is almost as fast

The performance measurements I've seen point to Go being faster. To me it also makes more sense, since Go is compiled to native code, whereas Java still needs the JVM to run its non-native bytecode.

Can you elaborate on the kinds of cases where Java had better results?


Tight loops, usually, where it's really dealing with math and not business logic. The JVM GC was tuned for throughput over latency, which may be a factor.


The problem with not having generics is the systems you build instead. They either throw away type safety and use reflection, which causes hard to debug issues, or they lean on code generation, which is slow and hard to debug.

I agree that the lack of generics only shows up once in a while and doesn't hamper productivity as much as might be expected, but I'm still very excited to see them added to the language.
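
For reference, a minimal sketch of the square-bracket syntax from the type parameters design (this is what eventually shipped in Go 1.18; Keys is just a made-up helper):

  package main

  import "fmt"

  // Keys returns the keys of any map; K and V are type parameters.
  func Keys[K comparable, V any](m map[K]V) []K {
      keys := make([]K, 0, len(m))
      for k := range m {
          keys = append(keys, k)
      }
      return keys
  }

  func main() {
      fmt.Println(Keys(map[string]int{"a": 1, "b": 2})) // e.g. [a b]; map order is random
  }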


> throw away type safety

In theory yes; in practice, if your sort-array function takes a list of interface{} instead of a well-defined type, it is not going to cause any type bugs. It might be slow, it might be ugly, but the lack of types on functional/generic stdlib functions doesn't make your code buggier if the original implementation is sound.
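
For instance, the pre-generics sort.Slice takes an interface{} plus a typed comparison closure; the type check happens at runtime, but in practice it's hard to get a type bug out of it (minimal sketch):

  package main

  import (
      "fmt"
      "sort"
  )

  func main() {
      xs := []int{3, 1, 2}
      // First argument is interface{}; the closure does the typed comparison.
      sort.Slice(xs, func(i, j int) bool { return xs[i] < xs[j] })
      fmt.Println(xs) // [1 2 3]
  }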


I'm cautiously optimistic about the addition of generics to Go.

If all goes well, we'll get some nice stdlib stuff and 3rd party functions for handling different kinds of data with a common API.

But if the - you know what kind of people - discover it in earnest, we'll get a hellscape of C++ templating proportions when people go way overboard with it.

Time will tell.


Sincere question out of curiosity, since I have not been exposed to Go at all.

How do container types, like arrays or hashmaps, work without generics? Do you basically have to declare and implement types/methods for each different contained type?


I hardly ever need generics directly. If I have to write an algorithm for 2 types, I can copy and tweak as easily as I can deal with the added complexity of programming with abstract types.

BUT, I need generics all the time indirectly. I've worked in lots of languages over the decades, and I highly value languages with mature/debugged/optimized standard libraries of algorithms and data types (esp. "containers").

Go has an impoverished subset "built-in", and that's all it offers.

One of the best things about Go is the high quality of its standard libraries--where it has standard libraries--so there is a gaping hole where the container and algo libraries ought to be. Experts in these algorithms could take advantage of Go's concurrency for example to write library algorithms that provide a combination of performance and safety that I couldn't create quickly if at all. They could, that is, if they had generics in the language, but they don't, so I'm on my own (for now).

When people (repeatedly) claim that Go doesn't really need generics because they (the claimants) hardly ever need generics, I have to wonder whether their knowledge of DS and algos is especially high or especially low: whether they could whip up a debugged and optimized algo as easily as calling it from a library, or whether they aren't even aware of the DS/algo that they really ought to be using.


Personally, it’s very rare that I need to reach for a specialized algorithm. I’m not sure if that’s primarily the problem space I work in (services providing user identity and access management), but I never find the need to go beyond a brute force search or need a data structure other than an array or built in hash map (sometimes a concurrent hash map). Most of the optimization I do involves eliminating network calls and implementing caching.


Others have answered, but I just wanted to add: your question is usually where beginners/non-Go users start to pearl-clutch. I would advise, if you're curious, giving it a shot for a while. As with all new things, try not to just write the language you're used to in the syntax of the language you're using; instead, embrace the patterns that exist in the new language.

Lots of people find the things you're concerned about missing to not really be the end of the world.

Or you might hate it, and that's also okay.


Go supports a handful of built-in generic types, like hashmaps. What Go currently lacks is user-defined generics, though that is actively being worked on.


Arrays/slices and maps are in fact provided as built-ins that function like generics, e.g. []foo and map[foo]bar work as expected, as do associated helper methods.

But those are the only container types with built-in support like that. (Channels can similarly be of any type)
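
A small sketch of what that looks like in practice (foo and bar are placeholder types); all of it is checked at compile time:

  package main

  import "fmt"

  type foo string
  type bar int

  func main() {
      s := []foo{"a", "b"}     // slice of foo
      m := map[foo]bar{"a": 1} // map from foo to bar
      c := make(chan foo, 1)   // channel of foo
      c <- s[1]
      fmt.Println(s, m, <-c)
  }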


They use the "interface{}" hack, which does type checking at runtime (slow). And you then cast it to the proper type.

If I remember correctly (haven't used it in a year), Go cheats a bit: it has a few built-in functions (like make) that are effectively generic, it just doesn't offer that capability to the developer.
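
Roughly, a user-defined container in that style looks like this toy stack: it stores interface{} and the caller asserts the concrete type back out at runtime (illustrative sketch only):

  package main

  import "fmt"

  // stack is a toy example of the pre-generics interface{} pattern.
  type stack struct{ items []interface{} }

  func (s *stack) push(v interface{}) { s.items = append(s.items, v) }

  func (s *stack) pop() interface{} {
      v := s.items[len(s.items)-1]
      s.items = s.items[:len(s.items)-1]
      return v
  }

  func main() {
      var s stack
      s.push(42)
      n := s.pop().(int) // runtime type assertion; panics if the type is wrong
      fmt.Println(n + 1) // 43
  }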


If you can avoid interface{} you should, i.e. by declaring interfaces for your types and using them explicitly. You can then mix your own types and still be protected by the type system.


Yes, but that requires generics, and without them, code generators (and if you don't use those, a lot of copy & paste).


Roughly the same way containers work in Python.


> Roughly the same way containers work in Python.

That's not really true, since built-in containers are special, sui generis generics in Go, whereas in untyped Python they get by being untyped, and in Python's statically-checked type system (as implemented in pyright, mypy, etc.) they use general-purpose generics.


That complaint is as old as the language --- the first piece about Go I ever read, after my friend Yan convinced me to pick up Go, called it out for having "generics for me and no generics for thee". I've never understood why I'm meant to care. I get, if you want to write your own generic containers, being irritated that there aren't universally available generics (yet). I don't get being upset that the language has some special-cased generics. It's not personal. They're not rubbing salt in your wounds. They just need a map type that works.

At any rate: the person asking the question is wondering how you manage to have performant containers with reasonable interfaces in a language without generics. That's a good question! If you've worked a lot in C, you might be imagining a horrible mess of void-stars or something. But no, the ergonomics are basically those of Python, with extra typing.


Given that they stated "Do you basically have to declare and implement types/methods for each different contained type?" in the comment, I don't think they were wondering if you end up using void*/interface{}.

I really don't understand how the Python comparison makes sense. Since those particular containers are special-cased, the ergonomics are also the same as Java's (and I'd argue more like Java than Python). Other than ownership, I don't think Go's maps are any more Python-like than C++'s or Rust's either.

> reasonable interfaces in a language without generics

and with static typing. Python doesn't fall into that category. It either doesn't have static types, or does have generics. The answer (as others have mentioned) is that go does support generics for a few particular container types.


> That complaint is as old as the language

I’m not making a complaint, I’m pointing out a fact without a value judgement.

> At any rate: the person asking the question is wondering how you manage to have performant containers with reasonable interfaces in a language without generics.

Uh, I don't see either "performant" or "with reasonable interfaces" there; the question is: "How do container types, like arrays or hashmaps work without generics? Do you basically have to declare and implement types/methods for each different contained type?"

The answer is, for built-in containers, "no, because they use special-purpose generics". For custom containers, it's basically "yes", but you probably only want a single contained type with an appropriate interface anyway. In neither case is "basically the same as Python" particularly accurate, though I suppose the former case is similar to typed Python using generics in the same way as any language with generics would be, while the latter is loosely similar to bare Python or typed Python using protocols instead of generics. But only loosely.


> The answer is, for built-in containers, “no, because they use special-purpose generics”.

Unless you want to implement utility helpers working on them; then it's yes again, because Go doesn't have generic functions either, so while it does have a few built-in generic containers, you can't operate generically over them.


Yeah, I'm choosing not to die on this hill. I'm just saying that Go has first-class table and resizable array ADTs.


They’re not first class though: you can not take in, return, or work with the ADTs, only with concrete instances of them.


Hey, there, I'm over on this other hill now, not dying on that one. :)


I've also been mostly using Go over the last 6-8 years. Before that it's mostly been C++, pretty "modern" towards the end.

Go would definitely be my preferred pick for distributed/server applications, even though the previous large (mixed C and C++) project I was on was a large distributed application and we did just fine. In fact I'd say our productivity and quality in C++ on that other team was as good as any I've seen (which is not a random/double-blind sort of comparison; who knows how it would have gone if that project had started in Go from day 1).

I would say that in a large project the language choice has some impact, but there are many other variables. A well managed C++ project with competent developers can work well. A less than well managed Go project with so-so developers can be a disaster zone. There are lots of other moving pieces: testing, build automation, CI/CD, etc.

There are certain areas where I would prefer not to use Go. Basically areas where you spend a lot of time up or down the abstractions. I would prefer not to implement a video codec or a signal processing system in Go. Firmware, motion control etc. would also probably not be a good fit. Going the other direction, there are things I can do really quickly in Python (let's say grab some data from some services, do some manipulation, draw some graphs), generally smaller utility-like pieces where I want to use data with various random types and use third party libraries. I wouldn't want to write a large project in Python because it's slow and dynamically typed.

I would also beg to differ with the 10%-20% performance difference. You tend to rely a lot more on GC and produce more "garbage" in Go, and the optimizer isn't as good as the best C/C++ compilers; I'd say it's more like 50%-150%. But for a lot of stuff it doesn't matter. If you're just making database queries then most of your CPU is sitting there anyways and the delta on the caller side isn't a factor.

Go is pretty nice in its zone. It's got rough edges but all in all it's fun to use. There are occasional annoyances (like those times where you implement another cache for the 10th time because you can't build a good generic cache, or those times where you need a priority queue and have to interact with those clunky library implementations, so yeah, I'm in the generics fan club). But that hasn't stopped me from using and loving Go. I don't miss those C++ core dumps ;) or dealing with memory management (even with smart pointers and containers it can be a pain). Go's more dynamic aspects (reflection etc.) also make some things easier vs. C++. People can learn Go really quickly, which is another bonus; learning C or C++ can take years.
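
(For anyone who hasn't hit the priority queue gripe: it's presumably about container/heap, where you implement five methods on your own type and assert interface{} on the way out. A minimal sketch:)

  package main

  import (
      "container/heap"
      "fmt"
  )

  // intHeap implements heap.Interface for a min-heap of ints.
  type intHeap []int

  func (h intHeap) Len() int            { return len(h) }
  func (h intHeap) Less(i, j int) bool  { return h[i] < h[j] }
  func (h intHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
  func (h *intHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
  func (h *intHeap) Pop() interface{} {
      old := *h
      n := len(old)
      x := old[n-1]
      *h = old[:n-1]
      return x
  }

  func main() {
      h := &intHeap{3, 1, 2}
      heap.Init(h)
      fmt.Println(heap.Pop(h).(int)) // 1
  }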


If you can find a team of competent C++ programmers, most definitely you should go with C++ for all tasks.

But if you've got a pool of people who have never touched either C++ or Go, I'd pick Go for the project. It's orders of magnitude easier to teach generic programmers the basics of writing decent Go than C++.


100% on the team making the most difference. (An aside: a software project I've been admiring recently is the Apollo Guidance Computer's. To the moon and back on a 15-bit 1s complement machine and a few kb of memory. It's a case where the team was certainly better than the available tools.)

On the perf side: I'm curious to know where Go sits now relative to C assuming you elide away all heap allocations in your inner loops. In the early days Go's codegen was based on plan9's simple and simplistic C compiler's back end. Things have gotten much better since then and to my knowledge it's within striking distance of C. But I could be wrong.


There are a number of benchmarks you can check out online.

Number-crunching-wise, Go is on par with Java and C#: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

For webdev work, the standard library might not be that fast, but it's fast enough, and if you need even more speed the third-party fasthttp implementation offers good performance: https://www.techempower.com/benchmarks/


https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Sort of supports my pulled-out-of-thin-air numbers (50-150%). In C/C++ you get those lovely SSE intrinsics ;) But look at the binary tree entry as well; that sort of 13x win for C++ really shows you how C++ lets you get sheer power with decent abstractions.


Should we add the amazing declarative marshalling and unmarshalling to/from JSON and XML (and possibly others)? Or is that part of the interaction with the outside world, FFI and marshalling in one bag?
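
For reference, this is the declarative style in question: struct tags drive encoding/json (and encoding/xml works the same way); the user type here is made up:

  package main

  import (
      "encoding/json"
      "fmt"
  )

  // user is an illustrative type; the struct tags drive (un)marshalling.
  type user struct {
      Name  string `json:"name"`
      Email string `json:"email,omitempty"`
  }

  func main() {
      b, _ := json.Marshal(user{Name: "gopher"})
      fmt.Println(string(b)) // {"name":"gopher"}

      var u user
      _ = json.Unmarshal([]byte(`{"name":"ken","email":"ken@example.com"}`), &u)
      fmt.Println(u.Name, u.Email)
  }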


Isn’t it stringly typed, full of runtime introspection and rather slow?


Sure it is. However, isn't most XML- or JSON-marshalled communication I/O bound? So even though it could be faster, it also is "fast enough"?


It depends a lot on how you're getting it and what you're doing with it. The existence of projects like rapidjson, simdjson, … indicates that even without GTAV-style mistakes, JSON enc/dec can absolutely be your limiting factor. Even more so if you're on a local network (e.g. between machines in the same DC) where bandwidth is… essentially infinite, for all intents and purposes? When you've got 10Gb/s or even 100Gb/s between your racks, odds are IO is only the limiting factor for very short messages (lots of framing overhead).

As for XML, it's an absolute bear to encode and decode, documents are often large (dozens, hundreds, even thousands of GB), and the format is complicated enough that writing truly fast parsers is difficult. Being CPU-bound when XML is involved is routine, especially if you need a DOM.


It's not that Go is not a great language; the problem is that it's a very conservative language that is not picking up low-hanging fruit. Its maintainers also have a nasty habit of being condescending to other people's use cases; e.g., f-strings, Unix shebang as comment, etc.


As an IOCCC winner I've had my winning entry verbatim on my resume for years. It's small enough to look like a decorative footer and it always garners comments.

Your question has another angle: why would I care to work for an employer who looked down on my IOCCC win? It acts as a filter both ways.


Has anyone ever tried to compile it? It doesn't work for me:

  gcc -w 1990/tbr.c
  1990/tbr.c: In function ‘main’:
  1990/tbr.c:7:50: error: void value not ignored as it ought to be
      7 |   e(-8)     ,gets      (1+(    c=q)     )||      exit      (0);     r(0,0)
        |                                                  ^~~~~~~~~~~~~
  1990/tbr.c: In function ‘e’:
  1990/tbr.c:22:43: error: void value not ignored as it ought to be
     22 |  3 ;}e(x){x<0?write(2,"?\n$ "-x/4,2),x+1||exit(1):
        |                                           ^~~~~~~
Similar for clang. I'm sure there is an easy fix, but I don't know what it is.

The program in question, if anyone is wondering, was the 1990 winner of the IOCCC "Best Utility" award. Source is here:

  https://www.ioccc.org/1990/tbr.c
The authors explain in rot13: "Guvf cebtenz vf n ehqvzragnel furyy."


Works for me with

    clang -std=c89 -Wno-implicit-function-declaration -fno-builtin tbr.c
Compilers are fussier (with good reason!) than they used to be.


Those flags work with gcc too. A deleted comment suggested adding `int exit();`, which also worked.


exit is a function returning void, which was traditionally coerced to 0, but not anymore. Yusuke Endoh's patch [1] is available.

[1] https://mame.github.io/ioccc-ja-spoilers/patches/1990-tbr.pa...


No coercion needed. Pre-ANSI C does not have a void type, so unless otherwise specified the return type of a function defaults to int.


Yes, I did! I used Duff's paper as a reference, and relied on the good taste of all my beta testers to guide me towards a working shell. Back when source code was distributed via shar files in an email.


Thanks for writing rc! I’d forgotten about Duff’s paper and the exact circumstances of rc’s release until I read LukeShu’s and your replies.

I had a lot of fun with rc....


It's hosted on github now and still serves as the login shell for me and presumably many others!


The key word is "nameplate", a colorful term implying engraving and rivets:

https://en.wikipedia.org/wiki/Nameplate_capacity


Nuclear power is usually run close to nameplate, continuously, as this is generally more economical.


> Rob Pike probably has never written a program where performance really mattered

Rob Pike has written window system software which ran in what now would be called a "thin client" over a 9600 baud modem and rendered graphics using a 2MHz CPU. He probably knows a thing or two about performance tuning.


By "rendered graphics" you mean "place characters on screen"? If the bottleneck for that is a 9600 baud modem throughput, there's not a lot you need to optimize, even on a 2MHZ CPU.

Also, having programmed more constrained systems decades ago doesn't magically make you knowledgeable on performance on modern hardware with completely different capabilities. In fact, it's probably what causes you to develop a "computers are so fast now, no need to think about performance"-mindset, because everything you want to do could be done in an arbitrarily inefficient way on modern hardware. Performance doesn't matter to you anymore.


> By "rendered graphics" you mean "place characters on screen"?

It was a fully graphical 800x1024 (or 1024x1024) system running on 1982 processors.

https://en.wikipedia.org/wiki/Blit_(computer_terminal)

> having programmed more constrained systems decades ago doesn't magically make you knowledgeable on performance

Perhaps not but it does mean you've "written a program where performance really mattered" which I believe was the original claim?


> It was a fully graphical 800x1024 (or 1024x1024) system running on 1982 processors.

I've looked into that. The Blit was monochrome, had an 8MHz processor, and a relatively large 256KB framebuffer which could be directly written to. There were only a handful of commands, mostly concerned with copying (blitting) bitmaps around.

Rob Pike only wrote the first version of the graphics routines - the slowest version, in C(!). It was rewritten another four times over, by Locanthi and finally Reiser.

I don't think any credit should go to Pike for implementing the performance-critical parts of that system.

https://9p.io/cm/cs/doc/87/archtr.ps.gz

> Perhaps not but it does mean you've "written a program where performance really mattered" which I believe was the original claim?

No, it doesn't mean that. You can be wasteful on constrained hardware as well; performance doesn't necessarily matter even on the simplest chips if what you want to do doesn't need the full capabilities of the system.

However, I am specifically replying to the claim that "Rob Pike probably knows a thing or two about performance". As you can see, Rob Pike handed off performance-critical work to someone else. He probably didn't know how to write optimal code for that particular platform, but even if he did, most of that knowledge wouldn't transfer over to modern systems.

At the very least, he didn't care about optimizing that stuff, or he wouldn't have handed it off. He would've enjoyed optimizing that stuff. And that's all completely fine, not every programmer needs to care about performance. I just refuse to take advice from these people about performance or "premature optimization", because it is uninformed.


A writeup of the Blit terminal's operating system is here; it consists of a lot more than the bitblt primitive, with many whole-system performance concerns at play:

http://a.papnet.eu/UNIX/bltj/06771910.pdf

Suggest you read this before denigrating Rob Pike's bona fides. Not sure what axe you are trying to grind but it is ugly and unbecoming of a professional.


I'm not denigrating his bona fides, I'm questioning his credentials on performance-oriented computing. For all I know, if Rob Pike had been a performance freak, Blit might've never shipped. He may indeed have chosen all the right trade-offs.

Nevertheless, the advice he gives on performance is wrong, plain and simple, for the reason that I gave you: If you have overhead everywhere, there is no bottleneck that you can observe - your software is just slower than it needs to be across the board. If you write software without performance in mind from the very beginning, you can never get all of it back by optimizing later - without major rewrites that is.

How does one give wrong advice? By not having the required experience to give correct advice. I don't care if you're Rob Pike, Dennis Ritchie or Donald Knuth. If you're wrong, you're wrong.


> In fact, it's probably what causes you to develop a "computers are so fast now, no need to think about performance"-mindset

This is the complete opposite of Rob’s mindset, which you’d know if you had any familiarity with his work.


Your reply here saddens me.

I suggest you look up Rob Pike and reconsider some of your hypotheticals about what he knows about. (https://en.wikipedia.org/wiki/Rob_Pike)


One of the eyewitnesses to Bach's playing was Constantin Bellermann (as chronicled in the Bach Reader).

Bellermann wrote about a pedal solo that Bach played: he “ran over the pedals so quickly that his feet appeared winged, with a thundering fullness of sound, and penetrated the ears of the listeners like a bolt of lightning”, and that he “was admired in amazement” by Prince Frederick van Hessen. The prince removed a jewelled ring from his hand on the spot and gave it to Bach. If he had earned that just with the speed of his feet, wondered Bellermann, what would the prince have given him if Bach had used his hands as well?


There is a German lady called Barbara Dennerlein who is a bit like that; her footwork is nothing short of amazing.

https://en.wikipedia.org/wiki/Barbara_Dennerlein

There are some youtube videos of her playing jazz.


I saw her in Chicago in 2009, playing the pipe organ at the Rockefeller Chapel, in a wonderful but utterly poorly-promoted performance. The audience was tiny for such a magnificent space, and I thought it a crying shame.

It's hard to think of a pipe organ as a jazz instrument, but then again, any instrument is a jazz instrument.


I wrote a toy lisp 1.5 interpreter in Go a few years ago, and part of the joy was making the core of the interpreter mimic McCarthy's typography. This is my apply function:

  func apply(fn, x, a Addr) Addr {
          if atom(fn) == T {
                  switch fn {
                  case CAR:
                          return caar(x)
                  case CDR:
                          return cdar(x)
                  case CONS:
                          return cons(car(x), cadr(x))
                  case ATOM:
                          return atom(car(x))
                  case EQ:
                          return eq(car(x), cadr(x))
                  default:
                          return apply(eval(fn, a), x, a)
                  }
          }
          switch car(fn) {
          case LAMBDA:
                  return eval(caddr(fn), pairlis(cadr(fn), x, a))
          case LABEL:
                  return apply(caddr(fn), x, cons(cons(cadr(fn), caddr(fn)), a))
          }
          panic(errint("bad node: " + car(fn).String()))
  }

