Thanks for pointing that out! I hadn't made the connection and now I've got another reason to figure out how fold works (have yet to wade through all of http://mlton.org/Fold)
It's unlikely to occur as the type is exponential in size and thus quite unwieldy to do anything with! I haven't worked with generated ML or Haskell code, but I still think it's unlikely to be an issue in "real" code.
It's rarely a compile-time performance issue because O(2^N) algorithms perform well on modern hardware when N ~= 3. You could nest the `double` example from the article 6 times and it would still only be a 64x slowdown.
Where it does show up, annoyingly, is in error messages. You actually see this all the time with GCC's STL error messages (C++ templates provide a parameterized type system very similar to Haskell or ML, but without the inference, at least until C++11). STL containers are usually parameterized not only over the key and value, but over the comparator and allocator. Nest a couple of them and you suddenly have a type with several dozen variables. GCC expands the whole type out, without typedefs, when printing error messages, which means that you will sometimes get compile errors that are 10 pages long. Clang fixes this by preserving the typedefs, without which you wouldn't want to write such a type. Haskell has the same problem, except worse because it infers types, but GHC has gone to great pains to format error messages so they are at least readable if verbose.
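To make the blowup concrete, here's a small Python sketch (not real type inference, just simulating how the printed type grows when you nest a pairing function like `p x = (x, x)` — each application wraps the type in one more pair):

```python
# Simulate the type of  p x = (x, x)  applied in nested fashion.
# Applying p to a value of type t yields a value of type (t, t),
# so every nesting level doubles the size of the fully expanded type.
def apply_pair(t):
    return (t, t)

def type_size(t):
    # count the leaves of the fully expanded type
    return 1 if isinstance(t, str) else type_size(t[0]) + type_size(t[1])

t = "a"
for _ in range(6):
    t = apply_pair(t)

print(type_size(t))  # 2**6 = 64 occurrences of the type variable "a"
```

That 64-leaf type is what ends up verbatim in an error message if the compiler prints the expanded form instead of the name you gave it.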
XML/SGML people know this sort of thing (exponential resource use due to expanding references) as the "billion laughs" attack. There are real-world consequences to this sort of stuff.
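For anyone unfamiliar with the attack: it defines a handful of nested entities, each of which expands to ten copies of the one below it, so a few lines of DTD balloon on parse. A rough Python sketch of just the arithmetic:

```python
# "Billion laughs" in miniature: each entity layer expands to ten
# copies of the layer below it, so a tiny document explodes when
# the parser expands references eagerly.
lol = "lol"
layers = 4          # the classic attack uses ~9 layers
for _ in range(layers):
    lol = lol * 10  # one more level of entity references

print(len(lol))     # 3 * 10**4 = 30000 chars from a few short lines
```

With 9 layers you're at 3 * 10**9 characters, which is where the "billion" comes from.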
That's very interesting! This seems to me like it might be a different kind of edge case, though, since the type of x is still t -> t, even if it takes the type checker a long time to arrive at it. From the little I got out of Mairson's paper, it seems that the proof he uses depends on the exponential size of the type itself, but I could be mistaken... either way, I wonder how they're related?
Actually, these examples are similar to the first example in my post (under the first "pathological case" heading) and don't quite exhibit the worst case behavior since the type can be represented in linear space as a dag.
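A quick way to see the dag point: if identical subterms are shared instead of expanded, the representation stays linear even when the printed form is exponential. A hypothetical Python sketch using object identity to stand in for sharing:

```python
# The exponential type from nesting  p x = (x, x)  can be stored
# linearly if both children of each pair node are the same object.
def shared_nodes(t, seen=None):
    # count distinct nodes, respecting sharing
    if seen is None:
        seen = set()
    if id(t) in seen:
        return 0
    seen.add(id(t))
    if isinstance(t, str):
        return 1
    return 1 + shared_nodes(t[0], seen) + shared_nodes(t[1], seen)

t = "a"
for _ in range(30):
    t = (t, t)  # both halves reference the *same* subterm

print(shared_nodes(t))  # 31 nodes, though full expansion has 2**30 leaves
```

The worst cases Mairson uses are the ones where even the dag representation blows up, which is why they're strictly nastier than this one.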
Rent the Runway - New York, NY (VISA candidates welcome!)
Rent the Runway is building the first online rental platform for retail goods. We're a disruptive e-commerce business that believes democratizing luxury products in the US is just the first step of a broader vision to drive aspirational experiences for tens of millions of users across the globe. We're more than "Netflix for dresses"—we're Cinderella Experience as a Service. Find out more about the challenging product-oriented problems we face across the boundaries of e-commerce, mobile, analytics and shipping/fulfillment here: http://blog.tech.renttherunway.com/
We are hiring front-end, back-end, DevOps, and mobile (iOS) engineers. Our stack:
* Service-oriented architecture with Java 7 (soon to be 8!) and Dropwizard: Modern Java is a thing and we've got the proof!
* Ruby and Sinatra for lightweight, scalable web applications.
* JavaScript and Backbone for a front end that's becoming faster and more awesome to work on every day.
There is an entire lineage of real object-oriented languages with nominal types, starting from the dynamically typed Smalltalk (research on which is where we learned that Hindley-Milner was horribly ill-suited to OOP in the first place). Or if you prefer, just look at how you use your natural language to name things and reason about the world.
On the other hand, many people have abandoned the most common forms of OOP. See for example the story from yesterday [1]. Many writers have described how brittle object hierarchies become in anything nontrivial; see [2] for example.
I will say though that newer models of OOP, focusing on interfaces and composability of behavior, seem to have quite a bit of steam left in them.
Don't count OOP out yet. The FP stuff is useful (I use it myself), but we have our own better abstractions coming out, like having more fun with mixin-style inheritance. The FP + OOP crowd is pretty vocal, I'll give you that.
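A minimal sketch of the mixin style I mean (hypothetical classes, Python for brevity): behavior comes from small composable pieces mixed in, rather than a deep single-inheritance hierarchy.

```python
import json

# Small, orthogonal behaviors as mixins...
class JsonMixin:
    def to_json(self):
        return json.dumps(self.__dict__)

class ComparableMixin:
    def __eq__(self, other):
        return self.__dict__ == other.__dict__

# ...composed into a class instead of inherited from a deep hierarchy.
class Point(JsonMixin, ComparableMixin):
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(p.to_json())        # {"x": 1, "y": 2}
print(p == Point(1, 2))   # True
```

The point is that each mixin is reusable on its own and the class picks exactly the behaviors it wants.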
Rent the Runway is hiring all sorts of engineers (Java, Data, iOS, full stack) in the NYC area.
Rent the Runway is building the first online rental platform for retail goods. We are a disruptive e-commerce business that believes that democratizing luxury products in the US is just the first step of a broader vision of helping drive better aspirational experiences for tens of millions of users across the globe.
Our engineering team works on challenging product-oriented problems across the boundaries of e-commerce, mobile, analytics and shipping/fulfillment, and the backbone of our business is served by our custom logistics management system which is core to our capacity to deliver the right product to the right user at the right time. We utilize data, engineering and algorithms to create a personalized website and an adaptive supply chain to fulfill our commitment to an amazing customer experience.
Engineers at Rent the Runway focus on solving business problems first, and receive the satisfaction that they have true impact on the success of the company. Many of our engineers are entrepreneurs themselves, and we strongly encourage a collaborative, product-driven culture across our organization. We have a very diverse team and welcome those with supernerd CS degrees as well as those with non-traditional backgrounds.
Short answer: I don't think LYAH would be entirely sufficient background. I think familiarity with the imperative equivalents is a prerequisite, along with some other background.
Long version: I stumbled a bit when trying to read this book, because I'm not super comfortable with the formal techniques for analyzing algorithms. I'm not talking about the fluffy tech-company-interview "knows what big O is and can estimate it roughly on a whiteboard" ability, but something much more concrete and formal, the kind taught in an undergrad algorithms class or in CLRS. I've taken those classes, I have CLRS on my shelf and have read some of it, but it's been a long time and it's not at my fingertips. There are exercises in the book ("prove that this data structure has this amortized performance", "prove that it still does if you change this one thing", etc.) that challenged me quite a bit, and I ended up putting it down after a while, thinking I would revisit it after refreshing my fundamentals.
The ML code is easy enough to follow if you know Haskell, but since the point is often to understand the subtlety of the performance characteristics of the data structures and algorithms, just reading and playing with the code wasn't enough for me. After all you can trivially implement functional data structures naively just by copying everything on every operation.
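To illustrate that last point (a hypothetical sketch, nothing from Okasaki): the naive approach copies everything on every update, while even a plain cons list gets persistence with structural sharing for free.

```python
# Naive persistence: copy the entire structure on every update, O(n).
def set_naive(lst, i, v):
    new = list(lst)   # full copy
    new[i] = v
    return new

# Structural sharing with a cons list: "replacing" the head reuses
# the old tail unchanged, so the update is O(1).
def cons(head, tail):
    return (head, tail)

old = cons(1, cons(2, cons(3, None)))
new = cons(99, old[1])

print(new[0], old[0])     # 99 1  -- both versions remain readable
print(new[1] is old[1])   # True  -- the tail is shared, not copied
```

The interesting part of the book is exactly the gap between these two: getting the sharing version's asymptotics for structures where it isn't obvious, like queues and heaps.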
Frankly, Rich Hickey's videos on Clojure's data structures have done a lot more for me in terms of understanding how functional data structures work, even though they are much less formal in presentation. To be clear: I think Okasaki's book is probably a very good book and an important work, but am not so sure it is crucial for the working functional programmer to internalize all the mathematical details.
Anyway, YMMV. Check out his thesis and see what you think before buying it.
A researcher in programming languages once told me that when she encounters the "HTML isn't a programming language" meme she always thinks: Agda and Coq aren't Turing complete, are they also not programming languages?
The point being, I think, that drawing this bright line at Turing completeness doesn't hold up when you give it any thought, and so it's not very useful. Aside from that, you've utterly missed the point of the OP.
How shall we define a programming language? At least as powerful/expressive as a DFA? I kind of like that. (A bit off topic, but not really - writing out a regex is sometimes harder for me than C++/Java/JavaScript. +1 for DFAs!)
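For a sense of what a DFA-level "program" looks like, here's a sketch of a two-state machine (my own toy example) recognizing binary strings with an even number of 1s:

```python
# A DFA with two states: "even" (the accepting state) and "odd".
# Reading a '1' flips the state; reading a '0' leaves it alone.
def accepts(s):
    state = "even"
    for ch in s:
        if ch == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"

print(accepts("1010"))  # True  (two 1s)
print(accepts("0100"))  # False (one 1)
```

No memory beyond the current state, no unbounded loops, yet it's unambiguously a description of a computation, which is the point about where to draw the line.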