Bjarne Stroustrup Quotes (stroustrup.com)
172 points by asymmetric on Nov 26, 2023 | 164 comments


Language design is a curious mixture of grand ideas and fiddly details

I hadn't heard this last one before, but it's SO right ...

I always wondered why JS and PHP and Perl got so many details "wrong" (e.g. with Perl, one definition of "wrong" is that Perl 6 / Raku didn't make the same design choice)

Turns out there's an avalanche of details, and they interact in many ways!

Python did better, but I strongly argue both Python 3 and Python 2 got strings wrong. (Array of code points isn't generally useful, and it's hard to implement efficiently. See fish shell discussion about wchar_t on the front page now; also see Guile Scheme)

OCaml seems to have gotten mutable strings wrong (for some time), and also I think the split between regular sum types and GADTs is awkward. And also most people argue that objects vs. records vs. modules is suboptimal. And a bunch of mistakes with syntactic consistency, apparently.

Looks like almost every language had problems with for loops and closures, including C# and Go - https://news.ycombinator.com/item?id=37575204

So basically I agree that plowing through all the details -- and really observing their consequences in real programs -- is more important and time-consuming than grand ideas.

But if you lack any grand ideas, then the language will probably turn out poorly too. And you probably won't have any reason to finish it.


Yeah, it's extremely difficult. Everything interacts with everything else. A snag I've run into recently with my language: I want to have mutable strings and lists, but that complicates their use as hash table keys.

I'm so used to the C mindset of just modifying everything in place. It wasn't until I actually tried this that I understood why so many languages just copy everything. I researched how other languages dealt with this problem and many only allow immutable objects as hash keys. Most interesting was Ruby, which allows mutable objects as keys, warns programmers that modifying keys can invalidate the hash table, and provides a rehashing method to fix it.

https://ruby-doc.org/core/Hash.html

> Modifying a Hash key while it is in use damages the hash’s index.

> You can repair the hash index using method rehash

> A String key is always safe

> That’s because an unfrozen String passed as a key will be replaced by a duplicated and frozen String

I like Ruby a lot.
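
For anyone curious why mutation breaks the table, here's a minimal Rust sketch (the hash_of helper is mine): a mutated key no longer hashes to the bucket it was stored under.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn hash_of<T: Hash>(value: &T) -> u64 {
        let mut h = DefaultHasher::new();
        value.hash(&mut h);
        h.finish()
    }

    fn main() {
        let mut key = String::from("abc");
        let before = hash_of(&key);
        key.push('d'); // mutate the would-be hash key
        let after = hash_of(&key);
        // A table that filed the entry under `before` can no longer
        // find it by hashing the mutated key.
        assert_ne!(before, after);
    }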


Can you explain how you think strings should work?


I think the main alternative design is to treat strings like in Rust or Go.

The problem with the “array of code points” idea is that you end up with the most general implementation, which is a UTF-32 string, and then you end up with the most compact implementation, which is a Latin-1 string (one byte per code point), and maybe throw in UCS-2 for good measure. These all have the same asymptotic performance characteristics, but allow ASCII strings (which are extremely common) to be stored with less memory. The cost is that now you have two or three different string representations floating around. This approach is used by Python and Java, for example.

The Rust / Go approach is to assume that you don’t need O(1) access to the Nth code point in a string, which is probably reasonable, since that’s rarely necessary or even useful. You get a lot of complexity savings from only using one encoding, and the main tradeoff is that certain languages take 50% more space in memory.
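
A quick Rust sketch of what that tradeoff looks like in practice: byte length is O(1), but code point access is an explicit O(n) walk.

    fn main() {
        let s = "héllo"; // stored as UTF-8: 6 bytes, 5 code points
        assert_eq!(s.len(), 6); // length in bytes, O(1)
        // There is no s[1]: indexing by code point would hide an O(n)
        // scan, so you have to ask for the walk explicitly:
        assert_eq!(s.chars().nth(1), Some('é'));
        // Byte slicing is O(1), but must land on a char boundary:
        assert_eq!(&s[0..1], "h");
    }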

Python and Java both date back to an era where fixed-width string encodings were the norm.


Working with strings is one of the most common complaints about Rust, though. Unless you're only talking about the implementation of it?


Would love to hear where you think the complaints come from. They seem fine to me, and I have voiced plenty of criticism about the other parts of Rust. They work more or less how I expect—you have an array of bytes, which can either be a reference (&str) or owned / mutable / growable (String).

The only unusual thing about Rust is that it validates that the bytes are UTF-8.
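
For example, the validation happens once, at the boundary where bytes become strings:

    fn main() {
        // Rust checks UTF-8 when bytes are turned into a String:
        let ok = String::from_utf8(vec![0x68, 0x69]); // "hi"
        assert_eq!(ok.unwrap(), "hi");
        let bad = String::from_utf8(vec![0xff, 0xfe]); // not valid UTF-8
        assert!(bad.is_err());
        // After that, every safe operation may assume valid UTF-8.
    }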


Mostly around usability and learning curve. I wasn't sure if the post was meant as a total endorsement of Rust's strings or just the encoding aspect of them.


What people complain about with Rust strings are that there are so many different types, like &str vs String, and OsString / OsStr. The encoding of the strings isn't the issue.


Encoding might not be the whole issue, but "Rust mandates that the 'string' type must only contain valid UTF-8, which is incompatible with every operating system in the world" is the reason why OsString is a separate type.


The only encoding which is compatible with "every operating system in the world" is no enforced encoding at all, and you can do very little "string-like" operations with such a type.

Even Python, well-known for being a very usable language, distinguishes between strings (which are unicode, but not utf-8 necessarily) and bytes, which you need to use if you're interacting directly with the OS.

The only real difference between the two is really the looseness with which Python lets you work with them, by virtue of being dynamically typed and having a large standard library that papers over some of the details.


> The only encoding which is compatible with "every operating system in the world" is no enforced encoding at all, and you can do very little "string-like" operations with such a type.

People who like "list of Unicode code points" string types in languages like Rust and Python 3 always say this, but I'm never sure what operations they think are enabled by them.

In the "bag of bytes that's probably UTF-8" world, you can safely concatenate strings, compare them for equality, search for substrings, evaluate regular expressions, and so on. In the "list of code points" world, you can do... what else exactly?

Many things that users think of as single characters are composed of multiple code points, so the "list of code points" representation does not allow you to truncate strings, reverse them, count their length, or do really anything else that involves the user-facing idea of a "character". You can iterate over each of the code points in a string, but... that's almost circular? Maybe the bytes representation is better because it makes it easier to iterate over all the bytes in a string. Neither of those is an especially useful operation on its own.


> In the "bag of bytes that's probably UTF-8" world, you can safely concatenate strings, compare them for equality, search for substrings, evaluate regular expressions,

No you can't (except byte-level equality). If your regex is "abc" and the last byte of an emoji is the same as 'a', and the emoji is followed by "bc", it does the wrong thing.


You can.

The last byte of an emoji is never the same as 'a'. UTF-8 is self-synchronizing, a trailing byte can never be misinterpreted as the start of a new codepoint.

This makes `memmem()` a valid substring search on UTF-8! With most legacy multi-byte encodings this would fail, but with UTF-8 it works!
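
A small Rust demonstration (str::contains is a plain byte-level search):

    fn main() {
        // "😀" is F0 9F 98 80; UTF-8 continuation bytes are in
        // 0x80..=0xBF, so none can match an ASCII byte like 'a' (0x61):
        let s = "😀bc";
        assert!(!s.contains("abc")); // no false positive across the emoji
        assert!(s.contains("bc"));   // real substrings are still found
    }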


Assuming that your strings are normalized, otherwise precomposed characters will not match decomposed characters.


You still have this problem in the "list of Unicode code points" world, since many multi-code-point emoji sequences appear to users as a single character, but start and end with code points that are valid emojis on their own.

Python 3 believes that the string "[Saint Lucia flag emoji][Andorra flag emoji]" contains a Canada flag emoji.
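
You can check the same thing in Rust, since flag emoji are just adjacent regional-indicator code points; both byte search and code point search "find" Canada:

    fn main() {
        // Saint Lucia = [L][C], Andorra = [A][D]; the middle pair
        // [C][A] happens to spell the Canada flag.
        let s = "🇱🇨🇦🇩";
        assert!(s.contains("🇨🇦"));
    }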


> People who like "list of Unicode code points" string types in languages like Rust and Python 3

Rust and Python 3 have very different string representations. Rust's String is a bunch of UTF-8 encoded bytes. Python 3's str is a sequence of code points, and the internal width per code point changes based on the string's contents.


Yeah, implementation-wise, Rust's version of this idea is a little better, since at least you can convert their strings to a bag-of-bytes/OsStr-style representation almost for free. (And to their credit, Rust's docs for the chars() string method discuss why it's not very useful.)

I do think the basic motivation of Unicode codepoints somehow being a better / more correct way to interact with strings is the same in both languages, though. Certainly a lot of people, including the grandparent comment, defend Rust strings using exactly the same arguments as Python 3 strings.


Quote: which is incompatible with every operating system in the world

Should be some, not every, since there are OSes where string types are UTF-8, e.g. BeOS and Haiku.


Rust and Go string implementations are very different.

Rust strings are safe, have a rich API, and prevent you from corrupting their contents unless you go out of your way to do so. Go strings are a joke: they're extremely bare-bones and won't do any validation if you slice them incorrectly.


Thanks for your response. Personally I fall into the "strings are arrays of bytes" camp (which is also shared by Go). A difference between my view and that of the Go designers is that I don't feel that it is important to support Unicode by default and am perfectly happy to assume that every character corresponds to a single byte. Obviously that makes internationalization harder, but the advantage is that strings are much simpler to reason about. For example, the number of characters in the string is simply the length of the string. I would be fine having a separate Unicode string type in the standard library for those instances when you really need Unicode; this design makes the common case much simpler at the expense of making the rare case harder. I also don't see that mutability is such a huge deal unless you absolutely insist that your language support string interning.


>I would be fine having a separate Unicode string type in the standard library for those instances when you really need Unicode; this design makes the common case much simpler at the expense of making the rare case harder.

Even as a native English speaker, I'm extremely uncomfortable with the idea that we're going to make software even more difficult to internationalize than it already is by using completely separate types for ASCII/Latin1-only text and Unicode.

And it's a whole different level of Anglocentric to portray non-English languages as the "rare" case.


If you give an input box to an American, I promise you an emoji will find its way into it, no matter what it's for.


So much this. Thinking that only America and the UK matter is something that was forgivable 40 years ago but not today. It’s even more bizarre because of what you point out - emojis don’t make sense if you consider them as single byte arrays. And lastly, even if you only consider input boxes that don’t accept emojis like names or addresses, you have to remember that America is a nation of immigrants. A lot of folks have names that aren’t going to fit in ASCII.

And this stuff actually matters! In a legal, this-will-cost-us-money kind of way! In 2019 a bank in the EU was penalised because they wouldn’t update a customer’s name to include diacritics (like á, è, ô, ü, ç). Their systems couldn’t support the diacritics because they were built in the 90s with an encoding invented in the 60s. Not their fault, but they were still penalised. (https://shkspr.mobi/blog/2021/10/ebcdic-is-incompatible-with...)

It is far more important that strings be utf-8 encoded than they be indexable like arrays. Rust gets this right and I hope future languages will too.


"i must have indexable strings for performance reasons. oh, btw its an electron app"


Unicode has such a rich collection of symbols. I use them frequently in code comments.


such a different level, you could even call it Latincentric :)


> this design makes the common case much simpler at the expense of making the rare case harder

In the age of emoji (and uhhhh, everyone who doesn't use English as their main language), I don't think your "rare case" is really that rare.


There must be statistics around how much of the data in the world is or could be Latin-1. I'm going to guess it's a very high percentage.


wat?

Counting web pages (not bytes), the number is near 50% English-only and 50% other languages.

Only 20% of people worldwide speak English.

I know other languages can fit inside ASCII but, really?


> In the age of emoji (and uhhhh, everyone who doesn't use English as their main language)

Open a random NYT article of more than a hundred words. It won’t be in ASCII.


Do note that emoji are a case where the "codepoint" approach fails catastrophically.


Can you please elaborate?


On paper you're not wrong, but Strings used for localized text are a special subclass you can deal with separately. Most Strings that will cause you problems are, you know, technical: logs, names of subsystems, client ids, client-sourced API-provided values which change format across clients, etc. Those, in my experience, are always ASCII even in China, exactly because nobody wants to deal with too much crap.

Display Strings are simpler to manipulate in most cases: load a String from a file or form, store it back verbatim in the DB or memory; you barely do anything other than straight copying, right?

The way I do it in Java is that I always assume and enforce my strings to be single-byte ASCII, and if I want to display something localized, somehow, it never really goes through any complex logic where I need to know the encoding: I copy the content with encoding metadata, and the other side just displays it.


"strings are arrays of bytes" combined with the assumption that "characters are a single byte" sounds basically the same as the "array of code points" that the parent comment is disagreeing with


Code points are not bytes.


Sure, but if you're insisting that the string be represented as one byte per character, you end up with the exact same properties with "array of code points" and "array of bytes"


Sort-of, but no, because code points are not characters.

There's a big difference between "get the 5th code point" and "get the 5th character".

Because multiple code points can be used in a single character, it is not possible to do random-character-access in a Unicode-encoded string.


No, it's impossible to do random access to retrieve a character even if you are only counting code points, because code points do not have a fixed byte size in UTF-8. I thought this was a good intro: <https://tonsky.me/blog/unicode/>.
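
To make the character vs. code point distinction concrete, here's a minimal Rust sketch; it assumes the third-party unicode-segmentation crate for grapheme clusters, since the standard library only iterates code points:

    // Cargo.toml: unicode-segmentation = "1"
    use unicode_segmentation::UnicodeSegmentation;

    fn main() {
        let s = "e\u{0301}"; // 'e' + combining acute accent, rendered "é"
        assert_eq!(s.len(), 3);                   // bytes
        assert_eq!(s.chars().count(), 2);         // code points
        assert_eq!(s.graphemes(true).count(), 1); // user-visible characters
    }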


> For example, the number of characters in the string is simply the length of the string.

For I/O you need the number of bytes it occupies in memory, and that's always known.

For text processing, you don't actually need to know the length of the text. What you actually need is the ability to determine the byte boundaries between each code point and most importantly each grapheme cluster.

> when you really need Unicode

You always need Unicode. Sorry but it's almost 2024 and I shouldn't even have to justify this.

For I/O, you don't need "strings" at all, you need byte buffers. For text, you need Unicode and everything else is just fundamentally wrong.


> Obviously that makes internationalization harder, but the advantage is that strings are much simpler to reason about.

Internationalization relative to what? Anyway, just pick any language in the world, i.e. an arbitrary one—can you represent it using just ASCII? If so I would like to know what language that is. It seems that Rotokas can be.[1] That’s about 5K speakers. So you can make computer programs for them.

Of course this comment of mine isn’t ASCII-only.

[1] https://en.wikipedia.org/wiki/Rotokas_language


Which non-ASCII character did you use?


Out of interest, would you also say that "images are arrays of bytes"?

If not, what's the semantic difference?

For me, strings represent text, which is fundamentally linked to language (and all of its weird vagueness and edge-cases). I feel like there's a "Fallacies programmers believe about text" that should exist somewhere, containing items like "text has a defined length" and "two identical pieces of text mean the same thing".

So whilst it's nice to have an implementation that lets you easily "seek to the 5th character", it's not always the case that this is a well defined thing.


> I feel like there's a "Fallacies programmers believe about text" that should exist somewhere

I got you covered.

https://github.com/kdeldycke/awesome-falsehood#international...

http://garbled.benhamill.com/2017/04/18/falsehoods-programme...

https://jeremyhussell.blogspot.com/2017/11/falsehoods-progra...

https://wiesmann.codiferes.net/wordpress/archives/30296

I love when the writing gets visibly more unhinged and frustrated with each invalidated assumption. It's like the person's mind is desperately trying to find some small sliver of truth to hold onto but it can't because the rug is getting constantly pulled out from under it.


See also: Text Rendering Hates You https://faultlore.com/blah/text-hates-you/ and Text Editing Hates You Too https://lord.io/text-editing-hates-you-too/


Amazing, thanks. Bookmarking the lot.


The short answer is somewhere between Go and Rust, which are newer languages that use UTF-8 for interior representation, and also favor it for exterior encoding.

Roughly speaking, Java and JavaScript are in the UTF-16 camp, and Python 2 and 3 are in the code points camp. C and C++ have unique problems, but you could also put them in the code points camp.

So there are at least 3 different camps, and a whole bunch of weird variations, like string width being compile-time selectable in interpreters.

More details here:

https://www.oilshell.org/blog/2023/06/ysh-design.html#text

and here:

https://www.oilshell.org/blog/2023/06/surrogate-pair.html#hi...

A main design issue is that string APIs shouldn't depend on a mutable global variable -- the default encoding, or default file system encoding. That's an idea that's a disaster in C, and also a disaster in Python.

It leads to buggy programs. Go and Rust differ in their philosophies, but neither of them has that design problem.


Raku introduced the concept of NFG - Normal Form Grapheme - as a way to represent any Unicode string in its logical ‘visual character’ grapheme form. Sequences of combining characters that don’t have a canonical single codepoint form are given a synthetic codepoint so that string methods including regexes can operate on grapheme characters without ever causing splitting side effects. Of course there are methods for manipulating at the codepoint level as well.


Unfortunately, strings cross at least 3 different problems:

* charset encoding. Cases worth supporting include Ascii, Latin1, Xascii, WeirdLegacyStatefulEncoding, WhateverMyLocaleSaysExceptNotReally, UTF8, UTF16, UCS2, UTF32, and sloppy variants thereof. Note that not supporting sloppiness means it is impossible to access a lot of old data (for example, `git` committers and commit messages). Note that it is impossible to make a 1-to-1 mapping between sloppy UTF-8 and sloppy UTF-16, so if all strings have a single representation (unless it is some weird representation not yet mentioned), it is either impossible to support all strings encountered on Windows platforms, or impossible to support all strings encountered on non-Windows platforms. I am a strong proponent of APIs supporting multiple compile-time-known string representations, with transparent conversion where safe.

* ownership. Cases worth supporting: Value (but finite size), Refcounted (yes, this is important, the problem with std::string is that it was mutable), Tail or full Slice thereof, borrowed Zero-terminated or X(not) terminated, Literal (known statically allocated), and Alternating (the one that does SSO, switching between Value and Refcounted; IME it is important for Refcounted to efficiently support Literal). Note that there is no need for an immutable string to support Unique ownership. That's 8 different ownership policies, and if your Z/X implementation doesn't support recovering an optional owner, you also need to support at least one "maybe owned, maybe borrowed" type (Rust's Cow is an existing example; see the sketch after this list. This is needed for things like efficient map insertion if the key might already exist; making the insert function a template does not suffice to handle all cases). It is important that, to the extent possible, all these (immutable) strings offer the same API, and can be converted implicitly where safe (exception: legacy code might make implicit conversion to V useful, despite being technically wrong).

(there should be some Mutable string-like thing but it need not provide the API, only push/pop off the end followed by conversion to R; consider particularly the implementation of comma-separated list stringification)

* domain meaning. The language itself should support at least Format strings for printf/scanf/strftime etc. (these "should" be separate types, but if relying on the C compiler to check them for you, don't actually have to be). Common library-supported additions include XML, JSON, and SQL strings, to make injection attacks impossible at the type level. Thinking of compilers (but not limited to them), there also needs to be dedicated types for "string representing a filepath" vs "string representing a file's contents" (the web's concept of "blob URL" is informative, but suffers from overloading the string type in the first place). Importantly, it must be easy to write literals of the appropriate type and convert explicitly as needed, so there should not be any file-related APIs that take strings.

(related, it's often useful to have the concept of "single-line string" and "word" (which, among other cases, makes it possible to ); the exact definition thereof depending on context. So it may be useful to be able to tag strings as "all characters (or, separately, the first character or the last character (though "last character" is far less useful)) are one of [abc...]"; reasonably granularity being: NUL, whitespace characters individually, other C0 controls, ASCII symbols individually, digit 0, digit 1, digits 2-7, digits 8-9, letters a-f, letters A-F, letters g-z, letters G-Z, DEL, C1 controls, other latin1, U+0100 through U+07FF, U+0800 through U+FFFF excluding surrogates, low surrogates, high surrogates, and U+10000 through U+10FFFF, and illegal values U+110000 and higher (maybe splitting 31-bit from 32-bit?) (several of these are impossible under certain encodings and strictness levels). Actually supporting all of this in the compiler proper is complicated and likely to be deferred, but thinking about it informs both language and library design. Particularly, consider "partial template casting")
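
The "maybe owned, maybe borrowed" case mentioned above already exists in Rust as Cow; a minimal sketch of the pattern (the strip_nuls function is a made-up example):

    use std::borrow::Cow;

    // Borrow in the common case, allocate only when a change is needed.
    fn strip_nuls(input: &str) -> Cow<'_, str> {
        if input.contains('\0') {
            Cow::Owned(input.replace('\0', "")) // owned: had to modify
        } else {
            Cow::Borrowed(input) // borrowed: zero-copy
        }
    }

    fn main() {
        assert!(matches!(strip_nuls("clean"), Cow::Borrowed(_)));
        assert!(matches!(strip_nuls("di\0rty"), Cow::Owned(_)));
    }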


I agree with all of this. I remember way back when I was doing CORBA programming (argh!) thinking "can these stupid bastards not specify a simple string class??" To have the most commonly used data type be so complicated makes me think we have got things deeply wrong somewhere.


The Tower of Babel story seems to tell us where we went wrong.

I’m only partially kidding, because I think this is a fundamental and ancient issue of writing (information) technology. As soon as different groups went to encode their language, this problem was born.


Languages have trends too.

Weak typing/Implicit conversion was cool in the 90s.

JS and PHP would have been magnitudes better if they were typed like Python, but that's history.


I can appreciate why it was cool. "What if everything could automatically do the right thing when it interacts with other things, and you wouldn't need any of this ritual boilerplate stuff? Why do we keep needing to convert from string to number to string over and over and over it's crazy. The language should just do the right thing!"

I think it's very easy to be sympathetic to the design trend.


Yes.

Then there is the fact that PHP and JavaScript were basically domain specific languages at their inception.

PHP was a hypertext pre-processor. Perl, but stripped down for templating.

JavaScript was an alternative to Java applets. When you don't need the full power of Java, you could throw in a few small scripts.

To make things a bit less intimidating, the weak typing seemed like a good idea, and for these use-cases it might very well have been. If you don't know how to program and only want to display some fancy styled text, why would you need to understand that a number isn't a string?

But there was the pattern that "real" programming languages like Java and C/C++ were hard to learn for the average "creator", so they took JS and PHP as entry points and built up their skills from there, building bigger and bigger solutions with those tools.


There are definitely trends, but the pattern I see with early JS and PHP is they simply didn't anticipate the consequences of certain decisions.

What happens is you make a decision "locally", for some use case or program. And then you don't realize how it affects other programs.

The well known "Wat" talk about JS by Gary B. is basically a bunch of these unintended consequences.

So the problem is really to see the whole picture, and to get some experience using the language. But there's that catch 22 -- by the time you release it, it's almost set in stone.

I remember in the early 2000's Guido and the Python team would always refer to "Python 3000" -- the one chance they had to break things!


And why would they? PHP and JS weren't designed with their big use-cases in mind, even if their creators like to boast ("Always bet on JavaScript", etc.)

People saw the alternatives as much better solutions. We had JSP and Java applets; those were the future! PHP and JS were just playthings for people who couldn't grasp Java.

Python had the luck not to be conceived as a small DSL for a specific problem. So it doesn't have the issues that PHP and JS face.


If language design is so insanely hard, then how did the C# designers get so many things right?


I haven't used C# enough to comment on the language itself, but I do know it was designed (at least initially) by Anders Hejlsberg who is very experienced in language design.


If basketball is so hard, then how did Michael Jordan score so many points?


Also I pointed out an issue in my comment that C# got wrong, then fixed -- closures created in loops


I didn't say that C# is perfect.

But it feels like there is at least an order or two of magnitude of difference. Especially if you start comparing standard libraries.


“C++ is a language, not a religion.”

I swear Stroustrup said this at the beginning of his EE380 (CS Colloquium) talk at Stanford around 1986. At the time we (grad students) were all C programmers and we were sure that C was the best language evar, at least, better than Pascal, which was the language Stanford used for teaching back then. So we all wanted to see this person who had the temerity to claim that he had improved C and who had the audacity to call it “C++” of all things! Skilling Auditorium was packed. When Stroustrup took the stage, a tense hush fell over the crowd. He said the above line fairly quietly, and everyone burst out laughing, breaking the tension. The rest of the talk was a pretty straightforward explanation of C++ 1.0, and I came away fairly impressed.

Several years later at another talk I asked him to inscribe my copy of the C++ book with that quote. He claimed not to remember it, but he inscribed the book per my request anyway.

I think the quote holds up well.


I disagree so, so strongly with this. C++ programmers are one of the most religious groups around, eclipsed only recently by the rust community.

Try telling a C++ programmer RAII is not a good idea. I dare you. Or smart pointers. Or that the STL is mostly garbage...

Trying to honestly have these conversations is typically an exercise in starting a religious flame-war.


"Man annoyed no fish wants to have a debate about his position that being wet is evil. Convinced this makes him right by default"


I am a C++ programmer and if you told me those things, I would just flip a bozo bit on you and carry on coding, no religious flame-war is warranted.


Exactly. Outside of the actual merits of the feature being flamed, someone confidently presenting such an opinion without caveat is unlikely to be worth having a reasonable discussion with.


Point proven


I'm not a c++ programmer. I don't see how RAII is a bad idea, or how it's "cult like" to use it. It has obvious benefits


Every language has its zealots.

What I like about this quote is that it embodies Stroustrup’s personal attitude. He’s ultimately pragmatic. Nothing can be perfect — including C++ by his own admission. It’s aligned well with his other writings and with his commentary on the quotes page, particularly where he says that he tries not to be rude about other languages.


“The problem with many professors is that their previous occupation was student”

This one is one of the more impactful on our industry in my opinion. I have a “side gig” as an external examiner, which means that every half year I get to sit through examinations on topics that I know most of the students will never ever have to use. And that’s the best case for much of it, the worst case is all the stuff they’re going to need to unlearn to actually become good programmers in a world where much of the academic programming that is taught today is the same I learned more than 20 years ago. Like a heavy reliance on technology-based architecture, like separating your controllers and your models into two directories… Fine when you have two or three, not so fine when you have 9 million. I’m still impressed with how hard it’s been for domain-driven architecture and a focus on actual business logic to make their way into academia when it’s literally always going to be how the students are expected to work in my area of the world. The same goes for many of the OOP principles, like code sharing, which has been considered an anti-pattern around here for the better part of a decade. Not because the theory is wrong, but because it just never works out well over a period of several years and multiple developers extending and changing the shared code.

I’m honestly not sure how we can change it though. Because most of the people who govern the processes are exactly students who’ve become professors, and if I had gone that route myself, I’d probably also be teaching the flawed lessons I was taught those 20+ years ago. Hell, I would’ve still taught them a few years into my career, as it wasn’t until I “stuck around” or got to work on long-running projects that it became apparent just how wrong those theories were in practice. Again, not because the theories are wrong but because they are never applied as intended, because the world is so much less perfect than what is required for the theories to actually work. You’re not going to be at your best on a Thursday afternoon after a sleepless week of baby watching and being slightly sick, but you’re still going to write code and someone is going to approve it.


>And that’s the best case for much of it, the worst case is all the stuff they’re going to need to unlearn to actually become good programmers in a world where much of the academic programming that is taught today is the same I learned more than 20 years ago.

Not commenting on the general picture of "student to professor without any industrial experience in the middle", just this point.

At some point, you have to acquire the basics. Learning to articulate thoughts into algorithms is something to be acquired at some point to work in CS, and this just didn’t change over the last 20 years. That’s the whole point of Knuth’s (M)MIX actually.

Just like learning to use the alphabet won’t be enough to write all the prose you will ever need; but then, alphabets don’t change every six months.


I don’t disagree that some of the CS education is fine. A lot of the algorithm/math curriculum is fine even though it’s very old. I’m talking more about the systems design, systems architecture and project management, which are sorely outdated compared to what most of the students will meet in the real world.

It wasn’t when I started, but in the 20 years since then, things have just evolved so much. Nobody really does OOP around here anymore. Parts of it, sure, but for the most part functions live on their own and “classes” are now mainly used as state stores for variables, and that’s in languages that don’t have a real alternative to “state store” because people vastly prefer types that can’t have functions to protect themselves from bad habits. But fresh-from-CS hires come out expecting to build abstract classes that are inherited, and then they meet the culture shock, and sometimes some of them don’t even know you can write functions without putting them inside an object. They come out with the expectation that “agile good, waterfall bad” but modern project and contract management has long since realised that “pure agile” just doesn’t work unless you’re in a specified team in a massive tech company. Because in smaller companies nobody is going to sign a contract that’s based on agile promises, and anyone who uses Scrum by the Book has basically gone bankrupt because they got outcompeted by more adaptable ways of working. It’s not that modern things aren’t inspired by what came before, and there is even a lot of research and good books available on things like team-topologies and how to work as fast delivery teams, but it’s just not what’s being taught in traditional CS around here.


I don't remember the exact quote, but I saw a talk where he said something like "People want big/verbose/explicit syntax for the language features they don't understand, and small/terse/implicit syntax for the language features they do understand." It made me realize that many of my language design opinions at that time were a matter of personal preference.


You might be referring to this, which I think is even more accurate. Dave Herman quotes him:

https://www.thefeedbackloop.xyz/stroustrups-rule-and-layerin...

One of my favorite insights about syntax design appeared in a retrospective on C++ by Bjarne Stroustrup:

For new features, people insist on LOUD explicit syntax.

For established features, people want terse notation.

I call this Stroustrup's Rule. Part of what I love about his observation is that it acknowledges that design takes place over time, and that the audience it addresses evolves. Software is for people and people grow.

Side note: Dave Herman is Rust's most unrecognized contributor: https://brson.github.io/2021/05/02/rusts-most-unrecognized-c...

I often notice that the actual Rust authors and designers respect and are influenced by C++ very much. (Niko M is another C++ fan)

It's only the randoms online that like to start C++ vs. Rust arguments.

Another thing to note is that the Mozilla Rust is MUCH closer to C++ than Graydon's Rust was. Graydon's Rust was not at all about zero-cost abstraction, the shared motto of C++ and Rust.


Agree. This is also true of UX design in general.


"The official mascot for C++ is an obese, diseased rat named Keith, whose hind leg is missing because it was blown off. The above image is a contemporary version drawn by Richard Stallman."

I have been unable to determine the provenance of this quote. Source and image: https://ifunny.co/picture/history-the-official-mascot-for-c-...


The first place I ever saw that was here:

https://en.uncyclopedia.co/wiki/C%2B%2B

Whole article is worth reading, lots of good laughs in there!


"We didn't have time for that." (Bjarne Stroustrup, before an invited talk, Cambridge Computer Lab)

(in response to my complaint to him that hash tables again hadn't been included in the most recent standard at the time.)


Yeah, so many programmers had to find the time to implement them or find an STL library with the stuff.

The really important stuff was printing with "<<", or whatever ...


"There are only two kinds of languages: the ones people complain about and the ones nobody uses".

He is right on this one. Pretty much in every discussion about programming languages, people write about how good Rust is and complain about how bad C++ is, but the reality is that C++ is one of the most used languages in the world.

This quote could be a very harsh reply to Rust vs C++.


I came to the conclusion that the inverse is true: people tend to love languages they don't use.

I used to love Lisp and Racket. But after writing some real programs with other people I realized the idea that every codebase has its own DSL and languages is actually stupid, doesn't scale, and is hard to maintain. Came to hate Haskell for the very same reason. Every Haskell programmer thinks he's more clever than others, so he decides on 30/40 language extensions and you have something that simply isn't Haskell.

People should not program programming languages. There are use cases for this style of programming, but they aren't how general-purpose programming should look.


> But after writing some real programs with other people I realized the idea that every codebase has its own DSL and languages is actually stupid, doesn't scale, and is hard to maintain.

Code bases can use DSLs. DSLs should be used judiciously. For example, if you need an LALR parser, you probably wouldn't code it all by hand, and you'd probably use a DSL.

Just like we use libraries judiciously in many languages. (Well, we should, but casually pulling in a hundred libraries is more a Python/JS/Rust convention, than a Lisp family one.)

> Came to hate Haskell for the very same reason. Every Haskell programmer thinks he's more clever than others, so he decides on 30/40 language extensions and you have something that simply isn't Haskell.

Is this a problem when Haskell is used professionally by software engineering teams? Or are you speaking of code by academics/students, who don't have a lot of experience on professional software engineering teams? Or by hobbyists, who are (rightly) indulging, and writing code however they want (more power to them), not writing how they have to at their day job?


Yes but: u/epolanski is referring to the abuse of metaprogramming, such that every org, every project creates its own bespoke mutant creole which remains C++ (or Haskell, Forth, LISP) in name only.

Java's founders wisely omitted metaprogramming. But memories are short. And chaos always finds a way. So now Java has its own medley of obfuscation strategies. Annotations, aspects, inversion of control, dependency injection, logging frameworks, etc.


> Is this a problem when Haskell is used professionally by software engineering teams?

Yes, large segments of Haskell culture love complex language features and wild abstractions. Unfortunately that mindset seeps into everyday code because key libraries depend on the complex features.

The Simple Haskell movement (https://www.simplehaskell.org/) tried to develop a pragmatic culture but it fizzled. Back when Simple Haskell was active, it was fun watching advocates of complex, abstract Haskell argue against straightforward programming that normal developers can understand.


> it was fun watching advocates of complex, abstract Haskell argue against straightforward programming that normal developers can understand

I don't think that's what they were arguing against. I think they were arguing against equating straightforward programming with basic language features.


That's true, much of the technical discourse haggled over favorite language features. But that missed the much bigger point: If we use complex, abstract Haskell for practical apps, what are the consequences of alienating the vast majority of professional developers?

Unfortunately the answer is clear: Haskell's arcane reputation has solidified.


Well, it's a good question. I'm in favour of using simple[+], abstract Haskell for practical apps, welcoming the vast majority of professional developers!

[+] "simple" in that the solution to a problem is expressed in a simple way, not that it restricts itself to a particular subset of the language.


No, it could be a very stupid reply to Rust vs C++ since people do write in Rust. Bigger programs get written in it all the time and - what a surprise - people who use it have things they are annoyed about, which is why it gets improved.

To me this is one of the most stupid things he's ever uttered, on one hand, and one of the most useful, on the other. Because it can be used to remind people that there are always trade-offs, which is a good thing if a discussion gets a bit too heated into "I am right!" "No, I am right!" territory, but it can also be used, and most often is, as a very shallow and arrogant dismissal (funny enough, especially by C++ zealots, IME) of someone trying to fix some things. As if trying to do things better is somehow an affront to their greatness.


FWIW, I think Bjarne and other C++ magnates have a plan for eating Rust's lunch by allowing for "safe"/"unsafe" within C++.


I mean, sure, Bjarne calls his proposed way forward "safety profiles".

Most fundamentally, this completely misunderstands the nature of the problem. This is a technical change, but the most important problem C++ has is cultural. So, they're not even addressing the right problem. In his original talk about this Bjarne even repeatedly describes his approach as a "strategy" which practically begged someone to say "Culture Eats Strategy For Breakfast" but no-one did.

But let's imagine that C++ culture magically is fixed by pixies or whatever, leaving only technical problems, which safety profiles could address. The next big problem is that Rust's safety is compositional. The many different kinds of "safety" delivered via Bjarne's "safety profiles" don't compose, safety A + safety B = no safety. So this makes it largely useless from a software engineering point of view.

Once you've cleared these two fundamental obstacles you're back to more mundane limitations like timing. Rust 1.0 shipped in 2015. There are teams out there already with many years of Rust experience in practice. But Bjarne's "Safety Profiles" aren't available in your C++ compiler today, and won't be for years to come, perhaps many years. Are you confident that starting this far behind the pack will be OK?


> but the most important problem C++ has is cultural

That part, Bjarne and others have been working on for at least two decades I think. There's a lot of indoctrination/education about "staying safe" so to speak, through better coding practices, extended standard library facilities, static analysis and so forth. And from the little I can see, this is seeping into the C++ culture.

> The many different kinds of "safety" delivered via Bjarne's "safety profiles" don't compose

I'm not familiar enough with the details. But, about Rust - I was under the (possibly wrong) impression that you have the binary of either safe or unsafe: https://doc.rust-lang.org/std/keyword.unsafe.html

> Are you confident that starting this far behind the pack will be OK?

You're counting the wrong thing IMHO. If you count software systems of note, or add up their sizes; or count developers; or count organizations; or add up turnover; etc. - it's Python, Java, C, C++ in some sort of order that are at the head of the pack. Rust has certain benefits which make it attractive to jump onto its bandwagon - but it needs a lot of bandwagon-jumping to take the lead. If you can achieve more or less the same thing by just fiddling with your C++ development environment, then people might just not switch.


> I was under the (possibly wrong) impression that you have the binary of either safe or unsafe

No, that completely misunderstands what the unsafe keyword is for. Rust has some very strict rules to preserve safety and everybody has to obey those rules. However, in safe Rust you don't need to think about that - or even know what the rules are exactly - because the safe Rust subset always obeys the rules. In unsafe Rust you are responsible for upholding all the rules, which means you need to properly understand what the rules are and be careful to do your job properly, the same responsibility you have in every single line of C++.

Another way to think about it is that "unsafe" means "trust me, this is OK". You may find that a more helpful way to understand what unsafe blocks do, but the reason I don't prefer it is that people (including Herb and Bjarne) seem to imagine we're just "switching off" the safety rules, and that's not what happens in any way, semantically, syntactically or de facto. Suppose I have a group of three integers (indexes start at zero): let group = [1, 2, 3]; println!("{}", group[5]); -- that won't work, Rust will reject that because it's a bounds miss†. But if we replace that println! with println!("{}", unsafe { group[5] }); the Rust compiler doesn't shut up and let us do it; in fact it complains even more: in addition to saying we can't do an out-of-bounds access, it also warns that this unsafe block is futile, unsafe doesn't make bounds misses magically OK.

† We can tell the Rust compiler we insist on seeing this through to the bitter end, it will emit the program, and then when this code executes there's a bounds miss and it panics. The default is to reject programs which always just panic when executed.
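
A compilable sketch of the example above (the offending lines are commented out, with the diagnostics rustc gives if you restore them):

    fn main() {
        let group = [1, 2, 3];
        // error: this operation will panic at runtime
        //        (the deny-by-default `unconditional_panic` lint)
        // println!("{}", group[5]);
        // `unsafe` doesn't help; it adds an `unused_unsafe` warning,
        // because indexing is already safe code:
        // println!("{}", unsafe { group[5] });
        println!("{}", group[2]); // in-bounds access is fine
    }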


If the culture wanted a safety strategy in the compiler, wouldn't it need to be standardized?

>There are teams out there already with many years of Rust experience in practice.

They routinely use the nightly version, no?


> They routinely use nightly version, no?

Depends on what you mean by "routinely." In 2020, the last year that the annual survey published these numbers, 8.7% of Rust programmers used exclusively nightly. It has been dropping every year.

Some people do occasionally use nightly; at my job, most code is on stable, but there are a few projects that do currently require a couple of nightly specific things.


I'm skeptical of the value of adding on "safe / unsafe" to C++ at this point. It's a bit like adding type annotations to Python. Better than nothing I suppose, but there's 30+ years of C/C++ that doesn't and will never be opted-in to these features, and the value declines rapidly when only 10% of the codebase (including dependencies) can be considered "safe" vs. when 99.9% of it can be.

https://cor3ntin.github.io/posts/safety/


> I'm skeptical of the value of adding on "safe / unsafe" to C++ at this point.

Me too. I went down the road of a safer C/C++ a decade ago. So have many others. It's not impossible. But backwards compatibility is really tough. The existing attempts at a better C or C++ did not work out.

After three years of Rust, I have some misgivings. Rust does many things right, and the rigor does get you reliable programs if you stick to safe Rust, which I do. But there are problems.

- The single-ownership thing is useful but very restrictive. Lack of back references is a serious problem. Yet, so often, you want to have something that talks to its owner. Refcount everything (see the Rc/Weak sketch below), and you've re-invented Python and moved the problem to run time. If you have to use handles and hashes, you've lost the value that Rust added.

Something like statically-analyzed safe weak back references is needed, and that's a hard theoretical problem. Think of this as working like single strong forward references plus weak back references that can become strong only temporarily. Compile-time checking like the borrow checker would enforce rules that eliminate the need for reference counts. This is probably possible, but hard to do in a way that is not too restrictive to be useful. Someone has to work through the common design patterns for trees, lists you can modify in the middle, and such. Good PhD topic for someone.

- Traits turn out to be useful for only a limited class of problems. Traits are not a substitute for classes. Converting a class-oriented program to Rust is very tough.

Once new Rust programmers get past the syntax, those two issues are the big ones that prevent conversion of existing programs to safe Rust. There's a big impedance mismatch. You can't just convert; you have to redesign. Which is hard.
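
For reference, here's a minimal sketch of the refcount-everything workaround mentioned above, with Rc for ownership and Weak for the back reference (the Node type is a made-up example; this is the pattern from the Rust book):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        parent: RefCell<Weak<Node>>,      // back reference: no cycle
        children: RefCell<Vec<Rc<Node>>>, // forward ownership
    }

    fn main() {
        let parent = Rc::new(Node {
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(vec![]),
        });
        let child = Rc::new(Node {
            parent: RefCell::new(Rc::downgrade(&parent)),
            children: RefCell::new(vec![]),
        });
        parent.children.borrow_mut().push(Rc::clone(&child));
        // The back reference becomes strong only temporarily, and only
        // while the owner is still alive:
        assert!(child.parent.borrow().upgrade().is_some());
    }

All the checks happen at run time (Rc counts, RefCell borrows), which is exactly the cost being pointed at.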


Sounds like something they could try. We'll see what happens. It's not like I expect C++ to "roll over" and just declare that they don't care anymore if people use the language.


Profiles will help in the domains where C++ is going to stay for a long time, like HPC, GPUs, game development, GCC/LLVM.

However it is kind of late in domains where Rust, or other safer languages are already being used. They won't rewrite back into C++.


> domains where Rust ... [is] already being used

The point is that Rust usage is still quite limited. This is a bit like C++ and D, two decades back; or perhaps even Scala and Java. The analogy isn't perfect, but the point is you had a language with a lot of potential usability-domain overlap which addressed some or many pain points and failures of the older, more popular language - but the older language embraced some of the alternative ideas, adopted them in a more-or-less compatible way, and made it not-attractive-enough to switch. So the newer languages lost momentum, and at least in the case of D - stopped gaining users and eventually sank into oblivion.

> other safer languages ... won't rewrite back into C++.

I mostly agree. Except... that some safe languages, like Java, pay for safety with a lot of overhead. And Dennard scaling is over. So, over time, there is some pressure to replace Java, or maybe C# code with something closer-to-the-metal. But we'll see.


D suffered from a lack of focus and company sponsorship, which is why it hardly mattered to C++ folks.

The domains that Java and .NET took away from C++ aren't coming back to C++, even if now they feel the pressure to have AOT and value types with better low level coding primitives.

Additionally Java and .NET applications that get rewritten, most likely will be in one of those C++ wannabe successors, even if C++ is part of the equation by using GCC/LLVM backends.


The thing is, that the "C++ successor language" sometimes ends up being C++ itself, a decade or two later.

As an example, take this question from 2008:

"How do I tokenize a string in C++?" https://stackoverflow.com/q/53849/1593077

A very popular, straightforward, and traditional-style answer to this question, given early on, was:

  #include <string>
  #include <vector>
  
  std::vector<std::string> split(const char *str, char c = ' ') {
      std::vector<std::string> result;
  
      do {
          const char *begin = str;
          while(*str != c && *str) { str++; }
          result.push_back(std::string(begin, str));
      } while (0 != *str++);
      return result;
  }
but a recent answer is:

   auto results = str | ranges::views::tokenize(" ",1);
which is in lazy-evaluated functional style, and doesn't even directly utilize the fugly standard C++ string class. This example is of course a bit simplistic (since the main change exhibited here is in the standard library), but the point is that the language has demonstrated strong abilities to reconfigure how users tend to write code. But perhaps I'm giving it more credit than it's due, and this won't come to pass.


If you think nobody complains about Rust then you haven't visited HN much recently ;). Heck, Bjarne Stroustrup himself has recently taken to complaining about Rust in papers and talks (though most recently he's taken to referring to it without naming names).


There's a noticeable difference between what you get from somebody like Barry Revzin, who understands C++ and Rust well and has very specific critiques, and from people like Bjarne or Herb, who seem to be relying on superficial impressions at best.


I think you would have a very hard time defending the claim that 'nobody uses Rust,' given its current adoption trend in major technology companies like Microsoft and its integration in software projects like the Linux kernel.


A language's usage doesn't correlate with its quality.

I wish people would stop spamming that quote in discussions here on this site as a shallow dismissal every time someone posts a critique.


You can already program. When you program a hobby/research project in the language you want to learn (better) you program /with/ the grain of the language. It's a nice experience.

Move over to implementing someone else's hard requirements where you have to make that happen, with time pressure - you find yourself going against the grain of the language by necessity and start describing the difficulties, sometimes colorfully.

People waxing lyrical about a language (this year Haskell or Rust, for example) who don't have a list of complaints are in the first category.


Ecosystem effect.


A few of my favourites:

> An organization that treats its programmers as morons will soon have programmers that are willing and able to act like morons only.

In a broader sense what it implies is that companies should not make a Programmer's job onerous (in any dimension) to the point that the "joy" is gone from the doing of the activity itself. Thus Reports/Meetings/Processes/Testing/etc. should all be modulated/balanced based on needs/project and not because it is the latest fad. Managers should really really heed this.

> Far too often, 'software engineering' is neither engineering nor about software

This is a follow-on from the above.

> Any problem in computer science can be solved with another layer of indirection.

I have also heard this attributed to Andy Koenig.

> My ideal of program design is to represent the concepts of the application domain directly in code. That way, if you understand the application domain, you understand the code and vice versa.

This is how I learnt the techniques of designing in C++ from the early days, i.e. from "Barton and Nackman's Scientific and Engineering C++" and "James Coplien's Multi-paradigm Design for C++". This is fundamental to problem-solving itself, and hence in any job it is of utmost importance to understand the domain, i.e. the Why and the What of what is being done rather than the How.

> 'Legacy code' often differs from its suggested alternative by actually working and scaling.

Very very true. This is why I dismiss people who come in and start saying "everything must be rewritten" without spending time studying and learning about the existing system.


Section 10.7 of "The Design and Evolution of C++" has some good ones from the early 90s:

"When (not if) garbage collection becomes available, we will have two ways of writing C++ programs."

"I suspect that a garbage collection scheme can be concocted that will work with (almost) every legal C++ program, but work even better when no unsafe operations are used."

"I am under no illusion that building an acceptable garbage collection mechanism for C++ will be easy - I just don't think it is impossible. Consequently, given the number of people looking at the problem several solutions will soon emerge."


Actually, these quotes have become outdated by the language itself! ... in favor of this quote:

> I don't like garbage. I don't like littering. My ideal is to eliminate the need for a garbage collector by not producing any garbage. That is now possible.

See this question:

https://stackoverflow.com/q/147130/1593077

and this answer (of mine):

https://stackoverflow.com/a/48046118/1593077


For cyclic object graphs you build your own GC, whether you call it that or not, or you leak memory.
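
A minimal Rust sketch of the same point (the Node type is a made-up example): plain reference counting, shared_ptr in C++ or Rc here, cannot reclaim a cycle.

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        *a.next.borrow_mut() = Some(Rc::clone(&b)); // a -> b -> a
        assert_eq!(Rc::strong_count(&a), 2);
        assert_eq!(Rc::strong_count(&b), 2);
        // When `a` and `b` go out of scope, each count drops to 1,
        // never 0: the cycle is leaked unless you break it yourself.
    }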


Well, we now have garbage collection. Just not tracing garbage collection.


Tracing GC is there in C++/CLI, Unreal C++, Boehm and V8's Oilpan.


But not on standard C++. And C++/CLI doesn't collect garbage of native objects.


Depends on how the C++/CLI was compiled, and how that allocation was done. TIMTOWTDI

The biggest issue with current ISO C++ is that the standard designs some features on paper, instead of adopting existing extensions as standard features, as it used to.

Hence when paper-designed features turn out to be wrongly designed, no one picks them up.


> "When (not if) automatic garbage collection becomes part of C++, it will be optional"

Technically, his prediction (the implied inevitability of GC) came true; in practice, it did not.

Optional GC was added, never implemented, never used, and removed again.


Mostly because its design never took into consideration the needs of Unreal C++, C++/CLI, C++/CX, Boehm and Oilpan.

So the major C++ GC implementations ignored what was there and kept using their own solutions.


Programming: Principles and Practice Using C++ is one of the few must-read programming books in my opinion.

Even if you're a C++ expert, have 20 years in industry, etc. it's still an amazing example of beautiful technical writing.


I'll see your Bjarne Stroustrup, and raise you Alan Kay:

"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind."

https://en.wikiquote.org/wiki/Alan_Kay


Bjarne didn't have Alan in mind when designing C++ either.


If he had, we would all be better off.


That's why so much more real-world software was written in Smalltalk than in C++.

Wait...


Made up the term, but didn't invent object orientation. The real inventors were more generous.


Fun fact: The Wright brothers didn't have modern airliners in mind when they invented airplanes.


It's almost as if language and meaning evolve in the minds of people in strange ways.


A selective quotation which misses the point.

"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind. The important thing here is that I have many of the same feelings about Smalltalk."


I did not know this, or had forgotten; thank you for saying so! There's applause and laughter after the short version of the quote, so it struck a nerve, and not without reason.

There are comments on Smalltalk at various places in the talk, but after the quote, all he says is, ..."There is one really important thing about Smalltalk, and some of languages like it, that we should pay really really close attention to, but it has almost nothing to do with the syntax and accumulated superclass library," ....

Quote is here: https://www.youtube.com/watch?v=oKg1hTOQXoY&t=634s


C++ is my favourite programming language precisely because one of its main designers is so sensible.


That was true until C++98 came to be. Nowadays, regardless of how sensible Bjarne Stroustrup happens to be, his opinion is one vote among 300 or so.

Remember the Vasa paper happened for a reason.


Of course, but he is still very influential. Better than languages that have no standard at all.


Yeah, mine too, until Rust. But part of that sensibleness was making it backwards compatible with C, which entails a mountain of inherited design mistakes.

I wonder what his blank slate language would have looked like.


C++ is not backwards compatible with C. It's mostly compatible, but it's not completely compatible like, say, Objective-C, which is a strict superset of C.


They have diverged slightly in the years since C++ was introduced. But at the time the incompatibilities were extremely small. Even today they are pretty small and GCC/Clang will happily compile C constructs (e.g. designated initialisers, VLAs) with just a warning.

Actually you need `-Wpedantic` to even get a warning for both of those. (And `-std=c++17` since designated initialisers are actually in C++20.)
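
For example (a sketch of the divergence; exact diagnostics vary by compiler and flags):

    /* C99 designated initialisers; standard C++ only since C++20 */
    struct point { int x, y; };

    struct point a = { .x = 1, .y = 2 };  /* OK in C; OK in C++20 (in order) */
    struct point b = { .y = 2, .x = 1 };  /* OK in C; ill-formed in C++20 --
                                             designators must follow
                                             declaration order */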


...at least as long as you don't go anywhere near the atomic mess.


> C++ is not backwards compatible with C.

Yeah, it is really ironic. C++ took on all the warts of C in order to be compatible with it, and then lost its compatibility with C but kept the warts.


Honestly, no one cares about splitting that hair. Most valid C programs will compile as C++, and believe it or not, binary logic is not the only logic humans are capable of.


Almost every C program does `int *c = malloc(sizeof(int) * 10);` or similar, which has always been invalid C++.
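
That's because C++ dropped C's implicit conversion from void* to other pointer types. A sketch:

    #include <stdlib.h>

    int main(void) {
        int *a = malloc(sizeof(int) * 10);        /* OK in C; error in C++:
                                                     no implicit void* -> int* */
        int *b = (int *)malloc(sizeof(int) * 10); /* the cast makes it legal
                                                     in both languages */
        free(a);
        free(b);
        return 0;
    }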


I believe most compilers have switches to allow this. The real issue you might run into that can't be worked around easily is if the C source uses a C++ keyword as an identifier.


    #define new new_  /* rename a C identifier that collides with a C++ keyword */


Actually, I reckon that most, if not all, C programs will not compile as C++ without at least some (possibly not difficult) modifications.


My only qualm is the number of genuine C-only compilers in use vs. the number of embedded-system companies/projects/codebases in the world.

The subset of C features that are incompatible with C++ is quite small, and most of the features are a little out of the way IMO.

Of the usual suspects (GCC, MSVC, Clang), your C codebase is probably entirely compatible with C++, as each of these has varying levels of compliance with the C and C++ standards.


> Of the usual suspects (GCC, MSVC, Clang), your C codebase is probably entirely compatible with C++, as each of these has varying levels of compliance with the C and C++ standards.

Yep, leave it to the technical crowd to try and split hairs too fine.



That's a very nice find. I love that kind of candor about the motivations behind why software is written the way it is. ;)

I probably have a few confessions to make myself...

But the royalties are just too good.


I don't find it at all insightful or humorous.

(I hope you don't think it's genuine)


> I don't find it at all insightful or humorous.

Humor isn't an absolute. I find it funny because if Stroustrup had tried, he probably couldn't have made a language with more footguns; there are so many things in there that require a lot of discipline not to use (or overuse). At the same time it was a supremely useful language that allowed for much better abstraction than C ever did, but it definitely came with a price.
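
(To pick one classic footgun of my own choosing, not from the parody: object slicing.)

    #include <iostream>

    struct Animal {
        virtual const char* speak() const { return "..."; }
        virtual ~Animal() = default;
    };
    struct Dog : Animal {
        const char* speak() const override { return "woof"; }
    };

    int main() {
        Dog d;
        Animal a = d;                    // slices: the Dog part is cut off
        std::cout << a.speak() << '\n';  // prints "...", not "woof"
    }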

> (I hope you don't think it's genuine)

Wait, what, that wasn't real???


Oh absolutely not an absolute, but there are qualities that make or break good humour.

On the footgun side, I dunno; the footgun surface probably widened and/or deepened over time. If you read e.g. Effective C++ and turned on all warnings, you were much better off than in plain C. OTOH it is of course a more complex language.

A lot of the early criticism of C++ was directed at the OO features, which have fallen well out of favour.


> not real?

Real, genuine Eric S. Raymond humor, yes.


I've been a C++ user since 'cfront' and he simply has a point. If Stroustrup had been a little bit more disciplined about what features to add and which to leave out C++ would have been much more manageable. I'm not really afraid of many things but legacy C++ codebases I try very hard to stay away from. With some luck the original author(s) went overboard with all of the stuff that's available and never got around to refactoring any of it.


C compatibility is both the best and worst thing about C++


Haha, classic :D I remember reading it back in the day...


Shooting yourself in the foot in various programming languages.

http://www.toodarkpark.org/computers/humor/shoot-self-in-foo...


My favorite:

  UNIX:

    % ls
    foot.c foot.h foot.o toe.c toe.o
    % rm * .o
    rm: .o: No such file or directory
    % ls
    %


Ha ha. And that's not even just a newbie mistake. It's possible even for an expert to make, with just one accidental space keypress between the * and the .o in the 2nd command above. Such is Unix.


"rm -I" provides some protection.


"Nobody should call themselves a professional if they only know one language".

Presumably he means programming languages, but this may apply more generally.


That's a very bad take.

Concepts are above languages.


I would respectfully beg to differ.

Getting your hands dirty, actively engaged in the practice of building software in multiple languages, is needed to build a good, diverse set of experience around what works and why.

Reading books or, worse, opinions on HN just isn't sufficient to claim a robust and well-rounded resume.


Nowadays languages are huge enough to cover various paradigms and countless concepts.

That should be enough, right?


> "There are only two kinds of languages: the ones people complain about and the ones nobody uses". Yes. Again, I very much doubt that the sentiment is original. Of course, all "there are only two" quotes have to be taken with a grain of salt.

True for many things. There are only two kinds of companies: the ones people complain about and the ones with no market power.


> "People who think they know everything really annoy those of us who know we don't"

I have read some of these quotes before, but not this one. Very true, especially for designing programming languages (where many people think they're experts), but not only there...


> "Java is to JavaScript as ham is to hamster". No. Never. Nothing even close to that. I try hard not to be rude about other languages.

To be fair, it's not exactly a rude statement.


Bjarne Stroustrup, underrated hero of many facets of society, IMO.


Kinda reminds me of a page on stallman.org that tells potential hosts what Stallman likes and doesn't like... if only I could find that link... hmm.


So many brilliant quotes here, thanks for posting



