I was expecting someone would say that. There are very specific situations where a language has an advantage over an exceptionally "bad" language in the same domain, such as Rust vs. C or TypeScript vs. JS (for which we also have evidence of a ~15% improvement). But that doesn't mean that the very concept of "new types" generally has a big impact. E.g., it's easier to write more correct software in Rust than in C, but probably not than in Java or even (the untyped) Clojure.

Rust is a special case because its main contribution is using typing rules (ownership and borrowing) to solve a well-known, harmful problem that is particular to C and C++: memory unsafety.



Clojure is strongly typed and dynamically typed, not "untyped". Much of its core behavior is built on interfaces like ISeq.
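To make that concrete, here is a small REPL sketch (the ISeq check is just one example of those interfaces):

    ;; Values carry their types at runtime, and mismatches fail loudly
    ;; rather than being silently coerced:
    (+ 1 "2")
    ;; => throws ClassCastException (String cannot be cast to Number)

    ;; Much of the core library is written against interfaces such as
    ;; clojure.lang.ISeq, which is why first/rest work uniformly on
    ;; lists, vectors, strings, lazy seqs, etc.:
    (instance? clojure.lang.ISeq (seq [1 2 3]))
    ;; => true
    (first "abc")
    ;; => \a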

It's not uncommon to use a spec library like clojure.spec or malli, whose benefits overlap with those of static typing. I'm not sure whether there is a measured improvement from their use, but they offer other advantages, such as facilitating generative testing, that do help one write more correct software.
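For example, a minimal clojure.spec sketch (the function and spec are made up for illustration, and stest/check assumes org.clojure/test.check is on the classpath):

    (require '[clojure.spec.alpha :as s]
             '[clojure.spec.test.alpha :as stest])

    ;; Hypothetical function, just to illustrate the workflow.
    (defn clamp [lo hi x]
      (max lo (min hi x)))

    ;; The spec doubles as documentation, a runtime contract, and a
    ;; property for generative testing.
    (s/fdef clamp
      :args (s/and (s/cat :lo int? :hi int? :x int?)
                   #(<= (:lo %) (:hi %)))
      :ret int?
      :fn #(<= (-> % :args :lo) (:ret %) (-> % :args :hi)))

    ;; Generates conforming inputs and checks :ret and :fn against them.
    (stest/check `clamp)

A static type signature would cover the int? parts here, but not the ordering property that the generative test exercises.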


The term "untyped" means anything that isn't statically typed (or just "typed"). This is because (static) types and dynamic "types" are two very different objects, and only the former is called "types" in programming language theory and formal languages in general.

I am well aware of clojure.spec; it, along with many other techniques employed in development, is probably among the reasons why types don't actually seem to have a big relative impact on correctness.


Thanks for the explanation. What are some of the other techniques?



