Is there a resource that compares Lisps (expressiveness, limitations, available special forms, ...)? I often read about Lisp-1 and Lisp-2, and about Clojure being called a "Lisp-1.5" (because of the callable keywords, IIRC).
Dabbling in LLMs, I think Lisps could be a very interesting format for exposing tools to LLMs, i.e. prompting an LLM to craft programs in a Lisp and then processing those programs (by that I mean parsing, correcting, analyzing and evaluating them) within the system to achieve the user's goal.
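The parsing step of that idea can be sketched with a toy s-expression reader; everything here (names, token handling) is illustrative, not an existing library:

```javascript
// Toy s-expression reader: turns "(add 1 (mul 2 3))" into nested arrays,
// a convenient shape for validating and evaluating LLM-generated programs.
function parseSexpr(src) {
  // Pad parens with spaces, then split on whitespace to get tokens.
  const tokens = src.replace(/\(/g, " ( ").replace(/\)/g, " ) ")
                    .trim().split(/\s+/);
  let pos = 0;
  function read() {
    const tok = tokens[pos++];
    if (tok === "(") {
      const list = [];
      while (tokens[pos] !== ")") {
        if (pos >= tokens.length) throw new Error("unbalanced parens");
        list.push(read());
      }
      pos++; // consume ")"
      return list;
    }
    // Atoms: numbers become numbers, everything else stays a symbol string.
    const n = Number(tok);
    return Number.isNaN(n) ? tok : n;
  }
  return read();
}

console.log(parseSexpr("(add 1 (mul 2 3))")); // ["add", 1, ["mul", 2, 3]]
```

With the program in this shape, the "correcting / analyzing" steps become ordinary tree walks before handing it to an evaluator.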
Do you mean Lisp-1 and Lisp-2 as in the number of namespaces?
https://dreamsongs.com/Separation.html - Goes into depth on the topic including pros and cons of each in the context of Common Lisp standardization at the time (ultimately arguing in favor of Lisp-2 for Common Lisp on grounds of practicality, but not arguing strictly for either in the future).
Common Lisp was, more or less, a unification of the various Lisps (not Scheme) that had developed along some path starting from Lisp 1.5, some more direct than others. They were all Lisp-2s because they all kept the Lisp 1.5 separation between functions and values. Scheme is a Lisp-1, meaning it unifies the two namespaces. The main practical difference you'll run into is that in CL (and related Lisps) you need to use `funcall` to call a function bound to a variable, where in Scheme you can use it directly in the head position of an s-expr:
(let ((f ...))      ;; f bound to something evaluating to a function
  (f ...))          ;; Scheme: call it directly

(let ((f ...))
  (funcall f ...))  ;; Common Lisp: go through funcall
The paper on microKanren [1] is imho the most approachable piece outside "The Reasoned Schemer" [2]. The thesis it is based on is also interesting, but it's a thicker beast. Looking at stuff from the Clojure world (clojure.core.logic and this talk [3]) is also worthwhile, especially from a dev perspective. From that angle, I found this talk [4] especially enlightening on how to build a database / query engine, with concrete applications of miniKanren / Datalog.
Interesting project, showing how "easy" it is to host[1] another language within Clojure. Like others, I admit I see little value in it for myself, or as a selling point for beginners. For experienced engineers, however, as I wrote above, it should serve as a case study in how to hook everything together to produce a working tool. Then it is a matter of seeing if and when there is an opportunity to reuse such techniques to build DSLs that compile to native Clojure.
[1] Clojure is famously designed to be hosted on another platform, but like all Lisps it is also a valid (and productive) hosting target in its own right. Note that many of the APIs in the Clojure ecosystem rely on one or more of:
- literal data structures
- macros reusing native Clojure patterns / forms such as `let`, `def` ...
- the shape of native (i.e. defined within clojure.core) Clojure APIs, such as `get-in`, `with`...
I would be very interested to read about the limits of Conjure. In my mind it is a fabulous tool; combined with vim-sexp it makes writing Clojure code very productive and a pleasure to use.
I agree. I used ChatGPT to bootstrap a coding session in which I was using a library to describe data structures. My steps were:
- ask for documentation and examples relevant to the task ahead.
- produce some examples
- tweak the example using domain-like data
- propose a solution using the lib as is
- ask for a solution using a different convention than the classic one
I kind of think of this as a REPL assistant.
Very interested in other ways to use it (and in the use cases; anecdotally, I also used it to try to learn a little bit of COBOL using the same iterative approach).
Dot method calls vs plain function calls are just two different ways to call a function, binding `this` to a different value, possibly `undefined` in a strict context (which is the default in ES6 modules). The two are not incompatible, and libraries like jQuery make wonderful use of the first form (the dot call). The fact is that everything is mutable in JavaScript, and one of its main APIs (the DOM) is actually built around this concept.
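A minimal sketch of that difference (the function and object names are made up for illustration):

```javascript
"use strict";

function whoAmI() {
  // In strict mode a plain call leaves `this` undefined; in sloppy mode
  // it falls back to globalThis. Treat both as "no receiver".
  return (this === undefined || this === globalThis) ? "plain call" : this.name;
}

const obj = { name: "obj", whoAmI };

console.log(whoAmI());          // plain call: no receiver bound
console.log(obj.whoAmI());      // dot call: `this` is bound to obj
console.log(whoAmI.call(obj));  // explicit binding via Function.prototype.call
```

The same function object is reachable both ways; only the call syntax decides what `this` is.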
I really fail to see how these kinds of changes bring anything of value to the language, apart from creating complexity where there is already plenty: some codebases will try to use this before it's ironed out, with different / changing / competing transpiler implementations while it is only Stage 2. What happened with decorators (see babel-legacy-decorators) should be a warning against enthusiasm for these kinds of things, and reminds me of the pitfalls of macros in the Lisp world...
I am not saying that you should stick with an idiom till the end of time, but changes should try to lower that complexity while (in the case of ECMAScript) remaining backward compatible so as not to break existing code.
Btw, I love RxJS and its variants; Ben Lesh's talks (as well as those by Matthew Podwysocki, Andre Staltz, etc.) from around 2015 onwards were one of the reasons I got into JS professionally.
The pipe operator/function is useful and appropriate when all you are doing is stringing along some data through a series of transformations and where introducing a named variable would be more confusing and take away from the context of what is actually important.
Methods are nice, but suffer from the fact that adding a custom method to an object is more involved and dangerous than just creating a new function. However, "f(g(h(x)))" is not very readable (at least for English speakers): while we read left to right, the invocation order is right to left. "x | h | g | f" is arguably more readable since the evaluation order follows reading order (plus you don't have to deal with so many parentheses).
I want to reemphasize though that introducing a variable is often better practice and that the pipe operator only makes sense when there is no suitable variable name.
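Since the pipeline operator is still a proposal, the same data flow can be sketched today with a small `pipe` helper (a hypothetical name, not a standard API):

```javascript
// Threads a value through functions left to right: pipe(x, h, g, f) === f(g(h(x))).
const pipe = (x, ...fns) => fns.reduce((acc, fn) => fn(acc), x);

const h = (s) => s.trim();
const g = (s) => s.toUpperCase();
const f = (s) => s.split(" ");

// Nested form: reads inside-out, right to left.
const nested = f(g(h("  hello world  ")));

// Piped form: evaluation order follows reading order.
const piped = pipe("  hello world  ", h, g, f);

console.log(piped); // ["HELLO", "WORLD"]
```

Both forms compute the same thing; the piped version just lays the steps out in the order they run.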
> However, "f(g(h(x)))" is not very readable (at least for English speakers): while we read left to right, the invocation order is right to left. "x | h | g | f" is arguably more readable since the evaluation order follows reading order (plus you don't have to deal with so many parentheses).
There's also a "hierarchy" thing at play. With "f(g(h(x)))", x is "inside" h, which is "inside" g, which is "inside" f. With "x | h | g | f" all are on the same level. For me at least, it's easier to imagine your data "flowing" through functions when they are on the same level. It sounds a bit weird when I put it that way, but I don't think I'm alone in feeling this. Fluent interfaces are considered easy to read, and help flatten the code. Same thing with promise chaining compared to the callback pyramid of doom.
Chained method calls only work if the types in question have immutable methods that support chaining, each returning a new value. This is the case for arrays with `map` and `filter`, and for promises with `then`/`catch`.
But it's not the case for the majority of libraries, including many of the built-in types.
A pipeline operator is generic and can be used everywhere, without requiring the API to be built around chaining. This can often lead to much nicer, more readable code.
I think this is something you only start to really appreciate once you've used it a bit, like in Elixir or F#.
Obviously this is much more natural in a functional language, where everything is immutable anyway.
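A sketch of that genericity, again using a small `pipe` helper (a hypothetical name, not part of the language) over plain functions that were never designed for chaining:

```javascript
const pipe = (x, ...fns) => fns.reduce((acc, fn) => fn(acc), x);

// Plain functions wrapping APIs that have no fluent/chaining interface.
const entries = (obj) => Object.entries(obj);
const keepPositive = (pairs) => pairs.filter(([, v]) => v > 0);
const toObject = (pairs) => Object.fromEntries(pairs);

const scores = { a: 3, b: -1, c: 5 };

// No method chaining required on the intermediate values.
const positive = pipe(scores, entries, keepPositive, toObject);

console.log(positive); // { a: 3, c: 5 }
```

Nothing here needed to return `this` or share a prototype; any unary function slots into the pipeline.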
I merely explain where the pipeline operator came from and why it's preferred in F#: because it plays nicely with immutable data and the Hindley-Milner type inference algorithm. I make absolutely no comment about the sanity of adding anything to JS (or, indeed, of using JS at all).
Anecdotal, but I know at least one megacorp that is doing layoffs of manager-level positions after noticing that their services / industrial processes worked well while those managers were on furlough during the lockdowns.
Hello, thanks for your submission, this is really interesting. Out of curiosity, have you thought about how this problem could relate to history navigation, in the sense that it can create context around a specific resource, i.e. retrieving the links that you read after reading an HN thread, in order to prioritise their reading order or link them to some topic?