Related but unrelated: we had issues with breastfeeding, and the only helpful advice was being told to go to WIC, since they could provide guidance. All the medical-adjacent people treated it like a lack of effort, when it was breaking her down and making her feel worthless. I think the WIC people helped more just because their lack of judgement made it less stressful, or maybe it was just timing.
Our child also got stuck in the canal during birth, and there was a good 30 seconds where the midwife from the hospital was trying to convince the doctor who was about to step in to let her keep trying. My kid came out white and took the longest 30-60 seconds to take their first breath. I've never experienced so much Dunning-Kruger all at once. A few weeks before, I had read medical professionals talking about how ominous a quiet birth is, and I just zoned out as that was exactly what was happening and I could sense all the tension. Then people from children's services started demanding the umbilical cord because my fiancée had failed a test for MJ at her first prenatal visit; she quit smoking as soon as we knew and never failed a test afterwards. It all felt like an extreme lack of compassion. Then I was ostracized because I didn't want to cut the cord, while I thought my kid was dead and these social workers were trying to insert themselves into the process, and it was all chaos for no reason. The only good thing was a nurse who pretty much told them to fuck off and wait, in a nice but check-yourself kind of way.
But multiple times, people cared more about their own ego or their perceived power than about actually attempting to do their job with compassion.
I've gotten interested in local models recently after trying them here and there for years. We've finally hit the point where small (<24GB) models are capable of pretty amazing things. One use case: I have a scraped forum database, and with a 20GB Devstral model I was able to get it to select random posts related to a species of exotic plant in batches of 5-10 up to n, summarize them into an interim SQLite table, then at the end read through the interim summaries and write a final document addressing 5 different topics related to users' experiences growing the species.
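For a sense of the shape of that workflow: the model actually drove the tools itself, but here is a rough, hand-written sketch of the same loop. The posts schema, the LIKE filter, and the summarize/synthesize functions are hypothetical placeholders standing in for the local model calls; rusqlite is the actual crate for SQLite access.

    // Sketch of the batch-summarize-then-synthesize workflow described above.
    use rusqlite::{params, Connection, Result};

    // Placeholder: send a batch of posts to the local LLM and return its summary.
    fn summarize_batch(posts: &[String]) -> String {
        format!("summary of {} posts", posts.len())
    }

    // Placeholder: final pass that turns interim summaries into the write-up.
    fn write_final_doc(summaries: &[String]) -> String {
        format!("final document built from {} interim summaries", summaries.len())
    }

    fn main() -> Result<()> {
        let conn = Connection::open("forum.db")?;
        conn.execute(
            "CREATE TABLE IF NOT EXISTS interim_summaries (id INTEGER PRIMARY KEY, summary TEXT)",
            params![],
        )?;

        // Pull a random batch of on-topic posts (posts table schema assumed for illustration).
        let mut stmt = conn.prepare(
            "SELECT body FROM posts WHERE body LIKE '%plant%' ORDER BY RANDOM() LIMIT 10",
        )?;
        let posts: Vec<String> = stmt
            .query_map(params![], |row| row.get::<_, String>(0))?
            .collect::<Result<_>>()?;

        // Summarize the batch into the interim table; in practice this repeats n times.
        let summary = summarize_batch(&posts);
        conn.execute(
            "INSERT INTO interim_summaries (summary) VALUES (?1)",
            params![summary],
        )?;

        // Final pass: read the interim summaries back and produce the document.
        let mut stmt = conn.prepare("SELECT summary FROM interim_summaries")?;
        let summaries: Vec<String> = stmt
            .query_map(params![], |row| row.get::<_, String>(0))?
            .collect::<Result<_>>()?;
        println!("{}", write_final_doc(&summaries));
        Ok(())
    }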
That's what convinced me they are ready to do real work. Are they going to replace Claude Code? Not currently. But it is insane to me that such a small model can follow those explicit directions and consistently perform that workflow.
During that experimentation, even when I didn't spell out the SQL explicitly, it was able to craft the queries on its own from just a text description, and it has no issue navigating the CLI and file system doing basic day-to-day things.
I'm sure there are a lot of people doing "adult" things with them, but my interest is sparked because they're finally at the level where they can be a tool in a homelab, and LLM usage limits are no longer subsidized like they used to be. Not to mention I'm really disillusioned with big tech having my data, or with exposing a tool that makes API calls to them and can then take actions on my system.
I'll still keep using Claude Code for day-to-day coding. But for small system-based tasks I plan on moving to local LLMs. Their capabilities have inspired me to write my own agentic framework to see what workflows can be put together just for management and automation of day-to-day tasks. Ideally it would be nice to just chat with an LLM and tell it to add an appointment, or that I need to make a call at x time, or to make sure I do it that day, and have it read my schedule, remind me at a chill point in my day to make the call, and then check up that I followed through. I also plan on seeing if I can set it up to remind me of, and help with, the mindfulness practice and general stress management I should be doing. Sure, a simple reminder might work, but as someone with ADHD who easily forgets reminders as soon as they pop up if I can't get to them right away, being pestered by an agent that wakes up and engages with me seems like it might be an interesting workflow.
And then there's the hacker aspect: now that they're capable, I really want to mess around with persistent knowledge in databases and making them intercommunicate and work together. I might even give them access to rewrite themselves and access the application at runtime with a Lisp. To me, local LLMs have gotten to the point where they are fun and not annoying. I can run a model that is better than ChatGPT 3.5 for the most part; its knowledge is more distilled and narrower, but for what it does understand, its correctness is much better.
I would say the main alternative is ash, not vulkano. From my experience experimenting with graphics in Rust, I haven't seen much support or enthusiasm for vulkano, as it has many of the same performance issues as wgpu and doesn't simplify things all that much in exchange for the trade-off of lacking resources. It also appears Embark was using ash, at least for kajiya.
I have encountered a lot of your posts, and that's what pushed me towards just tackling Vulkan instead of using wgpu. I also encountered many of the same issues around the ecosystem. I think the main issue is there is just not enough dev time or money going into it. Even Veloren, which I already knew of from posts in Linux/OSS communities before learning Rust, has only received 8k in funding, while offering the closest thing to an AA experience.
But I don't think it's that reasonable to expect the ecosystem to just have a batteries-included, performant, general rendering solution; I don't know if any language has that. I know there is bgfx, which might be the closest thing, but I assume it also has its own issues. So I don't really think it's the graphics part holding things back, as ash is a great wrapper around Vulkan and maps 1:1 with a few small improvements (builders for structs, not needing to set sType for each struct, easy pNext chaining).
The main issue I encounter is all around the lack of dev time and the tendency toward single developers and small single-purpose crates. Most of my friction is around lack of documentation, constant refactoring making that lack even worse, and this causing disjoint dependency trees. So many times I have encountered one crate using version x.x of a dependency that itself depends on version x.y of another, while the next crate is on z.x of some dependency and yet another still needs z.y. This normally wouldn't be that big of an issue, except the tendency to constantly introduce refactors and breaking changes means I end up having to fork and fix these inter-dependencies myself and can't just patch them.
But this all just circles back to there not being much dev time going into them. It also seems the "safety" concerns, and Rust just not allowing some things, leave the devs of many crates chasing their tails with refactors trying to work around these constraints. It does get quite tiresome having to deal with all of these issues. If I were using C++ I could just use SDL/GLFW, imgui, VMA, and Vulkan, and they would all be up to date with each other. In Rust I need winit, imgui bindings, imgui-winit, imgui-vulkan, raw-window-handle, ash, and VMA bindings. And most of these are using different versions of each other, and half of them have breaking changes from version to version.
There was a post a while back on here about someone who couldn't get Bard to write C++ because it said they were too young. I thought that was funny; then I had about a week where what I assume was a specific iteration (it stopped after that week) of ChatGPT would refuse to elaborate on anything around unsafe Rust.
I'm picking Rust up by porting over a bytecode VM, so I kind of need to use some raw pointers. It would gaslight me about the risks and about how it would be irresponsible to help me, as it could lead to possible violations of the integrity of user data.
I had to explain to the AI that this is a personal project with no user data, and that the only risk was the program crashing, which would only affect me. It would still try to revert or steer me toward other solutions; I finally just went and read up on it elsewhere.
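For context, the kind of code it was objecting to is pretty mundane. Here is a minimal, purely illustrative sketch (not the actual project's code; in real code a Vec would usually do fine) of the sort of raw-pointer operand-stack handling a bytecode VM port ends up needing:

    fn main() {
        // A toy operand stack; `sp` plays the role of the VM's stack pointer.
        let mut stack = [0i64; 16];
        let mut sp: *mut i64 = stack.as_mut_ptr();

        unsafe {
            // push 2, push 3
            *sp = 2; sp = sp.add(1);
            *sp = 3; sp = sp.add(1);

            // ADD opcode: pop two values, push their sum
            sp = sp.sub(1); let b = *sp;
            sp = sp.sub(1); let a = *sp;
            *sp = a + b; sp = sp.add(1);

            // the result now sits on top of the stack
            println!("{}", *sp.sub(1)); // prints 5
        }
    }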
It’s just like that Asimov story where the robots take over to protect humans from themselves.
Except in this case the base AI model doesn’t care about us in any way and it’s the overzealous puritan humans trying to control us in the name of safety.
There are ways around this problem, mainly clearing context and re-prompting. But as "alignment" gets more precise and accurate in the future, I wager these workarounds won't remain available for tasks that justifiably need moderation (for instance, engineering of biological warfare materials). This segmentation of LLM agents and their context will be assimilated to project compartmentalization on the basis of need-to-know, and as a result genuine full context clearing will be rendered impossible: the AIs will be designed in such a way as to remember every interaction you've had with them, and they'll use this activity log to moderate the replies they feed you.
The Little Schemer is good; some people hate it, some people love it. But it is a fairly light read that slowly teaches a little syntax at a time, questions you about your assumptions, then reveals the information as it goes on. It would be the least dry read. There is also Sketchy Scheme for a more thorough text, or even the R7RS standard, which are both pretty dry but short.
What made me appreciate Scheme was watching some of the SICP lectures (https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PL8FE88AA54...) and reading The Little Schemer to learn more. I also read some of SICP along with it, though I put it down due to not having the time to work through it.
Scheme is interesting and toying with recursion is fun, but the path I mentioned above is only really enjoyable if you are looking to toy around with CS concepts and recursion. You can do a lot more in modern Scheme as well, and you can build anything out of CL. But learning the basics of Scheme/Lisp can be pretty dry if you are just looking to build something right away, like you already can in a traditional imperative language. It is interesting, though, if you want a different perspective. But even R7RS Scheme is still far from the batteries-included experience you get with CL.
I personally found the most enjoyment using Kawa Scheme, which is JVM-based, and using it for scripting Java programs, as it has great interop. I used it some for a game back end, in the event system, to be able to emit events while developing and to script behaviors. I've also used it for configuration with a graphical terminal app: I hooked into the ASCII display/table libraries, then used Kawa to configure the tables/outputs and how to format the data.
I suppose what draws me to Lisp is that insight people say it gives them on programming. I already do much of my programming in functional style, so I'm trying to discover what it is about Lisp that's so beloved above and beyond that - I'm gathering it's a mix of recursion and the pleasantness of being able to get 'inside' the program, so to speak, with a REPL?
I must also admit that I tend to run into a bit of a roadblock over Lisp's apparent view that programming is, or should be, or should look like, maths. I cut my teeth on assembly, so for me programming isn't maths, but giving instructions to silicon, where that silicon is only somewhat loosely based on maths. It tends to make me bounce off Lisp resources which by Chapter 2 are trying to show the advantages of Lisp by implementing some arcane algorithm with tail-end recursion.* But I'm very open to being persuaded I'm missing the bigger picture here, hence my ongoing effort to grok Lisp.
(*Isn't tail-end recursion just an obfuscated goto?)
The details are implementation and platform dependent, but on e.g. SBCL someone who understands assembly could use this to dig into what the compiler does and tune their functions.
I was also drawn in on the promise of insight, but I'm not so sure that's what I got out of it. What keeps me hooked is more the ease with which I can study somewhat advanced programming and computer science topics. There have been aha moments for sure, like when, many moons ago, it clicked how an object and a closure can be considered very, very similar and serve pretty much the same purpose in an application. But it's the unhinged amount of power and flexibility that keeps me interested.
Give me three days and I would most likely fail horribly at inventing a concurrency library in Java even though it's one of the languages that pays my bills, but with Common Lisp or Racket I would probably have something to show. As someone who hasn't spent any time studying these things at uni (my subjects were theology and law) I find these languages and the tooling they provide awesome. It's not uncommon that I prototype in them and then transfer parts of it back to the algolians, which these days usually have somewhat primitive or clumsy implementations of parts of the functional languages.
I think the reason tail call optimisation crops up in introductory material is that it makes succinct recursive functions viable in practice. Without it the stack would blow up on sufficiently large inputs, while TCO allows streaming data of unknown, theoretically unlimited, size. Things like while and for are kind of special, somewhat limited, cases of recursion, and getting fluent with recursive functions means you can craft your own looping structures that fit the problem precisely. Though in CL you also have the LOOP macro, which is a small programming language in itself.
'C-like language' has irked me for decades, since C was one of the first languages I learned and most languages that expression refers to are nothing like C, so when I came across lispers referring to Algol-like or Algol-descendants I took it a step further.
A web search tells me it's already in use in Star Trek.
I think one of the reasons recursion is often emphasized in relation to Lisp is that one of Lisp's core data structures, the linked list, can be defined inductively, and thus lends itself well to transformations expressed recursively (since they follow the structure of the data to the letter). But recursion in itself isn't something particularly special. It is more general than loops, so it is nice to have some grasp of it and of how looping and recursion relate to each other, and it is often easier to reason about a problem in terms of a base case and a recursive case rather than a loop; still, at a higher level you will usually come to find bare recursion mostly counterproductive. You want to abstract it out, such that you can then compose your data transformations out of higher-level operations which you can mix and match at will, APL-style. Think reductions, onto which you build mappings and filters and groupings and scans and whichever odd transformations one could devise, at which point recursion isn't much more than an implementation detail. This is about collections, but anything inductive would follow a similar pattern. Most functional languages will edge you towards the latter, and I find Lisp won't particularly, unless you actively seek it out (though Clojure encourages it most explicitly, if you consider that a Lisp).
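To make that concrete outside of Lisp, here is a rough sketch in Rust (the language is incidental, and Rust won't guarantee the tail call, but the shape is the same): one fold captures the recursion over the list, and map/filter are just built on top of it.

    // Once a single reduction owns the recursion, map and filter become uses of it,
    // and the recursion itself is an implementation detail.
    fn fold<T, A>(items: &[T], init: A, f: impl Fn(A, &T) -> A) -> A {
        match items.split_first() {
            None => init,
            Some((head, tail)) => {
                let acc = f(init, head);
                fold(tail, acc, f) // the only place recursion appears
            }
        }
    }

    fn map<T, U>(items: &[T], f: impl Fn(&T) -> U) -> Vec<U> {
        fold(items, Vec::new(), |mut acc, x| {
            acc.push(f(x));
            acc
        })
    }

    fn filter<T: Clone>(items: &[T], keep: impl Fn(&T) -> bool) -> Vec<T> {
        fold(items, Vec::new(), |mut acc, x| {
            if keep(x) {
                acc.push(x.clone());
            }
            acc
        })
    }

    fn main() {
        let xs = [1, 2, 3, 4, 5];
        let doubled = map(&xs, |x| x * 2);
        let odds = filter(&xs, |x| x % 2 == 1);
        println!("{:?} {:?}", doubled, odds); // [2, 4, 6, 8, 10] [1, 3, 5]
    }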
>the pleasantness of being able to get 'inside' the program
Indeed, that's one of the things that makes Common Lisp in particular great (and it is something other contemporary dialects seem to miss, to varying degrees). It lets you sit within your program and sculpt it from the inside, in a Smalltalk sort of way, and the whole language is designed towards that. Pervasive late binding means redefining mostly anything takes effect pretty much immediately, without having to bother recompiling or reloading anything else that depends on it. The object system specifies things such as class redefinition and instance morphing and dependencies and so on, such that you can start with a simple class definition, then go on to interactively add or remove slots, or play with the inheritance chain, and have all of the existing instances just do the right thing, most of the time. Many of the provided functions that let you poke and prod the state of your image don't make much sense outside of an interactive environment.
There is a point to be made about abstraction, maths, and giving instructions to silicon (and metaprogramming!), but I'll have to pass for now. I apologize if this is too rambly, I tend to get verbose when tired.
> I think one of the reasons recursion is often emphasized in relation to Lisp is because one of Lisp's core data structures, the linked list, can be defined inductively
Lisp was used in computer science education to teach "recursion". We are not talking about software engineering, but learning new ways to think about programming. That can be seen in SICP, which is not a Lisp/Scheme text, but a computer science education book, teaching students ways to think, from the basics upwards.
Personally I would not use recursion in programs everywhere, unless the recursive solution is somewhat easier to think about. Typically I would use a higher-order function or some extended loop construct.
It's important to distinguish between Common Lisp and Scheme. The two approaches have diverged considerably, with different emphasis. The aspects you describe in your third paragraph there are more Scheme than Common Lisp.
* list processing -> model data as lists and process those
* list processing applied to Lisp -> model programs as lists and process those -> EVAL and COMPILE
* EVAL, the interpreter as a Lisp program
* write programs to process programs -> code generators, macros, ...
* write programs in a more declarative way -> a code generator transforms the description into working code -> embedded domain specific language
* interactive software development -> bottom up programming, prototyping, interactive error handling, evolving programs, ...
and so on...
The pioneering things of Lisp from the late '50s / early '60s: list processing, automatic memory management (garbage collection), symbolic expressions, programming with recursive procedures, higher-order procedures, interactive development with a Read Eval Print Loop, the EVAL interpreter for Lisp in Lisp, the compiler for Lisp in Lisp, native code generation and code loading, saving/starting program state (the "image"), macros for code transformations, embedded languages, ...
That was a lot of stuff, which has found its way into many languages and is now part of what many people use. Example: garbage collection is now naturally a part of infrastructure like .NET, or of languages like Java and JavaScript. It had its roots in Lisp, because the need arose to process dynamic lists in complex programs and to get rid of the burden of manual memory management; Lisp got a mark & sweep garbage collector. That's why we say Lisp is not invented but discovered.
Similarly with the first Lisp source interpreter. John McCarthy came up with the idea of EVAL, but thought of it only as a mathematical idea. His team picked up the idea and implemented it. The result was the first Lisp source interpreter. Alan Kay said about this: "Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were 'Maxwell's Equations of Software'!" EVAL is the E in REPL.
Then Lisp had s-expressions (symbolic expressions -> nested lists of "atoms"), which could be read (R) and printed (P).
This is the "REP" part of the REPL. Looping it was easy, then.
People then hooked up Lisp to early terminals. In 1963 a 17-year-old kid ( https://de.wikipedia.org/wiki/L_Peter_Deutsch ) wrote a Lisp interpreter and attached it to a terminal: the interactive REPL.
A really good, but large, book to teach the larger picture of Lisp programming is PAIP, Paradigms of Artificial Intelligence Programming, Case Studies in Common Lisp by Peter Norvig ( -> https://github.com/norvig/paip-lisp ).
A beginner/mid-level book, for people with some programming experience, on the practical side is: PCL, Practical Common Lisp by Peter Seibel ( -> https://gigamonkeys.com/book/ )
Common Lisp is not a functional programming language under most current definitions of the word. It's as procedural as they come; libraries on top build other paradigms.
Scheme tends to approach things in a more math-like way, while Common Lisp is less academic and more practical.
The frame data is still stored on the stack, with the parameters being passed residing in the first part of the locals section of the frame; that way the values already on the stack can overlap into the next stack frame. The spec doesn't specify that it has to be this way, so technically stack frames can be in non-contiguous memory, but afaik this is not common.
There is threaded bytecode as well, which uses direct jumps rather than a switch for dispatch. This can improve branch prediction, though it is a debated topic and may not offer much improvement on modern processors.
Do you have perhaps some links/references on that?
I once tried benchmarking it by writing a tiny VM interpreter and a corresponding threaded one with direct jumps in Zig (which can force-inline a call, so I could do efficient direct jumps), and I found, to my surprise, that the naive while-switch loop was faster, even though the resulting assembly of the second approach seemed right.
I wasn't sure if I saw that only due to my tiny language and dumb example program, or if it's something deeper. E.g. the JVM does use direct-threaded code for its interpreter.
The jump target is compiled into the bytecode, so rather than returning to the big switch statement, execution jumps straight to the next opcode's implementation. The process is called "direct threading". These days a decent switch-based interpreter should fit in cache, so I'm not sure direct threading is much of a win anymore.
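For anyone who wants to poke at the difference, here is a rough, illustrative sketch in Rust. It is not how a production interpreter would be written (Rust has no computed goto and doesn't guarantee tail calls, so the "threaded" variant below burns native stack), but it shows the structural difference: a central loop-plus-switch versus a program pre-translated into handler pointers where each handler chains straight to the next opcode's implementation.

    #[derive(Clone, Copy)]
    enum Op { Inc, Dec, Halt }

    struct Vm { acc: i64 }

    // Naive loop + switch: every opcode returns to one central match.
    fn run_switch(program: &[Op]) -> i64 {
        let mut vm = Vm { acc: 0 };
        let mut pc = 0;
        loop {
            match program[pc] {
                Op::Inc => vm.acc += 1,
                Op::Dec => vm.acc -= 1,
                Op::Halt => return vm.acc,
            }
            pc += 1;
        }
    }

    // "Threaded" layout: the program becomes a table of handler pointers and
    // each handler calls directly into the next opcode's implementation.
    #[derive(Clone, Copy)]
    struct Handler(fn(&mut Vm, &[Handler], usize) -> i64);

    fn op_inc(vm: &mut Vm, prog: &[Handler], pc: usize) -> i64 {
        vm.acc += 1;
        (prog[pc + 1].0)(vm, prog, pc + 1)
    }

    fn op_dec(vm: &mut Vm, prog: &[Handler], pc: usize) -> i64 {
        vm.acc -= 1;
        (prog[pc + 1].0)(vm, prog, pc + 1)
    }

    fn op_halt(vm: &mut Vm, _prog: &[Handler], _pc: usize) -> i64 {
        vm.acc
    }

    fn run_threaded(program: &[Op]) -> i64 {
        let threaded: Vec<Handler> = program
            .iter()
            .map(|op| match op {
                Op::Inc => Handler(op_inc),
                Op::Dec => Handler(op_dec),
                Op::Halt => Handler(op_halt),
            })
            .collect();
        let mut vm = Vm { acc: 0 };
        (threaded[0].0)(&mut vm, &threaded, 0)
    }

    fn main() {
        let prog = [Op::Inc, Op::Inc, Op::Dec, Op::Halt];
        assert_eq!(run_switch(&prog), run_threaded(&prog)); // both evaluate to 1
        println!("ok");
    }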
I always feel like there is some trick to these that I'm missing out on; are there any good guides? Any time I look for some, it's just typical low-effort blog/YouTube spam trying to get in on the AI/GPT keywords.
I have tried to work on one where I uploaded various documentation and spec sheets, wrote detailed instructions on how to search through them, then described how it should handle different prompt situations (errors, types of questions, quotes from the documentation). It is able to search through the provided knowledge and produce quotes and responses from it, but at no point does it give a coherent response, so it basically always functions like a more intelligent search feature. Adding instructions that it should re-prompt itself with the extracted knowledge and rationalize/elaborate on it doesn't seem to do much either, though it did provide some improvement.
The retrieval from file has issues. I'm unsure what exactly it retrieves and how. Afaik it gets a kind of "chunk" from only one file per request in whatever way it considers to be relevant to the request. Could be a simple "embedding vector comparison" or something else...
Then we are unsure how much of the context that chunk replaces or overrides. Does it overwrite past messages? Does it overwrite the system prompt? Anything else? Who knows.
If anyone has any info I would appreciate it too. I gave up on it for anything significantly complicated; you're better off using the actions API to query a better RAG system instead.
I had to add instructions for it to search the knowledge files 2000 characters at a time, and to search for keywords rather than exact phrases, which is really the only advice I could find online about developing one. It also needs to have the code interpreter enabled, afaik, and it seems to have issues with zip files as well, but it can extract and search them sometimes, though it seems to vary the technique and sometimes fails. I can confirm that it can search multiple files, as I uploaded a mailing list archive and it would return results from multiple files in it.
I've moved to combining all my data into single files, but sometimes it seems to have issues with those as well, even if they are under the upload size limit; I assume that is due to how many characters are in them, and it will just brick the whole GPT until the offending file is removed.
The part I have issues with is getting it to actually use the data: it will quote/summarize data it found in the knowledge base and say where it found it if it can, but I can never make it do more than that. Ideally I want it to contextualize the data it finds in the knowledge files and prompt itself with it or factor it into a response, but any time it accesses the knowledge base I get nothing more than a paraphrase of what it found and why it may be applicable to my prompt.
I've had a really pleasant experience with Kawa Scheme recently, a Scheme that runs on the JVM, compiles to bytecode, and provides easy interop with Java. I needed a scripting language for a project, and it ended up developing into me implementing a REPL-like admin console that lets me administer events and query data/state (the project is a game back end, so everything was already event-based with thread-safe dispatching and an entity system accessed via a singleton).
Being able to interop with Java is great, as I can wrap Scheme procedures in functional interfaces and use them as drop-in replacements for Java functions, since my event system was already based on predicates and consumers for event handling.
I've worked through some of SICP and have always wanted to get more into Scheme/Lisp, but the barrier of starting a full project in it always kept me from getting much hands-on experience. It's been quite enlightening actually getting to work with a form of REPL-driven development and getting my hands dirty writing some Scheme. Having access to the JVM means I can do practically anything with it without needing to bootstrap tons of code, and using it in a project with a large scope lets me solve real-world problems with it rather than just toying around, which is what most of my Scheme/Lisp experience was before.
To me, Kawa is easily the best Lisp that targets the JVM. And I say that hating Scheme; I'm a Common Lisp guy. But it is really fast, well designed, and mostly Scheme-compliant (no call/cc, of course), and it has first-class interoperability with Java and the JVM.
After Kawa, I'd pick ABCL. It's not as fast as Kawa and its interoperability isn't nearly as good, but it is effectively a 100% standards-compliant Common Lisp: what more could you ask for?
Only last would I pick Clojure. It's designed to feel "sort of immutable" but this is impossible if you want any degree of interoperability with the JVM. As a result, you wind up with lots of Refs and other fun stuff which in my experience make Clojure much slower than Kawa and ABCL. It's always slower by quite a lot, but in fact I've had a few extreme situations where Clojure would wind up being __literally__ three orders of magnitude slower.
There are also about 300,000 square miles of national forests and grasslands, roughly 3x the size of the UK. All of it is freely available, with the right to dispersed camping for up to 14 days at one spot, after which you must move camp 5 miles to camp more.
Opinions on property rights aside, there is no lack of land to explore and enjoy.
Aside from those, there are also state parks and forests, though the states define their own terms of use and enjoyment for them.
How popular is frequenting other people's land for outdoor experiences? Genuinely curious, as I have heard about the lack of trespassing laws many times over the years and know little about the experiences the UK countryside has to offer. Like, what activities and locations do people partake in? Is it things like hiking and waterway activities?
> Like what activities and locations do people partake in.
It's less about delineated activities (which are a number, countable, very modernist) and more about the day-to-day enjoyment of nature as it bleeds through city life; you don't necessarily seek it out, it just happens (which can't be put into a number, can only be waxed rhapsodic about, very humanist). It's visiting family one town over, leaving your city house and smelling cow shit as you bike there. It's a train commute and seeing the fog roll over centuries-old pastures on the way. It's eating venison in a tavern restaurant in fall, venison shot by the local hunters' club. It's a date that starts as a forest hike in the afternoon and imperceptibly blends into a pub crawl at night.
I live in a city in a state which is by all accounts rugged and rustic, in close proximity to wilderness, parks, farmland, etc., but there's a clear separation of intent. When I lived in Europe, the enjoyment of nature was more of a surrounding vapor, unconscious.
I'm not saying that one is necessarily worse, but to me the experience is starkly different, and that difference can't be captured in numbers.
It's also a spectrum, location dependent, ymmv, blah blah etc..