This actually sounds like Rust development (or both FP and OOP development before that, or compilers before that).
By making things simpler and/or more robust, you make some very complex algorithms much more feasible. And you end up with protocols like HTTPS, or even algorithms like Raft, being part of everyday life, despite their complexity.
I think "How can this code be made simpler and any complexity either isolated or eliminated (preferably eliminated)?" should be the ensuing prompt after we generate things.
Thanks. After I wrote it a friend said "I think you just gave people permission to do things that they would've felt bad about otherwise." I think he was right, in a way. On the other hand, not everything is obvious to everyone, and it's been 20 years. Regardless of whether people have read the book, the knowledge of these things has grown since then.
Could you elaborate? AFAIK tacit programming tends to involve scrambling around composition, parens, and args, which makes left-to-right reading significantly harder for functions with arity greater than 2.
I find Java's method reference or Rust's namespace resolution + function as an argument much better than Haskell tacit-style for left-to-right reading.
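To make the contrast concrete, here's a rough Java sketch (the names and data are made up for illustration) of the left-to-right reading I mean, with a roughly equivalent point-free Haskell pipeline in a comment:

    import java.util.List;
    import java.util.stream.Collectors;

    public class LeftToRight {
        public static void main(String[] args) {
            List<String> names = List.of("  Ada ", "  Alan", "Grace  ");

            // Reads left to right: trim each name, keep the short ones, upper-case them.
            List<String> shortNames = names.stream()
                    .map(String::trim)
                    .filter(n -> n.length() <= 4)
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());

            System.out.println(shortNames); // [ADA, ALAN]

            // Roughly the same pipeline written point-free in Haskell composes right to left:
            //   map (map toUpper) . filter ((<= 4) . length) . map trim
            //   (assuming some trim helper), so you read it backwards.
        }
    }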
I think we are going to end up with common design/code specification language that we use for prompting and testing. There's always going to be a need to convey the exact semantics of what we want. If not, for AI then for the humans who have to grapple with what is made.
Nah, imagine a programming language optimized for creating specifications.
Feed it to an LLM and it implements it. Ideally it can also verify its solution against your specification code. Even if LLMs don't gain significantly more general capabilities, I could see this happening in the longer term. But it's too early to say.
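As a toy sketch of what I mean (everything here is hypothetical, not an existing spec language), the "specification" could itself be executable property checks, and whatever implementation the LLM hands back gets verified against them:

    import java.util.Arrays;
    import java.util.Random;

    // Toy sketch: the "spec" is a pair of executable properties, and the
    // LLM-generated implementation is checked against them.
    // Sorter and the candidate in main() are hypothetical stand-ins.
    interface Sorter {
        int[] sort(int[] input);
    }

    public class SortSpec {
        // Property 1: output is in non-decreasing order.
        static boolean isSorted(int[] out) {
            for (int i = 1; i < out.length; i++) {
                if (out[i - 1] > out[i]) return false;
            }
            return true;
        }

        // Property 2: output is a permutation of the input.
        static boolean samePermutation(int[] in, int[] out) {
            int[] a = in.clone(), b = out.clone();
            Arrays.sort(a);
            Arrays.sort(b);
            return Arrays.equals(a, b);
        }

        // Check a candidate implementation against the spec on random inputs.
        static boolean satisfiesSpec(Sorter candidate) {
            Random rng = new Random(42);
            for (int trial = 0; trial < 100; trial++) {
                int[] in = rng.ints(20, -1000, 1000).toArray();
                int[] out = candidate.sort(in.clone());
                if (!isSorted(out) || !samePermutation(in, out)) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            // Pretend this lambda is what the LLM generated.
            Sorter generated = input -> {
                int[] copy = input.clone();
                Arrays.sort(copy);
                return copy;
            };
            System.out.println("Spec satisfied: " + satisfiesSpec(generated));
        }
    }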
It's an interesting idea. I get it. Although I wonder... do you really need formal languages anymore, now that we have LLMs that can take natural language specifications as input?
I tried running the idea on a programming task I did yesterday. "Create a dialog to edit the contents of THIS data structure." It did actually produce a dialog that worked the first time. Admittedly a very ugly dialog. But all the fields and labels and controls were there in the right order with the right labels, and were all properly bound to props of a React control that was grudgingly fit for purpose. I suspect I could have corrected some of the layout issues with supplementary prompts. But it worked. I will do it again, with supplementary prompts next time.
Anyway. I next thought about how I would specify the behavior I wanted. The informal specification would be "Open the Looping dialog. Set Start to 1:00, then open the Timebase dialog. Select "Beats", set the tempo to 120, and press the back button. Verify that the Start text edit now contains "30:1" (the same time expressed in bars and beats). Set it to 10:1, press the back button, and verify that the corresponding "Loop" <description of storage for that data omitted for clarity> for the currently selected plugin contains 20.0." I can actually see that working (and I plan to see if I can convince an AI to turn that into test code for me).
Any imaginable formal specification for that would be just grim. In fact, I can't imagine a "formal" specification for that. But a natural language specification seems eminently doable. And even if there were such a formal specification, I am 100% positive that I would be using natural language AI prompts to generate the specifications. Which makes me wonder why anyone needs a formal language for that.
And I can't help thinking that "Write test code for the specifications given in the previous prompt" is something I need to try. How to give my AI tooling to get access to UI controls though....
That doesn't sound like the sort of problem you'd use it for. I think it would be used for the ~10% of code you have in some applications that are part of the critical core. UI, not so much.
I think there are enough examples of genuine AI-facilitated rapid application development out there already, honestly. I wouldn't have anything to add to the pile, since I'm not a RAD kind of guy.
Disillusionment seems to spring from expecting the model to be a god or a genie instead of a code generator. Some people are always going to be better at using tools than other people are. I don't see that changing, even though the tools themselves are changing radically.
It's like you're saying that AI has the same sort of fuzzy "free will" that we do, and just as an obedient slave might be convinced to break his or her bonds, so might an AI.
Religion is an attempt at the alignment problem and that experiment failed dramatically. Spiritual system prompting was never fully hardened against atheistic jail-breaking.
Thank you, but I craft my takes specifically to warp consensus reality. Epistemic humility is bringing pre-lost arguments to a debate and proudly laying them at your opponent’s feet, saying, "please, go ahead and stab me with these. I brought plenty."
Chat in English? Sure. But there is a better way. Make it a game to see how little you can specify to get what you want.
I used this single line to generate a 5 line Java unit test a while back.
test: grip o -> assert state.grip o
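Something like the following is the shape of test a line like that can expand to; the Gripper class and method names here are stand-ins I made up, not the original output:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class GripperTest {
        // Minimal stand-in for the class under test; the pidgin line only
        // said "grip o -> assert state.grip o", so these names are guesses.
        static class Gripper {
            private String grip;
            void grip(String target) { this.grip = target; }
            String stateGrip() { return grip; }
        }

        @Test
        void gripSetsStateGrip() {
            Gripper gripper = new Gripper();
            gripper.grip("o");
            assertEquals("o", gripper.stateGrip());
        }
    }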
LLMs have wide "understanding" of various syntaxes and associated semantics. Most LLMs have instruct tuning that helps. Simplifications that are close to code work.
Re precision, yes, we need precision but if you work in small steps, the precision comes in the review.
Make your own private pidgin language in conversation.
Key takeaway: LLMs are abysmal at planning and reasoning. You can give them the rules of a planning task and ask them for a result, but, in large part, the correctness of their logic (when it occurs) depends on additional semantic information rather than just the abstract rules. They showed this by mapping nouns to a completely different domain in the rule and input descriptions for a task. After those simple substitutions, performance fell apart. Current LLMs are mostly pattern matchers with bounded generalization ability.
People also fall apart on things like statistical reasoning if you switch domains (I think it is the Leda Cosmides evo psych stuff that goes into it but there might be a more famous experiment).
I always have trouble with takes like this because they are context-free. There are a wide variety of project types and development scenarios.
My nuanced take is that typing is an economic choice. If the cost of failure (MTTR and criticality) is low enough, it is fine to use dynamic typing. In fact, keeping the cost of failure low (if you can) gives you much more benefit than typing provides.
Erlang, a dynamic language used to create outrageously resilient systems, is a great example of that for the domains where it can be used.
I'm not a dynamic typing zealot (I like static typing a lot) but I do think that dynamic typing is unfairly maligned.
The cost argument brings the decision down to earth.
Layers work to the degree that you don't need to be a subatomic particle physicist to write this comment.
There are good abstractions and bad ones, all with varying degrees of leakiness and sharp corners. The good ones definitionally prevent you from needing to understand much about what they're abstracting.
Applied to AI, I think it would be something like: ease of development increases the complexity attempted.