michaelfeathers's comments

There's something with the same shape as the Jevons paradox - the Peltzman effect: the safer you make something, the more risks people take.

Applied to AI, I think it would be something like: ease of development increases the complexity attempted.


This actually sounds like Rust development (or both FP and OOP development before that, or compilers before that).

By making things simpler and/or more robust, you make some very complex algorithms much more feasible. And you end up with things like HTTPS or even Raft being part of everyday life, despite their complexity.


...and complexity created.

I think "How can this code be made simpler and any complexity either isolated or eliminated (preferably eliminated)?" should be the ensuing prompt after we generate things.


Thanks. After I wrote it a friend said "I think you just gave people permission to do things that they would've felt bad about otherwise." I think he was right, in a way. On the other hand, not everything is obvious to everyone, and it's been 20 years. Regardless of whether people have read the book, the knowledge of these things has grown since then.


This is called point-free style in Haskell.

Sometimes it is called a fluent-interface in other languages.
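
For anyone unfamiliar, a fluent interface is just methods that return the receiver so calls chain with dots. A minimal sketch (illustrative only, not JMock's actual API):

    // A tiny fluent interface: each method returns `this`, so calls
    // chain left to right with dots.
    class Query {
        private final StringBuilder sql = new StringBuilder("SELECT *");
        Query from(String table) { sql.append(" FROM ").append(table); return this; }
        Query where(String clause) { sql.append(" WHERE ").append(clause); return this; }
        @Override public String toString() { return sql.toString(); }
    }

    // new Query().from("users").where("age > 21")
    //   => "SELECT * FROM users WHERE age > 21"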


> Sometimes it is called a fluent-interface in other languages.

Where've you heard it called that? I've normally heard it called tacit programming.


The developers of JMock, the original mock object library for Java.


Or "point-less" style ;)

Could you elaborate? AFAIK tacit programming tends to involve scrambling around composition, parens, and args, which makes left-to-right reading significantly harder for functions with arity greater than 2.

I find Java's method references, or Rust's namespace resolution plus functions as arguments, much better than Haskell's tacit style for left-to-right reading.
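
For illustration, here's what that left-to-right reading looks like with Java method references in a stream pipeline (a generic sketch, not from any particular codebase):

    import java.util.List;

    class Pipeline {
        public static void main(String[] args) {
            // Each stage names the function it applies; data reads left to right.
            List<String> cleaned = List.of(" Foo ", "BAR ", " baz").stream()
                .map(String::trim)         // method reference, no lambda noise
                .map(String::toLowerCase)
                .toList();                 // Java 16+
            System.out.println(cleaned);   // [foo, bar, baz]
        }
    }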


It's chaining functions with a dot, just like you do in typical OO languages.

When it's OO, it's a virtue that everyone loves - a "fluent interface".

When it's FP - oh it's unreadable! Why don't they just break every line out with an intermediate variable so I know what's going on!


I think we are going to end up with a common design/code specification language that we use for prompting and testing. There's always going to be a need to convey the exact semantics of what we want. If not for AI, then for the humans who have to grapple with what is made.


Sounds like "heavy process". "Specifying exact semantics" has been tried, and it ended unimaginably badly.


Nah, imagine a programming language optimized for creating specifications.

Feed it to an LLM and it implements it. Ideally, it can also verify its solution against your specification code. If LLMs don't gain significantly more general capabilities, I could see this happening in the longer term. But it's too early to say.

In a sense, the LLM turns into a compiler.
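
One way to picture it: the "specification language" is executable checks, and the LLM's job is to produce code that passes them. A hypothetical sketch using JUnit (Slugify is an invented example, not a real library):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // The spec says nothing about how slugify works, only what must hold.
    // A generated implementation either satisfies this or it doesn't.
    class SlugifySpec {
        @Test void lowercasesAndHyphenates() {
            assertEquals("hello-world", Slugify.slugify("Hello World"));
        }
        @Test void collapsesRepeatedSeparators() {
            assertEquals("a-b", Slugify.slugify("a -- b"));
        }
    }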


It's an interesting idea. I get it. Although I wonder... do you really need formal languages anymore, now that we have LLMs that can take natural language specifications as input?

I tried running the idea on a programming task I did yesterday: "Create a dialog to edit the contents of THIS data structure." It did actually produce a dialog that worked the first time. Admittedly a very ugly dialog. But all the fields, labels, and controls were there in the right order with the right labels, and were all properly bound to props of a React control that was grudgingly fit for purpose. I suspect I could have corrected some of the layout issues with supplementary prompts. But it worked. I will do it again, with supplementary prompts, next time.

Anyway. I next thought about how I would specify the behavior I wanted. The informal specification would be: "Open the Looping dialog. Set Start to 1:00, then open the Timebase dialog. Select 'Beats', set the tempo to 120, and press the back button. Verify that the Start text edit now contains '30:1' (the same time expressed in bars and beats). Set it to 10:1, press the back button, and verify that the corresponding 'Loop' <description of storage for that data omitted for clarity> for the currently selected plugin contains 20.0." I can actually see that working (and I plan to see if I can convince an AI to turn that into test code for me).

Any imaginable formal specification for that would be just grim. In fact, I can't imagine a "formal" specification for that. But a natural language specification seems eminently doable. And even if there were such a formal specification, I am 100% positive that I would be using natural language AI prompts to generate the specifications. Which makes me wonder why anyone needs a formal language for that.

And I can't help thinking that "Write test code for the specifications given in the previous prompt" is something I need to try. How to give my AI tooling to get access to UI controls though....
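
If it pans out, the generated test might look something like this (a Selenium-style sketch, with entirely hypothetical locators for the controls described above):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class LoopingDialogSketch {
        // Hypothetical IDs; the point is that the informal spec above maps
        // almost line-for-line onto calls an LLM could plausibly emit.
        static void loopStartConvertsWhenTimebaseChanges(WebDriver driver) {
            driver.findElement(By.id("looping-open")).click();
            driver.findElement(By.id("start")).sendKeys("1:00");
            driver.findElement(By.id("timebase-open")).click();
            driver.findElement(By.id("timebase-beats")).click();
            driver.findElement(By.id("tempo")).sendKeys("120");
            driver.findElement(By.id("back")).click();
            assertEquals("30:1", driver.findElement(By.id("start")).getAttribute("value"));
        }
    }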


That doesn't sound like the sort of problem you'd use it for. I think it would be used for the ~10% of code in some applications that is part of the critical core. UI, not so much.


We've had that for a long, long time. Notably, RAD tooling running on XML.

The main lesson has been that it's actually not much of an enabler and the people doing it end up being specialised and rather expensive consultants.


RAD before transformers was like trying to build an iPhone before capacitive multitouch: a total waste of time.

Things are different now.


I'm not so sure. What can you show me that you think would be convincing?


I think there are enough examples of genuine AI-facilitated rapid application development out there already, honestly. I wouldn't have anything to add to the pile, since I'm not a RAD kind of guy.

Disillusionment seems to spring from expecting the model to be a god or a genie instead of a code generator. Some people are always going to be better at using tools than other people are. I don't see that changing, even though the tools themselves are changing radically.


"Nothing" would have been shorter and more convenient for us both.


That's a straw man. Asking for real examples to back up your claims isn't overt perfectionism.


If you weren't paying attention to what's been happening for the last couple of years, you certainly won't believe anything I have to say.

Trust me on this, at least: I don't need the typing practice.


The trajectory of AI is: emulating humans. We've never been able to align humans completely, so it would be surprising if we could align AI.


It's like you're saying that AI has the same sort of fuzzy "free will" that we do, and just as an obedient slave might be convinced to break his or her bonds, so might an AI.


Religion is an attempt at the alignment problem and that experiment failed dramatically. Spiritual system prompting was never fully hardened against atheistic jail-breaking.


a wise person once told me -- avoid using "is" when entering complex idea spaces


Thank you, but I craft my takes specifically to warp consensus reality. Epistemic humility is bringing pre-lost arguments to a debate and proudly laying them at your opponent’s feet, saying, "please, go ahead and stab me with these. I brought plenty."


> Epistemic humility is bringing pre-lost arguments to a debate

hubris


will to power


Code for violence, no?


maybe, but apparently "is" in complex idea spaces brings "power via will"


Sure, but the problem is the call to violence.


Chat in English? Sure. But there is a better way. Make it a game to see how little you can specify to get what you want.

I used this single line to generate a 5-line Java unit test a while back:

    test: grip o -> assert state.grip o
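
Something like the following is the shape that line tends to produce (hypothetical Hand/Thing names, not the actual generated code):

    @Test
    public void gripTracksGrippedObject() {
        Thing o = new Thing();
        Hand hand = new Hand();
        hand.grip(o);
        assertTrue(hand.state.isGripping(o));  // "assert state.grip o"
    }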

LLMs have wide "understanding" of various syntaxes and associated semantics. Most LLMs have instruct tuning that helps. Simplifications that are close to code work.

Re precision: yes, we need precision, but if you work in small steps, the precision comes in the review.

Make your own private pidgin language in conversation.


I think it makes sense to see friction as a disincentive, the opposite of an incentive.


This is a good talk about the problem: https://youtu.be/hGXhFa3gzBs?si=15IJsTQLsyDvBFnr

Key takeaway: LLMs are abysmal at planning and reasoning. You can give them the rules of a planning task and ask them for a result but, in large part, the correctness of their logic (when it occurs) depends on additional semantic information rather than just the abstract rules. They showed this by mapping nouns to a completely different domain in the rule and input descriptions for a task. After those simple substitutions, performance fell apart. Current LLMs are mostly pattern matchers with bounded generalization ability.


People also fall apart on things like statistical reasoning if you switch domains (I think it's the Leda Cosmides evo-psych work that goes into this, but there might be a more famous experiment).


I always have trouble with takes like this because they are context-free. There are a wide variety of project types and development scenarios.

My nuanced take is that typing is an economic choice. If the cost of failure (MTTR and criticality) is low enough, it is fine to use dynamic typing. In fact, keeping the cost of failure low (if you can) gives you much more benefit than typing provides.

Erlang, a dynamic language used to create outrageously resilient systems, is a great example of that for the domains where it can be used.

I'm not a dynamic typing zealot (I like static typing a lot) but I do think that dynamic typing is unfairly maligned.

The cost argument brings the decision down to earth.


Obligatory reference to the Law of Leaky Abstractions:

https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

Layers work to the degree that they are trivial, but we really only need them when they are non-trivial.


Layers work to the degree that you don't need to be a subatomic particle physicist to write this comment.

There are good abstractions and bad ones, all with varying degrees of leakiness and sharp corners. The good ones definitionally prevent you from needing to understand much about what they're abstracting.


Abstraction is real. So is bad code that pretends to be an abstraction.

Small leaks in an abstraction are survivable. Small leaks in many layers of abstraction are ruinous.

