This is the glaring fallacy! We are turning to unreliable stochastic agents to churn out boilerplate and do toil that should just be abstracted or automated away by fully deterministic, reliably correct programs. This is, prima facie, a degenerative and wasteful way to develop software.
Saying boilerplate shouldn’t exist is like saying we shouldn’t need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is “I mean, sure, that’d be great, not sure how you’ll actually accomplish that though”.
Great analogy. We've attempted to produce these systems and every time what emerges is software which makes easy things easy and hard things impossible.
The reason Japanese carpenters do (or did) that is that sea air plus high humidity would absolutely rot anything built with nails and screws.
No furniture is really made from a single tree, though. Trees aren't massive enough.
I agree with the overall sentiment, but the analogy is highly flawed. You can't compare physical things with software: physical things are far more constrained, while software is super abstract.
I can and will compare them, analogies don’t need to be perfect so long as they get a point across. That’s why they’re analogies, not direct perfect comparisons.
I very much enjoy the Japanese carpentry styles that exist though, off topic but very cool.
I can tell you about 1000 ways, the problem is there are no corporate monetary incentives to follow them, and not much late-90s-era FOSS ethos going around either...
This is a terribly confused analogy, afaict. But maybe if you could explain in what sense boilerplate, as defined in https://en.wikipedia.org/wiki/Boilerplate_text, is anything like a nail, it could be less confusing.
Saying boilerplate should exist is like saying every nail should have its own hammer.
Some amount of boilerplate probably needs to exist, but in general it would be better off minimized. For a decade or so there's sadly been a trend of deliberately increasing it.
While it sounds likely true for the US, it's the opposite in Germany:
likely due to societal expectations around "creature comforts", and because German homes aren't framed with 2x4s; instead, guild-approved craftsmen construct the roof for a brick building (often with precast concrete slabs forming the intermediate floors, segmented along the non-bridging direction to be less customized).
We’re limited by the limits of our invention though. We can’t set the parameters and features to whatever we want, or we’d set them to “infinitely powerful” and “infinitely simple” - it doesn’t work like that however.
Well, depending on the value proposition, or the required goals, that’s not necessarily true. There are pros and cons to different approaches, and pretending there aren’t downsides to such a switch is problematic.
Yes, and it's why AI fills me with impending doom: handing over the reins to an AI that can deal with the bullshit for us means we will get stuck in a Groundhog Day scenario of waking up with the same shitty architecture for the foreseeable future. Automation is the opposite of plasticity.
Maybe if you fully hand over the reins and go watch YouTube all day.
LLMs allow us to do large but cheap experiments that we would never attempt otherwise. That includes new architectures. Automation in the traditional sense is opposite of plasticity (because it's optimizing and crystalizing around a very specific process), but what we're doing with LLMs isn't that. Every new request can be different. Experiments are more possible, not less. We don't have to tear down years of scaffolding like old automated systems. We just nudge it in a new direction.
I don’t think that will happen. It’s more like a 3d printer where you can feed in a new architecture and new design every day and it will create it. More flexibility instead of less.
Groundhog Day is optimistic, I think. It will be like "The Butterfly Effect": every attempt to fix the systems using the same dumb, rote solutions will make the next iteration of the architecture worse and more shitty.
When humans are in the loop everything pretty much becomes stochastic as well. What matters more is the error rate and result correctness. I think this shifts the focus towards test cases, measurement, and outcome.
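Roughly what that shift in focus can look like: run the stochastic step repeatedly against a deterministic oracle and measure the error rate over many trials, rather than expecting any single run to be perfect. A minimal Python sketch; generate_answer is a hypothetical stand-in for whatever human or LLM step is being measured.

    import random

    def generate_answer(x):
        # Hypothetical stand-in for a stochastic step (human or LLM):
        # usually correct, occasionally off by one.
        return x * 2 if random.random() > 0.05 else x * 2 + 1

    def oracle(x):
        # Deterministic ground truth for the test cases.
        return x * 2

    def measure_error_rate(cases, trials=100):
        errors = sum(
            generate_answer(x) != oracle(x)
            for x in cases
            for _ in range(trials)
        )
        return errors / (len(cases) * trials)

    print(f"error rate: {measure_error_rate(range(10)):.3f}")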
A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.
Everything we do is a stochastic process. If you throw a dart 100 times at a target, it's not going to land at the same spot every time. There is a great deal of uncertainty and non-deterministic behavior in our everyday actions.
> throw a dart ... great deal of uncertainty and non-deterministic behavior in our everyday actions.
Throwing a dart could not be further away from programming a computer. It's one of the most deterministic things we can do. If I write if(n>0) then the computer will execute my intent with 100% accuracy. It won't compare n to 0.005.
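To make that concrete, here is a trivial sketch (in Python rather than the C-style snippet above): the same comparison on the same input takes the same branch every single time, however many times you run it.

    def branch(n):
        # Exact, deterministic comparison against 0, never against 0.005.
        return "positive" if n > 0 else "non-positive"

    # A million runs on the same input never produce a different result.
    results = {branch(0.005) for _ in range(1_000_000)}
    assert results == {"positive"}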
You see arguments like yours a lot. It seems to be a way of saying "let's lower the bar for AI". But suppose I have a laser guided rifle that I rely on for my food and someone comes along with a bow and arrow and says "give it a chance, after all lots of things we do are inaccurate, like throwing darts for example". What would you answer?
As much as it’s true that there’s stochasticity involved in just about everything that we do, I’m not sure that that’s equivalent to everything we do being a stochastic process. With your dart example, a very significant amount of the stochasticity involved in the determination of where the dart lands is external to the human thrower. An expert human thrower could easily make it appear deterministic.
If we are talking in terms of IRL/physics, there is no such thing as a deterministic system outside of theory - everything is stochastic to differing degrees - including your brain that came up with these thoughts.
I think that both of you are right to some extent.
It’s undeniable that humans exhibit stochastic traits, but we’re obviously not stochastic processes in the same sense as LLMs and the like. We have agency, error-correction, and learning mechanisms that make us far more reliable.
In practice, humans (especially experts) have an apparent determinism despite all of the randomness involved (both internally and externally) in many of our actions.
stochastic vs deterministic is arguably a property of modelling, not reality.
Something so complex that we cannot model it as deterministic is hence stochastic. We can just as easily model a stochastic thing as deterministic by ignoring the stochastic parts.
Separating the subjective appearance of things from how we can conceptualise them as models raises a deeper philosophical question: how can you talk about the nature of things you cannot perceive?
Not interested in joining a pile-on, but I just wanted to point out how difficult reproducible builds are. I think there's still a bit of unpredictability in there, unless we go to extraordinary lengths (see also: software proofs).
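One concrete flavour of that unpredictability: plenty of common tools embed a timestamp in their output, so identical inputs still yield different artifacts unless every such knob is pinned down. A small Python sketch using gzip's header timestamp as a stand-in for the kind of thing that creeps into real build artifacts:

    import gzip
    import hashlib
    import time

    data = b"identical source, identical flags, identical everything"

    # gzip stores a timestamp in its header, so stamping each "build"
    # with the current time makes identical inputs hash differently.
    a = gzip.compress(data, mtime=time.time())
    time.sleep(1.1)
    b = gzip.compress(data, mtime=time.time())
    print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # False

    # Pinning the timestamp (one of many knobs reproducible builds have
    # to control) restores bit-for-bit determinism.
    a = gzip.compress(data, mtime=0)
    b = gzip.compress(data, mtime=0)
    print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # True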
This is very true for the most basic approaches to using stochastic agents for this purpose, especially with generalized agents and approaches.
It is possible to get much higher quality not just with oversight, but by aligning the stochastic agents so that they have no choice but to converge reliably towards the desired vector of work.
Human-in-the-loop AI is fine. I'm not sure that everything needs to be automated; it's entirely possible to get further and more reps in on a problem with the tool, as long as the human is the driver, using the stochastic agent as a thinking partner and not the other way around.
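For what it's worth, a rough sketch of what "no choice but to converge" can mean in practice: wrap the stochastic step in a deterministic acceptance gate and retry until the output passes, escalating to the human driver otherwise. propose_patch and run_checks below are hypothetical placeholders, not any particular tool's API.

    import random

    def propose_patch(task, feedback=None):
        # Hypothetical stand-in for a stochastic agent call; returns a
        # candidate that is only sometimes correct.
        return task * 2 if random.random() > 0.4 else task * 2 + 1

    def run_checks(task, candidate):
        # Deterministic acceptance gate: tests, type checks, linters, etc.
        ok = candidate == task * 2
        return ok, None if ok else "expected task * 2"

    def converge(task, max_attempts=10):
        feedback = None
        for attempt in range(1, max_attempts + 1):
            candidate = propose_patch(task, feedback)
            ok, feedback = run_checks(task, candidate)
            if ok:
                return candidate, attempt
        # Escalate to the human driver instead of shipping a bad result.
        raise RuntimeError(f"no passing candidate after {max_attempts} attempts")

    result, attempts = converge(21)
    print(f"accepted {result} after {attempts} attempt(s)")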
How big a dent do you think we could make if we poured $252 billion[0] just into paying down all our towers of tech debt and developing clean abstractions for all these known problems?
Nothing prevents stochastic agents from producing reliable, deterministic, and correct programs; it's literally what the agents are designed for. It's much less wasteful than me doing the same work, and much, much less wasteful than trying to find a framework for all frameworks.
Reduced mental load. When it's proven that a given input will always result in the same output, you don't have to verify the output. And you can just chain processes together without having to worry about time wasted because of deviations.
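A tiny sketch of that chaining idea: once each stage is a pure function (same input, same output every time), you can compose stages freely and only reason about the pipeline's overall contract instead of re-checking every intermediate result. The stage functions here are just illustrative placeholders.

    from functools import reduce

    # Each stage is pure and deterministic: same input, same output, no hidden state.
    def normalize(text):
        return " ".join(text.split()).lower()

    def tokenize(text):
        return text.split(" ")

    def count(tokens):
        counts = {}
        for t in tokens:
            counts[t] = counts.get(t, 0) + 1
        return counts

    def chain(*stages):
        # Compose stages left to right; because every stage is deterministic,
        # the whole pipeline is too, and intermediate results need no re-checking.
        return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

    pipeline = chain(normalize, tokenize, count)
    assert pipeline("To be  OR not to be") == pipeline("to be or not to be")
    print(pipeline("to be or not to be"))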
Good point. Non-determinism is not fundamentally problematic on many levels. What is important is that the essential behavioral invariants of the systems are maintained.