Because bash is everywhere. Stability is a separate concern. And we know this because LLMs routinely generate deprecated code for libraries that change a lot.
I've been working with the shell long enough that I know just by looking at it.
Anyway, it was rhetorical. I was making a point about portability. Scripts we write today run even on ancient versions, and that compatibility has been an effort sustained by lots of different interpreters (not only bash).
I'm trying to give sane advice here. Re-implementing bash is a herculean task, and some "small incompatibilities" sometimes reveal themselves as deep architectural dead-ends.
The issue here is not language; it's a basic understanding of how LLMs are trained, how agents act on that training, and what role the shell plays from a systems perspective.
I can't have a meaningful conversation with someone who doesn't fully grasp those, no matter the language.
My work machine is Win11 and the new Notepad is hilariously buggy. I've repeatedly encountered bugs where the screen fails to paint, it takes multiple seconds to load, it flatly refuses to open files above a certain size, etc.
Notepad was never fancy, but it was a reliable tool to strip formatting or take a quick note, and now I cannot even count on that.
They don't care. Their sales reps absolutely know that if you are using Microsoft products it is because you are locked in so deeply that escape is nearly impossible.
I like CUE a lot. We use it pretty heavily for schema enforcement of CRDs. That being said, it is pretty complex and learning to use it was anything but straightforward.
Let's put it this way: no engineer is choosing to use Bitbucket. You use it because some SVP made the mistake of choosing Atlassian software a decade ago and refuses to change.
Yes. poetry & pyenv were already a big improvement, but now uv wraps everything up, and additionally makes "temporary environments" possible (e.g. `uv run --with notebook jupyter-notebook` to run a notebook with my project dependencies)
This is it. Later versions of Python (3.11/3.12/3.13) have significant improvements and differences. Being able to seamlessly test/switch between them is a big QOL improvement.
I don't love that uv is basically tied to a for-profit company, Astral. I think such core tooling should be tied to the PSF, but that's a minor point. It's partly the issue I have with Conda too.
> Later versions of Python (3.11/3.12/3.13) have significant improvements and differences. Being able to seamlessly test/switch between them is a big QOL improvement.
I just... build from source and make virtual environments based off them as necessary. Although I don't really understand why you'd want to keep older patch versions around. (The Windows installers don't even accommodate that, IIRC.) And I can't say I've noticed any of those "significant improvements and differences" between patch versions ever mattering to my own projects.
> I don't love that uv is basically tied to a for-profit company, Astral. I think such core tooling should be tied to the PSF, but that's a minor point. It's partly the issue I have with Conda too.
In my book, the less under the PSF's control, the better. The meager funding they do receive now is mostly directed towards making PyCon happen (the main one; others like PyCon Africa get a pittance), towards certain grants, and towards a short list of paid staff who are, generally speaking, board members and other decision makers rather than the people actually developing Python. Even without considering "politics" (cf. the latest news turning down a grant for ideological reasons) I consider this gross mismanagement.
I think the big difference is that these aren't AI generated bug reports. They are bugs found with the assistance of AI tools that were then properly vetted and reported in a responsible way by a real person.
From what I understand some of the bugs were in code the AI made up on the spot, and other bug reports had example code that didn't even interact with curl. These things should be relatively easy for a human to verify: just do a text search in the curl source to see if the AI output matches anything.
Hard-to-compute, easy-to-verify problems should be exactly where AI excels. So why do so many AI users insist on skipping the verify step?
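The verify step described above can be sketched in a few lines of shell. The file and identifier names here are invented for the demo; in practice you'd grep a real checkout of the curl source tree for the functions the report claims to patch:

```shell
# Set up a toy "project tree" standing in for a real source checkout.
mkdir -p demo/lib
printf 'int Curl_real_function(void) { return 0; }\n' > demo/lib/real.c

# An identifier from the report that actually exists in the tree --
# the report is at least talking about real code, worth a closer look:
grep -rl "Curl_real_function" demo && echo "found: report references real code"

# An identifier the model invented -- no match, likely hallucinated:
grep -rl "Curl_made_up_function" demo || echo "no match: likely hallucinated"
```

A minute of grepping like this before submitting is exactly the cheap human vetting the parent comments are asking for.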
The issue I keep seeing with curl and other projects is that people are using AI tools to generate bug reports and submitting them without understanding (that's the vetting) the report. Because it's so easy to do this and it takes time to filter out bug report slop from analyzed and verified reports, it's pissing people off. There's a significant asymmetry involved.
Until all AI used to generate security reports on other peoples' projects is able to do it with vanishingly small wasted time, it's pretty assholeish to do it without vetting.
That's a bit uncalled for. This is a game made by someone shaped by their perspective on the world. It can be appreciated as such without applying your own additional intent.