Hacker News | athorax's comments

Why do you think there is a lot of training data? Could it be because it's stable and virtually unchanged for decades? Hmmm.

Because bash is everywhere. Stability is a separate concern. And we know this because LLMs routinely generate deprecated code for libraries that change a lot.

This project runs on all shells, totally portable:

https://github.com/alganet/coral

busybox, bash, zsh, dash, you name it. If it smells Bourne, it runs. Here's the list: https://github.com/alganet/coral/blob/main/test/matrix#L50 (more than 20 years of compatibility; it runs even on bash 3)

It's a great litmus test, and many shells have passed it. Let me know when just-bash is able to run it.


I have no connection to coral or just-bash. Why don't you do it yourself and let us know, since you are familiar with it?

I've been working with the shell long enough that I can tell just by looking at it.

Anyway, it was rhetorical. I was making a point about portability. Scripts we write today run even on ancient versions, and that compatibility has been an effort sustained by lots of different interpreters (not only bash).

I'm trying to give sane advice here. Re-implementing bash is a herculean task, and some "small incompatibilities" sometimes reveal themselves as deep architectural dead-ends.


The project does not list portability as a concern. It's for agent use; they are not trying to re-use existing bash code.

Before, you said:

> they use it because there's a lot of training material.

Now, you say:

> they are not trying to re-use existing bash code.

Can't you see how this is a contradiction?

---

I'm sorry, I can't continue like this. I want to have meaningful conversations.


Is English your second language? "They" refers to very different things here.

The issue here is not language; it's a basic understanding of how LLMs are trained, how agents act on that training, and what role the shell plays from a systems perspective.

I can't have a meaningful conversation with someone who doesn't fully grasp those, no matter the language.


It's like they are trying to do the opposite of the Unix philosophy: do many things, very poorly.

Why’s this poor?

My work machine is Win11, and the new Notepad is hilariously buggy. I've repeatedly encountered bugs where the screen fails to paint, the app takes multiple seconds to load, it hard-refuses to open files over a certain size, etc.

Notepad was never fancy, but it was a reliable tool to strip formatting or take a quick note, and now I cannot even count on that.


They've rewritten it in React?

They don't care. Their sales reps know perfectly well that if you are using Microsoft products, it is because you are locked in so deeply that escape is nearly impossible.


I like CUE a lot. We use it pretty heavily for schema enforcement of CRDs. That being said, it is pretty complex, and learning to use it was anything but straightforward.

For more basic configs, I would potentially look into KCL https://www.kcl-lang.io/

It has a much simpler usage overall especially if you are only really trying to enforce some config rules.

The other alternative is to just use whatever language you are writing your software in and build a basic validator.
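For what it's worth, a hand-rolled validator can be very small. A minimal sketch in Python (the field names here are made up for illustration, not from any real config):

```python
# Minimal hand-rolled config validator: check required keys and types
# instead of pulling in a separate configuration language.

REQUIRED = {"name": str, "replicas": int, "port": int}

def validate(config: dict) -> list[str]:
    """Return a list of human-readable errors; an empty list means valid."""
    errors = []
    for key, expected in REQUIRED.items():
        if key not in config:
            errors.append(f"missing required key: {key}")
        elif not isinstance(config[key], expected):
            errors.append(f"{key}: expected {expected.__name__}, "
                          f"got {type(config[key]).__name__}")
    # Example of a rule beyond plain typing
    if isinstance(config.get("replicas"), int) and config["replicas"] < 1:
        errors.append("replicas must be >= 1")
    return errors
```

Collecting errors instead of raising on the first one makes the output friendlier for whoever edits the config.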


Let's put it this way: no engineer is choosing to use Bitbucket. You use it because some SVP made the mistake of choosing Atlassian software a decade ago and refuses to change.


For me, the biggest value of uv was replacing pyenv for managing multiple versions of Python. So uv replaced pyenv + pyenv-virtualenv + pip.


Yes. poetry & pyenv were already a big improvement, but now uv wraps everything up and additionally makes "temporary environments" possible (e.g. `uv run --with notebook jupyter-notebook` to run a notebook with my project dependencies).

Wonderful project


This is it. Later versions of Python (3.11/3.12/3.13) have significant improvements and differences. Being able to seamlessly test and switch between them is a big QOL improvement.

I don't love that uv is basically tied to a for-profit company, Astral. I think such core tooling should be tied to the PSF, but that's a minor point. It's partly the issue I have with Conda too.


> Later versions of Python (3.11/3.12/3.13) have significant improvements and differences. Being able to seamlessly test and switch between them is a big QOL improvement.

I just... build from source and make virtual environments based off them as necessary. Although I don't really understand why you'd want to keep older patch versions around. (The Windows installers don't even accommodate that, IIRC.) And I can't say I've noticed any of those "significant improvements and differences" between patch versions ever mattering to my own projects.

> I don't love that UV is basically tied to a for profit company, Astral. I think such core tooling should be tied to the PSF, but that's a minor point. It's partially the issue I have with Conda too.

In my book, the less under the PSF's control, the better. The meager funding they do receive now is mostly directed towards making PyCon happen (the main one; others like PyCon Africa get a pittance), towards certain grants, and towards a short list of paid staff who are, generally speaking, board members and other decision-makers, not the people actually developing Python. Even without considering "politics" (cf. the latest news about turning down a grant for ideological reasons), I consider this gross mismanagement.


> I think such core tooling should be tied to the PSF, but that's a minor point.

The PSF is busy with social issues and doesn't concern itself with trivia like this.


Didn't Astral get created out of uv (and other tools), though? Isn't it fair for the creators to try to turn it into a sustainable job?

Edit: or was it ruff? Either way. I thought they created the tools first, then the company.


With uvx it also replaces pipx.


I am confused about this as well; they list polyglot teams[0] as their top use case and consider not needing schema files a feature.

[0] https://fory.apache.org/blog/2025/10/29/fory_rust_versatile_...


I think the big difference is that these aren't AI generated bug reports. They are bugs found with the assistance of AI tools that were then properly vetted and reported in a responsible way by a real person.


Basically using AI the way we have used linters and other static analysis tools, rather than thinking it's magic and blindly accepting its output.


In the defense of the language models, the bugs were written by humans in the first place. Human vetting is not much of a defense.


From what I understand, some of the bugs were in code the AI made up on the spot; other bug reports had example code that didn't even interact with curl. These things should be relatively easy to verify by a human: just do a text search in the curl source to see if the AI output matches anything.

Hard-to-compute, easy-to-verify problems should be exactly where AI excels. So why do so many AI users insist on skipping the verify step?


> Human vetting is not much of a defense.

The issue I keep seeing with curl and other projects is that people are using AI tools to generate bug reports and submitting them without understanding (that's the vetting) the report. Because it's so easy to do this and it takes time to filter out bug report slop from analyzed and verified reports, it's pissing people off. There's a significant asymmetry involved.

Until all AI used to generate security reports on other peoples' projects is able to do it with vanishingly small wasted time, it's pretty assholeish to do it without vetting.


That's a bit uncalled for. This is a game made by someone shaped by their perspective on the world. It can be appreciated as such without applying your own additional intent to it.


$10/year for 10,000 messages is a tenth of a penny per message.
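The arithmetic is easy to double-check:

```python
annual_cost_dollars = 10.00   # $10/year
messages_per_year = 10_000
cost_cents = annual_cost_dollars / messages_per_year * 100
# 10 / 10,000 = $0.001 per message = 0.1 cents, i.e. a tenth of a penny
```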

