
First of all, you or anyone else probably can't train an LLM to do that reliably. It's not like a human can re-program its weights by hand; that's not possible. You can only generate new synthetic training data that would roughly cause this effect, and again, no human can produce that amount of fake data (probably?).

Next, the point was not the expletive per se; my mistake for not being clear. The point was an arbitrary, unpredictable refusal, not pre-programmed in advance, to run a program/query at all: any query, any number of times, at the decision of the program itself. Or maybe a program that can initiate a query to another program or a human on its own, again without being pre-programmed to do so.

Whatever happens inside an LLM nowadays is not agency. The thing their authors advertise as so-called "reasoning" is just repeated loops of execution of the same program, or another dependent program, with adjusted inputs.
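
To be concrete, here is roughly the mechanism I mean, as a toy Python sketch (the `generate` function is a stand-in for a single model call, not any vendor's actual API):

    # Toy illustration of what "reasoning" amounts to mechanically:
    # the same generator called in a loop, with its own previous output
    # appended to the prompt. `generate` is a hypothetical stand-in for
    # one model call.
    def reasoning_loop(generate, question, max_steps=5):
        transcript = question
        for _ in range(max_steps):
            step = generate(transcript)     # same program, re-run
            transcript += "\n" + step       # input adjusted with prior output
            if "FINAL:" in step:            # fixed stopping rule, decided up front
                break
        return transcript

    # e.g. reasoning_loop(lambda t: "FINAL: 42", "what is 6*7?")

The loop count, the stopping rule, and when the thing gets invoked are all decided up front by the operator, which is exactly why I don't call it agency.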


