Hacker News

At that point we're basically back to the "AI is just nested if-else expressions" story. The only difference is that now there is a language model on top that understands the semantics of your language. But actors (or agents, in LangChain lingo) are just if-else: the tools you connect them to must be developed separately.
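A minimal sketch of that view, with made-up tool names and assuming the model has already emitted an `(action, arg)` pair; everything below the model call is plain branching:

```python
# Hypothetical sketch: an "agent" reduced to if-else dispatch.
# The tool functions are stand-ins for separately developed code.

def search_web(query):
    # Toy tool; a real one would call a search API.
    return f"results for {query!r}"

def run_calculator(expr):
    # Toy tool; eval is for illustration only, never use on untrusted input.
    return str(eval(expr))

def agent_step(action, arg):
    # The LLM only chose (action, arg); the routing itself is ordinary if-else.
    if action == "search":
        return search_web(arg)
    elif action == "calculate":
        return run_calculator(arg)
    else:
        return "unknown action"

print(agent_step("calculate", "2 + 3"))  # -> 5
```

The point of the sketch is that nothing between the model's choice and the tool's execution involves the model: it is conventional control flow.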


Sure, you could also say that human language/action capacity is just a biological LLM with some ifs on top that give it access to actions.

In the case you describe, you can have an LLM write the tools.

Yes, the first tools and bridge code might need to be manually built. But after that it could be LLMs all the way down.
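One way to picture "LLMs all the way down": the bridge code is written once by hand, but new tools are model-generated source registered at runtime. `llm_generate` here is a placeholder for a real model call, and the tool spec is invented for illustration:

```python
# Hypothetical sketch: hand-written bridge code that registers
# LLM-generated tools at runtime.

TOOLS = {}

def llm_generate(spec):
    # Stand-in for an actual model call that would write code from `spec`.
    return (
        "def tool(x):\n"
        "    return x * 2\n"
    )

def register_tool(name, spec):
    source = llm_generate(spec)
    namespace = {}
    # Executing model output verbatim; a real system would sandbox this.
    exec(source, namespace)
    TOOLS[name] = namespace["tool"]

register_tool("double", "double the input number")
print(TOOLS["double"](21))  # -> 42
```

The hand-written part (the registry and `exec` bridge) never changes; only the generated tool bodies do, which is the bootstrapping step the comment describes.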

Kind of similar to writing a new programming language: at first you write the compiler in another language, but after compiling it for the first time, you can write the language in the language itself.


Very good point. Once you start breaking an LLM down into presets/delegators, you basically reintroduce if-else, with all the problems of that split: lack of visibility, local vs. global optimization, lack of control and predictability, asymmetry of information. I wonder if the current agents approach is just a stopgap.



