
My mental model is a bit different:

Context -> Attention Span

Model weights/Inference -> System 1 thinking (intuition)

Computer memory (files) -> Long term memory

Chain of thought/Reasoning -> System 2 thinking

Prompts/Tool Output -> Sensing

Tool Use -> Actuation

System 2 performance depends heavily on System 1 having the right intuitive models for effective problem solving via tool use. Tools are also what load long-term memories into attention.
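A toy sketch of how this mapping might look as an agent loop. Everything here is illustrative, not a real framework: call_llm and run_tool are stubs standing in for actual inference and file access, and the loop just shows which part of the analogy each step plays.

  # Toy sketch of the analogy; all names are hypothetical stubs.
  from dataclasses import dataclass

  MAX_CONTEXT_ITEMS = 32  # context window ~ attention span

  @dataclass
  class Step:
      reasoning: str           # chain of thought ~ System 2
      tool_call: str | None    # tool use ~ actuation
      done: bool = False
      answer: str = ""

  def call_llm(context: list[str]) -> Step:
      """System 1: one pass over the context -- fast, weight-driven intuition."""
      # Stub: a real model would decide from `context` what to do next.
      if any("notes.txt" in c for c in context):
          return Step("I have the notes, answering.", None, done=True, answer="42")
      return Step("I should check my notes.", "read:notes.txt")

  def run_tool(call: str) -> str:
      """Tools bridge to files (long-term memory) and the outside world."""
      return "notes.txt: the answer is 42"  # stub for a real file read

  def agent_loop(task: str) -> str:
      context = [task]                       # prompt ~ sensing
      while True:
          step = call_llm(context)           # System 1 intuition
          context.append(step.reasoning)     # reasoning trace ~ System 2
          if step.done:
              return step.answer
          if step.tool_call:
              context.append(run_tool(step.tool_call))  # memory -> attention
          context = context[-MAX_CONTEXT_ITEMS:]        # attention span is finite

  print(agent_loop("What is the answer?"))  # -> 42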



Very cool, good way to think about it. I wouldn’t be surprised if non-AGI LLMs help write the code to augment themselves into AGI.

The unreasonable effectiveness of deep learning was a surprise. We don’t know what the future surprises will be.


I like this mental model. Orchestration/agents, with smaller models determining the ideal tool input and checking the output, start to look like delegation.
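One way that delegation pattern could look, as a minimal sketch: a cheap model gatekeeps a tool call proposed by a larger model, validating the input before the call and the output after. The function names and the trivial check are assumptions for illustration only.

  # Hypothetical delegation sketch; not a real orchestration API.

  def small_model_check(text: str) -> bool:
      """Stub for a cheap model validating a tool input or output."""
      return bool(text.strip())  # real check: schema, safety, relevance

  def delegated_tool_call(tool, args: str) -> str:
      if not small_model_check(args):       # validate input before acting
          raise ValueError("rejected tool input")
      output = tool(args)
      if not small_model_check(output):     # validate output before trusting it
          raise ValueError("rejected tool output")
      return output

  # Usage: the large model proposes the call, the small model gatekeeps it.
  print(delegated_tool_call(lambda q: f"results for {q}", "weather in Oslo"))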



