
When LLMs popped up and people started saying "this is just a Markov chain on steroids, not thinking," I was a bit confused, because a lot of my own "thinking" is statistical too: I very often try to solve a problem by swapping in a different "probable" variant of a known solution (tweaking a parameter).

LLMs operate in higher dimensions (they map tokens into a grammatical and semantic space). It might not be thinking, but it seems on its way; maybe we're just thinking with more abstractions before producing speech? ... dunno
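
For context, here's a minimal sketch of the "Markov chain" being compared against: a toy bigram model in Python (the corpus and function names are just illustrative), where the next word is sampled purely from counts of what followed the current word:

    import random
    from collections import defaultdict

    def train_bigram(text):
        """Count which words follow each word in the training text."""
        follows = defaultdict(list)
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            follows[cur].append(nxt)
        return follows

    def generate(follows, start, length=10):
        """Sample a chain: each next word depends only on the current one."""
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train_bigram(corpus), "the"))
    # e.g. "the cat sat on the rug" -- locally plausible, no global plan

An LLM, by contrast, conditions on the whole preceding context through learned embeddings, so its "probable variant" is picked in that higher-dimensional grammatical/semantic space rather than from raw word counts.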


