
> Since an LLM has no sense of self or instances

While I'd be surprised to learn they have anything a normal person would call a sense of self, it would only be mild surprise, and even then mainly because it would mean we finally have a testable definition. (Amongst other things, I don't buy that the mirror test is a good test; rather, I think it's an OK first attempt at one.)

We're really bad at this.

> In a way, doesn't it already "talk to itself" when generating sentences, e.g., its output token gets added to the input tokens successively?

I'm not sure whether that counts as talking to itself. I think I tend to form complete ideas first and then turn them into words, which I may edit afterwards; but is that editing process "talking to myself"?
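
For what it's worth, the loop being described is just autoregressive decoding: each generated token is appended to the context before the next one is predicted. A minimal sketch, with a toy stand-in for a real model (the names here are mine, not any particular library's):

    import random

    def model(tokens):
        # A real LLM would return logits over a vocabulary given the
        # whole context; this toy just picks a random token id.
        return random.randrange(100)

    tokens = [1, 2, 3]             # the prompt, already tokenized
    for _ in range(10):            # generate ten new tokens
        next_token = model(tokens)
        tokens.append(next_token)  # each output token joins the input

Whether feeding your own output back in as input counts as "talking to yourself" is exactly the question.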

And this might well be one kind of "sense of self". Possibly.


