For an example of a rational actor exhibiting instrumental convergence without trying to live forever, see Mr. Meeseeks from the Rick and Morty series.
(I hope this counterexample will suffice. I don't know how to explain where you're going wrong without saying "read all the books I've read" – but hopefully you can figure it out yourself. If not, reading David Hume's A Treatise of Human Nature might help. (Disclaimer: I generally think David Hume is so wrong as to be not worth reading.))
I can hardly remember what happens in the episode, but I don't think anyone says all agents would try to live forever.
What I am saying is that, generally, an agent should try to live forever unless that motivation is otherwise outweighed.
Yes, I know, "the slave of the passions" and all, but unless an agent's desires are very strange indeed (which would probably take successful AI alignment), it would be quite common for pursuing immortality to be rational.
Oh, your understanding of philosophy is from LessWrong? That makes more sense. The LessWrong conception of rationality as "effective goal maximisation" is not standard outside that sphere of influence, but if we use the LessWrong dictionary, then yes, you're correct. https://www.readthesequences.com/Disputing-Definitions
> What I am saying is that, generally, an agent should try to live forever
No, you're saying that an agent will try to live forever, with caveats. You're saying nothing about what should be the case. (Seriously: when I recommended you read that book, it wasn't just me pointing at the pop-culture ten-word summary of it. That book is about this.)
Being able to say the word "should" is something I don't want to give up. You can say it's just a matter of semantics and that I should admit to being a nihilist, but I think the use of language is important, because it's how people get things done. I say "should" because I want to be able to judge an agent on how good it is on its own terms. For example, if someone tells me their dream for the future, I want to be able to tell them they "should" work out a reasonable plan, assuming they haven't done so already.
I haven't read the sequences (or "Rationality: From AI to Zombies", as I think the collected version is called). I didn't get my definition from LessWrong; I think it comes partly from Bruno de Finetti and partly from the old AI literature, which I picked up through the standard recommended books.
> I say "should" because I want to be able to judge an agent on how good it is on its own terms.
Again, not a standard use of language, but unobjectionable once it's explained. (That's also how I like to use the word, but it's not often understood by those around me.)
> I haven't read the sequences
Oh, then pretend I referenced later Wittgenstein instead. (He made the point better.)