Thanks for checking out our work! Yeah, great point, this is even more critical in scenarios where you have very limited memory. You can play around with different context sizes using the code on GitHub: https://github.com/cpacker/MemGPT/blob/main/memgpt/constants...
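
To give a rough idea of the kind of knob that file exposes, here's a quick sketch (names and values are illustrative, not necessarily the actual constants in MemGPT's constants.py):

    # Illustrative sketch only -- see memgpt/constants.py for the real values.
    # Per-model context window budgets, in tokens (hypothetical numbers).
    CONTEXT_WINDOWS = {
        "gpt-4": 8192,
        "gpt-3.5-turbo": 4096,
    }

    def context_budget(model: str, override: int | None = None) -> int:
        """Return the token budget for `model`, optionally capped to
        simulate a smaller-memory setting."""
        window = CONTEXT_WINDOWS.get(model, 4096)
        return min(window, override) if override is not None else window

    # e.g. pretend the model only has a 2k window to stress-test memory management
    print(context_budget("gpt-4", override=2048))

Shrinking the budget like this is an easy way to see how quickly the agent has to start paging things out of the context window.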