How to write code so that LLMs can extend it (grbsh.substack.com)
10 points by grbsh on July 16, 2023 | hide | past | favorite | 3 comments


Good article. It should at least have mentioned Claude 2, which has a 100k context window.

This is an example of why "mastering" a fast-moving technology doesn't hold its value the way mastering an old-fashioned skill in a slower-changing field does.


Good point! I haven't been able to play around with Claude 2, but I have with GPT4-32k. To me, 32k sounded like a lot, but in practice it fit only a few of the dozen or so files I was interested in working with. I still had to splice in only the most relevant context. I think we're going to have a lot of the same problems until context lengths get several orders of magnitude longer.
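The splicing the commenter describes can be sketched as a greedy selection of files under a token budget. This is a hypothetical illustration, not anything from the article: the relevance scores, the ~1.33 tokens-per-word heuristic, and the function names are all assumptions (a real setup would use the model's actual tokenizer).

```python
# Hypothetical sketch: splice only the most relevant files into a
# fixed context window (e.g. ~32k tokens), greedily, by relevance score.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~1.33 tokens per English word; a real
    # implementation would count with the model's tokenizer.
    return int(len(text.split()) * 1.33)

def splice_context(files: dict[str, str], scores: dict[str, float],
                   budget: int = 32_000) -> list[str]:
    """Pick files in descending relevance until the token budget is spent."""
    chosen, used = [], 0
    for name in sorted(files, key=lambda n: scores.get(n, 0.0), reverse=True):
        cost = estimate_tokens(files[name])
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen
```

With a dozen source files and a 32k budget, this kind of greedy cut is exactly why only "a few" files make it in: each nontrivial file costs thousands of tokens.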


Several orders of magnitude longer would mean something like at least four orders of magnitude.

So you are implying that it doesn't matter much until the context length is around 1 billion tokens, or roughly 1.5 million pages.

That's a ludicrous statement.
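The arithmetic behind the exchange above can be checked back-of-the-envelope. The conversion ratios here are assumptions (~0.75 words per token, ~500 words per printed page); the 100k starting point is Claude 2's window from the first comment.

```python
# Back-of-the-envelope check: scale Claude 2's 100k-token window
# by four orders of magnitude, then convert tokens to pages.

base_context = 100_000          # tokens (Claude 2's context window)
scaled = base_context * 10**4   # four orders of magnitude -> 1 billion tokens
words = scaled * 0.75           # assumed ~0.75 words per token
pages = words / 500             # assumed ~500 words per printed page

print(scaled, int(pages))       # -> 1000000000 1500000
```

At these ratios, 1 billion tokens works out to about 1.5 million pages, not 1600; 1600 pages is closer to a 1-million-token window.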




