
Here's a blog post I wrote last week on the same topic: https://blog.oumi.ai/p/small-fine-tuned-models-are-all-you

I discuss "LoRA Land", a large-scale empirical study of fine-tuning 7B models to outperform GPT-4, and in the discussion section I make the case for the return of fine-tuning, i.e., what has changed in the past six months.
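For readers unfamiliar with the technique the post is built on: LoRA freezes the pretrained weight matrix and learns a low-rank additive update instead. A minimal NumPy sketch of that idea (the dimensions, rank, and scaling below are illustrative assumptions, not numbers from the study):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass with a LoRA adapter: x @ W + (alpha/r) * x @ A @ B.

    W (d_in x d_out) stays frozen; only the small matrices
    A (d_in x r) and B (r x d_out), with r << d_in, are trained.
    """
    return x @ W + (alpha / r) * (x @ A @ B)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8
x = rng.normal(size=(1, d_in))
W = rng.normal(size=(d_in, d_out))     # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01  # trainable, small random init
B = np.zeros((r, d_out))               # zero init: adapter starts as a no-op

y = lora_forward(x, W, A, B, r=r)
# With B initialized to zero, the adapted output equals the base output.
assert np.allclose(y, x @ W)
```

The parameter savings are the point: here the full matrix has 64 * 64 = 4096 entries, while A and B together have 8 * (64 + 64) = 1024 trainable entries, and the gap widens rapidly at 7B scale.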



insightful, thanks



