Hacker News
Ask HN: Has any one tried fine tuning GPT 3.5
6 points by ashu1461 on Sept 26, 2023 | hide | past | favorite | 2 comments
How was your experience? Did you end up saving costs or spending more?

Are there good resources where people have documented the pros and cons ?



I noticed this really depends on how much data you provide, but it's still cheaper than running llama2 yourself.


> but it’s still cheaper than doing llama2 yourself

How do you calculate this? Unless you're factoring in acquiring the hardware, you can usually outsource the training of llama2 to rented hardware and then run the model on owned hardware. With lots of executions, using llama2 locally should most definitely be cheaper in the medium to long term than paying for both the training and execution of a fine-tuned GPT-3.5.
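The trade-off above can be sketched as a simple break-even calculation. All prices here are illustrative assumptions (not actual OpenAI or cloud-GPU rates), and the function names are hypothetical; the point is only that the hosted model's cost scales with usage while the self-hosted one is dominated by up-front cost:

```python
# Hypothetical break-even sketch. Every dollar figure below is an
# illustrative assumption, not a real price quote.

def hosted_ft_cost(n_requests, tokens_per_request, price_per_1k_tokens, training_cost):
    """Total cost of a hosted fine-tuned model: one-time training fee
    plus a per-token charge on every request."""
    return training_cost + n_requests * tokens_per_request / 1000 * price_per_1k_tokens

def local_llama_cost(rented_training_cost, hardware_cost):
    """Total cost of fine-tuning llama2 on rented GPUs, then serving on
    owned hardware. Marginal cost per request is treated as ~0
    (electricity and maintenance ignored for simplicity)."""
    return rented_training_cost + hardware_cost

# Assumed numbers: $10 fine-tune fee, $0.012 per 1K tokens, 500 tokens
# per request, versus $50 of rented training time plus a $2,000 GPU box.
for n in (10_000, 100_000, 1_000_000):
    hosted = hosted_ft_cost(n, 500, 0.012, 10)
    local = local_llama_cost(50, 2000)
    print(f"{n:>9} requests: hosted ${hosted:,.0f} vs local ${local:,.0f}")
```

Under these assumed numbers the hosted model is far cheaper at 10K requests, still cheaper at 100K, and the local setup wins by around 1M requests, which is the "medium to long term" argument in a nutshell.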



