Hacker News

Pricing of this model seems lower per token, but you have to send the entire conversation each time, and the tokens you are billed for include both those you send and the API's response (which you are likely to append to the conversation and send back, getting billed again and again as the conversation progresses). By the time you've hit the 4K token limit of this API, there will have been a bunch of back and forth - you'll have paid a lot more than 4K * $0.002/1K for the conversation.
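A minimal sketch of how this compounds, assuming the full history is resent on every call and both prompt and completion tokens are billed at the same rate (the turn sizes below are made-up illustrative numbers, not anything from the API docs):

```python
# Hypothetical sketch of cumulative billing when the whole conversation
# is resent each turn. Turn sizes are illustrative, not real usage data.
PRICE_PER_1K = 0.002  # $ per 1K tokens, prompt and completion alike

def conversation_cost(turns):
    """turns: list of (user_tokens, assistant_tokens) per exchange."""
    history = 0
    billed = 0
    for user_tok, assistant_tok in turns:
        prompt = history + user_tok       # the entire history is resent
        billed += prompt + assistant_tok  # both directions are billed
        history = prompt + assistant_tok  # response appended for next turn
    return billed, billed / 1000 * PRICE_PER_1K

# Five exchanges of ~400 user / ~400 assistant tokens each:
tokens, cost = conversation_cost([(400, 400)] * 5)
print(tokens, cost)  # 12000 tokens billed, $0.024
```

With these numbers the final history is 4,000 tokens (right at the context limit), but you've been billed for 12,000 tokens over the conversation - three times what a single 4K call would cost.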


You're right. And this is critical for large text (summarization, complex prompting, etc.). That's why I'll continue to use text-davinci-xxx for my project.


But it seems davinci follows the same format for chat continuation.



