> So did you run the model offline on your own computer and get realtime audio?

The README of the GitHub repository links to a few hosted demos at the top, where you can try the model without running it yourself.

> It seems it needs around a $2,500 GPU

You can get a used RTX 3090 for about $700; it has the same 24 GB of VRAM as the RTX 4090 mentioned in your ChatGPT response.

But as far as I can tell, no quantized inference implementation exists for this model yet, so for now you need the full-precision VRAM footprint.
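Quantization would cut the weight memory roughly in half at 8-bit, or to about a quarter at 4-bit. If support ever lands, a minimal sketch of 4-bit loading with Hugging Face transformers plus bitsandbytes would look something like this; the model ID is a placeholder, not the actual repository:

    # Hypothetical 4-bit quantized loading via transformers + bitsandbytes.
    # "some-org/some-model" is a placeholder, not this model's actual repo.
    import torch
    from transformers import AutoModel, BitsAndBytesConfig

    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,  # compute in fp16, store weights in 4-bit
    )
    model = AutoModel.from_pretrained(
        "some-org/some-model",
        quantization_config=quant_config,
        device_map="auto",  # requires the accelerate package
    )

With 4-bit weights, a model that needs ~24 GB in fp16 shrinks to very roughly 6-8 GB including overhead, i.e. well within consumer-GPU territory.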
