Hacker News

llama.cpp already supports Vulkan, and that is where all the hard work lives. Ollama hardly does anything on top of it to support Vulkan: you just check whether the Vulkan libraries are available and query the available VRAM. That is all. It is very simple.
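The "check if the libraries are available" step can be sketched as a runtime probe for the Vulkan loader. This is just an illustrative sketch, not Ollama's actual code (Ollama is written in Go and would do this via cgo or dlopen); the candidate library names are common defaults, not something from the comment above:

```python
import ctypes
import ctypes.util

def vulkan_available() -> bool:
    """Return True if a Vulkan loader library can be loaded at runtime.

    The wrapper does not need to drive Vulkan itself -- llama.cpp does
    the actual compute. It only needs to know whether the loader is
    present before enabling the Vulkan backend.
    """
    # Common loader names on Linux ("vulkan", "libvulkan.so.1") and
    # Windows ("vulkan-1"); adjust for your platform. These names are
    # assumptions, not taken from Ollama's source.
    candidates = ["vulkan-1", "vulkan", "libvulkan.so.1"]
    for name in candidates:
        path = ctypes.util.find_library(name)
        try:
            ctypes.CDLL(path or name)
            return True
        except OSError:
            continue
    return False

print(vulkan_available())
```

Querying VRAM would be the next step, via `vkEnumeratePhysicalDevices` and `vkGetPhysicalDeviceMemoryProperties`, but that requires creating a Vulkan instance first, so it is omitted here.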

