
“Caniuse” equivalent for LLMs depending on machine specs would be extremely useful!


There are too many variables at play, unfortunately.

One can run local LLMs even on a Raspberry Pi, although it will be horribly slow.
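
For a back-of-the-envelope sense of "horribly slow": at batch size 1, token generation is roughly memory-bandwidth bound, so tokens/sec can't exceed memory bandwidth divided by the bytes read per token (about the size of the quantized model for a dense model). A tiny sketch with assumed ballpark figures, not measurements:

    # Generation at batch size 1 is roughly memory-bandwidth bound: each token
    # requires streaming (approximately) the whole model from memory, so
    # tok/s <= bandwidth / model size. Ballpark figures, not measurements.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    print(max_tokens_per_sec(17, 4))    # Raspberry Pi 5-class LPDDR4X: ~4 tok/s ceiling
    print(max_tokens_per_sec(200, 4))   # M2 Pro-class unified memory: ~50 tok/s ceiling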


Maybe it wouldn’t be an algorithm. If there’s no way to calculate it, maybe it would be a reporting site where you can share your experience on your hardware.


The LocalLLaMA subreddit usually has some interesting benchmarks and reports.

Here is one example, testing performance of different GPUs and Macs with various flavours of Llama:

https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...
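
If you want a number like that for your own machine, a minimal tokens-per-second measurement is easy to script. A sketch using the llama-cpp-python bindings; the model path and settings are placeholders, and this is not the methodology of the linked repo:

    # Rough tokens/sec measurement with llama-cpp-python (pip install llama-cpp-python).
    # Model path and parameters are placeholders, not a recommendation.
    import time
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
        n_gpu_layers=-1,   # -1 = offload all layers to the GPU (Metal/CUDA) if possible
        n_ctx=2048,
        verbose=False,
    )

    start = time.perf_counter()
    out = llm.create_completion("Explain KV caching in two sentences.", max_tokens=256)
    elapsed = time.perf_counter() - start

    generated = out["usage"]["completion_tokens"]
    print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")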


LM Studio on macOS provides an estimate of whether a model will run on the GPU, and it also lets you partially offload.

The underlying CLI tools can do this too; the app just makes it easier to see and manage.
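
For reference, partial offload in the underlying llama.cpp tooling is just a layer count. A sketch with the llama-cpp-python bindings; the file-size check and headroom factor are a crude rule of thumb assumed here, not LM Studio's actual estimator:

    # Partial GPU offload plus a crude "will it fit" check.
    # The 20% headroom and the fallback layer count are illustrative assumptions,
    # not LM Studio's logic.
    import os
    from llama_cpp import Llama

    MODEL = "models/llama-3-8b-instruct.Q4_K_M.gguf"  # hypothetical local GGUF file
    MEM_BYTES = 16 * 1024**3                          # e.g. 16 GB of GPU/unified memory

    fits_fully = os.path.getsize(MODEL) * 1.2 < MEM_BYTES  # leave room for KV cache etc.

    llm = Llama(
        model_path=MODEL,
        # -1 offloads every layer; a smaller number keeps the rest on the CPU,
        # which is roughly what "partial offload" means in LM Studio's UI.
        n_gpu_layers=-1 if fits_fully else 16,
        n_ctx=4096,
        verbose=False,
    )
    print(llm("Say hi in five words.", max_tokens=16)["choices"][0]["text"])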



