One can run local LLMs even on a Raspberry Pi, although it will be horribly slow.
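For reference, a minimal sketch of what that looks like, assuming llama.cpp via its Python bindings (llama-cpp-python) and a small quantized GGUF model; the model path and parameters here are just placeholders:

    from llama_cpp import Llama

    # Load a small quantized model; n_threads=4 matches the Pi's four cores.
    llm = Llama(
        model_path="./models/tinyllama-1.1b-chat.Q4_K_M.gguf",  # placeholder path
        n_ctx=512,
        n_threads=4,
    )

    # Run a short completion; expect very low tokens/sec on a Pi.
    out = llm("Q: What is a Raspberry Pi? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])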
Here is one example testing the performance of different GPUs and Macs with various flavours of Llama:
https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...
The underlying CLI tools do this; the app just makes it easier to see and manage.