jaggs on May 22, 2024 | on: Ask HN: Which LLMs can run locally on most consume...
Why would you get costs with a local model?
FezzikTheGiant on May 22, 2024
Yeah, that's what I'm saying - it would eliminate inference costs. What I was asking is how feasible it is to package these local LLMs with another standalone app, for example a game.
jaggs on May 22, 2024
Oh sorry. Hm.. I actually have no idea. It sounds like a neat idea though. :)
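
To sketch how that might look: one common approach is to bundle a quantized GGUF model with the app and run it in-process via llama.cpp bindings. A minimal Python sketch, assuming llama-cpp-python is installed and a quantized model file ships with the game's assets (the model filename, asset path, and prompt below are hypothetical, not from the thread):

    # Hypothetical sketch: bundle a quantized GGUF model with the app and
    # run it in-process via llama-cpp-python (pip install llama-cpp-python).
    from pathlib import Path
    from llama_cpp import Llama

    # Ship the quantized weights next to the executable
    # (typically a few GB for a 7B model).
    MODEL_PATH = Path(__file__).parent / "assets" / "model-q4_k_m.gguf"

    llm = Llama(
        model_path=str(MODEL_PATH),
        n_ctx=2048,    # context window
        n_threads=4,   # tune for the player's CPU
        verbose=False,
    )

    def npc_reply(player_line: str) -> str:
        """Generate a short in-game NPC response from the local model."""
        out = llm(
            f"The player says: {player_line}\nThe innkeeper replies:",
            max_tokens=64,
            stop=["\n"],
            temperature=0.7,
        )
        return out["choices"][0]["text"].strip()

    if __name__ == "__main__":
        print(npc_reply("Any rumors about the old mine?"))

The main feasibility constraints are likely distribution size (gigabytes of weights added to the install) and the end user's hardware, since generation speed on a typical consumer CPU or GPU varies a lot with model size and quantization.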