UncleOxidant 13 days ago | on: The RAM shortage comes for us all
Or maybe models that are much more task-focused? Like models that are trained on just math & coding?
agoodusername63 13 days ago
Isn't that what the mixture-of-experts trick that all the big players use is? A bunch of smaller, tightly focused models.
irthomasthomas 12 days ago
Not exactly. MoE uses a small router network to select a subset of experts (feed-forward blocks) per token at each MoE layer, not a set of separate task-focused models. This makes inference faster, since only a fraction of the parameters are active for any given token, but every expert still has to be resident in memory, so the RAM requirement stays the same.
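A minimal sketch in PyTorch of that top-k routing pattern makes the RAM/compute split concrete. The names (MoELayer), the layer sizes, and the top_k=2 choice are illustrative assumptions, not any particular model's implementation; the point is that all experts are allocated up front, while only the router-selected ones run per token.

    # Minimal top-k mixture-of-experts sketch (illustrative, not a real model's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        def __init__(self, d_model=64, d_hidden=256, num_experts=8, top_k=2):
            super().__init__()
            # All experts are allocated up front: RAM scales with num_experts...
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                              nn.Linear(d_hidden, d_model))
                for _ in range(num_experts)
            )
            self.router = nn.Linear(d_model, num_experts)  # tiny router network
            self.top_k = top_k

        def forward(self, x):  # x: (num_tokens, d_model)
            scores = self.router(x)                         # (num_tokens, num_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)  # pick k experts per token
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            # ...but compute scales with top_k: only the chosen experts run per token.
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
            return out

    layer = MoELayer()
    tokens = torch.randn(10, 64)
    y = layer(tokens)  # all 8 experts live in memory; only 2 execute per token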