erichocean | 19 days ago | on: Zebra-Llama – Towards efficient hybrid models
Kimi K2 also uses MLA, and Kimi Linear runs Kimi Delta Attention (it's SSM-like) for three out of every four layers (the fourth uses MLA).
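A minimal sketch of that 3:1 interleaving, purely illustrative (the function name and the exact position of the MLA layer within each group of four are assumptions, not Kimi Linear's actual code):

    def build_layer_stack(num_layers: int) -> list[str]:
        """Return the attention type per layer under a 3:1 KDA:MLA ratio."""
        layers = []
        for i in range(num_layers):
            # Assumed placement: every 4th layer uses full MLA;
            # the other three use the SSM-like Kimi Delta Attention.
            if (i + 1) % 4 == 0:
                layers.append("MLA")
            else:
                layers.append("KDA")
        return layers

    print(build_layer_stack(8))
    # ['KDA', 'KDA', 'KDA', 'MLA', 'KDA', 'KDA', 'KDA', 'MLA']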
jychang | 19 days ago
Kimi K2 is literally a "copy DeepSeek's homework" model. Seriously. It even has exactly 61 layers, the same as DeepSeek V3/R1.
logicprog | 19 days ago
For a "copy Deepseek's homework" model, it's really good, preferable to DeepSeek for me (at least prior to V3.2, which I haven't been able to fully put through its paces yet). post-training really makes that much of a difference I guess