
Kimi K2 also uses MLA, and Kimi Linear runs Kimi Delta Attention (it's SSM-like) for three out of every four layers (the fourth uses MLA).
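That 3:1 interleave can be sketched roughly like this (a minimal illustration, not Kimi Linear's actual code; the `layer_types` helper and the `ratio` parameter are made up for the example):

```python
# Sketch of a 3:1 layer interleave: three linear-attention (KDA) layers
# for every one full-attention (MLA) layer, as described above.
# Names are illustrative, not from the actual model implementation.

def layer_types(num_layers, ratio=3):
    """Return the attention type used at each layer index."""
    types = []
    for i in range(num_layers):
        # Every (ratio + 1)-th layer uses full MLA attention;
        # the rest use the SSM-like Kimi Delta Attention.
        if (i + 1) % (ratio + 1) == 0:
            types.append("MLA")
        else:
            types.append("KDA")
    return types

print(layer_types(8))
# ['KDA', 'KDA', 'KDA', 'MLA', 'KDA', 'KDA', 'KDA', 'MLA']
```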


Kimi K2 is literally a "copy DeepSeek's homework" model. Seriously. It even has exactly 61 layers, the same as DeepSeek V3/R1.


For a "copy DeepSeek's homework" model, it's really good; I prefer it to DeepSeek (at least prior to V3.2, which I haven't been able to fully put through its paces yet). Post-training really makes that much of a difference, I guess.



