I'll accept that Meta is out of the frontier AI race if they're still in their current position a year from now. People declared Google dead prematurely too (remember Bard?), because we severely underestimate the catch-up power bought with ungodly piles of cash.
It's insane numbers like that that give me some concern about a bubble. Not because AI hits some dead end, but because progress plateaus and the industry shifts from aggressive investment to passive-but-steady improvement.
Maverick and Scout were not great in my experience, even with post-training, and then several Chinese models at multiple sizes (dots, Qwen, MiniMax) made them kind of irrelevant.
If anything this helps Meta: another model to inspect/learn from/tweak etc. generally helps anyone making models.
Part of the secret sauce since o1 has been access to the real reasoning traces, not the summaries.
If you even glance at the model card you'll see this was trained on the same CoT RL pipeline as o3, and it shows when you use the model: it has the most coherent and structured CoT of any open model so far.
Having full access to a model trained on that pipeline is valuable to anyone doing post-training, even if it's just to observe, and especially if you use its traces as cold-start data for your own training.
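To make the cold-start idea concrete, here's a minimal sketch of harvesting full reasoning traces from an open-weights model to build an SFT dataset. The model name, the sample question, and the output path are placeholder assumptions on my part, not anything from this thread:

```python
# Sketch: generate full CoT traces from an open-weights reasoning model
# and store prompt/completion pairs as cold-start SFT data.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "openai/gpt-oss-20b"  # assumption: any open-weights reasoning model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

def harvest_trace(question: str) -> dict:
    """Generate one response, keeping the raw reasoning, not a summary."""
    messages = [{"role": "user", "content": question}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=2048)
    # Slice off the prompt; keep special tokens so the reasoning-channel
    # delimiters survive in the stored trace.
    completion = tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=False
    )
    return {"prompt": question, "completion": completion}

with open("cold_start.jsonl", "w") as f:
    for q in ["How many primes are there below 100?"]:  # placeholder prompt set
        f.write(json.dumps(harvest_trace(q)) + "\n")
```

The one design choice that matters here is decoding with skip_special_tokens=False: the whole point of cold-start data is the structure of the trace, so you want the reasoning delimiters intact rather than a flattened answer string.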