It's hard to tell from the data; it's so concentrated within a handful of companies who are all buying from each other that the contagion risk feels low. At the same time, it feels very clearly overvalued, and the size of the inflows is huge.
The contagion risk is huge. As the article points out in several ways, the AI bubble is the only part of the US economy where number go up.
Every single bank, fund, and retail investor is heavily, if not existentially, exposed to this house of cards. Absurd promises are being made with national-economy-sized volumes of cash.
This is going to take everyone down when it blows.
The post mentions an approach of using a large model to generate labels and then distilling them into a smaller model to lower costs (though it doesn't provide an example).
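Since the post doesn't give an example, here is a minimal sketch of that label-distillation loop. Everything in it is a hypothetical stand-in: `teacher_label` plays the role of the expensive large-model call, and the "student" is a toy word-vote classifier, not any real distilled model.

```python
# Sketch of label distillation: an expensive "teacher" labels raw data once,
# then a cheap "student" is trained on those labels and served at inference.

def teacher_label(text: str) -> str:
    """Stand-in for an expensive large-model call (e.g. an API request)."""
    return "positive" if "good" in text else "negative"

def train_student(examples: list[tuple[str, str]]) -> dict[str, str]:
    """Toy 'student': learn a per-word majority label from teacher output."""
    votes: dict[str, dict[str, int]] = {}
    for text, label in examples:
        for word in text.split():
            counts = votes.setdefault(word, {})
            counts[label] = counts.get(label, 0) + 1
    return {word: max(counts, key=counts.get) for word, counts in votes.items()}

def student_predict(model: dict[str, str], text: str) -> str:
    """Cheap inference: majority vote over known words."""
    labels = [model[w] for w in text.split() if w in model]
    return max(set(labels), key=labels.count) if labels else "negative"

# 1. Use the large model once, offline, to label raw data.
raw = ["good product", "bad service", "really good", "bad value"]
labeled = [(text, teacher_label(text)) for text in raw]
# 2. Distill: fit the small model on the teacher's labels.
student = train_student(labeled)
# 3. Serve the cheap student on new inputs.
print(student_predict(student, "really good product"))  # → positive
```

The point is the shape of the pipeline (pay for the big model once at labeling time, pay for the small model per request), not the toy models themselves.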
I think this is a really interesting paper from Cohere. It really feels that, at this point, you can't trust any public benchmark, and you need your own private evals.
I would pick the one or two parts of that analysis that are most relevant to you and zoom in. I'd choose something difficult that the model fails at, then look carefully at how the failures change as you test different model generations.
Yup, in my private evals I have repeatedly found that DeepSeek has the best models for everything, and yet in a lot of these public ones it always seems like someone else is on top. I don't know why.
If I had to hazard a guess, as a poor soul doomed to maintain several closed- and open-source models acting agentically: I think you are hyper-focused on chat trivia use cases. DeepSeek has a very, very hard time tool calling, and they say as much themselves in their API docs.
The big news here is the training cost: $5.576M total, equivalent to roughly 2.788M training hours on H800 GPUs at $2 per hour. That's for a model that is (according to DeepSeek's own benchmarks) SOTA among open-source models.
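For anyone checking the arithmetic, the quoted total is just GPU-hours times the hourly rate (figures taken from the comment above, not independently verified):

```python
# Sanity check on the quoted DeepSeek training-cost figures.
gpu_hours = 2_788_000      # ~2.788M H800 GPU-hours (quoted)
rate_per_hour = 2.0        # $2 per GPU-hour (quoted)

total_cost = gpu_hours * rate_per_hour
print(f"${total_cost / 1e6:.3f}M")  # → $5.576M, matching the quoted total
```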