
The winner takes it all, so it is reasonable to bet big to be the one.


The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.


It could end up like Search did, at first you had Lycos, AskJeeves, Altavista etc. and then Google became absolutely dominant.

They want to be the Google in this scenario.


Then they're doing it backwards. Google first built a far superior product, then pursued all the tricks to maintain their monopoly. OpenAI at best has the illusion of a superior product, and even that is a stretch.


Google was by far the best product. Maybe an LLM provider will emerge in that way, but it seems they are all very similar in capability right now.


I don't believe Google won the search engine wars just because they had the best product; while that may be true, they won because of the tools they provided to their users: email, cloud storage, docs/sheets/drive, Chrome, etc.


They were already pretty dominant in search by the time they released most if not all of those. They got into that position by being the better search engine - better results and nicer to use (clean design, faster loading times).


They had already won the search engine wars well before any of those additional products existed.


You need the infrastructure, not just the model.

The model can be free, but the infrastructure (data center) ain't.


Silicon Valley capital investment firms have always exploited regulatory capture to "compete". The public simply has a ridiculously short memory of the losers pushed out of the market during the loss-leader-to-exploitation transition phase.

Currently, the question is not whether one technology will outpace another in the "AI" hype cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but the hype does create a perceived asymmetry with skilled-labor pools. That alone is valuable leverage for a corporation, and people are getting fired or ripped off in anticipation of the rise of real "AI".

https://www.youtube.com/watch?v=_zfN9wnPvU0

One day real "AI" may exist, but an LLM or current reasoning model is unlikely to make that happen. It is absolutely hilarious that there is a cult-like devotion to the AstroTurf marketing.

The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of Disillusionment. =3


The goal isn't to be the best LLM, the goal is to be the first self-improving LLM.

On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.


Maybe on paper, but only on paper. There are so many half-baked assumptions in that self-improvement logic.


The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.

The AI, theoretically having the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company instantly becomes worthless in the long term; if it's not, anyone with a bootstrap level of compute will eventually be able to do anything at all, given a long enough time frame.

It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.


The machine god would still need resources provided by humans, on their terms, to run; the AI wouldn't sweat spending, say, 5 years straight of its immortality to figure out a 10-year plan that eventually lets it run on 5% less power than now, but humans may not be willing to foot the bill for that.

There’s no guarantee that the singularity makes economic sense for humans.


Presuming the kind of runaway superintelligence people usually discuss, the sort with agency, this just turns into a boxing problem.

Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?


A self-improving LLM is as probable as a perpetual motion machine.

Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.

Conceptually, if a stupid AI can build a smart AI, that would mean the stupid AI is actually smart; otherwise it wouldn't have been able to.


Your logic might make intuitive sense, but I don't think it is as ironclad as you portray it.

The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (increase its complexity) on its own, provided you constantly supply it with energy. Evolution (or "life") is an example of such a system. It is conceivable that there is a point where an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, etc. Every time you run inference to edit the training data and then train on it, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). This is where the decrease in entropy (increase in internal model complexity and intelligence) can come from.
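
As a toy sketch of that loop (not real training code; improve_data, train, and all the constants here are stand-ins I made up for illustration), the open question is whether the gain per generation keeps compounding or decays:

    # Hypothetical self-improvement loop: the model refines its own training
    # data, a new model is trained on the refined data, and the cycle repeats.
    # Both steps burn energy, matching the thermodynamic framing above.

    def improve_data(quality: float) -> float:
        # Assumption: inference can lift data quality, but with diminishing
        # returns as the data approaches what the model itself can express.
        return quality + 0.5 * (1.0 - quality)

    def train(data_quality: float) -> float:
        # Assumption: the new model's capability tracks data quality,
        # minus some loss in the training process.
        return 0.9 * data_quality

    quality = 0.3  # seed corpus quality (human-written bootstrap data)
    for generation in range(10):
        quality = improve_data(quality)   # inference pass (costs energy)
        capability = train(quality)       # training pass (costs energy)
        print(f"gen {generation}: capability = {capability:.3f}")

With these particular made-up curves, capability climbs each generation but saturates just under 0.9: the "logistic upper bound" scenario mentioned upthread. Whether real LLM self-training compounds instead of plateauing like this is exactly what is unproven.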


I don't really believe that, and I thought it was interesting on Meta's earnings call that Zuck (or the COO) said that it seems unlikely at this point that a single company will dominate every use of LLMs/image models, and that we should expect to see specialization going forward.


As I understand the argument, it's that AI will reach a level where it's smart enough to improve itself, leading to a feedback loop where it takes off like a rocket. In this scenario, whoever is in second place is left so far in the dust that it doesn't matter. Whichever model is number one is so smart that it's able to absorb all economic demand, and all the other models will be completely obsolete.

This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.

Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.


Do you have any reasoning to support the notion that this market is winner takes all?


With enough money to lobby, they can make it a winner-takes-all market (à la a regulated monopoly).


Want to bet? I see this claim all over the internet and do not believe it for a moment.


But then you get stuff like DeepSeek R1.


Does the winner take it all?

I agree this is a reasonable bet, though for a different reason: I believe this is large-scale exploitation where money is systematically siphoned away from workers and toward billionaires via e.g. hedge funds, bailouts, dividend payouts, underpayment, wage theft, etc. And the more they inflate this bubble, the more money they can exploit from workers. As such it is not really a bet, but rather the cost of doing business. Profits are guaranteed as long as workers are willing to work for them.



