Real question -- how else is OpenAI supposed to fund itself? It has capital requirements that even the most moneyed companies can't meet on their own. So it has to come up with ways to get access to money while de-risking the terms. Not saying the circularity works, but I don't know how else you raise at their scale.
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms, or they explode spectacularly. Given their rate of adoption, it seems to be the former.
The tentacles seem a bit limp and disorientated on this one. There are lots of them but they just seem to flop wetly against the windows. I hope they're not going to start decomposing and stink the place up.
If you can only continue to fund a venture using scam-like structures, then maybe it's time to re-evaluate what the goals and value prop of the unfundable venture are.
Edit: the following is incorrect. I didn't know that the change to IRC § 174 was cancelled this summer.
------
What's crazy is that with the changes to IRC § 174 that took effect for tax years beginning after 2021, most software R&D spending is considered a capital investment and can't be immediately expensed. It has to be amortized over 5 years.
I don't know how that 11.5B number was derived, but I would wager that the net loss on the income statement is a lot lower than the net cash outflow on the cash flow statement.
If that 11.5B is net profit/loss, then the software-R&D portion of the expense side of the calculation could be up to 5x larger if it weren't for the new amortization rule.
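To make the mechanism concrete, here's a sketch with invented numbers (the $10B is made up, and the real rule's midpoint convention would make the year-1 figure even smaller):

    # Hypothetical illustration of 5-year amortization vs. immediate
    # expensing of software R&D. All figures invented.
    rd_spend = 10.0  # $B of software R&D cash spent this year (made up)

    expense_immediate = rd_spend         # old treatment: all of it hits now
    expense_amortized_y1 = rd_spend / 5  # 5-year straight line: 1/5 hits now

    print(f"Cash out the door:          ${rd_spend:.1f}B")
    print(f"Expense, fully deductible:  ${expense_immediate:.1f}B")
    print(f"Year-1 expense, amortized:  ${expense_amortized_y1:.1f}B")
    # The reported R&D expense can be ~5x smaller than the cash burn in
    # year one, which is exactly the income-statement vs. cash-flow gap
    # described above.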
It's incredible how Tesla used to lose a few hundred million a year and analysts would freak out, claiming they'd never be profitable. Now Rivian can lose $5 billion a year and I don't hear anything about it, and OpenAI can lose $11 billion in a quarter and Microsoft still backs them.
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets.
> Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
That's true, I shouldn't have written it off and was too eager to make the analogy.
There was a point where, because of Tesla's enormous profits, it was seen as OK for Rivian to lose that much in a year, which was incredible because it's about the same amount of money Tesla lost during its entire tenure as a public company. You're right, though: they've been criticized for it and have paid the (stock) price for it.
Rivian lost something like $5B in 2024, but they're on track to lose only $2.25B in 2025. That trend line is clear. In 2026 they release a much lower-cost model, and a lot of that loss has been development of that model. They probably won't achieve profitability in 2026, but if they get their loss down to $1B in 2026, in 2027 we'll likely see them go net positive.
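As a back-of-the-envelope check, assuming the year-over-year ratio implied by those two figures simply holds (a big assumption):

    # Rough extrapolation of the loss trend described above. The 2024 and
    # 2025 figures are from the comment; later years just reuse the ratio.
    loss = 5.0          # $B lost in 2024
    ratio = 2.25 / 5.0  # implied by the ~$2.25B 2025 figure

    for year in range(2024, 2028):
        print(f"{year}: ~${loss:.2f}B loss")
        loss *= ratio
    # Prints ~$1.01B for 2026 and ~$0.46B for 2027, roughly consistent
    # with "about $1B in 2026, near breakeven in 2027" if the trend holds.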
Investors are trying to bet on OpenAI being the first to replace all human skilled labor. Of course, this is foolish for a few reasons:
1. Performance of AI tools is improving, but only marginally in practice.
2. If human labor were replaced, it would be the start of global societal collapse, so any winnings would be moot.
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. Two years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
> we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal.
Um, I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out; some things survive, others die out.
The first disagreement is about how big the bubble is, i.e. how much air is in it that could vanish. And that stems from the second disagreement, which is about how useful this tech is and how much potential it has. It clearly has some undeniable usefulness. But at one extreme, some people think we'll soon have AGI replacing everybody, and at the other, that it's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really disputes that it's a bubble.
>and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
For Microsoft and the other hyperscalers supporting OpenAI, the stakes are simple: they're all absolutely dependent on OpenAI's success. They can realistically survive the difficult times if the bubble bursts because of a minor player - for example if CoreWeave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn of losses per quarter for a few years?" and the answer is pretty straightforward.
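To put numbers on that framing (the figures are from this comment; the three-year horizon is my own assumption):

    # The trade-off above, using the comment's own numbers.
    market_cap_hit = 1000  # $B: hypothetical value wiped out if OpenAI fails
    quarterly_loss = 11    # $B: OpenAI's losses per quarter
    years = 3              # assumed length of the "few years"

    total_eaten = quarterly_loss * 4 * years
    print(f"Eating the losses: ${total_eaten}B over {years} years")
    print(f"Letting it fail:   ${market_cap_hit}B off the market cap")
    # ~$132B vs ~$1tn under this framing. (Caveat: Microsoft doesn't bear
    # OpenAI's losses in full; it only books its own share of them.)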
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
> Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
They’re dependent on usage of their cloud. I don’t agree that they are as dependent on OAI as you suggest. Ultimately, we’ve unlocked a new paradigm and people need GPUs to do things - regardless of whether that GPU is running OAI branded software or not.
Why? Microsoft has permanent, royalty-free access to the frontier models. If OpenAI went under, MSFT would continue hosting GPT-5 on Azure, GitHub Copilot, etc., and not be affected in the slightest.
The couches fascinate me the most because they're almost justifiable. Offices need furniture and grand openings should be nice; however, the cost could never be recovered, and the company was way too big to be doing things that don't scale.
In a similar vein, LLMs/AI are clearly impressive technologies that can be run profitably. Spending billions on a model, however, may not be economically feasible. It's a great example of runaway spending, whereas the weed thing feels more along the lines of a drug problem to me.
And as we saw, once a model is trained you need very little compute to run it, and there is very little advantage to being the 1st model rather than the 10th.
Monopoly in this field is impossible; your product won't ever be so good that the competition doesn't make sense.
I’m not so sure. Look for more government regulations that make it hard for startups. Look for stricter enforcement of copyright (or even updates to the laws) once the big players have secured licensing deals, to cut off the supply of cheap training data.
Don't forget the perfectly legal use of legislation and bureaucratic precedent that gives them "soft/lossy monopoly" power or all but forces people to do business with them.
True, yet all ills are blamed on the so-called "free market". There is nothing free about our economy, it's manipulation and lobbying all the way down.
The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.
Then they're doing it backwards. Google first built a far superior product, then pursued all the tricks to maintain their monopoly. OpenAI at best has the illusion of a superior product, and even that is a stretch.
I don't believe Google won the search engine wars because they had the best product. While that may be true, they won because of the tools they provided to their users: email, cloud storage, docs/sheets/drive, Chrome, etc.
They were already pretty dominant in search by the time they released most if not all of those. They got into that position by being the better search engine - better results and nicer to use (clean design, faster loading times).
Silicon Valley capital investment firms have always exploited regulatory capture to "compete". The public simply has a ridiculously short memory of the losers pushed out of the market during the loss-leader-to-exploitation transition phase.
Currently, the question is not whether one technology will outpace another in the "AI" hype cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ); it's that the hype creates a perceived asymmetry with skilled-labor pools. That alone is valuable leverage for a corporation, and people are getting fired or ripped off in anticipation of the rise of real "AI".
One day real "AI" may exist, but an LLM or current reasoning model is unlikely to make that happen. It is absolutely hilarious that there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of Disillusionment. =3
The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.
The AI, theoretically having the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company instantly becomes worthless in the long term; if not, anyone with a bootstrap level of compute will also, on a long enough time frame, be able to do anything.
It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.
The machine god would still need resources provided by humans, on their terms, to run; the AI wouldn’t sweat spending, for instance, 5 straight years of its immortality just to figure out a 10-year plan to eventually run at 5% less power than now, but humans may not be willing to foot the bill for that.
There’s no guarantee that the singularity makes economic sense for humans.
Your logic might make intuitive sense, but I don't think it is as ironclad as you portray it.
The fact is, there is no law of physics that prevents the existence of a system that can decrease its internal entropy (increase its complexity) on its own, provided you constantly supply it with energy (negative entropy, loosely speaking). Evolution (or "life") is an example of such a system. It is conceivable that there is a point where an LLM is smart enough to be useful for improving its own training data, which can then be used to train a slightly smarter version, which can be used to improve the data even more, etc. Every time you run inference to edit the training data and then retrain, you are supplying a large amount of energy to the system (both inference and training consume a lot of energy). That is where the decrease in entropy (increase in internal model complexity and intelligence) can come from.
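As a toy sketch of that loop (every function and number here is an invented stand-in, not a real training pipeline):

    import random

    def improve_data(capability: float, data: list[float]) -> list[float]:
        # "Inference pass": the model cleans its own training data. Toy
        # version: higher capability shrinks the noise more aggressively.
        return [x * (1 - 0.1 * capability) + random.gauss(0, 0.01) for x in data]

    def train(data: list[float]) -> float:
        # "Training pass": toy capability score, rising as data gets cleaner.
        noise = sum(abs(x) for x in data) / len(data)
        return 1.0 / (1.0 + noise)

    data = [random.gauss(0, 1) for _ in range(1000)]  # noisy starting corpus
    capability = train(data)
    for generation in range(5):
        data = improve_data(capability, data)  # energy spent on inference
        capability = train(data)               # energy spent on training
        print(f"gen {generation}: capability = {capability:.3f}")

Note that this toy version saturates, which is the "logistic upper bound" caveat mentioned upthread; nothing in the argument itself rules that out.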
I don't really believe that, and I thought it was interesting on Meta's earnings call that Zuck (or the COO) said that it seems unlikely at this point that a single company will dominate every use of LLMs/image models, and that we should expect to see specialization going forward.
As I understand the argument, it's that AI will reach a level where it's smart enough to improve itself, leading to a feedback loop where it takes off like a rocket. In this scenario, whoever is in second place is left so far in the dust that it doesn't matter. Whichever model is number one is so smart that it's able to absorb all economic demand, and all the other models will be completely obsolete.
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
I agree this is a reasonable bet, though for a different reason: I believe this is large-scale exploitation where money is systematically siphoned away from workers and toward billionaires via e.g. hedge funds, bailouts, dividend payouts, underpay, wage theft, etc. And the more they blow up this bubble, the more money they can exploit out of workers. As such it is not really a bet, but rather the cost of business. Profits are guaranteed as long as workers are willing to work for them.
https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter