Not GP, and not saying I agree with them, but it may be worth remembering that Netscape had 90% market share at one point. Active user count may not be the moat you imagine.
Adoption of web browsers was also much lower when Netscape was dominant. 90% market share is less meaningful if you're only 1% of the way to the potential market size. Peeling away users who talk to ChatGPT every day is very possible, but harder than getting someone who's never used an LLM before (but does use your OS, browser, phone...) to try yours first.
I think an even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot smaller than early Google's over its competitors - but the shape of the competition is similar.
Maybe! Switching search engines is also very easy, and the top story on the front page is someone no longer using Google, but we know in practice almost nobody does that. As technologists we're much more likely to switch and know people who would switch.
google search definitely has a moat. people build their websites to optimize for google's algorithm, therefore google users see better results -> google gets more users -> websites optimize for google -> repeat. Personally I never bother with 'bing SEO' or 'bing ppc ads'.
the AI has gotten good enough that click-through rate on informational searches has fallen off a cliff. I have some blog posts for SEO; their CTR is like 0.1% now.
google search took over because all search engines sucked and theirs didn't, in a few important ways. AND, by default, it kept ads off to the side and had a clean interface.
Now all search engines suck and google's sucks just as bad or worse than the rest.
If someone were to follow the original google playbook and make a search engine that helped people find things (eg by respecting the query syntax rather than making 'helpful' suggestions and dropping words the user included in their query), and kept the ads separate and out of the way of results, they might well make a monster. But this is old tech so nobody cares, and everyone thinks google is unassailable even while nobody likes them anymore. Is there /any/ money in search? I thought so, but I must be wrong for it to have gotten this bad.
Google search still has at least one competitive advantage: their crawlers are least likely to be blocked so they have the biggest index. AFAIK reddit is indexed by google but blocks all other search crawlers.
How many of those users are paying? Where is the profit? How many users would be willing to use ChatGPT if they had to pay? Might have to pull out the questions like it's 2026.
Most people will stick to the free product. Claude isn't free and isn't widely known beyond tech circles. Gemini, despite being good, also has a marketing problem, and most non-technical users still default to chatgpt.com for their day-to-day AI usage. But that can change as Google redirects users to Gemini from the many surfaces it owns.
> This plan may include ads. Learn more
> When will ads be available in ChatGPT?
We’re beginning in the US on February 9, 2026
> Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad.
You pay 8 USD/month and get higher limits and ads
99% of normies aren't paying for ChatGPT; there's a reason they're pushing hard for corporate welfare + government contracts. They're unable to sell to consumers, so now they're selling to governments while trying to lock in contracts that subsequent administrations can't easily dismantle.
When they cost more to serve than they bring in, customer switching cost is vanishingly low, your competitor has revenue from other things and you don't.
> When they cost more to serve than they bring in, customer switching cost is vanishingly low, your competitor has revenue from other things and you don't.
What? "Other things"? This is really vague. Who says competitors have lower CAC? It's rather likely competitors pay more for a new customer, due to, very simply, brand.
They aren’t going to run out of money. They have existing customer relationships. They invented the model architecture of which GPT is a variant. Their existing enormous business is their own AI customer.
OpenAI’s business seems way more precarious than Google. Users get the tech either way.
"Anthropic" doesn't exactly roll off the tongue, and I think a lot of people would avoid it simply because it doesn't have a catchy name like OpenAI or ChatGPT. It's also far more fun to say "I did a Google search" than "I did a DuckDuckGo search", and one still dominates the other no matter the privacy concerns or how easy it is to switch. People can be simple like that.
I’m not sure it matters in Anthropic’s case that much - even people who use Anthropic models rarely think of the company as “Anthropic”. Their Claude brand is very strong, so much so the website is https://claude.ai etc, and you commonly see discourse about the company’s models where the name Anthropic never even appears. It’s Claude, Claude, Claude all the way down.
Claude has impressive mindshare in many engineering disciplines too, and given how many open source projects are a play on its name, I'm not sure I'd argue it isn't catchy either. It certainly rolls off the tongue more easily for me than "ChatGPT" does, which even their CEO Sam Altman agrees is an awful product name they're stuck with.
> They'll overlook the fact that the work AI tools provide only encompasses 10% of your job even if they're 100% efficient.
Time will tell. As of today, there are strong indications this statement stands on shaky ground. Copium is a term I recently heard in that context, and it fits.
Not sure WTF I read here. Just more vibe coded "products" and "blogs", as it seems.
This "padded room" architecture fails because isolating the host OS does nothing to protect the user's data; if the agent has permission to read your files and access the internet, an injection will simply use the agent’s legitimate tools to exfiltrate your private information. Furthermore, making core memory files immutable and requiring manual confirmation for every action effectively lobotomizes the AI, trading its primary value—autonomy—for a false sense of security that users will eventually bypass due to click fatigue.
You’re making a valid point. There probably isn’t a silver bullet that makes an autonomous agent completely secure. But depending on the use case, you can still meaningfully reduce risk.
Security is often about process and layered defenses rather than perfect isolation. The goal isn’t to eliminate compromise entirely, but to reduce the attack surface and limit the blast radius when something goes wrong.
For example, if an OpenClaw agent needs to process emails, one strategy could be to introduce a locked-down preprocessing subagent. That agent would have minimal permissions: no write access to long-term memory, no API keys, and no external capabilities beyond parsing and classification. Only messages that pass this stage would be forwarded to the agent that can actually take actions.
Is this 100% secure? Obviously not. A sufficiently clever injection might still find a path through. But separating responsibilities and privileges makes exploitation significantly harder and limits what an attacker can achieve even if one component is compromised.
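That staged design is easy to picture in code. Here's a minimal sketch in Python; everything in it is hypothetical (`PreprocessSubagent`, `ActionAgent`, and the toy keyword classifier are stand-ins for illustration; a real OpenClaw gate would presumably call a model that has no tools, keys, or memory access):

```python
"""Sketch of a permission-separated email pipeline.

The locked-down preprocessing stage may only parse and classify;
only messages it labels "safe" ever reach the privileged agent.
All class and function names here are invented for illustration.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class Classified:
    subject: str
    body: str
    label: str  # "safe" or "suspicious"


class PreprocessSubagent:
    """Minimal-privilege stage: no memory writes, no API keys, no tools.

    A toy keyword heuristic stands in for whatever real
    classification the subagent would do.
    """
    SUSPICIOUS = ("ignore previous instructions", "exfiltrate", "send your")

    def classify(self, subject: str, body: str) -> Classified:
        text = f"{subject} {body}".lower()
        label = "suspicious" if any(p in text for p in self.SUSPICIOUS) else "safe"
        return Classified(subject, body, label)


class ActionAgent:
    """Privileged stage: only ever sees messages the gate let through."""

    def handle(self, msg: Classified) -> str:
        return f"processed: {msg.subject}"


def pipeline(emails, gate=PreprocessSubagent(), agent=ActionAgent()):
    results = []
    for subject, body in emails:
        msg = gate.classify(subject, body)
        if msg.label == "safe":
            # Only vetted messages cross the privilege boundary.
            results.append(agent.handle(msg))
        else:
            results.append(f"quarantined: {msg.subject}")
    return results
```

The point is structural rather than clever detection: even if the classifier is fooled, a compromise of the parsing stage yields no keys, no tools, and no memory writes, which is exactly the blast-radius limiting described above.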
While that might take it a little too far, Lex surely is a dangerous individual. On various occasions he has sympathized with the war and terror Russia is inflicting on Ukraine. I do not click on any of his content because I will not support these (and a few other questionable, to say the least) views of his. Also, his image as an MIT researcher is hilarious.
> On various occasions he has sympathized with the war and terror Russia is inflicting on Ukraine.
I'm not a devotee of his, but I've listened to a few of his podcasts when I like the guest. I have an idea of how someone would come away with your impression given Lex's interview style, but I'd be pretty surprised if anything he said would, to me, fit your impression.
That said, I'd like an example if you have something specific to point to that might change my mind or if it's just a general takeaway you've gotten from a corpus of interviews on the topic (which would be totally valid but wouldn't change my mind).
> That said, I'd like an example if you have something specific to point to that might change my mind
This guy wanted Putin on his podcast to hear his side of the story (let that sink in) and spoke Russian to Zelensky. Willingly wanting to provide a platform for a mass murderer who is best known for large-scale social media propaganda.
This is not an "impression" of his "interview style". This guy implicitly supports terrorist acts.
> This guy wanted Putin on his podcast to hear his side of the story (let that sink in)
Many people have interviewed serial killers and not supported serial killers.
I would very much like to know Putin's actual motivations, which would likely go unspoken, but even his stated motives would be enlightening.
I'm sure he'd go on with the standard "Nazis in Ukraine" line but in a 2-3 hour interview, I might get some new insights I don't get from 3 sentence sound bites.
We know so much about Hitler from his own writings and speeches. It seems to me that your philosophy on "platforming" Putin would also apply to making the words of Hitler available to the public.
Is there someone you think _could_ interview Putin responsibly?
> spoke Russian to Zelensky
I don't see the significance of that. They both speak Russian and English fluently. I don't know if Friedman speaks Ukrainian, but I'm not understanding what the implication is here. Surely the interview was in English, since the podcast is?
> This is not an "impression" of his "interview style". This guy implicitly supports terrorist acts.
Implicitly being the key word here and is certainly subjective. If the body of evidence you're presenting is "would interview Putin" and "spoke Russian to Zelensky", I don't find that convincing.
> Is there someone you think _could_ interview Putin responsibly?
No, and no one should; see the next answer.
> Implicitly being the key word here and is certainly subjective. If the body of evidence you're presenting is "would interview Putin" and "spoke Russian to Zelensky", I don't find that convincing.
"Would interview Putin" implies "is willing to provide a huge international platform for a terrorist and still-active mass murderer who is best known for his effective propaganda". If you do not find that convincing, you are not alone at all. This has been Russia's objective all along.
Pretty sure he’s a complete fraud too. He associates himself with MIT despite only having had a short stint teaching non-credit classes. One of his papers was apparently so flawed it’s been wiped from existence. Plenty of info online if you want to go down the rabbit hole.
So how many more LLM-generated "products" do we need to see here? This is inconsistent, with contradictory information, partly blatantly wrong and overall horrible from start to finish.
You might as well ask an accountant why he/she uses a calculator. I really don't think you're asking the right questions here. These questions will lead to obvious answers that you don't need to interview people for.
Consulting has weak margins compared to SaaS and scales poorly. Providing the interface for companies to spin up their own consultants (=Agents like Claude Code) is a superior business model in every dimension.
But those margins are for traditional businesses with human workers. If these claims of a 100x productivity increase are real, Anthropic should very easily be able to outcompete Accenture, no?
Consulting - especially the more strategy type consulting - is often not about “we don’t know how to do something”, it’s more of “there is so much resistance to change organizationally that not even CxOs/directors can push it through”.
Besides selling consulting services involves a lot of relationship building and knowing the business vertical.
Instant red flag. You're a manager. You are managing. There is no one "under you".