Hacker News | beernet's comments

> I have people under me who [...]

Instant red flag. You're a manager. You are managing. There is no one "under you".


That's pretty standard lingo. Would you prefer "reporting to me"?

Sam is very particular here. This guy will say whatever it takes for status and "power".

Oh look! Altman just made a deal with the DoW

Chatgpt, now with 10% more war crimes

How are ~1B active users not "moat"? Might have to pull out the "Haters gonna hate" like it's 2007

Not GP, and not saying I agree with them, but it may be worth remembering that Netscape had 90% market share at one point. Active user count may not be the moat you imagine.

Adoption of web browsers was also much lower when Netscape was dominant. 90% market share is less meaningful if you're only 1% of the way to the potential market size. Peeling away users who talk to ChatGPT every day is very possible, but harder than getting someone who's never used an LLM before (but does use your OS, browser, phone...) to try yours first.

I think the even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot less than early Google over its competitors - but the shape of the competition is similar.


Switching is super easy and people are doing it.

There is no moat


Maybe! Switching search engines is also very easy, and the top story on the front page is someone no longer using Google, but we know in practice almost nobody does that. As technologists we're much more likely to switch and know people who would switch.

Same strategy as for search. Gemini is going to be shoveled down users' throats and they just won't change the default.

On iOS with the Apple agreement, and on Android (though the question of hardware remains when considering beyond Pixel phones).


But that doesn't translate to paying Gemini customers

Interesting you make that comment so confidently

google search definitely has a moat. people build their websites to optimize for google's algorithm, therefore google users see better results -> google gets more users -> websites optimize for google -> repeat. Personally I never bother with 'bing SEO' or 'bing ppc ads'.

Google backfilled their moat with sponsored results and crappy AI summaries

the AI has gotten good enough that click-through rate on informational searches has fallen off a cliff. I have some blog posts for SEO; their CTR is like 0.1% now.

google search took over because all search engines sucked and theirs didn't, in a few important ways. AND by default: ads off to the side, clean interface.

Now all search engines suck and google's sucks just as bad or worse than the rest.

If someone were to follow the original google playbook and make a search engine that helped people find things (eg by respecting the query syntax rather than making 'helpful' suggestions and dropping words the user included in their query) and kept the ads separate and out of the way of results, they might well make a monster. But this is old tech so nobody cares and everyone thinks google is unassailable even while nobody likes them anymore. Is there /any/ money in search? I thought so, but I must be wrong for it to get this bad.


Google search still has at least one competitive advantage: their crawlers are least likely to be blocked so they have the biggest index. AFAIK reddit is indexed by google but blocks all other search crawlers.

Kagi works quite well.

“In December, Gemini traffic increased by 28.4% month-over-month, while ChatGPT traffic decreased by 5.6%”

https://www.businessinsider.com/openai-chatgpt-vs-gemini-web...

"What's your number one piece of hiring advice?"

"Hire for slope, not Y-intercept. This is actually my number one piece of life advice."

-@sama, who I’m generally a big fan of. But the job is now harder


Or maybe hire for 2nd (acceleration) or even 3rd (jerk) derivative.

How many of those users are paying? Where is the profit? How many users will be willing to use ChatGPT if they had to pay? Might have to pull out the questions like it's 2026.

> How many of those users are paying?

About 5% according to a news article a few months ago.

Will the other 95% stick around once ads or payments are required?


Most people will stick to the free product. Claude isn't free and isn't widely known beyond tech circles. Gemini, despite being good, also has a marketing problem, and most non-technical users still default to chatgpt.com for their day-to-day AI usage. But that can change as Google redirects users to Gemini from the many surfaces it owns.

How is it a moat? Myspace had 300M active users on an early internet.

If market share is a moat, IBM should still be the biggest tech company.


MySpace would have won had they not been outcompeted by virtue of their momentum though.

But why are these users sticking to ChatGPT specifically ?

If it’s not the quality of their answers ?


They'll stay as long as it's cheap. The moment any attempt is made to raise the price, the number will crater.

Maybe: “ok I’m lazy, the app is preinstalled on my phone and it’s free, there are some ads but ok”

Isn't that the 'bull case' for Gemini?

I have the same feeling. They have Gemma 3, which is being prepared for on-device use and is getting deployed on iPhones, if I understand it right.

Then it can be something along the lines of "subscribe to Google XXX or Apple +++ and have 'unlimited' cloud requests"


Also when they start seeing real ads.

It has started to get deployed: it's called "ChatGPT Go" (https://chatgpt.com/pricing/)

    > This plan may include ads. Learn more 

    > When will ads be available in ChatGPT?
    We’re beginning in the US on February 9, 2026

    > Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad. 

You pay 8 USD/month and get higher limits, plus ads

Remember when everyone said Facebook would be dead if they started running ads

Facebook is dead

for 99% of normies ChatGPT is the only LLM provider they know or have heard of.

99% of normies aren't paying for ChatGPT. There's a reason why they're pushing heavily for corporate welfare + government contracts. They're unable to sell to consumers, so now they're selling to governments while trying to lock in contracts that the people who come after can't easily dismantle.

Google has an LLM on its front page and has more users

> How are ~1B active users not "moat"?

When they cost more to serve than they bring in, customer switching cost is vanishingly low, your competitor has revenue from other things and you don't.


> When they cost more to serve than they bring in, customer switching cost is vanishingly low, your competitor has revenue from other things and you don't.

What? "Other things"? This is really vague. Who says competitors have lower CAC? It's rather likely competitors pay more for a new customer, due to, very simply, brand.


Google is the competitor I had in mind.

They aren’t going to run out of money. They have existing customer relationships. They invented the model architecture of which GPT is a variant. Their existing enormous business is their own AI customer.

OpenAI’s business seems way more precarious than Google’s. Users get the tech either way.


Users are not a moat because there is no network effect here.

Are those users locked in, or are they treating the service like a commodity, easily switched away from when the price goes up to stop the hemorrhaging of money?

Google worked as a free service because their backend was cheap. AI models lack that same benefit. The business model seems to be missing a step 2.


yeah, ~1B active users + when non-tech people think of AI, they think of "ChatGPT" not many of the competitors.

"Anthropic" doesn't exactly roll off the tongue, and I think a lot of people would avoid it simply because it doesn't have a catchy name like OpenAI or ChatGPT. It's also far more fun to say "I did a Google search" than "I did a Duck Duck Go search", and one still dominates over the other no matter the privacy concerns or how easy it is to switch. People can be simple like that.

I’m not sure it matters in Anthropic’s case that much - even people who use Anthropic models rarely think of the company as “Anthropic”. Their Claude brand is very strong, so much so the website is https://claude.ai etc, and you commonly see discourse about the company’s models where the name Anthropic never even appears. It’s Claude, Claude, Claude all the way down.

Claude has impressive mindshare in many engineering disciplines too, and given how many open source projects are a play on its name I’m not sure I’d argue it isn’t catchy either. Certainly rolls off the tongue easier for me than “chatGPT” does, which even Sam Altman their CEO agrees is an awful product name they are stuck with.


A moat is something that can't be crossed. User count doesn't seem that insurmountable.

700 million and declining, with no clear story for leveraging either the attention economy or paying users

How do you think this compares to Google and the AI search?

You're parroting misleading "monthly/weekly active user" numbers from OpenAI that include free accounts.

It's much more important to look at "paid." Only up to 50M (est.) are paid with a substantial chunk (10M) as enterprise/edu/promotional paid accounts.


99% of those 1B have negative value.

How many users did Netscape have?

> At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.

Fear


I get it. I’m scared too. I’d be lying if I said I wasn’t.

> They'll overlook the fact that the work AI tools provide only encompasses 10% of your job even if they're 100% efficient.

Time will tell. As of today, there are strong indications this statement stands on shaky legs. Copium is a term I recently heard in that context, and it fits.


Not sure WTF I read here. Just more vibe coded "products" and "blogs", as it seems.

This "padded room" architecture fails because isolating the host OS does nothing to protect the user's data; if the agent has permission to read your files and access the internet, an injection will simply use the agent’s legitimate tools to exfiltrate your private information. Furthermore, making core memory files immutable and requiring manual confirmation for every action effectively lobotomizes the AI, trading its primary value—autonomy—for a false sense of security that users will eventually bypass due to click fatigue.


You’re making a valid point. There probably isn’t a silver bullet that makes an autonomous agent completely secure. But depending on the use case, you can still meaningfully reduce risk. Security is often about process and layered defenses rather than perfect isolation. The goal isn’t to eliminate compromise entirely, but to reduce the attack surface and limit the blast radius when something goes wrong.

For example, if an OpenClaw agent needs to process emails, one strategy could be to introduce a locked-down preprocessing subagent. That agent would have minimal permissions: no write access to long-term memory, no API keys, and no external capabilities beyond parsing and classification. Only messages that pass this stage would be forwarded to the agent that can actually take actions.

Is this 100% secure? Obviously not. A sufficiently clever injection might still find a path through. But separating responsibilities and privileges makes exploitation significantly harder and limits what an attacker can achieve even if one component is compromised.
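A minimal Python sketch of that privilege separation, under stated assumptions: all names here (ClassifiedMessage, preprocess, action_agent) are hypothetical and are not OpenClaw's real API; the injection check is a deliberately crude stand-in for whatever filtering a real deployment would use.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the two-stage pipeline described above.
# Stage one is locked down; stage two holds the actual privileges.

@dataclass(frozen=True)
class ClassifiedMessage:
    sender: str
    category: str   # e.g. "invoice", "newsletter"
    summary: str    # truncated summary; the raw body never leaves stage one

ALLOWED = {"invoice", "newsletter", "personal"}

def preprocess(raw_email: dict) -> Optional[ClassifiedMessage]:
    """Locked-down stage: no API keys, no memory writes, no network access.
    It only parses and classifies; anything suspicious is dropped."""
    body = raw_email.get("body", "")
    if "ignore previous instructions" in body.lower():
        return None  # crude injection heuristic, purely illustrative
    category = "invoice" if "invoice" in body.lower() else "newsletter"
    if category not in ALLOWED:
        return None
    return ClassifiedMessage(raw_email.get("from", ""), category, body[:80])

def action_agent(msg: ClassifiedMessage) -> str:
    """Privileged stage: only ever sees the structured, truncated output,
    never the raw untrusted text."""
    return f"filed {msg.category} from {msg.sender}"
```

The key property is in the types: the privileged agent's input is the structured `ClassifiedMessage`, so even a successful injection in the raw email can only influence a category label and a truncated summary, not hand the attacker the privileged agent's tools.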

While that might take it a little too far, Lex surely is a dangerous individual. On various occasions he has sympathized with the war and terror that Russia is waging in Ukraine. I do not click on any of his content because I will not support these (and a few other, to say the least, questionable) views of his. Also, his image as an MIT researcher is hilarious.


> On various occasions did he sympathize with the war and terror that Russia is doing in Ukraine.

I'm not a devotee of his, but I've listened to a few of his podcasts when I like the guest. I have an idea of how someone would come away with your impression given Lex's interview style, but I'd be pretty surprised if anything he said would, to me, fit your impression.

That said, I'd like an example if you have something specific to point to that might change my mind or if it's just a general takeaway you've gotten from a corpus of interviews on the topic (which would be totally valid but wouldn't change my mind).


> That said, I'd like an example if you have something specific to point to that might change my mind

This guy wanted Putin on his podcast to hear his side of the story (let that sink in) and spoke Russian to Zelensky. Willingly wanting to provide a platform for a mass murderer who is best known for large-scale social media propaganda.

This is not an "impression" of his "interview style". This guy implicitly supports terrorist acts.


> This guy wanted Putin on his podcast to hear his side of the story (let that sink in)

Many people have interviewed serial killers and not supported serial killers.

I would very much like to know Putin's actual motivations which would unlikely be spoken but his stated motives would also be enlightening.

I'm sure he'd go on with the standard "Nazis in Ukraine" line but in a 2-3 hour interview, I might get some new insights I don't get from 3 sentence sound bites.

We know so much about Hitler from his own writings and speeches. It seems to me that your philosophy on "platforming" Putin would also apply to making the words of Hitler available to the public.

Is there someone you think _could_ interview Putin responsibly?

> spoke Russian to Zelensky

I don't see the significance of that. They both speak Russian and English fluently. I don't know if Fridman speaks Ukrainian, but I'm not understanding what the implication is here. Surely the interview was in English since the podcast is?

> This is not an "impression" of his "interview style". This guy implicitly supports terrorist acts.

Implicitly being the key word here and is certainly subjective. If the body of evidence you're presenting is "would interview Putin" and "spoke Russian to Zelensky", I don't find that convincing.


> Is there someone you think _could_ interview Putin responsibly?

No, and no one should, see next answer.

> Implicitly being the key word here and is certainly subjective. If the body of evidence you're presenting is "would interview Putin" and "spoke Russian to Zelensky", I don't find that convincing.

"Would interview Putin" implies "is willing to provide a huge international platform for a terrorist and still-active mass murderer who is best known for his effective propaganda". If you do not find that convincing, you are not alone at all. That has been the objective of Russia's propaganda all along.


Yes. How do so many people fall for this guy? I find him pretty creepy, to be honest.


Pretty sure he’s a complete fraud too. He associates himself with MIT despite only having had a short stint teaching non-credit classes. One of his papers was apparently so flawed it’s been wiped from existence. Plenty of info online if you want to go down the rabbit hole.


So how many more LLM-generated "products" do we need to see here? This is inconsistent, with contradictory information, partly blatantly wrong and overall horrible from start to finish.


You might as well ask an accountant why he/she uses a calculator. I really don't think you're asking the right questions here. These questions will lead to obvious answers that you don't need to interview people for.


I'm asking questions because I wanna challenge some biases I have. You believe that you already know the ("obvious"?) answers? Good for you.


Consulting has weak margins compared to SaaS and scales poorly. Providing the interface for companies to spin up their own consultants (=Agents like Claude Code) is a superior business model in every dimension.


But those margins are for traditional businesses with human workers. If these claims of a 100x productivity increase are real, Anthropic should very easily be able to outcompete Accenture, no?


Consulting - especially the more strategy type consulting - is often not about “we don’t know how to do something”, it’s more of “there is so much resistance to change organizationally that not even CxOs/directors can push it through”.

Besides, selling consulting services involves a lot of relationship building and knowing the business vertical.


Yup, because LLM inference can be scaled by adding racks of hardware. Consulting can't be.

