Hacker News | zzbzq's comments

You're not using the good models and then blaming the tool? Just use the Claude models.

Copilot's main problem seems to be that people don't know how to use it. They need to delete all their plugins except the VS Code and CLI ones, and disable all models except the Anthropic ones.

Beyond that, Claude Code's reputation advantage is greatly exaggerated.


What, 5.2 Codex isn't a good model? Claude 4.5 and Gemini 3 Pro with Copilot aren't any better. I don't have enough of a sample of Opus 4.5 usage with Copilot to say with confidence how it fares, since they charge 3x for Opus 4.5 compared to everything else.

If Copilot is uniquely stupid with 5.2 Codex, then they should disable that model instead of blaming the user (I know they aren't; you are). But that's not the case: it's noticeably worse with everything, compared to both Cursor and Claude Code.


5.2 Codex is up there with Claude lmao


Agree, but it seems dependent on the field. One day I wanted a browser extension made, and 5.2-codex-max added hundreds of lines of code across several passes; for 15-20 iterations I didn't have to change a thing, or even have an opinion on what it was doing. That is extremely uncommon with other models for me, even Opus I would say. And yes, I mostly do small green-field things, and not even that works all the time, even if LLMs are clearly at their best there.


Not my experience at all. Copilot launched as a useless code completer but is now basically the same as everything else. It's all converging. The features are converging too, but the features barely matter when Opus is doing all the heavy lifting anyway. It just one-shots half the stuff. Copilot's payment model, where you pay by the prompt rather than by the token, is highly abusable; no way this lasts.


I would agree. I've been using VS Code Copilot for nearly a year, and it has gotten significantly better. I also use Claude Code (CC) and Antigravity privately, and got access to Cursor (on top of VS Code) at work a month ago.

CC is, imo, the best. The rest are largely on par with each other. The benefit of VS Code and Antigravity is that they have the most generous limits: I ran through Cursor's $20 limit in 3 days, whereas the same-tier VS Code subscription can last me 2+ weeks.


Gambling sites probably do have it in their user agreements.

Further, "insider trading" in prediction markets is probably fundamentally illegal under existing commodities fraud laws in the US (I am not a lawyer), but there's probably nobody actively policing it, and probably no precedent for how to prosecute such cases.


Your own wiki link disagrees with you; most of the article uses landlordism as the base-level example. You've just discovered that "rent seeking" is used as a broader term to describe many phenomena, but they're still described essentially in the metaphor of landlordism.


I think they were referring to the costs of training and hosting the models. You're counting the cost of what you're buying, but the people selling it to you are in the red.


Correct


Wrong. OpenAI is literally the only AI company with horrific financials. You think Google is actually bleeding money on AI? They are funding it all with cash flow and still have monster margins.


OpenAI may be the worst, but I am pretty sure Anthropic is still bleeding money on AI, and I would expect a bunch of smaller dedicated AI firms are too. Google is the main firm with competitive commercial models at the high end across multiple domains that is funding AI efforts largely from its own operations (and even there, AI isn't self-sufficient; it's just an internal rather than an external subsidy).


Dario has said many times over that each model is profitable if viewed as a product that had development costs and operational costs just like any other product from any other business ever.


What that means, and whether it means much of anything at all, depends on the assumed "useful life" of the model, which sets the amortization period for the development costs.
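As a toy illustration of how the useful-life assumption alone can flip a model from "unprofitable" to "profitable" (all figures are made up, not Anthropic's actual numbers):

```python
def monthly_profit(training_cost, monthly_revenue, monthly_serving_cost,
                   useful_life_months):
    """Amortize the one-time training cost over the assumed useful life,
    then subtract it, plus serving costs, from monthly revenue."""
    amortized_training = training_cost / useful_life_months
    return monthly_revenue - monthly_serving_cost - amortized_training

# Same hypothetical model and revenue; only the assumed useful life changes.
profit_12mo = monthly_profit(1_000_000_000, 80_000_000, 30_000_000, 12)
profit_36mo = monthly_profit(1_000_000_000, 80_000_000, 30_000_000, 36)
# With a 12-month life the model loses money every month;
# with a 36-month life the same model is "profitable."
```

The same cash flows, a different depreciation schedule, a different headline.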


> You think google is actually bleeding money on AI? they are funding it all with cash flow and still have monster margins.

They can still be "bleeding money on AI" if they're making enough in other areas to make up for the loss.

The question is: "Are LLMs profitable to train and host?" OpenAI, being a pure LLM company, will go bankrupt if the answer is no. The equivalent for Google is to cut its losses and discontinue the product. Maybe Gemini will have the same fate as Google+.


That's literally what he said


In my experience owning private stock, you basically own part of a pool. (Hopefully the same class of shares as the board holds, or else it's a scam.) The board controls the pool, and whenever they pay dividends or transfer ownership, each person's share is affected proportionally. You can petition the board to buy back your shares or transfer them to another shareholder, but that's probably unusual for a rank-and-file employee.

The shares are valued by an accounting firm or auditor of some type. This determines the basis value if you're paying taxes up-front. After that, the tax situation should be the same as getting publicly traded options/shares; there are some choices in how you want to handle the taxes, but generally you file a special tax form in the year of grant.
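The "everyone's share of the pool moves proportionally" mechanics above amount to simple pro-rata arithmetic; a minimal sketch with invented names and share counts:

```python
def distribute_dividend(holdings, total_dividend):
    """Split a dividend across shareholders in proportion to shares held."""
    total_shares = sum(holdings.values())
    return {name: total_dividend * shares / total_shares
            for name, shares in holdings.items()}

# Hypothetical cap table: the board pool holds 900k shares, one employee 100k.
payout = distribute_dividend({"board": 900_000, "employee": 100_000}, 50_000)
# The employee holds 10% of the pool, so they receive 10% of the dividend.
```

The same proportionality applies to buybacks and ownership transfers: the pool shrinks or shifts, and everyone's stake moves by their fraction of it.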


Postgres nationalists will applaud the conclusion no matter how bad the reasoning is.

Don't get me wrong, the idea that he wants to just use an RDBMS because his needs aren't demanding enough is a perfectly inoffensive conclusion. The path that led him there is very unpersuasive.

It's also dangerous. Ultimately, the author is willing to do a bit more work rather than learn something new. This works here because he's using a popular tool people like. But he doesn't demonstrate that he's thought about any of the things I'd consider most important; he just sort of assumes running Redis is going to be hard and he'd rather not mess with it.

To me, the real question is cost versus how much load the DB can take. My most important Redis cluster basically exists to take load off the DB, which is under heavy load even from simple queries. Using the DB as a cache only works if your issue is expensive queries, not query volume.

I think there's an appeal that this guy reaches the conclusion someone wants to hear, and it's not an unreasonable conclusion, but it creates the illusion the reasoning he used to get there was solid.

I mean, take the same logic, cross out the word Postgres, and write in "Elasticsearch." Now it's an article about a guy who wants to cache in Elasticsearch because it's good enough, using the exact same arguments about how he'll just write some jobs to handle expiry. Does that still sound like solid, reasonable logic? No, it's crazy.
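For concreteness, the "cache in the database with a job to handle expiry" pattern under discussion looks roughly like this. Sketched here with SQLite so it's self-contained; the article's version would be the same schema in Postgres (a `TIMESTAMPTZ` column, with the cleanup `DELETE` run from cron or pg_cron):

```python
import sqlite3
import time

# A cache table: key, value, and an expiry timestamp.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT, expires_at REAL)"
)

def cache_set(key, value, ttl_seconds):
    """Upsert a value with a time-to-live."""
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
               (key, value, time.time() + ttl_seconds))

def cache_get(key):
    """Return the value only if it hasn't expired yet."""
    row = db.execute(
        "SELECT value FROM cache WHERE key = ? AND expires_at > ?",
        (key, time.time())).fetchone()
    return row[0] if row else None

def expire_job():
    """The 'job to handle expiry': delete stale rows on a schedule."""
    db.execute("DELETE FROM cache WHERE expires_at <= ?", (time.time(),))

cache_set("user:1", "alice", ttl_seconds=60)
```

Which is fine as far as it goes; the point above is that every read and write here still lands on the database you were presumably trying to protect.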


That's how I've always characterized them. But if you think about it, it's not really true.

The LLM is "lossily" containing things an encyclopedia would never contain. An encyclopedia, no matter how large, would never include the entire text of every textbook it deems worthy of inclusion; it would always contain a summary and/or discussion of the contents. The LLM does include them, though it "compresses" over them, so that it, too, only has the gist at whatever granularity it's big enough to contain.

So in that sense, an encyclopedia is also a lossy encyclopedia.


The quality is definitely not better, not since the early 2000s when everything went fully digital and everything is single-tracked, rhythm-shifted to a metronome grid, and autotuned. The really big-budget producers like Max Martin will put a little effort into the mixdown, but by and large they're not even trying to make things sound good; they're just pumping out minimal-effort productions with default settings.

