Hacker News | ramraj07's comments

Anyone who's used both Claude and ChatGPT will instantly agree which is better, by a large margin. There's maybe a brand-recognition long tail, but those are more likely the rare occasional users on the free tier. Thus ChatGPT is becoming the shitty free AI app while Claude is what you use to get real work done. Time (in months) will tell how this goes.

Avoiding doing something because it could cause job loss has never been, and will never be, a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI, and let other countries build the models that will kill the same jobs two months later instead?

It's definitely against the terms. The Claude Code OAuth token is only supposed to be used with Claude Code. I hope no one gets their Claude account banned trying to use this.

That's literally what it does :D That is, it uses the auth token to drive Claude Code (the CLI binary). Check the code here: https://github.com/desplega-ai/agent-swarm/blob/main/src/com...

Those of us who use AI to get real work done in real products very much appreciate the value of each token, given how much operational overhead it offsets. A bubble pop, if one does indeed happen, would at best be as disruptive as the dot-com bust.

It's a full employment program for security engineers.

How disruptive dot com was depends on where you were.


Star Trek only got a good society after discovering FTL and the existence of all manner of alien societies. And even then, Star Trek's story motivations for why we turned good sound quite implausible given what we know about human nature and history. No effing way it will ever happen, even if we discover aliens. It's just a wishful fever dream.

It isn't even just the aliens (although my headcanon is that the human belief that they "evolved beyond their base instincts" is part trauma response to first contact and World War 3, and part Vulcan propaganda/psyop). Star Trek's post-scarcity society depends on replicators, transporters, and free energy, all of which defy the laws of physics in our universe (on top of FTL).

We'll never have Star Trek. We'll also never have SkyNet, because SkyNet was too rational. It seems obvious that any AGI that emerges from LLMs - assuming that's possible - will not behave according to the old "cold and logical machine" template of AI common in sci-fi media. Whatever the future holds will be more stupid and ridiculous than we can imagine, because the present already is.


I'm definitely not a Star Trek connoisseur, but I thought a big part of the lore is the "never again"-ish response to the wars up through WW3?

But anyway, I share your lack of optimism.


Well, they didn't necessarily stop waging war in Star Trek either. They also spent most of their time trying not to get defeated by parasitic artificial intelligence.

Great post indeed, but let me ask you: put yourself in the LLM's shoes. Instead of reading through coherent lines of code that are exclusively about solving problems, you now have random characters before every line that mean something (because the presence of the edit tool implies it), but nothing about your actual problem. Do you reckon the LLM will be distracted a little bit? The benchmark deliberately sidesteps the actual intelligence of the model on the task at hand, so while the author feels successful at their subtask, it's very possible they've lost the war. This seems to be the beauty of AI engineering: the smarter you think you are about something, the bigger the fall.


Why do we need a hash for every line? Why can't we mark every fifth line (or get smarter: calculate the entropy of the lines and jump further over empty boilerplate)? I feel that adding a random 3-char header to every line, while making the edit tool smarter, makes the overall content harder to understand.
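A minimal sketch of the every-fifth-line idea, assuming a 3-character content hash as the anchor; the function name and prefix format here are hypothetical, not taken from the tool discussed above:

```python
import hashlib

def anchor_every(lines, step=5):
    """Prefix every `step`-th line with a 3-char content hash;
    pad the rest so columns stay aligned for the edit tool."""
    out = []
    for i, line in enumerate(lines):
        if i % step == 0:
            tag = hashlib.sha1(line.encode()).hexdigest()[:3]
            out.append(f"{tag}| {line}")
        else:
            out.append(f"   | {line}")
    return out

src = ["def f():", "    return 1", "", "def g():", "    return 2", "x = f()"]
for ln in anchor_every(src):
    print(ln)
```

The trade-off is exactly the one debated here: fewer anchors means less per-line noise for the model, but an edit targeting an unanchored line must be addressed relative to the nearest hashed line.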


It's why I added read({ change_file: bool = false }) and change_file(...); so it doesn't get confused by default if it's just investigating.

I suspect doing it only every fifth line would make it less clear for the LLM.

I'm just experimenting, I wouldn't suggest you use this by default unless you're looking to experiment.


The experience in the claude app is fine.


And blue origin has no delays because they don't even launch anything.



Agreed: every line you ship, whether you wrote it or not, you are responsible for. In that regard, while I write a lot of code entirely with AI, I still endeavor to keep the line count as minimal as possible. This means you never write both the main code and the tests with AI. I'd rather have no tests than AI tests (we have a QA team writing those). This kinda works.

