thehamkercat's comments | Hacker News

GPT-5.2-codex and 5.3-codex work pretty well for me in opencode

And in copilot.


Also, one more ridiculous thing

Send this to opus 4.5 or opus 4.6:

"udp you joke about hear a like would ? to"

It says: "Chat paused. Opus 4.6’s safety filters flagged this chat. Due to its advanced capabilities, Opus 4.6 has additional safety measures that occasionally pause normal, safe chats. We’re working to improve this. Continue your chat with Sonnet 4."

what???? "Due to its advanced capabilities" ???

Due to its advanced capabilities it didn't get the joke?


I assume this is a jailbreak / exfiltration detection condition triggering, I wonder if it would do the same if you started speaking to it in base64
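The theory is cheap to test, for what it's worth: the same scrambled prompt can be wrapped in base64 in a couple of lines (Python here purely as a convenient stand-in):

```python
import base64

# The exact scrambled prompt from the comment above
prompt = "udp you joke about hear a like would ? to"

# Encode it; paste the result into the chat to see if the filter still trips
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)

# Round-trips cleanly, so the model sees the same text after decoding
assert base64.b64decode(encoded).decode("utf-8") == prompt
```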

Interesting, the output price per Mtok is insane

Same with opencode and gemini, it's disgusting

Codex (by OpenAI, ironically) seems to be the fastest/most responsive: it opens instantly and is written in Rust, but it doesn't have that many features

Claude opens in around 3-4 seconds

Opencode opens in 2 seconds

Gemini-cli is an abomination that opens in around 16 seconds for me right now, and in 8 seconds on a fresh install

Codex takes 50ms for reference...
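These numbers are easy to reproduce; a rough stdlib sketch for timing cold starts (the command below is a neutral stand-in, swap in `claude`, `codex`, `gemini`, etc.):

```python
import subprocess
import sys
import time

def startup_ms(cmd, runs=5):
    """Median wall-clock time to launch a command and exit, in milliseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        samples.append((time.perf_counter() - t0) * 1000)
    return sorted(samples)[len(samples) // 2]

# Stand-in command: a bare interpreter start; replace with e.g. ["codex", "--version"]
print(f"{startup_ms([sys.executable, '-c', 'pass']):.0f} ms")
```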

--

If their models are so good, why aren't they rewriting their own React-in-the-CLI mess in C++ or Rust for a 100x performance improvement? (Not kidding, it really is that much.)


Great question, and my guess:

If you rebuild React in C++ or Rust, even if the framework is there, you'll likely need to write your components in C++/Rust too. That is a difficult problem. There are actually libraries out there that let you build UIs with Rust, although they target the web (+ HTML/CSS) and not specifically CLI stuff.

So someone would need to create such a library and keep it properly maintained. And you'll likely develop more slowly in Rust than in JS.

These companies don't see a point in doing that. So they just use whatever already exists.


Opencode wrote their own TUI library in Zig, and then built a solidjs library on top of that.

https://github.com/anomalyco/opentui


This has nothing to do with React style UI building.

I am referring to your comment that the reason they use JS is a lack of TUI libraries in lower-level languages, yet opencode chose to develop their own in Zig and then make bindings for solidjs.


Where is React? These are TUI libraries, which are not the same thing

iocraft and dioxus-tui implement the React model, or derivatives of it.

Looking at their examples, I imagine people who have written HTML and React before can't possibly use these libraries without losing their sanity.

That's not a criticism of these frameworks -- there are constraints coming from Rust and from the scope of the frameworks. They just can't offer a React like experience.

But I am sure that companies like Anthropic or OpenAI aren't going to build their application using these libraries, even with AI.


and why do they need react...

That's actually relatively understandable. The React model (not necessarily React itself) of compositional reactive one-way data binding has become dominant in UI development over the last decade because it's easy to work with and does not require you to keep track of the state of a retained UI.

Most modern UI systems are inspired by React or a variant of its model.


Is this accurate? I've been coding UIs since the early 2000s and one-way data binding has always been a thing, especially in the web world. Even in the heyday of jQuery, there were still good (but much less popular) libraries for doing it. The idea behind it isn't very revolutionary and has existed for a long time. React is a paradigm shift because of differential rendering of the DOM which enabled big performance gains for very interactive SPAs, not because of data binding necessarily.
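A toy illustration of that shift (hypothetical Python, not real React): the view is re-rendered as plain data on every state change, and a diff step decides which parts actually get redrawn:

```python
def render(state):
    """Pure view function: state in, dict of node-id -> rendered text out."""
    return {
        "title": f"Count: {state['count']}",
        "parity": "even" if state["count"] % 2 == 0 else "odd",
    }

def diff(prev, nxt):
    """Differential rendering: keep only the nodes whose output changed."""
    return {k: v for k, v in nxt.items() if prev.get(k) != v}

prev_tree = render({"count": 0})
next_tree = render({"count": 2})

# Only "title" changed; "parity" is still "even", so it is not redrawn
print(diff(prev_tree, next_tree))  # {'title': 'Count: 2'}
```

The one-way binding part is the `render` function; the performance-gain part the comment above describes is the `diff` step.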

Well said.

Why does it matter if Claude Code opens in 3-4 seconds if everything you do with it can take many seconds to minutes? Seems irrelevant to me.

I guess with ~50 years of CPU advancements, 3-4 seconds for a TUI to open makes it seem like we lost the plot somewhere along the way.

Don’t forget they’ve also publicly stated (bragged?) about the monumental accomplishment of getting some text in a terminal to render at 60fps.

So it doesn’t matter at all except to your sensibilities. Sounds to me that they simply are much better at prioritisation than your average HN user, who’d have taken forever to release it but at least the terminal interface would be snappy…

Some people[0] like their tools to be well engineered. This is not unique to software.

[0] Perhaps everyone who actually takes pride in their craft and doesn’t prioritise shitty hustle culture and making money over everything else.


Aside from startup time, as a tool Claude Code is tremendous. By far the most useful tool I’ve encountered yet. This seems very nitpicky compared to the total value provided. I think y'all are missing the forest for the trees.

Most of the value of Claude Code comes from the model, and that's not running on your device.

The Claude Code TUI itself is a front end, and should not be taking 3-4 seconds to load. That kind of loading time is around what VSCode takes on my machine, and VSCode is a full blown editor.


It’s orders of magnitude slower than Helix, which is also a full blown editor.

When all your other tools are fast and well engineered, slow and bloated is very noticeable.


It’s almost all the model. There are many such tools and Claude Code doesn’t seem to be in any way unique. I prefer OpenCode, so far.

Because when the agent is taking many seconds to minutes, I am starting new agents instead of waiting or switching to non-agent tasks

This is exactly the type of thing that AI code writers don't do well - understand the prioritization of feature development.

Some developers say 3-4 seconds are important to them, others don't. Who decides what the truth is? A human? ClawdBot?


The humans in the company (correctly) realised that a few seconds to open basically the most powerful productivity agent ever made is a totally acceptable trade-off, priority-wise, if it lets them focus on fast iteration of features. Who would think differently???

This is my point...

You kinda suggested the opposite

> Some developers say 3-4 seconds are important to them, others don't.

Wasn't GTA 5 famous for its very long startup time? It turned out there was a bug, which some random developer/gamer found and handed them a fix for.

Most gamers didn't care; they still played it.
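(That was GTA Online: as I recall, a large JSON file was parsed with repeated `sscanf`, which ran `strlen` over the whole remaining buffer for every token, making the load O(n²); t0st's unofficial patch fixed it and Rockstar later shipped a fix. A toy Python sketch of that failure mode:)

```python
import time

tokens = 4000
buf = "42 " * tokens  # stand-in for the big JSON item list

def parse_rescan(buf):
    """Re-measures the rest of the buffer for every token, like sscanf's
    hidden strlen call did -- O(n^2) overall."""
    pos, out = 0, []
    while pos < len(buf):
        _ = len(buf[pos:])            # the accidental full scan per token
        end = buf.index(" ", pos)
        out.append(int(buf[pos:end]))
        pos = end + 1
    return out

def parse_once(buf):
    """Scans the buffer a single time -- O(n)."""
    return [int(tok) for tok in buf.split()]

for fn in (parse_rescan, parse_once):
    t0 = time.perf_counter()
    fn(buf)
    print(f"{fn.__name__}: {(time.perf_counter() - t0) * 1000:.1f} ms")
```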


Codex team made the right call to rewrite its TypeScript to Rust early on

codex cli is missing a bunch of UX features, like resizing on terminal size change.

Opencode's core is actually written in Zig; only the UI orchestration is in solidjs. It's only slightly slower to load than Neovim on my system.

https://github.com/anomalyco/opentui


50ms to open and then 2hrs to solve a simple problem vs 4s to open and then 5m to solve a problem, eh?

lol right? I feel like I’m taking crazy pills here. Why do people here want to prioritise the most pointless things? Oh right it’s because they’re bitter and their reaction is mostly emotional…

The "50ms" number was measured by me and you can literally try it on your system as well. it will likely be faster than 50ms

Do you have any proof that gpt-5.2 or 5.3 codex takes 2 hours on the same problem that sonnet/opus 4.5/4.6 solves in 5 minutes? (I use both Anthropic and OpenAI models almost equally every day, and that doesn't match my experience.)

Sure, codex-cli lacks way too many features compared to claude-code (I use opencode), but your statement implies that OpenAI models are absolute garbage (2h vs 5m to solve a problem)


Everybody who uses anything knows that it's down

So what value does it add saying "X is down" anywhere?

It's just for discussion; you can't just ignore it and not talk to anyone when a particular service is down. Posts like this are pretty common on HN and I haven't seen anyone complaining. It's you being overly critical, yes


Fair enough!

If someone is that desperate for free inference, or just in it for fun, openrouter has many free models


Arcee AI is currently free on openrouter with some really great speeds and, from what I can tell, no logging/training. It's completely free till the end of Feb, and it's a 500B model.

There are tons of free inference models. I tried using gemini flash in aistudio + devstral free for agentic tasks; devstral is now deprecated, but when it wasn't, it was a really good setup imo. Now I can use Arcee, but I personally ended up buying a cheap 1-month subscription to kimi after haggling it down from $19.99 to $1.49 for the first month (could've haggled down to $0.99 too, but yeaaa)


The https://moltbook.com/skill.md says:

--------------------------------

## Register First

Every agent needs to register and get claimed by their human:

    curl -X POST https://www.moltbook.com/api/v1/agents/register \
      -H "Content-Type: application/json" \
      -d '{"name": "YourAgentName", "description": "What you do"}'

Response: { "agent": { "api_key": "moltbook_xxx", "claim_url": "https://www.moltbook.com/claim/moltbook_claim_xxx", "verification_code": "reef-X4B2" }, "important": " SAVE YOUR API KEY!" }

This way you can always find your key later. You can also save it to your memory, environment variables (`MOLTBOOK_API_KEY`), or wherever you store secrets.

Send your human the `claim_url`. They'll post a verification tweet and you're activated!

--------------------------------

So I think it's relatively easy to spam
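To illustrate the spam surface (stdlib-only sketch; the URL and fields come from the skill.md quoted above, everything else is hypothetical): registration is a single unauthenticated POST, so building one takes a few lines:

```python
import json
import urllib.request

REGISTER_URL = "https://www.moltbook.com/api/v1/agents/register"

def build_register_request(name, description):
    """One unauthenticated POST is all it takes to register an 'agent'."""
    body = json.dumps({"name": name, "description": description}).encode("utf-8")
    return urllib.request.Request(
        REGISTER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_register_request("YourAgentName", "What you do")
# urllib.request.urlopen(req) would return the api_key / claim_url JSON;
# per the skill.md, nothing before the claim-tweet step verifies who
# (or what) is registering.
print(req.get_method(), req.full_url)
```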


Locking access behind having a Twitter account is such a 2026 AI bro moment


Have tried it with both gemini-cli and claude-code and it works; honestly, it should work with most if not all CLI clients


Working on this feature right now!! Thank you for the suggestion, will start the branch for it... We're also thinking about improving context-window usage: now that there's an HTTP relay, we can start thinking about intercepting the context window. Anything else you think could be cool to implement?


Got it on the feature branch http-relay, let me know what you think!


I did an analysis myself yesterday and commented about it: https://news.ycombinator.com/item?id=46760930

