It says: Chat paused
Opus 4.6’s safety filters flagged this chat. Due to its advanced capabilities, Opus 4.6 has additional safety measures that occasionally pause normal, safe chats. We’re working to improve this. Continue your chat with Sonnet 4.
what???? "Due to its advanced capabilities" ???
Due to its advanced capabilities it didn't get the joke?
Codex (by OpenAI, ironically) seems to be the fastest/most responsive: it opens instantly and is written in Rust, but it doesn't have that many features
Claude opens in around 3-4 seconds
Opencode opens in 2 seconds
Gemini-cli is an abomination which opens in around 16 seconds for me right now, and in 8 seconds on a fresh install
Codex takes 50ms for reference...
--
If their models are so good, why are they not rewriting their own react in cli bs to c++ or rust for 100x performance improvement (not kidding, it really is that much)
If you build React in C++ or Rust, even if the framework is there, you'll likely need to write your components in C++/Rust. That is a difficult problem. There are actually libraries out there that let you build web UI with Rust, although they target the web (+ HTML/CSS) and not specifically CLI stuff.
So someone would need to create such a library and keep it properly maintained. And you'll likely develop more slowly in Rust than in JS.
These companies don't see a point in doing that. So they just use whatever already exists.
I am referring to your comment that the reason they use JS is a lack of TUI libraries in lower-level languages, yet opencode chose to develop their own in Zig and then make bindings for SolidJS.
Looking at their examples, I imagine people who have written HTML and React before can't possibly use these libraries without losing their sanity.
That's not a criticism of these frameworks -- there are constraints coming from Rust and from the scope of the frameworks. They just can't offer a React-like experience.
But I am sure that companies like Anthropic or OpenAI aren't going to build their application using these libraries, even with AI.
That's actually relatively understandable. The React model (not necessarily React itself) of compositional reactive one-way data binding has become dominant in UI development over the last decade because it's easy to work with and does not require you to keep track of the state of a retained UI.
Most modern UI systems are inspired by React or a variant of its model.
Is this accurate? I've been coding UIs since the early 2000s and one-way data binding has always been a thing, especially in the web world. Even in the heyday of jQuery, there were still good (but much less popular) libraries for doing it. The idea behind it isn't very revolutionary and has existed for a long time. React is a paradigm shift because of differential rendering of the DOM which enabled big performance gains for very interactive SPAs, not because of data binding necessarily.
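The "differential rendering" being discussed can be sketched in a few lines: diff an old and a new virtual tree and emit only the patches needed to update the real UI. This is an illustrative toy (the node format and patch ops are my own invention), not React's actual reconciliation algorithm.

```python
def diff(old, new, path="root"):
    """Return (path, op, value) patches turning `old` into `new`.

    Toy virtual-DOM nodes: (tag, props_dict, children_list) tuples;
    text nodes are plain strings. Not React's real algorithm.
    """
    if old == new:
        return []  # identical subtree: nothing to redraw
    if isinstance(old, str) or isinstance(new, str) or old[0] != new[0]:
        return [(path, "replace", new)]  # type/tag changed: replace subtree
    patches = []
    if old[1] != new[1]:
        patches.append((path, "set-props", new[1]))
    old_kids, new_kids = old[2], new[2]
    for i in range(max(len(old_kids), len(new_kids))):
        child_path = f"{path}/{i}"
        if i >= len(old_kids):
            patches.append((child_path, "insert", new_kids[i]))
        elif i >= len(new_kids):
            patches.append((child_path, "remove", None))
        else:
            patches.extend(diff(old_kids[i], new_kids[i], child_path))
    return patches

old = ("div", {}, [("span", {}, ["count: 0"])])
new = ("div", {}, [("span", {}, ["count: 1"])])
print(diff(old, new))  # only the changed text node gets a patch
```

The performance win is exactly what the comment above describes: a highly interactive SPA re-renders its whole virtual tree cheaply, but only the small patch list touches the (expensive) real DOM.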
So it doesn’t matter at all except to your sensibilities. Sounds to me that they simply are much better at prioritisation than your average HN user, who’d have taken forever to release it but at least the terminal interface would be snappy…
Aside from startup time, as a tool Claude Code is tremendous. By far the most useful tool I've encountered yet. This seems very nitpicky compared to the total value provided. I think y'all are missing the forest for the trees.
Most of the value of Claude Code comes from the model, and that's not running on your device.
The Claude Code TUI itself is a front end, and should not be taking 3-4 seconds to load. That kind of loading time is around what VSCode takes on my machine, and VSCode is a full blown editor.
The humans in the company (correctly) realised that a few seconds to open basically the most powerful productivity agent ever made, in exchange for focusing on fast iteration of features, is a totally acceptable trade-off priority-wise. Who would think differently???
lol right? I feel like I’m taking crazy pills here. Why do people here want to prioritise the most pointless things? Oh right it’s because they’re bitter and their reaction is mostly emotional…
The "50ms" number was measured by me and you can literally try it on your system as well. it will likely be faster than 50ms
Do you have proof that GPT-5.2 or 5.3 Codex takes 2 hours for the same problem that Sonnet/Opus 4.5/4.6 solves in 5 minutes? (I use both Anthropic and OpenAI models daily, almost equally, and I can't relate to what you said)
Sure, codex-cli lacks way too many features compared to claude-code (I use opencode), but your statement implies that OpenAI models are absolute garbage (2h vs 5m to solve a problem)
So what value does it add saying "X is down" anywhere?
It's just for discussion; you can't just ignore a service being down and not talk about it with anyone. Posts like this are pretty common on HN and I haven't seen anyone complaining. It's you being overly critical, yes
Arcee AI is currently free on OpenRouter until the end of February, with some really great speeds and, from what I can tell, no logging/training, and it's a 500B model.
There are tons of free inference models. I tried using Gemini Flash in AI Studio + the free Devstral for agentic tasks; Devstral is now deprecated, but before that it was a really good setup imo. Now I can use Arcee, but I personally ended up buying a cheap 1-month Kimi subscription after haggling it down from $19.99 to $1.49 for the first month (could've haggled to $0.99 too, but yeaaa)
This way you can always find your key later. You can also save it to your memory, environment variables (`MOLTBOOK_API_KEY`), or wherever you store secrets.
Send your human the `claim_url`. They'll post a verification tweet and you're activated!
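For the "environment variables" option mentioned above, a minimal sketch of how a script might read the key at startup. The helper name and the error behavior are my assumptions, not Moltbook's actual API; only the `MOLTBOOK_API_KEY` variable name comes from the instructions.

```python
import os

def load_api_key(env_var="MOLTBOOK_API_KEY"):
    """Fetch the API key from the environment; fail loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it or store it elsewhere")
    return key

# Demo only: set a fake key so the lookup succeeds.
os.environ["MOLTBOOK_API_KEY"] = "demo-key"
print(load_api_key())
```

Failing loudly when the variable is missing is deliberate: a silent empty key just surfaces later as a confusing 401.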
Working on this feature right now!! Thank you for the suggestion, will start the branch for it... While thinking about improving context-window usage: now that we have an HTTP relay, we can start thinking about intercepting the context window. Anything else you think could be cool to implement?