It works if you figure out a way to have a permanent dark/light side. But really the issue is that we can do compute with light/photons and radiation, and not much has been done in those areas.
College is still worth it for maybe the top 100 schools, especially well-funded state schools. Why? Because people still get hired based on such a connection alone. Think about Waterloo. It's a mediocre school with a strong pipeline to SV. You wanna end up in SV but didn't study hard in high school, or just weren't smart enough for MIT/Stanford? Go to Waterloo.
That's mentioned in the article, but is the lock-in really that big? In some cases, it's as easy as changing the backend of your high-level ML library.
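For what it's worth, with a multi-backend library like Keras 3 the swap really can be a one-liner. A minimal sketch (the env var is Keras 3's actual backend mechanism; the toy model below is just for illustration):

```python
import os

# Keras 3 reads its backend from this env var; it must be set before importing keras.
# Valid values include "jax", "tensorflow", and "torch".
os.environ["KERAS_BACKEND"] = "jax"  # swap to "torch" or "tensorflow" without touching model code

import keras
import numpy as np

# The model definition itself is backend-agnostic.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

x = np.random.rand(32, 16).astype("float32")
y = np.random.randint(0, 10, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```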
That's what it is on paper. But in practice you trade one set of hardware idiosyncrasies for another and unless you have the right people to deal with that, it's a hassle.
On top of that, when you get locked into Google Cloud, you're effectively at the mercy of their engineers to optimize and troubleshoot. Do you think Google will help their potential competitors before they help themselves? Highly unlikely, considering their actions over the past decade-plus.
I'm pretty sure we are in an Apple vs. Android situation, where you give lifetime Apple users an Android phone, and after a day they report that Android is horrid. In reality, they just aren't used to how stuff is done on Android.
I think many devs are just in tune with the "nature" of Claude, and run aground more easily when trying to use Gemini or ChatGPT. This also explains why we get these perplexing mixed signals from different devs.
There are some clear objective signals that aren’t just user preference. I shelled out the $250 for Gemini’s top tier and am profoundly disappointed. I had forgotten that loops were still a thing. I’ve hit this multiple times in Gemini CLI, and in different projects. It gets stuck in a loop (as in the exact same, usually nonsense, message over and over) and the automated loop detection stops the whole operation. It also stops in the middle of an operation very frequently. I don’t hit either of these in Claude Code or Codex.
There certainly is some user preference, but the deal breakers are flat-out shortcomings that other tools solved (in AI terms) long ago. I haven't dealt with agent loops since March with any other tool.
I'm constantly floored by how well claude-cli works, while gemini-cli stumbled on something simple the first time I used it, and Gemini 3 Pro's release availability was just bad.
Agreed. Been using Claude Code daily for the past year, with Codex as a fallback when Claude gets stuck. Codex has two problems: its Windows support sucks, and it's way too "mission driven" vs. the collaborative Claude. Gemini CLI falls somewhere in the middle: it has some seriously cool features (Ctrl+X to edit the prompt in notepad), and its web research capability is actually good.
Ah, Gemini is the model, and Google Vertex AI is like AWS Bedrock: it's the Google service actually serving Gemini. I wonder if Gemini can be used from OpenCode when made available through a Google Workspace subscription...
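For anyone curious about the distinction in code, here's roughly what calling Gemini through Vertex AI looks like with the Python SDK (project ID, region, and model name are placeholders; I haven't checked OpenCode's side of this):

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: use your own GCP project and region.
vertexai.init(project="my-gcp-project", location="us-central1")

# Vertex AI is the serving layer; Gemini is the model it serves.
model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Summarize what Vertex AI is in one sentence.")
print(response.text)
```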
> It's silly of them to say you need a "modern terminal emulator", it's wrong and drives people away. I'm using xfce4-terminal.
Good. I'd rather use a tool designed with focus on modern standards than something that has to keep supporting ancient ones every time they roll an update.
I too am curious. My daily driver has been Claude Code CLI since April. I just started using Codex CLI and there are a lot of gaps; the most annoying is that permissions don't seem to stick. I am so used to plan mode in Claude Code CLI and really miss that in Codex.
The model needs to be trained to use the harness. Sonnet 4.5 and gpt-5.1-codex-max are "weaker" models in the abstract, but you can get much more mileage out of them due to post-training.
I don't think the current LLM leader is going to come out of this AI race on top. OpenAI is like Yahoo! in my view: they were there at the beginning and led by default. Someone, somewhere will be the Google of this era. It won't be the LLM but the next step, leaps and bounds better than LLMs. I also think we won't use even 1/100th of the energy being projected right now.
Google may very well be the Google of this era. They have demonstrated the ability to maintain parity on the engineering side, they have a long-running advantage on OpEx with TPUs, they have the most data, and the most trusted brand.
AI collapses the value of IP across the board, because AI trends toward being the only IP. That means the marketplace will be defined by operational efficiency, the ability to build and run systems at massive global scale, access to capital, and government connections, so Microsoft, Amazon, and Google probably stay on top.
They barely caught up after leaking talent for the past 4-5 years. I am not drinking the Kool-Aid. A lot of talented people are going back home to China. I am 99% sure the next Google will come out of Asia.
I don't think one should worry as much about what media they are backing up to as about whether they can answer the question "does my data resiliency match my retention needs?"
And regularly test that restores actually work; nothing is worse than thinking you had backups and then finding they don't restore right.
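A bare-bones sketch of what "test the restore" can mean in practice (the paths are hypothetical and the actual restore step depends on your backup tool; this only covers the verification):

```python
import hashlib
import pathlib

def file_hashes(root: pathlib.Path) -> dict[str, str]:
    """Map relative paths to SHA-256 digests for every file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

# Hypothetical paths: the live data, and a scratch dir you just restored into.
source = file_hashes(pathlib.Path("/srv/data"))
restored = file_hashes(pathlib.Path("/tmp/restore-test"))

missing = source.keys() - restored.keys()
corrupt = {k for k in source.keys() & restored.keys() if source[k] != restored[k]}

if missing or corrupt:
    raise SystemExit(f"Restore check FAILED: {len(missing)} missing, {len(corrupt)} mismatched")
print("Restore check passed: all files present with matching checksums")
```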
80-90% of teachers are not equipped to handle AI in the classroom. You can't expect teachers to keep up with a SOTA that's rapidly changing, and at the same time punish students for using available tools. Especially in public schools, teaching quality has plummeted in the past decade. This also applies to lower-tier colleges. The whole point of education is to learn, not to weed out talent. If students want to use AI tools to take shortcuts, that's entirely on them. It will catch up with them at some point.
At my school, long before AI, the work was of two kinds: homework-type essays/problems that you could cheat on if you wanted, but there was no point because the feedback was for your benefit and didn't count toward anything; and proper exams where you were watched and couldn't cheat easily.
Not sure why they don't just do that? It worked fine and would be compatible with LLM use.
We're almost at the point CGP Grey predicted over a decade ago with the "digital Aristotle" concept. Teachers will eventually have to transition into class-babysitting roles, but the transition period will be ugly as long as the tech stays at this level, where it renders regular teaching impossible while not yet being good enough to usably replace it.