I would be highly amused if the OP revealed that the essay/rant had been written by o3-mini tomorrow.
While I don't really understand what fuels this person's Substack Forensic Journalist energy, I can only say that I am thrilled to pay $20 to OpenAI because it delivers outrageous value to me as a solo, self-taught "engineer"* designing reasonably complex physical devices intended for sale. Air quotes because here in Ontario, if you don't got the ring, you don't got no business using the title.
So my gut reaction is that people who cannot fathom meaningful use of modern LLMs are, by definition, people who are not trying to solve complex problems in domains they aren't yet super confident in. No judgement, and this is intentionally reductive; an LLM skeptic is lots of other things, too. I'm just saying that if you want to build hard things, reasoning models are dramatic force multipliers.
> So my gut reaction is that people who cannot fathom meaningful use of modern LLMs are, by definition, people who are not trying to solve complex problems in domains they aren't yet super confident in.
OTOH, I would never trust anything built by someone who isn't super confident in the domain while heavily relying on an LLM.
Nobody starts off super confident! I'm proud to be a life-long learner. I've just learned more, about more things, in the last three years than I did in the previous few decades of coding every day.
One person's imposter syndrome is often superior to another's blustery confidence.