At a minimum I write my own automated tests for LLM code (including browser automation) and think them through carefully. That always exposes limitations in Claude's solutions, surfaces errors, and forces me to revisit the code until I fully understand what I'm generating.
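For the browser-automation side, the kind of test I mean is only a few lines with Playwright. A minimal sketch, assuming a hypothetical login page and selectors that aren't from any real app:

    # Minimal sketch of the sort of test I write against LLM-generated UI code.
    # The URL and selectors are hypothetical placeholders.
    from playwright.sync_api import sync_playwright

    def test_login_rejects_bad_password():
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()

            # Drive the UI the way a user would.
            page.goto("https://example.com/login")
            page.fill("#email", "user@example.com")
            page.fill("#password", "definitely-wrong")
            page.click("button[type=submit]")

            # Assert on the behavior the generated code is supposed to guarantee.
            error = page.wait_for_selector(".error-message")
            assert "invalid" in error.inner_text().lower()

            browser.close()

Even a handful of tests like this tends to shake out the edge cases Claude quietly skipped.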
Mostly the LLMs do the first pass, and I rewrite a lot of it with a better high-level systems approach and an eye toward "will the other devs on the team understand and reuse this?"
I'd still rather decipher a lot of the default, overly verbose LLM code than some of the crazy stuff past devs have created by trying to be clever.