Well, in this case it's kind of similar to how people write code: a loop of writing something, reviewing/testing, and improving until we're happy enough.
Sure, you'll get better results with an LLM when you're more specific, but what's the point then? I don't need AI when I already know what changes to make.
Reading to understand all the subtext and side-effects can be harder than writing, sure. But that won't stop people from trying this approach and hammering out code full of exactly those subtle bugs.
Human developers will shift toward this kind of system integration and diagnostics work, with more emphasis on reading and understanding than on the actual writing. It's a bit like working with contractors.