maybe, but I find that it makes it much faster to do things that _I already know how to do_, and can only slowly, ploddingly get me to places that I don't already have a strong mental model for, as I have to discover mistakes the hard way
I've only used Copilot (and only for Python), but this is just about exactly right.
If I'm writing a series of very similar test cases, it's great for spamming them out quickly, but I still need to make sure they're actually right. It's easier to spot errors because I didn't type them out myself.
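For a concrete (made-up) example, this is the kind of repetitive test code it fills in well; once the first case exists it happily suggests the rest, and the function/module names here are just illustrative:

```python
import pytest

from mypkg.slug import slugify  # hypothetical module and function, for illustration only


@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("Already-slugged", "already-slugged"),
    ("Symbols & Stuff!", "symbols-stuff"),
])
def test_slugify(raw, expected):
    # each case still needs a human check that the expected value is actually correct
    assert slugify(raw) == expected
```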
It's also decent for writing various bits of boilerplate: list/dict comprehensions, log messages (usually half wrong, but close enough to what I was thinking), time formatting, that kind of thing. All very standard stuff that I've done a million times but may be a little rusty on. Basically StackOverflow question fodder.
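To make that concrete, here's a made-up sketch of the sort of thing I mean (comprehension, log line, timestamp formatting), nothing from any real codebase:

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger(__name__)


def summarize(records):
    # dict comprehension keyed by id: standard stuff, easy to be rusty on the exact shape
    by_id = {r["id"]: r for r in records}
    logger.info("loaded %d records (%d unique ids)", len(records), len(by_id))
    # timestamp formatting, the kind of thing I always end up looking up
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return by_id, stamp
```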
But for anything complex and domain-specific, it's wrong more often than it's right.
things backed by Claude Sonnet can get a little further out than Copilot can, and when it’s in agent mode _sometimes_ it will do things like read the library source code to understand the API, or google for the docs
but the principle is the same: if the human isn’t doing theory-building, then no one is
Exactly. I'm in a situation right now where I've inherited a bunch of systems without enough documentation, and nobody knows how some things work. Sure, we've got features to build - but one of the most important things I can possibly do is make sure someone knows how stuff works, and write it down.