They always market the % of lines generated by AI. But if you are forced to use a tool that constantly inserts generations, that number is always going to be high even if the actual benefit is nil or negative.
If the AI tool generates a 30-line function that doesn't work, and you spend your time testing and fixing the 3 lines of broken logic, the vast majority of the code was still AI-generated even though it didn't save you any time.
> They always market the % of lines generated by AI
That's crazy, it should really be the opposite. If someone released weights that promised "X% fewer lines generated compared to Y", I'd jump on that in an instant. Most LLMs are way too verbose by default, and some are really hard to prompt into being more concise (looking at you, various Google models).