Lines of code is not a measure of anything meaningful on its own. The mere fact that you suggest it as proof that you are more productive makes me think you are not.
On a serious note: LoC can be useful in certain cases (e.g. to estimate the complexity of a code base before you dive in, even though it's imperfect there, too). But, as others have said, it's not a good metric for the quality of software. If anything, I would say fewer LoC is a better indication of high-quality software (but again, not a very useful metric).
There is no simple way to just look at the code and draw conclusions about the quality or usefulness of a piece of software. It depends on sooo many factors. Anybody who tells you otherwise is either naive or lying.
> The SWE industry is eagerly awaiting your proposed accurate metric.
There is none. They are all variants of bad. LoC is probably the worst metric of all, because it says nothing about quality, or features, or the number of products shipped. It's also the easiest metric to game. Just write GoF-style Java, and you're off to the races. Don't forget to put a source code license header at the top of every file. Boom. LoC.
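To make the gaming concrete, here is a hypothetical sketch (Python instead of Java, but the idea is the same): two functionally identical implementations of "sum a list", one terse and one padded with GoF-style ceremony, plus the kind of naive counter a raw LoC metric relies on. All names here are made up for illustration.

```python
# Terse version: one line of actual logic.
def total_terse(values):
    return sum(values)


# Ceremony-heavy version of the exact same behaviour. A raw LoC metric
# scores it several times higher for delivering the same feature.
class ValueAggregationStrategy:
    """Strategy object wrapping what could be a single expression."""

    def __init__(self):
        self._accumulator = 0

    def add_value(self, value):
        self._accumulator += value

    def get_result(self):
        return self._accumulator


def total_verbose(values):
    strategy = ValueAggregationStrategy()
    for value in values:
        strategy.add_value(value)
    return strategy.get_result()


def naive_loc(source):
    # Crude LoC counter: non-blank lines of source text.
    return sum(1 for line in source.splitlines() if line.strip())
```

Both functions return the same result for the same input; only the line count differs, which is exactly the point.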
The only metrics that barely work are:
- features delivered per unit of time. Requires an actual plan for the product, and an understanding that some features will inevitably take a long time
- number of bugs delivered per unit of time. This one is somewhat inversely correlated with LoC and features, by the way: the fewer lines of code and/or features, the fewer bugs
- number of bugs fixed per unit of time. The faster bugs are fixed the better
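As a minimal sketch of what "per unit of time" means in practice, here is one way to compute such rates from a hypothetical resolution log (the log format and field names are assumptions, not a real tool's API):

```python
from datetime import date

# Hypothetical log: each entry is (kind, date_resolved).
log = [
    ("feature", date(2024, 1, 5)),
    ("bug_fix", date(2024, 1, 9)),
    ("feature", date(2024, 1, 20)),
    ("bug_fix", date(2024, 2, 2)),
]


def rate_per_week(entries, kind, start, end):
    # Count entries of the given kind resolved within [start, end],
    # divided by the number of weeks in that window.
    weeks = (end - start).days / 7
    count = sum(1 for k, d in entries if k == kind and start <= d <= end)
    return count / weeks
```

Even this toy version shows why the metrics only "barely" work: the numbers are easy to compute, but deciding what counts as a feature, a bug, or a fair time window is where all the difficulty hides.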
I understand that you would prefer to be more “productive” with AI but without any sales, than to be less productive without AI but with sales.
To clarify: people critical of the “productivity increase” argument question whether that productivity is of the useful kind, or merely an increase in useless output.