Hacker News

I wonder if AI code completion reduces the value of abstractions when boilerplate code is easy to generate. Will this lead to highly repetitive code?


In Go it’s not uncommon to use code generation to produce boilerplate code, especially before the introduction of generics. Usually no human looks at this code. And if they do, they find the code they’re looking for contained in a few files. I personally found this pattern pretty good and easy to reason about.
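A minimal sketch of that pattern, assuming a hypothetical `Min` helper stamped out per concrete type (the function and type list here are illustrative, not from any real project). Pre-generics Go projects typically drove this from a `//go:generate` directive rather than writing each variant by hand:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Boilerplate template for a type-specific Min function -- the kind of
// repetitive code that pre-generics Go projects generated instead of
// writing by hand for every concrete type.
const minTmpl = `func Min{{.Name}}(a, b {{.Type}}) {{.Type}} {
	if a < b {
		return a
	}
	return b
}
`

type spec struct {
	Name string // suffix for the generated function name
	Type string // concrete Go type
}

// generate renders the template once per concrete type; in a real project
// the result would be written to a *_gen.go file by a //go:generate directive.
func generate(specs []spec) string {
	t := template.Must(template.New("min").Parse(minTmpl))
	var buf bytes.Buffer
	for _, s := range specs {
		if err := t.Execute(&buf, s); err != nil {
			panic(err)
		}
	}
	return buf.String()
}

func main() {
	fmt.Print(generate([]spec{{"Int", "int"}, {"Float64", "float64"}}))
}
```

The generated files are deterministic and checked in, so when someone does look, all the variants sit next to each other in a handful of files.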


If it works and it never needs a human, does it matter?


It works until it doesn't, at which point you have a massive, useless pile of uninterpretable garbage.


Then you ask another LLM to explain it to you, lol. I don’t think people are thinking hard enough about what this future is likely to look like.


Well, someone (a human being) still maintains it, and ultimately someone will likely find the code unmaintainable even if LLMs help. If you use ChatGPT enough, you’d know it has standards of its own, actually pretty high ones. At some point the code will likely still need to be refactored, by a human or not.


Crazy that people say shit like this without seeing a problem with it.


It’s really not a problem at a certain point. Also, we’ll probably have “remove and replace” but for software in the next couple years with this stuff.



