How do we maintain best practices when the compiler outputs a different result for the spec at any given time? How do we obtain reproducible builds? Do we pin to a specific version of our compiler (i.e., a snapshot of the model; is this possible anywhere except locally right now?) and rigorously test changes after any update to our "toolchain"? How do we retain control over our "toolchain" (again, apart from local), especially when said "toolchain" can, for all its users simultaneously, fold to political pressure from state regimes? And, if the code generated by LLMs is the build artifact, why is it now okay to check the build artifact into source control?
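Pinning the "toolchain" could, in principle, look like pinning any other build dependency: record an exact model snapshot identifier alongside the spec, and fingerprint the checked-in artifact so drift after a model update is detectable. A minimal sketch, assuming a hypothetical lockfile format (the model name and fields here are illustrative, not any vendor's API):

```python
import hashlib

# Hypothetical "lockfile" pinning the model snapshot used to generate code,
# analogous to pinning a compiler version in a conventional toolchain.
lockfile = {
    "model": "example-model-2025-01-15",  # illustrative snapshot id
    "spec": "spec/feature.md",
    "artifact": "src/feature.py",
}

def artifact_digest(content: bytes) -> str:
    """Fingerprint the generated artifact so output drift is detectable."""
    return hashlib.sha256(content).hexdigest()

# On regeneration, compare digests: a mismatch means the "compiler"
# changed its output for the same spec.
old = artifact_digest(b"def feature(): return 1\n")
new = artifact_digest(b"def feature(): return 1\n")
assert old == new  # reproducible only if spec, model, and sampling are all pinned
```

This only verifies reproducibility after the fact, of course; it does nothing to guarantee the pinned snapshot stays available, which is the commenter's point.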
There may come a day when we, as an industry, decide that simply doing it by hand is more expedient when it comes to resolving urgent production issues. We may not know the pain we are causing ourselves until well into the future when it has become too much to bear without a visit to the proverbial doctor.
Writing code by hand and managing the mental model of its execution and architecture is one of the few remaining joys of my day job, apart from delivering a good product people want and use and being helpful. Even the small things, the tedious chores of refactoring or scaffolding that initial bit of CRUD boilerplate, are steps that matter to me. The calluses matter. The tedium matters. These moments of pain and drudgery inform me on what to do differently next time in a way I worry I would not appreciate otherwise, were specific tools thrust upon me.
I remain because I remain hopeful the pendulum will swing the other way someday.
Completely resonate with this. There don't seem to be many of us, at least in my online bubble, but you're not alone.
I believe and hope eventually we'll come around to valuing people who have put in the work - not just to understand and review output but to make choices themselves and keep their knowledge and judgement sharp - when we fully realize the cost of not doing so.
I value people who put in the work. I also value being able to make a little one-off, single-use gadget without having to spend a week doing remedial Python every few months. I can understand the code once it's written, but writing it is a separate skill.
Of course, having learned a few languages, understanding data types, knowing to prompt it for idiomatic code and check against best practices, etc., is vital to being able to do that. The basic skills need to be developed even if not everyone gets the same value out of being able to write code.
I think throwaway use cases have very different requirements than products we expect to maintain, and they need to be treated differently. Go nuts with AI to generate a chart or a one-off tool or whatever, if you don't care about deepening your skill to do those things yourself.
That’s why I laugh when people are like “oh, AI writes all the tests, it’s so much easier.” If your code is hard to test, you need to change the abstraction or the interface. Tests are the first reuse of your code, so if it’s a pain to use in tests, it’s going to be terrible to use in general.
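The "tests are the first reuse" point can be made concrete with a small sketch (the function names here are illustrative, not from the thread): a function that reaches for the system clock internally is painful to test, while one that accepts the clock as a parameter is trivially reusable, in tests and everywhere else:

```python
import datetime

# Hard to test: the dependency on "now" is baked into the function,
# so a test would have to patch or wait on the real clock.
def is_expired_hardwired(deadline: datetime.datetime) -> bool:
    return datetime.datetime.now() > deadline

# Easy to test: the clock is part of the interface. The test becomes
# just the first caller that reuses the function with a different clock.
def is_expired(deadline: datetime.datetime, now: datetime.datetime) -> bool:
    return now > deadline

deadline = datetime.datetime(2030, 1, 1)
assert not is_expired(deadline, now=datetime.datetime(2029, 12, 31))
assert is_expired(deadline, now=datetime.datetime(2030, 1, 2))
```

The refactor is trivial, but it is a design decision, which is exactly the kind of work the comment argues cannot be delegated to test generation.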
And not to mention that most of the test code available for AI training is trash, so no wonder AI-generated tests are not only worthless but costly, in terms of the false sense of reliability they create.
Wanted to chime in that, at my job, hand-writing code has been massively helpful when debugging it. My mental model of what can go wrong is much easier to form if I wrote the code. An LLM will not always be able to solve these incidents, no matter how many logs you throw at it.
I've found that I reach for Copilot most often when working on frontend JavaScript code. Will the incentive to improve the frontend libraries, browser standards, etc. vanish now that LLMs let us avoid some of this pain?
I wonder if they will improve in a specific direction - frameworks and libraries built to be easier for LLMs to use.
This could even happen through accidental evolution: a framework that is easier on an LLM's context window results in more successful projects, which results in more training data, which results in LLMs being even better at it.
Yes! I love to code. I love the entire process end-to-end. I love doing all the things people say they prefer to hand off to LLMs. Makes me sad to see all the people allowing corporations to slowly rob them of all the little joys this field has to offer.
Especially true for junior-mid engineers. The brain stores and comprehends what you tend to repeat.
If I don't solve math problems, I won't understand how to solve them, no matter how many times I watch videos of people solving similar problems. This is what LLM usage early on will ultimately lead to, and anyone who claims "oh, by the time I'm senior, LLMs will be much better than me" only proves my point.
I continue to do all of those things but have Claude do the typing for me, if that makes any sense. I'm directing it on almost a line by line basis, I just am not that interested in doing syntax pushups anymore.
Open source developers should be paid for their efforts, and for their contributions to LLM models, past, present, and future, rather than be enticed into paying to participate six months down the road.
OSS developers are driven by something other than just money, I believe. They are proud of their work, of giving something to the community with their name on it. So a gesture of respect like giving them a free subscription matters, I think, because it means they were acknowledged and respected.
Serious question, outside of the Bay Area, are there therapists whose specialty is in catering to the needs and concerns of developers? Obviously AI therapy is not a serious suggestion here. This is going to be a burgeoning corner of the practice at the US' current trajectory.