halfcat's comments | Hacker News

100% agree. My only superpower is weaponized “trying to understand”, spending a Saturday night in an obsessive fever dream of trying to wrap my head around some random idea.

That happens to produce good code as a side effect. And a chat bot is perfect for this.

But my obsession is not with output. Every time I use AI agents, even if it does exactly what I wanted, it’s unsatisfying. It’s not something I’m ever going to obsess over in my spare time.


No. The opposite. The people who “move faster” are literally just producing tech debt that they get a quick high five for, then months later we limp along still dealing with it.

A guy will proudly deploy something he vibe coded, or “write the documentation” for some app that a contractor wrote. Then someone in the business tells us there’s a bug because it doesn’t do what the documentation says, and now I’m spending half a day in meetings explaining, and now we have a project to overhaul the documentation (meaning we aren’t working on other things), all because someone spent 90 seconds having AI generate “documentation” and gave themselves a pat on the back.

I look at what was produced and just lay my head down on the desk. It’s all crap. I just see a stream of things to fix: conventions not followed, 20 extra libraries included when 2 would have done, code not organized, a new function that should have gone in a different module because where it sits now it creates tight coupling between two modules that were intentionally built not to be coupled.

It’s a meme at this point to say “all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.

We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.

It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy on a jet ski doing backflips, telling us we’re falling behind because our cargo ship hasn’t adopted jet skis.

AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.


This is exactly what I've seen. A perfect description. I am the tech lead for one part of a project. I review all PRs and don't let slop through, and there is a lot trying to get through. The other part of the project is getting worse by the day. Sometimes I peek into their PRs and feel a great sadness. There are daily issues and crashes. My repo has not had a bug in over a year.

Counterpoint to consider: In real life, you can just play a different game. Most people will choose to shoot from 3/4 court instead of running all the way to the other end, because they’re not interested in basketball.

Most people aren’t interested enough to work 100+ hours per week. But we wouldn’t say Elon isn’t better at work “because he doesn’t even work a 40-hour work week.”

It has a lot to do with interest. Michael Jordan isn’t a world class mathematician. Elon isn’t a world class father.


> Most people will choose to shoot from 3/4 court instead of running all the way to the other end, because they’re not interested in basketball.

I have never once in my life seen anyone do anything close to this. Have you?


> prompting just isn't able to get AI's code quality within 90% of what I'd write by hand

Tale as old as time. The expert gets promoted to manager, and the replacement worker can’t deliver even 90% of what the manager used to. Often more like 30% at first, because even if they’re good, they lack years of context.

AI doesn’t change that. You still have to figure out how to get 5 workers who can do 30-70% of what you can do, to get more than 100% of your output.

There are two paths:

1. Externalized speed: be a great manager, accept a surface level understanding, delegate aggressively, optimize for output

2. Internalized speed: be a great individual contributor, build a deep, precise mental model, build correct guardrails and convention (because you understand the problem) and protect those boundaries ruthlessly, optimize for future change, move fast because there are fewer surprises

Only 1 is well suited for agent-like AI building. If 2 is you, you’re probably better off chatting to understand and build it yourself (mostly).

At least early on. Later, if you nail 2 and have a strong convention for AI to follow, I suspect you may be able to go faster. But it’s like building the railroad tracks before other people can use them to transport more efficiently.

Django itself is a great example of building a good convention. It’s just Python, but it’s a set of rules everyone can follow. Even then, path 2 looks more like you building out the skeleton and scaffolding. You define how you structure Django apps in the project and how you handle cross-app concerns: are you going to allow cross-app foreign keys in your models? Are you going to use newer features like generated fields (which tend to cause more obscure error messages, in my experience)?
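
To make that concrete (a minimal sketch with made-up app and field names), these are the kinds of convention decisions that show up in a models.py: whether to allow a cross-app foreign key, and whether to use a generated field:

    # orders/models.py (hypothetical apps, just to illustrate the convention choices)
    from django.db import models
    from django.db.models import F

    class OrderLine(models.Model):
        # Cross-app foreign key: referencing "catalog.Product" by string couples
        # the orders app to the catalog app. A project convention decides whether
        # this is allowed, or whether the boundary stays firm.
        product = models.ForeignKey("catalog.Product", on_delete=models.PROTECT)

        quantity = models.PositiveIntegerField()
        unit_price = models.DecimalField(max_digits=10, decimal_places=2)

        # GeneratedField (Django 5.0+): computed in the database. Convenient,
        # but failures surface as database errors rather than Python ones,
        # which is the "more obscure error messages" tradeoff mentioned above.
        line_total = models.GeneratedField(
            expression=F("quantity") * F("unit_price"),
            output_field=models.DecimalField(max_digits=12, decimal_places=2),
            db_persist=True,
        )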

Here’s how I think of it. If I’m building a Django project, the settings.py file is going to be a clean masterpiece. There are specific reasons I’m going to put things in the same app, or separate apps. As soon as someone submits a PR that craps all over the convention I’ve laid out, I’m rejecting aggressively. If we’ve built the railroad tracks, and the next person decides the next set of tracks can use balsa wood for the railroad ties, you can’t accept that.

But generally people let their agent make whatever change it makes and then wonder why trains are flying off the tracks.


>2. Internalized speed: be a great individual contributor, build a deep, precise mental model, build correct guardrails and convention (because you understand the problem) and protect those boundaries ruthlessly, optimize for future change, move fast because there are fewer surprises

I think the issue here is, to become a great individual contributor one needs to spend time in the saddle, polishing their skills. And with mandatory AI delegation, that polishing stage will take more time than ever before.


> But generally people let their agent make whatever change it makes and then wonder why trains are flying off the tracks.

IMO, this is the biggest issue. Well, along with just straight up ignoring what you tell it and doing whatever it thinks should be done.

But, to answer the actual thread question: "Make it work (all the tests pass), then make it right" is the way I'm getting quality work out of the robots. As long as you watch them to make sure they don't either change the tests to pass on buggy code or change the code to pass on buggy tests (yes, Claude is quite proficient and eager to do both), the code gets better and better as new stuff is added and the 'flow of computation' is worked out.

Oh, and have an actual plan to follow so they don't get distracted at the first issue and say they're finished because they fixed some random unrelated bug. I've also found it helpful to have them draft such a plan while they're knee-deep in that section of the code for related work, so they don't have to figure it all out again from scratch and add a few extra levels of abstraction just because.


> Probably because AI appears to work, more or less

All nondeterministic AI is a demo. The only variable is how long it takes before you realize it’s a demo.

AI makes a hell of a demo. And management eats up slick demos. And some demos are so good it takes months before you find out where that particular demo gets stuck and that it can’t really do the enterprise thing it claimed to do reliably.

But also some demos are useful.


> Surely that revenue is coming from people using the services to generate code? Right?

Yes. And all code is tech debt. Now generated faster than ever.


Hmm, maybe that’s a bit reductive? I’ve used Claude to help with some really great refactoring sessions, tbh.

> So why do people still design declarative languages?

Cost.

If money were no object, you would only hire people who can troubleshoot the entire stack, from React and SQL all the way down to machine code, and who can use an oscilloscope to test network and power cabling.

Or put another way, it would be nice for the employer if your data analyst who knows SQL also knew C and how to compile Postgres from scratch, so they could fully debug why their query doesn’t do what they expect. But that’s a more expensive luxury.

Good software has declarative and imperative parts. It’s an eternal tradeoff: keeping those parts in the same codebase makes it easier to troubleshoot more of the stack, but it also invites hacks that break the separation. So sometimes you want a firm boundary, both so people don’t do workarounds and so you can hire cheaper people who only need to know SQL or React or CSS or whatever, instead of all of them.


Yes, the RealOrangeOne repo was the working repo before it got merged into Django. It’s the same thing the article is talking about.


Has it been merged into the GitHub repo? I wasn't aware of that and I don't see it here: https://github.com/django/django/tree/main/django/tasks/back...


It sounds like the need you’re describing is not event-driven, but more like orchestration where the orchestrator is aware of dependencies between tasks and runs them in the right order (it builds a DAG, a directed acyclic graph). Tools like Airflow do this.
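
As a rough illustration (a minimal sketch using Airflow's TaskFlow API, with made-up task names), the dependencies you declare between tasks are what build the DAG, and the scheduler works out the run order from them:

    # Minimal Airflow 2.x sketch: extract -> transform -> load, run in dependency order.
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
    def example_pipeline():
        @task
        def extract():
            return {"rows": 42}  # placeholder payload

        @task
        def transform(payload):
            return payload["rows"] * 2

        @task
        def load(result):
            print(f"loaded {result} rows")

        # Passing return values between tasks defines the edges of the DAG,
        # so Airflow knows extract must finish before transform, and so on.
        load(transform(extract()))

    example_pipeline()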


Static analysis will never be fully robust in Python. As a simple example, you can define a function that only exists at runtime, so even in principle it wouldn’t be possible to type check that statically, or even know what the call path of the functions is, without actually running the code in trace/profiler mode.

You probably want something like pydantic’s @validate_call decorator.
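
A minimal sketch of both points (assuming pydantic v2; the function names here are made up): the first function only comes into existence when the code runs, so a static checker can't see it, while validate_call checks the annotated arguments at call time:

    from pydantic import validate_call

    # Defined at runtime: a static checker can't know this function exists,
    # let alone check its types, without actually executing the code.
    exec("def shout(text): return text.upper() + '!'")
    print(shout("hello"))  # runs fine; mypy/pyright would flag 'shout' as undefined

    @validate_call
    def repeat(text: str, times: int) -> str:
        return text * times

    print(repeat("ab", 3))          # "ababab"
    # repeat("ab", "not a number")  # would raise a pydantic ValidationError at runtime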


> you can define a function that only exists at runtime, so even in principle it wouldn’t be possible to type check that statically

Can you say more, maybe with an example, about a function which can't be typed? Are you talking about generating bytecode at runtime, defining functions with lambda expressions, or something else?

