
Don't worry -- soon after that, there will be high demand for human coders as companies scramble to hire to rewrite all the buggy and vuln-ridden AI-hallucinated software. We're on the verge of two revolutions in tech, not one.


I shared your view a year ago. Hell, half my posts here on HN are arguing against the annoying hype. Today I make heavy use of AI and my code quality has gone up, not down. It's better tested, better factored, with better error handling covering more edge cases.

AI code is only buggy if no-one is guiding or reviewing it

I'm not sure how long we'll need someone in the middle to actually review the code


I view AI like a gas pedal. For an experienced driver it can really help. For someone new, straight into a wall. For some code you go straight for hours and barely touch the steering wheel. For some code it's all wheel and too much gas is a mistake.

That said, we will need fewer coders overall. Just like you don't need human drafters or calculators in the same way. It will cut the bottom out of the industry, and entry level will be expected to operate like a senior from 10 years ago.


Or as the article put it:

>> Most workers, and most work days, are just drudgery. Answering emails. Writing up quarterly plans. Reviewing metrics. Building applications that do something with data.

Yes. Those jobs are going to disappear. If your role's primary value is shuffling paper around an org, or putting minor edits on something before forwarding, your job is going to disappear to AI in the next 1-3 years.

Or AI-initiated human process optimization in the next 2-5 years, which I think is an underappreciated second wave.

To define that: if we 10x or 100x the productivity of certain roles, won't companies look at the remaining unaccelerated human speedbumps and ask "Is it really that important we have a person do that? Because it's now costing us substantial latency and throughput of the process as a whole."

And in many cases they'll likely conclude that no, it's not critical that a human be involved at that point. So poof those jobs as well.
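
To make the latency math concrete, a toy sketch in Python (every number here is invented for illustration):

    # Toy model: a 5-step process where 4 steps get 10x faster
    # but one human approval step stays fixed.
    automated = [8.0, 8.0, 8.0, 8.0]  # hours per automated step, before AI
    human_approval = 8.0              # hours, unchanged

    before = sum(automated) + human_approval                 # 40.0 hours
    after = sum(t / 10 for t in automated) + human_approval  # 11.2 hours

    print(f"end-to-end: {before}h -> {after}h")
    print(f"human step's share of latency: {human_approval / after:.0%}")  # ~71%

The human step goes from a fifth of the latency to nearly three quarters of it, which is exactly when it starts getting questioned.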

As a result, we'll have many fewer humans being much more productive.

Tbd on whether that produces enough surplus (and equal allocation of it) to balance out the job losses.


If there are no people, there won't be a need for managers to exist either. Who/What are the managers going to manage? The office air?

Eventually whoever is running the business has to be smart enough to figure out everything, and the margin for error will be small in an economic environment where everyone is already poor and can't spend to buy your things.

I remember the days of socialist India. If everyone is poor, it's impossible to do the good things like research, development, and innovation. The reason is simple: with no one having money to buy your things, you will never make the profits to invest in them.


Maybe NOS is a better analogy, since even the newest of drivers is going to need and get use out of the gas pedal.


I think new devs could probably get use out of it as a replacement for SO or Google. But if they slam on it and think they can vibe code a whole app, they are going to have a bad time.

But NOS may be better. It might make you faster but it doesn't make you better.


That works because you are a senior developer who knows how to properly use the AI tools. What about 10 years from now? What happens when senior devs are retiring and we don't have replacements, since we replaced junior devs with AI?


Presumably, AI has advanced in 10 years?


Do you believe there's a threshold to how good this stuff can get, or do you think it's all infinite upside?


Obviously not infinite, but humans have very real limitations too. We've all seen them.


A materialist could logically conclude that it still has some way to go.


Well, I'd probably consider myself a materialist, but I'm not sure I'd agree. The evidence to me seems that it can really only come from two places: additional compute or new breakthroughs in AI learning. Compute's coming, certainly, but that only has the potential to improve things if it's added in conjunction with a commensurate AI breakthrough. I think the trend in improvement for transformers could be logistic, not exponential, like a lot of the snake-oil salesmen like to state. And while there's plenty of evidence for compute, there isn't much for the AI breakthrough that leads to an exponential jump, and if it does exist it's a trade secret, so until we know, we don't know.
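
To illustrate that last point, a minimal sketch (parameters are arbitrary): a logistic curve is nearly indistinguishable from an exponential until its inflection point, so the data so far can't tell the two apart.

    import math

    def logistic(t, cap=100.0, k=1.0, t0=10.0):
        return cap / (1 + math.exp(-k * (t - t0)))

    def exponential(t, a=0.0045, k=1.0):
        return a * math.exp(k * t)

    # Before the inflection at t=10 the curves track each other closely;
    # after it, the logistic flattens toward its cap while the
    # exponential runs away.
    for t in range(0, 21, 4):
        print(t, round(logistic(t), 2), round(exponential(t), 2))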


> What happens when senior devs are retiring and we do t have replacements since we replaced junior devs with AI?

... AI eats senior devs. Vibe coding front-end devs inherit the earth. https://m.youtube.com/watch?v=Clzx434IV6o


It makes me wonder if at some point people will regard writing code the same way they regard writing assembly


> I'm not sure how long we'll need someone in the middle to actually review the code

Always will, it's a question of how many people we need, not if. Well, until AGI (which I'm very bearish on in the near or even intermediate term) or some other monumental shift


I am a senior developer and have easily and successfully avoided using LLM development these last few years. Nothing has changed for me, and my teammates who do use it are slower than me and often don't know what the PR actually does.

You chose to invest in the downward career slope. That’s why your opinion has changed. If you continued to resist it you wouldn’t be looking to remove yourself from the auditing/coding position.


How is investing in something that improves my productivity and output quality a downward career slope? Continuing to use a hand saw when there are power tools available seems like the downward slope..

If AI gets to a point where I'm fully able to be removed from auditing/coding positions.. well there won't be any coding positions left for anyone

> often don’t know what the Pr actually does

this is on them for being lazy. I thoroughly review the code AI produces. I don't commit it if I don't understand it


Analogizing AI to power tools is like analogizing building an IKEA bookshelf to outsourcing the job to a TaskRabbit.

Power tools automate manual hand movements, but you still need to follow the manual and know what fits where. Or you can spend money on a contractor from TaskRabbit to do it for you, perhaps badly.

LLMs make it faster to generate code, sure. Automating boilerplate code isn't too unlike using a drill to fit a screw.

But writing software writ large still requires thinking, something that the companies providing these services are heavily incentivized to remove.


> How is investing in something that improves my productivity and output quality a downward career slope?

> I'm not sure how long we'll need someone in the middle to actually review the code

You don't think relying on a system you yourself predict will replace you is investing in a downward slope? Wait until you find out LLMs are helping suppress your wages.

> this is on them for being lazy.

Just like you gave in to LLMs, you became lazy about writing code. That is the trend with LLMs: to do less work, as you've been pointing out.

You saving time is the same thing as being lazy.


Not using LLMs doesn't make them go away. Whether they suppress wages or replace anyone is completely out of my control. Avoiding them just means I'd be producing below my potential

> you became lazy about writing code

I haven't, though. I still do the same amount of work except now I get more done. Now more of my energy goes into architecture, testing, specs, and making sure it's built well, instead of the lower-level wiring of things together.

I use them for their perfect memory and as a creativity buddy, not for them to think for me


>I haven't, though. I still do the same amount of work except now I get more done.

And the expectations will rise to match you getting more done. You don't think your employer will try and squeeze even more out of you because of AI?


I'd rather be unemployed than work for someone like that


I have no goal to make them go away. My goal is to not use them because I don't have to. Using LLMs removes your agency to not use them. You become more reliant on using them.


> and successfully avoided using LLM development these last few years.

I'm not sure that's much of an achievement, to be honest. If you tried it and it turned out to be not useful for you, fine, I'm on your side. But refusing to try for the sake of it seems backwards. I mean, then why use CI, version control and those fancy IDEs anyway? Notepad is a perfectly cromulent text editor (and what is code, if not text, anyway?) and my local build.bat and deploy.bat do their job nicely and quickly.


> what is code, if not text, anyway?

Poetry is text too. It is a misleading categorization.

> I mean, then why use CI, version control and those fancy IDEs anyway?

CI, version control, and IDEs do not think for you.

Resisting using LLMs to write the code that you know perfectly well how to write is like resisting a map app that tells you "now make a left turn" to travel from A to B, when you have gone from A to B a zillion fucking times. It is perfectly sensible, especially if you want to retain your skills and mental acuity.

Anecdotally, I know individuals (ok, Dad /g) who can no longer negotiate even the simplest routes without the stupid map thing walking them through all the turns. Routes that were taken for years without these gizmos now require the gizmo.

This is an unfortunate 'experiment' we are conducting in this field. The actual lasting results (or damages) are unknown as of yet. We have some idea though.


I never said I didn’t try it. You said that.


I used to be like you and then I decided to get up to speed on where things stand a couple of months ago.

I hate using it, but I can write issues in GitLab, send them to aider, and it will spit out working solutions complete with test coverage.
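
The glue for that is small. Roughly (a sketch: the host, project ID, issue number, and token are placeholders; it assumes aider's --message flag, which sends one instruction and exits):

    import subprocess
    import requests

    GITLAB_API = "https://gitlab.example.com/api/v4"  # placeholder host
    PROJECT_ID = "1234"   # placeholder
    ISSUE_IID = "42"      # placeholder
    TOKEN = "..."         # personal access token, placeholder

    # Fetch the issue via the GitLab REST API.
    issue = requests.get(
        f"{GITLAB_API}/projects/{PROJECT_ID}/issues/{ISSUE_IID}",
        headers={"PRIVATE-TOKEN": TOKEN},
    ).json()

    # Hand it to aider as a one-shot prompt against the current repo.
    prompt = f"Implement this issue, with tests:\n\n{issue['title']}\n\n{issue['description']}"
    subprocess.run(["aider", "--message", prompt], check=True)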

Right now I think I'm maybe still faster just writing things myself but this feels incredibly tenuous. I'm certain that in a year vibe driven development will be faster.

There is no programmer's union. When the industry decides it's time to get rid of developers because vibe coding is faster than mid level developers, there is no counter.

The only developers left will be either true 100x geniuses or vibe coders. I'm not the former so I am trying to make sure I can become the latter long enough to last a few extra years.

Regardless of how much you personally want to resist, this is what is going to happen.


I didn't say I don't invest in or use automation or other AI. I said I don't use LLMs. There's a huge difference.


>someone in the middle

In the middle between AI and what?


The requirements.

He's essentially saying that it's a complete guess how long the job of a programmer will exist, and when it will change over to essentially product manager, of which you'll need a lot fewer.

It could be decades... Or next month.


What's your workflow/tool set?


There are many examples of buggy and vuln-ridden non-AI-hallucinated software. Never been a rush to fix them.


I've been thinking about this lately.

While we engineers understand how to judge and evaluate AI solutions, I am not sure Business Owners (BO) care.

BOs are ok with a certain percentage of bugs/rework/inefficiency/instability. And the tradeoff of eliminating (or marginalizing) Engineering may be worth the increased percentage of unfavorable outcomes.


>> While we engineers understand how to judge and evaluate AI solutions

Usually no one really cares. The target is to close the ticket, not to make good software.

"Works fine on my machine" - heard this many times. But users don't have a beefy M4 Pro machine with ultra-fast fiber. No one cares.


Yeah, but the 'works fine on my machine' is usually said by junior/inept devs.

I haven't heard that around serious engineers.


Isn't that the no true Scotsman fallacy?

I'm overjoyed that you only encountered serious "engineers".

My professional experience is that at least 3/4 of the people writing software should not be, because they write horrible code horribly.

But since I live in a place where you have to pass licensing exams to call yourself an "engineer", maybe my experience is different from yours.


Hi, you did not understand what I wrote.

> I'm overjoyed that you only encountered serious "engineers".

Nowhere did I say this. At all. I said I haven't heard 'works on my machine' said by serious engineers. That does not mean that I have ONLY met serious engineers.


Their title and salary are serious, though.


Precisely. Businesses are okay with bugs if it helps them enter a market faster or stave off competition. Bug fixes are mostly considered maintenance, which is outsourced or "cost-optimized".


Probably depends on BO/stakeholder as well. B2B solution that has a low risk of killing anyone? Maybe fuck it, let the model have its way.

Technology that controls software that keeps people alive, controls infrastructure, etc., uhhhh I don't think so. I guess we're just waiting for the first news story of someone's pacemaker going haywire and shocking them to death because the monitoring code was vibed through to production.


Think more business risk than risk to humans.

A B2B AI LLM vibe-SaaS that has a 10% chance to become profitable and a 10% chance to give away all the money ever invested in the business while leaving the founders on the receiving end of 100 lawsuits.


Isn't the sector for software that is life-critical really small? Medical devices, and maybe some control software? Oh, and probably defense too.

I don't feel much better (as someone who has spent their career in consumer electronics).


> Isn't the sector for software that is life-critical really small?

I think it's large. Think about the software that goes into something like air travel - ATC, weather tracking, the actual control software for the aircraft... I am aware that nothing is perfect, but I'd at least like to know that a person wrote those things who could be held accountable.


But how many people is that? Compared to 100k at Meta? To millions from Indian body shops?


This exactly. Instead of Adobe adding AI features to Acrobat that I don't want or need, I would like it to fix the fact that it still can't convert a DOCX to a PDF without messing up the tabs.


The lack of devs who understand the domain and the codebase will be the main issue.

I would say that the current capabilities of genAI are like those of a junior dev, sometimes even a mid-level one. But one main difference is that a dev is slowly learning and improving and at some point will become a senior dev and also a domain specialist.

If there is a codebase created by genAI, then it's as if all the devs left the company, so no one knows why some piece of code was created in a certain way, or whether it was part of the business logic or some implementation detail.


I wonder if the codebase will be understandable at all.

You can most often "get used" to a codebase because the authors tend to stick to the same patterns in many ways, and they are somewhat coherent across time. After a bit you can kinda guess which file a feature is implemented in, etc.

Will this still be true with ai doing most of the work?


Claude is really good about documenting what it has done. I've also had some good luck with "I'm interested in this part of the codebase, please explain it to me" and "this part is not working, let's diagnose". I've been doing a lot of productionization of nontechnical vibecoding, and I provide a lot of feedback on best practices (one of which is to ask "is this best practice for security and data protection?").

That being said there’s no replacement for a real knowledgeable human software engineer.


The rate at which AI is capable of producing code is more than humans can keep up with. Right now the bottleneck is human reviewers. If AI ever becomes effective at generating provably correct code, it's joever.


The good news is there are several thousand newly unemployed software engineers on the market…


It's also likely that there are thousands who aren't hustle-culture oriented who have been looking for so long that they don't even count as unemployed anymore.

Or maybe I'm just projecting.


But will humans understand the proofs?

Oh…


The proofs are not meant for human consumption. It's for the AI to know to try again rather than spit out hallucinations. Of course there's a leap of faith somewhere here.


The proof either passes the SAT solver in a reasonable amount of time, or it doesn't.


Somebody will have to write the proof verifier, and that in many ways will be harder than writing some CRUD app that they want the proof verifier to validate.

We might even end up increasing the demand and pay for devs if this comes to pass.


We have many of those that are perfectly fine. Writing proofs is still quite hard, especially proofs that actually say something about your program.
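
For a sense of scale: even a toy property takes explicit proof work. A minimal Lean 4 sketch (the function and the property are invented for illustration):

    -- A two-line function and a proof of one trivial property of it.
    def double (n : Nat) : Nat := n + n

    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega  -- linear-arithmetic tactic discharges n + n = 2 * n

Statements about real program behavior - memory safety, concurrency, protocol correctness - are orders of magnitude more work than this.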


Proving something is correct is a far harder exercise, than writing a broken but acceptable version of that thing.


Nah, just throw more hardware at it. Never rewrite buggy code; increasing processing power is the winner almost always. That's one of the things that is nice about open source: you can't hide the crap code unless the users don't care at all.


"Use our agentic coding model version n+1, it will fix all your buggy and vuln-ridden source and you won't have to apologize to your CEO/board!"


I wouldn’t bet on this


Why not? Seems to match the trend of tech innovation creating more demand for tech.

The reality is lots of software problems can’t be solved with the level of “intelligence” LLMs have. And if they could, it wouldn’t be just software in danger - it’d be every human profession. Even the physical ones, since AI would quickly figure out how to build machinery to automate those.


The industry might evolve to "AI, please rewrite the faulty code and redeploy", eating the cost of inefficiencies as a business expense instead of hiring back to full software engineering employment levels.


>> Why not?

Because it's just cope. Look at the current reality. Are companies rushing to fix bad or even buggy code written by human devs? No, not in most cases. In most cases, if a piece of code "works", it is left the hell alone. And that's the thing about AI code: it does work. The quality is irrelevant in the overwhelming majority of cases (especially if it's other AIs that are adding to it, which is the case more and more often).


> The quality is irrelevant in the overwhelming majority of cases

Software quality is especially important in safety critical applications.

We should not expect an LLM trained solely on formally-verified code to produce formally-verified code. I don't think that also training on specs and hateful training material will fix that.

So then we're back to the original software engineering objectives of writing better SAST, DAST, Formal Method, side channel, and fuzzing tools for software quality assurance.
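
Property-based testing is the cheap end of that tooling spectrum. A minimal sketch using the hypothesis library (the function under test is made up):

    from hypothesis import given, strategies as st

    def dedupe(xs):
        """Remove duplicates, preserving first-seen order."""
        return list(dict.fromkeys(xs))

    @given(st.lists(st.integers()))
    def test_dedupe(xs):
        once = dedupe(xs)
        assert dedupe(once) == once   # idempotent: applying twice changes nothing
        assert set(once) == set(xs)   # no elements lost or invented

The point being: tools like this check the code, not the author, so they work the same whether a human or a model wrote the diff.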


Ok, then 10% of the current IT workforce will be allocated to the critical applications. The remaining 90% will be replaced with Claude SuperGPT CODER 6.2.

Like the ~100k people at Meta - nothing critical there, right? Many thousands could be replaced with AI coders there.


I've been to birthday parties that employed more people than "safety critical" software development. We're talking about 99.99% of the software development jobs evaporating.


I think we're talking like 100% of everyone gets a new power saw!

Compare traditional woodworking with modern carpentry on quality, longevity, and marginal efficiency.

From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :

> Which universities teach formal methods?

> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?

> Is there a certification for formal methods? Something like for Engineer status in other industries?


Just become a black hat then. It will be easy to find problems in everything but the very top percent of AI-generated software.


I had Claude Code do some of the challenges at defcon, it was doing surprisingly well.

(I did make sure the challenges didn’t ban AI).


I love love love Claude Code. It's my weapon of choice. However, we're still a ways off.

The most recent thing I did prior to typing this comment was have it look at a CosmosDB integration and recommend changes to my CosmosClientOptions to reduce CPU, based on a few exceptions I'm seeing.

It recommended 5 changes to 5 settings, and every single one of them was outside of the allowed range for those 5 settings. I told it as much, and asked nicely (always nicely, I don't want to be on the Singularity's naughty list) to try again, those aren't valid values. It came back with 4/5 values within the allowed scope, one still outside of the scope, and 2 of the 4 that were accurate were the same values that I already have configured.

Not there yet. Better every day, but definitely not there.
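
One cheap guardrail for that failure mode: validate every suggested value against the documented range before trying it. A sketch, with all setting names and bounds invented:

    # Reject LLM-suggested settings that fall outside documented bounds.
    ALLOWED_RANGES = {
        "max_retry_attempts": (0, 9),
        "request_timeout_secs": (1, 60),
        "max_connections": (1, 1000),
    }

    suggested = {"max_retry_attempts": 15, "request_timeout_secs": 30}

    for key, value in suggested.items():
        lo, hi = ALLOWED_RANGES[key]
        status = "ok" if lo <= value <= hi else f"OUT OF RANGE [{lo}, {hi}]"
        print(f"{key}={value}: {status}")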



