Hacker News

I was saying the same thing to a colleague a couple of weeks back. The power of ChatGPT will become obvious when it's sucking in all of an org's data.

At that point instead of your boss asking you to send an email to someone or asking the data team to pull some stat, they'll just ask the chatbot to do it for them.

My guess is 80%+ of the work most people in corporate jobs do could fairly easily be automated with the next generation of GPT being fully integrated with an organisation's data and tools.

The power of this technology is so obvious. Those saying ChatGPT still makes a few programming errors with their crappy prompts are missing the point. Wait until a slightly more advanced version of GPT has access to your dev documentation + all your repos + your jira ticket board + your dev environment.

You won't even need to ask it to do anything. Your boss is going to quickly wonder why they need a team of 20 devs when a team of 2 devs reviewing ChatGPT pull requests is 10x more efficient.



If you thought tech debt was bad with human programmers...


When the funding horizon for your company or project is shorter than the tech debt horizon, the biz picks tech debt every time.

Color me jaded, but I don’t care if the crud feature du jour is full of tech debt when we’re gonna throw it out in 3 months anyway.


Just ask ChatGPT to refactor the code and have it pay down the tech debt.


Managing outsourced programmers is hard enough...


> Wait until a slightly more advanced version

Yes, and therein lies the issue: that last crucial 1% or 0.1% may be impossible to achieve (sort of like attaining lightspeed travel).


The last 1% isn't crucial when doing something as non-critical as sending an email or changing an input field on a website. I write bad emails and deploy awful code all the time. For 99% of problems it just needs to be good enough.

The fact that some people argue ChatGPT is already good enough for a lot of use cases should indicate we're not that far from something just as reliable as a human programmer.


... Or self-driving cars. At some point they were only a couple of years away.


The Internet...

I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse.

- Robert Metcalfe, in InfoWorld, 1995

Most things that succeed don’t require retraining 250 million people.

- Brian Carpenter, in the Associated Press, 1995

Tim Berners-Lee forgot to make an expiry date compulsory... any information can just be left and forgotten. It could stay on the network until it is five years out of date.

- Brian Carpenter, in the Associated Press, 1995


To be fair, it looks like we can circumvent the issue by leapfrogging into faster-than-light speeds. I've seen somewhere that real (albeit tiny) warp bubbles were achieved in a lab recently, so it's not too far off into SciFi land... probably only a few decades (tm) until it's here :D


> Wait until a slightly more advanced version of GPT has access to your dev documentation + all your repos + your jira ticket board + your dev environment.

Why are you so confident ChatGPT will ever be able to work as an independent dev and won't hit the limit of its abilities?


Why would it hit the limit of its abilities?

Almost all tech has room for progress, ChatGPT is no different, it's just a question of how fast that progress will be.

You can roughly predict the rate of future progress by looking at how quickly advancements have been made in the recent past. In my opinion AI looks like computers in the 70s/80s or cars in the 40s. The tech isn't completely new, but significant improvements are being made every year.

I think the burden of proof would be on you to explain why advancements in AI would stop right here. Conveniently right around the time when AI is able to write code fairly well but with a few bugs.


> Almost all tech has room for progress, ChatGPT is no different, it's just a question of how fast that progress will be.

All tech also has its limits. We could use internal combustion engines in cars, and even in some airplanes, but the tech runs into problems when it comes to building fighter jets, and then spaceships.

The same goes for ChatGPT. It is good at spilling out text it has seen during training, or doing slight transformations on it, but there have been multiple attempts to teach NNs to do algorithmic work, and they always failed miserably, afaik.


And just like ICEs and jets, each tech increasingly became otherworldly and revolutionary at doing what it does well.


> each tech

lol, of course not each tech. You picked cars as an example, but that was one of the most transformative advancements, and for each such advancement there are hundreds of thousands with moderate impact, and tens of millions of failures.


> became otherworldly and revolutionary at doing what it does well

also, it is yet to be proven that ChatGPT will do well in any job requiring significant reasoning skills.


Good thing that not all of the use cases that are finding footholds today require skills it does not yet have. Why are "significant reasoning skills" needed to help you write a song, or help rewrite your resume, or find an answer to a niche piece of knowledge, or fuck... this is exhausting.

I'd have thought that people on HN would have far more vision than this. For every single "well it can't do that yet" observation there are dozens of use cases people are finding TODAY. It's already widely useful and this is year one.


> Why are "significant reasoning skills" needed to help you write a song, or help rewrite your resume, or find an answer to a niche piece of knowledge, or fuck... this is exhausting.

The discussion started with a person claiming that soon ChatGPT will be trained on docs, code and jira tickets and completely replace engineers, which imo will require significant reasoning skills.

I agree that for the tasks you described, ChatGPT may find its niche.


@riku_iki

That was actually me. And it already can write Jiras, scaffold code, and help write docs. Doesn't need any significant reasoning, just a well trained model and contextual data.

However, I didn't say it was going to 100% replace anyone. The horizon looks like GPT will become a significant assistant tool for numerous tasks. I think it will need humans to review and approve output and direct it for the foreseeable future.


> And it already can write Jiras, scaffold code, and help write docs.

Yes, and my hypothesis is that's where it will stop, because core meaningful engineering work on a complex product/system requires way more reasoning ability.


> and my hypothesis is that's where it will stop

That's like saying junior developers will never take your job because they just don't know enough context or have enough experience.

Why do you think there is any reason for progress to stop? When has progress ever stopped? Even just looking at the simplistic GitHub Copilot from ~3 years ago, I could see the writing on the wall for junior devs. When these models have consumed your entire codebase, it's quite apparent that the gestalt has changed in a way I think you are underestimating.

Complex products/systems are exactly where you'll need an AI code writer. It knows everything in your massive codebase. It will be able to suggest efficiencies you couldn't possibly be aware of unless you'd read every line of code yourself.

Remember when AlphaGo made that one move that showed it was far, far superior to a human at improvising and seeing many moves ahead? It shocked the whole community; even the DeepMind engineers were shocked. That's going to be you one day. There is absolutely no reason I can see for this not to happen, bar some sort of as-yet-undiscovered logical limit.


> Why do you think there is any reason for progress to stop?

I described the reasons: a junior developer has a proven ability to learn reasoning and context, and ChatGPT does not have a proven ability to learn to reason.


New things arouse suspicion, even if they've been seen before. Mobile phones were plenty smart before smartphones. The new thing occupies inhabited space, so something has to be sacrificed.

All well and good when it's usually your enemy's space on the chopping block, but mainstream media haven't yet told people whether this kind of progress is generally desirable. The flock is flying blind, so they default back to conservatism... for now.


Because it's not AGI and all technology has limitations. Also because it's easy to imagine a hyped tech doing anything. Much harder to make that work.


Or all twenty devs stay and eighty more get hired because DevGPT can do 100x more work in the same amount of time.


> The power of ChatGPT will become obvious when it's sucking in all of an orgs data.

I am having a hard time seeing this.

Intranet/corporate search has forever been awful in comparison to internet search.


And that's why traditional search is going to give way to GPT search. It more or less solves this problem already. Feed it web pages and you can talk to it, guide it with a conversation, etc. Here is a realistic example of how searching an intranet could look in 5 years' time:

-----------

> Employee: Hey CorpGod; where is that document that had some information about our new project management processes? You know, the one I had open like 2 weeks ago

> CorpGod: Oh, you mean the one your boss asked you to read by Friday? It's here [link]

> Employee: No, that's not it. It's the one that we copied and made edits to after that, I can't find it.

> CorpGod: That got deleted after Employee2 made a copy and published it. It's here on the Wiki [link]

-----------

GPT already does stuff like that with data from the Internet.
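For what it's worth, that kind of corporate search doesn't necessarily require retraining the model on internal data at all; a retrieval step can fetch the relevant documents and hand them to the model as context. Here's a minimal sketch of that pattern — every name is hypothetical, and naive keyword overlap stands in for a real embedding index and LLM call:

```python
# Toy sketch of retrieval-augmented intranet search (all names hypothetical).
# A real system would use an embedding index plus an LLM API; here a naive
# keyword-overlap score stands in for the retriever.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (toy relevance)."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in set(query.lower().split()) if w in doc_words)

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Return the ids of the k highest-scoring documents."""
    ranked = sorted(corpus, key=lambda doc_id: score(query, corpus[doc_id]),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict) -> str:
    """Stuff the top documents into the model's context window."""
    context = "\n---\n".join(corpus[d] for d in retrieve(query, corpus, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy intranet corpus
corpus = {
    "wiki/pm-process": "Our new project management process: tickets, sprints, reviews.",
    "wiki/onboarding": "Onboarding checklist for new employees.",
}

print(build_prompt("where is the project management process document", corpus))
```

The point of the sketch: the "CorpGod" conversation above is mostly a search-and-summarise loop, and the model never has to be trained on the wiki — it only has to read the retrieved pages at question time.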


What makes you think GPT can be trained fast enough to keep up with newer documents, discussions, etc.? It takes a long time for GPT models to come out. ChatGPT was trained on internet data up to 2021, and a lot of people were involved in making it possible. Do you envision the LLM constantly being fed updates by crawlers and indexers, with no need for human review or intervention?


Yes. If I was on OpenAI's board or product team, this is exactly where I'd be trying to go. It's a purely technical challenge with nothing standing in its way beyond regular engineering problem solving.

1) Scaling 2) Efficiency 3) Quality 4) Security

Some of these may be super-duper hard problems, but hard problems worth solving = massive opportunity.

If IBM could put space-age technology into corporate offices in the 50s, we can put dystopian-era technology into offices in the 2020s. At first it will be stupidly expensive, and only the big players will be able to take advantage of the cost-to-benefit ratio, but in time it will more than likely be on your phone.
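One reason the freshness question reads as an engineering problem rather than a retraining problem: if retrieval supplies the corporate data, only the document index has to be updated when things change, never the model weights. A toy sketch of that incremental crawl loop — entirely hypothetical, with no real crawler or index library behind it:

```python
# Hypothetical sketch: a crawler keeps a retrieval index fresh without
# touching model weights. Only documents modified since the previous
# crawl get (re)indexed; everything else is left alone.

class FreshIndex:
    def __init__(self):
        self.docs = {}         # doc_id -> text served to the model at query time
        self.high_water = 0.0  # newest modification time seen so far

    def crawl(self, source: dict) -> list:
        """source maps doc_id -> (modified_time, text).
        Returns the ids that were (re)indexed on this pass."""
        updated = sorted(doc_id for doc_id, (mtime, _) in source.items()
                         if mtime > self.high_water)
        for doc_id in updated:
            self.docs[doc_id] = source[doc_id][1]
        if source:
            self.high_water = max(m for m, _ in source.values())
        return updated
```

A real pipeline would bolt review queues and access controls onto this loop, which is exactly where the "human review or intervention" question from above comes back in.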


Yes. This is what MS will do with Teams.



