
So are we near AGI or is it 'just' an LLM? Seems like no one is clear on what these things can and cannot do anymore because everyone is being gaslighted to keep the investment going.


The vast majority of people I've interacted with are clear on this: we are not near AGI. And the people saying otherwise are more often than not trying to sell you something, so I just ignore them.

CEOs are gonna CEO; it seems their job has morphed into creative writing to maximize funding.


As a CEO, I endorse this message


Nobody knows how far scale goes. People have been calling the top of the S-curve for many years now, and the models keep getting better and more multimodal. In a few years, multimodal, long-term agentic models will be everywhere, including in physical robots in various form factors.


It will always just be a series of models that have specific training for specific input classes.

The architectural limits will always be there, regardless of training.


That’s an interesting point. It’s not hard to imagine that LLMs are much more intelligent in areas where humans hit architectural limitations. Processing tokens seems to be a struggle for humans (look at how few animals do it overall, too), but since so much of the human brain is dedicated to movement planning, it makes sense that we still have an edge there.


Be careful with those "no one" and "everyone" words. I think everyone I know who is a software engineer and has experience working with LLMs is quite clear on this. People who aren't SWEs, people who aren't in technology at all, and people who need to attract investment (judged only by their public statements) do seem confused, I agree.


There is no AGI. LLMs are very expensive text auto-completion engines.


No one agrees on what AGI means.

IMO we’re clearly there; GPT-5 would easily have been considered AGI years ago. I don’t think most people really get how non-general the things now handled by the new systems used to be.

Now AGI seems to be closer to what others call ASI. I think the goalposts will keep moving.


Definitions do vary, but everyone agrees that it requires autonomy. That is ultimately what sets AGI apart from AI.

The GPT model alone does not offer autonomy. It only acts in response to explicit input. That's not to say that you couldn't build autonomy on top of GPT, though. In fact, that appears to be exactly what Pulse is trying to accomplish.
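
To make that concrete, here is a minimal sketch of what "autonomy built on top of a reactive model" could look like: an outer loop that decides on its own when to act and what to ask, while the model itself only ever responds to prompts. The call_llm stub and the hourly cadence are made up for illustration; this is not a description of how Pulse actually works.

    import time
    import datetime

    def call_llm(prompt):
        # Stand-in for any request/response language model API.
        # The model only ever reacts to the prompt it is handed.
        return "..."

    def autonomous_loop():
        # The "autonomy" lives entirely in this outer loop, not in the model:
        # it decides when to act and what to ask, with no user input at all.
        while True:
            now = datetime.datetime.now().isoformat()
            goal = call_llm(f"It is {now}. Pick one useful task to work on and describe it.")
            result = call_llm(f"Carry out this task and summarize the outcome: {goal}")
            print(result)
            time.sleep(3600)  # wake up again in an hour, unprompted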

But Microsoft and OpenAI's contractual agreements state that the autonomy must also be economically useful to the tune of hundreds of billions of dollars in autonomously-created economic activity, so OpenAI will not call it that until then.


I really disagree on the autonomy side. In fact, the Wikipedia page explicitly says that's not required, so whether you agree with that concept or not, it's not a universal point.

> The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved

Edit -

> That is ultimately what sets AGI apart from AI.

No! The key thing was that it was general intelligence rather than things like “bird classifier” or “chess bot”.


> In fact the Wikipedia page explicitly says that’s not required so whether you agree with that concept or not it’s not a universal point.

It says that one guy who came up with his own AGI classification system says it might not be required. And despite it being his own system, he was still only able to land on "might not", meaning that he doesn't even understand his own system. He can safely be ignored. Outliers are always implied, of course.

> No! The key thing was that it was general intelligence rather than things like “bird classifier” or “chess bot”.

I suppose if you don't consider the wide range of human intelligence to be the marker of general intelligence, then a "bird classifier" plus a "chess bot" gives you general intelligence. We had that nearly a millennium ago!

But usually general intelligence expects human-like intelligence, which would necessitate autonomy — the most notable feature of human intelligence. Humans would not be able to exist without the intelligence to perform autonomously.

But, regardless, you make a good point: A "language classifier" can be no more AGI than a "bird classifier". These are narrow systems, focused on a single task. A "bird classifier" doesn't become a general intelligence when it crosses some threshold of being able to classify n number of birds just as a "language classifier" wouldn't become a general intelligence when it is able to classify n number of language features, no matter how large n becomes.

Conceivably these classifiers could be used as part of a larger system to achieve general intelligence, but on their own, impossible.


ChatGPT is more autonomous than many humans. Especially poor ones and disabled ones.


How? ChatGPT has no autonomy - it can only act when you type into its chatbox or make an API call. A disabled person can autonomously and independently act towards the world, not just react to it.


Does that mean AI needs to be able to decide "what to do today"? Like wake up in the morning and decide "I am going to research a problem in field X and then email all the important scientists and institutions with my findings"? I am not sure we want that kind of independent agent; it sounds like the beginning of a cyberpunk novel.


But without that level of autonomy it is hard to say it is more autonomous than an average human. You might not want it, but that is what humans are.

Every human, every day, has the choice not to go to work, the choice not to follow the law, the choice to... These AIs don't have nearly as much autonomy as that.



