But but but crypto will change the world?! And everyone should have their own private wallet? And who cares about recovering your funds, because every one of us will handle private data and secrets (like passwords or keys) perfectly, always!111
Crypto bros tell you never to use an exchange because of FTX and other collapses, and also that it's super easy to use...
It's not AI hype. Hype is defined as something that gets oversold: "promote or publicize (a product or idea) intensively, often exaggerating its benefits."
Just yesterday I attended a Google Cloud summit, and a speaker from Bosch told the audience how they can now work with fewer external agencies, such as copywriters, graphic designers, and photographers, for their materials.
It already saves money, has real impact, and continues to progress.
We also don't know what ChatGPT 5 will bring. They say it will do more reasoning than before, but we (people, our society) are already working on solving this in different ways: from code that writes a unit test first and then the implementation, to different types of architectures.
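The "unit test first, then the code" workflow mentioned above can be sketched as a generate-and-check loop. This is a minimal sketch, not any particular product's implementation; `llm_generate` is a hypothetical stand-in for a real model call.

```python
# Sketch of a "unit test first, then code" loop.
# llm_generate is a hypothetical stand-in for a real LLM API call.
def llm_generate(prompt: str) -> str:
    # Stub: a real system would send the prompt to a model here.
    return "def add(a, b):\n    return a + b\n"

# The tests are fixed first; the model must produce code that passes them.
test_src = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"

scope = {}
for attempt in range(3):
    candidate = llm_generate("Write add(a, b) that passes:\n" + test_src)
    scope = {}
    try:
        exec(candidate + test_src, scope)  # run the candidate against the tests
        break  # tests passed, accept this candidate
    except AssertionError:
        continue  # a real system would feed the failure back and retry
```

The point is that the tests act as a machine-checkable spec, so wrong generations are rejected automatically instead of being trusted.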
For me, 2024 was the year of LLM cost reduction and of LLMs getting big context windows.
AI doesn't need to be ready tomorrow, but its capabilities are already really good. And I know plenty of people around me who are a lot less interesting to talk to than any LLM (from a human skill/knowledge point of view).
Llama 3 was also a big achievement in 2024. Facebook showed that better data leads to better quality for smaller models.
We have not only entered the AI era but also the "gather all the knowledge we can, quality-check it, and refine it, because now we can actually do something with it" era.
Your post is complete hype, all about people saying things instead of showing things that've actually been done.
For me, 2024 was the LLM exposed as basically pure hype year.
No expert in any field I follow online posts results from AI tooling for any reason other than to show how awful it is. I consider myself an expert in software, and LLMs specifically have only caused me great pain.
Even the one situation you describe, someone touting the ability to work in an absolute vacuum, sounds like a huge negative to me. The recent push for DEI policies was even ostensibly about the importance of people of diverse backgrounds and viewpoints working together.
The most important step you're missing a sense of scale on is the one you describe as "quality check it". On topics I don't know, every time I've tried to enlist an LLM's help, I've had to go back and actually learn how the thing works, after wasting time struggling with subtle wrongness in the output.
At least I have the background expertise to do that. I have, however, seen a junior dev's mind get literally rotted by too much time in pure LLM land. Besides the cost of rewriting their code, the company was now the proud owner of a young dev with a mind filled with nonsense.
How do you even weigh the cost of fixing a corrupted human mind?
Do you have any concern about the data you're feeding to the vendor serving your prompts?
I've had junior devs tell me they use chatgippity to combine Excel workbooks, and when I confirm they're not self-hosting an LLM to do it, I ask if they think it's a good idea to hand over company data to OpenAI. They don't care.
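For what it's worth, combining workbooks is a task that needs no LLM at all and can be done entirely locally. A minimal sketch with pandas (the file names are made-up stand-ins for the real company files, which is why the demo creates them first):

```python
import pandas as pd

# Create two small demo workbooks (stand-ins for the real files).
pd.DataFrame({"name": ["a", "b"], "qty": [1, 2]}).to_excel("q1.xlsx", index=False)
pd.DataFrame({"name": ["c"], "qty": [3]}).to_excel("q2.xlsx", index=False)

# Read each workbook and stack the rows -- nothing leaves the machine.
frames = []
for path in ["q1.xlsx", "q2.xlsx"]:
    df = pd.read_excel(path)
    df["source"] = path  # keep provenance of each row
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined.to_excel("combined.xlsx", index=False)
```

This assumes the workbooks share a column layout; anything fancier (joins, dedup) is still a few lines of pandas, not a reason to upload trade secrets.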
In a world of tight security, I find it astonishing that so many people willingly give away trade secrets to these companies, who can sell them to any bidder if they choose.
Has something changed with the service agreement? I was under the impression that Microsoft didn't mine 365 data or sell it to advertisers, at least for corporate accounts.
But that's the point. You can use OpenAI through Azure and other channels. If you trust Microsoft (which basically holds data from millions of companies), why wouldn't that trust extend to OpenAI usage?
> We also don't know what ChatGPT 5 will bring. They say it will do more reasoning than before...
This paper very clearly demonstrates these LLMs are not reasoning in a fundamental way. Token prediction and reasoning are two different tasks. They may be related, but they are not the same. "Just wait for GPT 5, it will be amazing!" is part of the hype.
Please do not assume an LLM is correct in skill or knowledge unless you already know the answer or can verify by other means.
The problem is that we don't know how we do reasoning.
I calculate stuff by following a formula after I've pattern-matched it to a problem I already know.
Plenty of humans are not able to solve those math problems either.
If the future of LLMs/AI is a multimodal mixture-of-experts model that solves those reasoning problems, we still won't know whether that is a different type of reasoning from what humans do.
If it has built meta-concepts from billions of words on the internet, and those meta-models are correct and both broader and better than an average human's, isn't it actually good at reasoning?
It's a very narrow leap from "the problem is that so many think it's actually reasoning" to "AI is just hype" or "everything we are doing is a waste", etc.
There are human benchmarks they are winning at. The better criticism might be that we don't have enough benchmarks.