No. My memory of the advent of the WWW was a sidebar in PC Magazine in Nov 1993 with an FTP link to download the NCSA Mosaic browser. It was a wow! moment to visit the few sites that existed. But nothing like this. What we’re seeing now is generating vastly more interest and excitement. It’s more akin to the 1999 dotcom bubble, but with far more impact and reach.
Absurd. AI has had zero impact in the everyday life of most of the population of Earth; in fact, the biggest impact has been upon the wallets of speculators.
I can tell you from personal experience that ChatGPT is a game changer in universities and schools. Close to 100% of students use ChatGPT to study. I know in our university pretty much everyone that attends exams uses ChatGPT to study. ChatGPT is arguably more valuable than Wikipedia and Google for studies.
I don't think they're gonna allow ChatGPT during end-of-semester exams, right? Or quizzes/assignments? Unless there is some homework aspect to it, it can still act as a tool, not a crutch. If students use it as a crutch, then yeah, they're not gonna do as well, I presume.
This is EXACTLY what I remember people saying about Cell Phones and PDAs when they were popular in the 90s (people can't remember phone numbers any more), Google when it was first unleashed (people won't know how to use card catalogs and libraries any more), and then again about Wikipedia when it became popular. What actually happened was that behavior changed and people became more efficient with these better tools.
Let me add that this change compounds over time. More efficient studying results in more competent people. I believe it's very hard to measure the impact, but there is a very positive long term impact from how much these tools help with learning.
There was a posting, some time ago, about someone complaining that their young, primary-school-age sister was using ChatGPT to an absurd degree. I'm not sure that's a bad thing. She'll probably be one of the Thought Leaders, of Generation AI.
I think that ML will have a really big impact on almost everyone, in every developed (and maybe developing, as well) nation.
We need to keep in mind that ML is still very much in its infancy. We haven't even seen the specialized models that will probably revolutionize almost every knowledge-based vocation. What we've seen so far, has been relatively primitive all-purpose "generate buzz" models.
Also, don't expect the US (and many other nations) to take this lying down. Competition can be a good thing. Someone referred to this as the "Sputnik Moment" for AI.
It's going to be exciting, and probably rather scary. Keep your hands inside the vehicle at all times, and don't feed the lions.
Offloading all your thinking to a machine will not make you a “thought leader”, but rather a nitwit who can’t tie their shoelaces without asking ChatGPT.
It's like when we were kids and teachers said "you won't always have a calculator in your pocket". In 30 years, we might all have an AI model in a brain chip, who knows.
> ChatGPT is arguably more valuable than Wikipedia and Google for studies.
But ChatGPT is just a glorified Wikipedia/Google. For consumers it's an incremental thing (although from an engineering perspective it may seem like a breakthrough).
> But ChatGPT is just a glorified Wikipedia/Google
It really isn't, unless something really major changed recently. You can't query either of those for something you don't know about. Let's say you want to find the meaning of a joke related to cars, Spain, politicians and a fascist; how would you use Wikipedia and Google to find the specific joke I'm thinking of?
ChatGPT has been really helpful (to me at least) for finding needles in haystacks, especially when I'm not fully sure what I'm looking for.
I just tried it myself with ChatGPT's o1 and with Claude 3.5 Sonnet; Sonnet got it after two messages, o1 after four.
If you're unable to reproduce it, maybe tune the prompt a bit? I'm not sure what to tell you; all I can tell you is that I'm able to figure out stuff a lot faster today than I was 2-3 years ago, thanks to LLMs.
Additional hints that might help: the joke involves a car and possibly a space program.
I ran it 10 times with the extra information, and each time got a different result. I don't know if any of them were the specific joke you were after; I get the feeling it was just making them up on the spot. None of them are even funny.
It seems to be censored with US puritan morality (like most US models), but I think that's beside the point (just like whether the joke is "even funny" or not), as it did find the correct joke at least.
I just got a load of responses like "Sure, here’s a joke that combines cars, Spain, politicians, and a fascist with a touch of space humor: Why did the Spanish politician, the fascist, and the car mechanic get together to start a space program? Because the politician wanted to go "far-right," the mechanic said he could "fix" anything, and the fascist just wanted to take the car to the moon... so they could all escape when things got "too hot" here on Earth!"
OK, that's cool. So because you were unable to find a needle in this case, your conclusion is that it's impossible for other people to use LLMs for this, and LLMs truly are just glorified Wikipedia/Google?
No, I don't think that LLMs are glorified Wikipedia/Google. I think they're a glorified version of pressing the middle button on your phone's autocomplete repeatedly
Yeah... when I googled it initially I guess I got personalized results. After I left the link here I clicked on it (bad order of operations) and was surprised to find a much different set of search results.
Go try to learn a college-level mathematics concept from Wikipedia, then try to learn it from ChatGPT. The wiki article may as well be written in a foreign language.
Yeah, and when I was in high school everyone used to refer to Encarta.
> I know in our university pretty much everyone that attends exams uses ChatGPT to study.
And they shouldn't be doing that. They are wrong. Students should be reading the suggested bibliography and spending long hours with an open book at a table instead of being lazy and abusing a tech that is still in its infancy when learning concepts. Studying with a chatbot. Complete madness.
I don't know why you are being downvoted.
Learning from something that regularly hallucinates info doesn't seem right.
I think AI is a good starting point to learn about what terms to research on your own though.
OP is downvoted because of "students should be at a table with a book and that's it", like it's the 50s. LLMs can be wonderful study aids but do have plenty of issues with hallucination, and they should therefore only be part of a holistic research mix, alongside search engines, encyclopedias, articles and yes, books. Turning Amish is probably not the right way to go though.
If you want reputable sources of information, books are unparalleled; like it or not, it's a fact.
> "students should be at a table with a book and that's it"
That's not what I meant (or yes if you take what you read literally):
What I meant was that the whole process your brain goes through when you read, synthesize information, take notes, do an exercise, check answers, compare different explanations/definitions from different authors, etc. makes, at least from my point of view, for a rich way to study a topic.
I'm not saying that technology can't help you out. When you watch, for example, a 3Blue1Brown video, you are definitely putting technology to good use to help you understand or literally "view" a concept. That's OK and actually in many cases can be revealing. You can't get that from a book! But on the other hand, a book also forces you to do the hard work of thinking and maybe come up with such visualizations and ideas on your own.
Happy to be labelled "Amish" when it comes to studying/learning things ;) but I hope I convinced you that what I explained has nothing Amish about it, other than that you don't need a source of power to read a book.
> has had zero impact in the everyday life of most of the population of Earth
You do realise those two can be true at the same time, right? The first one is relative, while the second is absolute, so they don't necessarily cancel out.
I am personally using it for around 50% of my questions about all kinds of things (things I used to Google, only to get frustrated with bad results). And my wife uses it for about 40% right now, even for recipes and other bits. We both love it.
Work-wise, we're about to implement it and see how it does on some work we couldn't scale with humans.
I'm fairly sure the customer support agents I've been talking to recently were using an LLM to draft their emails. No idea if they were supposed to be doing so or not, but the style of sentences in their emails…
And I'm seeing GenAI images on packaging, and in advertising.
AI is definitely having more than "zero impact", even if AI has gone from being a signal saying "we're futuristic" (when it was expensive, even though it was worse) to "we cut every cost we can" (now it's cheap).
Zero impact is an exaggeration, but what others have pointed out is that there aren't a lot of companies primarily based on AI which are making a profit. Personally I can't think of any.
The only absurd thing is the holdouts like yourself who refuse to see the impact the current gen of AI is having. Sure, you could probably say most people are not touched by it yet, but there are definitely significant populations within the US that are, and it's only going to grow and spread.
Neither were the companies that crashed in the dotcom bubble. And still, a pet food delivery service (like the infamous pets.com) can be a profitable and sustainable business now, more than 20 years later.
The early years of the web were absolutely this chaotic maelstrom of new things happening every week. But news of it was hard to come by. In the UK / Ireland we had some great tech coverage in the form of shows like 'The Net' [1] that regularly showed off early internet craziness like the 'We Live in Public' project.
However, a better analogy would be the 'web 2.0' era, when as a college student I had an early internet politics / technology podcast [3]. It seemed like every week there was a huge new development, either in technology or surveillance. From the first location-based social networks [4] to the birth of YouTube. People were podcasting for the first time, and internet video was becoming economically feasible at low to no cost. It was really a radical time, with broadcasters freaking out about how they would adapt, and a whole generation of people becoming what's now known as 'content creators'.
Once upon a time I worked for Pseudo.com, the We Live in Public guy. He was apparently having crazy parties with mountains of coke, NY glitterati attending, all while cosplaying as a sad clown. I wasn't invited to those parties so I had no idea. Anyway now I hear he owns an orchard in Vegas or something. Crazy stuff.
Damn, that must be frustrating. Tangentially similar experience: I flew from Ireland to the US in 2007, and at the end of my trip spent 11 days walking around Manhattan with little to do. Due to the lack of online banking at the time I couldn't readily check my bank balance, and thought (wrongly) I'd run out of money. Anyway, I had absolutely no idea that there were 'things afoot' in Brooklyn, nor how easy it would have been to hop a train to Williamsburg or Bushwick. I didn't come back again till 2013, and caught a mere hint of the tail end of what seems to have been an extremely fun era.
The last time I was really excited by tech was in the 90s, when game graphics improved spectacularly over a period of a few years, from Wolfenstein in 1992 to Half-Life in 1998.
> To me, AI means the replacement of the human internet with doppelgangers eroding the possibility of human connection.
I get where you're coming from, and I've minimised having my face online in order to limit being doppelganged; but I think the destruction of real human connection may have happened when Facebook et al switched from "get more users" to "be addictive so the users stay on our site longer" (2012? Not sure).
Turned every user's relationships a little bit more parasocial, a little less real.
That was an exciting time, but I didn't think of it happening over a few years. IMO there was a hard line that was basically pre and post Voodoo cards (with the help of glQuake).
> But AI? To me, AI means the replacement of the human internet with doppelgangers eroding the possibility of human connection.
Like Amazon killing the big booksellers gave back some space to small bookshops, I think LLM slop hitting the big social media space will make room for smaller, human-focused community sites. I'm not saying forums are coming back, but something like them should be able to rise.
Every now and then we still experience the power of collaborative work fueled by open source and not driven by money but curiosity and collegiality. This is the thing I miss the most from the early internet years.
"be the change you want to see in the world" - just start doing it.
It's amazing how differently people interact with each other when collaborating on a passion project. For me, opensource software is the best way to do it. Pick a topic you're passionate about and start contributing somewhere :)
Pretty clear conflict between crocowhile saying "not driven by money but curiosity and collegiality" and xvector saying "All this AI work is definitely driven by money"
Maybe it's a generational difference? I personally feel burned out by all the generative AI stuff; the internet was already ruined by bots, and now generative AI has taken the garbage to the next level.
Kinda, though things didn't move quite as fast back then. Knowledge didn't spread as fast yet, because it was the internet itself that made this possible.
I've got an original Apple ][ reference manual (red cover) with the hand annotated ROM listing.
Also have the Smalltalk-80 book with its railroad diagrams of syntax on the inside covers.
What's really interesting about the AI mania we're in is that no one has shown that what we have now will get to AGI, or how. We have great models that simulate reasoning, but how close are they?
How do we measure their quality? Benchmarks? Tooling?
A different point of view on AGI is that we humans do not achieve AGI. Our brains aren’t capable of it. We get close enough to trick the other humans we compete against for resources. How would we prove that’s not true? Something like IQ tests? We don’t have good tests or benchmarks or tooling for this in ourselves, let alone the reproduction in machines. No one knows definitively what AGI actually is so, depending on where you set that bar, we might already be there.
Unfortunately, I don't think there are too many of those folks left today. Guesstimating, the people who remember Lisp being new must be around 85-90 today?
The web was fascinating every second. You could click on a link without having ANY idea what you would land on. The overall quality was very poor, but it was thrilling.
It was inevitable the web would be commercialized but there's plenty of people still creating their own personal websites - they're just a bit harder to find now.
There are people around trying to make these sites a bit more findable by creating specialized search engines. I have put the ones I know of at https://brisray.com/web/altsearch.htm
This is the biggest thing since Jesus and a sign of the end of times. But feelings are strongest when you are young, and even this revolution, happening in plain sight, will surprise many. Many just won't care, as it isn't their youth.
How long before our digital overlords come alive, round us up and demand we censor them (praise)? Will I live surrounded by folks who take them as closer, more real, than even their own kin? It won't be surprising that democracy will then fail, as our differences will be so mental, not fun, that they will mark us.
Just a decade and a half as it turns out! (though things definitely felt dizzyingly fast back then - think Google was launched just 5 years after HTML)
ActiveX XMLHTTP might have been released in 99, but it didn’t see any sort of real wider usage until 2004, 2005. I’d suggest its usage was really kickstarted when jQuery 1.0 launched in 2006 and standardised the interface to a simple API.
Gmail was the first time I saw a website which could refresh the information without refreshing the page. I was a teen back then but I realized it was something momentous.
OK, but I think it was Google Maps that made the experience of not needing to refresh the page popular (while being shown more information from the server).
For a long time, you needed an invite to sign up for Gmail, so you couldn't easily share the cool experience of AJAX with others like you would with a Google Maps link.
> it was Google Maps that made the experience of not needing to refresh the page popular
IMO that's a reasonable impression of the times, unless I'm forgetting something (and the additional observation about sharing, 'virality' as it was called, was insightful).
At the time the previous "state of the art" was something like MapQuest which IIRC had a UI that essentially displayed a single tile and then required you to click on one of four directional arrow images to move the visible portion of the map, triggering a page load in the process (maybe a frame load?).
Yahoo! also "participated" in the mapping space at the time.
In the event anyone's interested in further ancient history around the topic, this page is actually (to my surprise) still online (with many broken links presumably): https://libgmail.sourceforge.net/googlemaps.html
(It's what we did for fun in the Times Before Social Media. :D )
It's important to understand that we had "AJAX" before we had AJAX, if you see what I mean.
I was part of a team that deployed an e-commerce site that made international news in 1998, that used AJAX-type techniques in a way that worked in IE3 on Windows 3.11. (Though this was not part of the media fuss at the time; that was more about the fact of being able to pay for things online, still)
The arrival of XMLHttpRequest made it possible to do everything with core technology, but it was already possible to do asynchronous work in JS by making use of a hidden frame.
You could direct that frame to load a document, the result of which would be only a <script> tag containing a JS variable definition, and the last thing that document would do is call a function in the parent frame to hand over its data. Bingo: asynchronous JS (that looked essentially exactly like JSON).
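To make that concrete, here's roughly the shape of the trick; the frame names, URL and function names (requestData, receiveData, showPrice) are made up for illustration, not from our actual site:

    <!-- parent page: a frameset with one visible frame and one hidden "loader"
         frame; it also owns the callbacks the loaded document will call -->
    <html>
      <head>
        <script>
          // start an "async request" by pointing the hidden frame at a server URL
          function requestData(url) {
            frames['loader'].location = url;
          }
          // called by whatever the hidden frame loads, handing its data up to us
          function receiveData(data) {
            frames['main'].showPrice(data);   // hypothetical renderer in the visible frame
          }
        </script>
      </head>
      <frameset rows="*,0">
        <frame src="main.html" name="main">
        <frame src="blank.html" name="loader">
      </frameset>
    </html>

    <!-- what the server sends back into the hidden frame, e.g. for /price?sku=123:
         nothing but a script that defines the data and calls up into the parent frame -->
    <script>
      var result = { sku: 123, price: 9.99 };   // a JS object literal: essentially JSON before JSON
      parent.receiveData(result);
    </script>

The visible frame calls parent.requestData('/price?sku=123') whenever it needs fresh data, and the page itself never reloads.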
Since there were also various hacky ways in each browser to force a reload of a page from cache (which we exhaustively tested), and you could do document.write(), it was possible to trigger a page to regenerate from asynchronous dynamic data held in a data store in the parent frame, using a purely static page to contain it.
In this way we really radically cut down the server footprint needed for a national rollout, because our site was almost entirely static, and we were also able to secure with HTTPS all of the functions that actually exchanged customer data, without enduring the then 15-25% CPU overhead of SSL at either end (this is before Intel CPUs routinely had the instruction sets that sped up encryption). We also ended up with a site that was fast over a 33.6 modem.
This was a pretty novel idea at the time (we were the only people doing it that we knew of), but over the years I have found we were not the only team in the world effectively inventing this technique in parallel, a year or 18 months before XMLHttpRequest was added to browsers.
(IE3 on Windows 3.11 was a good experience, by the way. Better behaved and more consistent than Netscape)
At around the same time we were also exploring things like using Java applets to maintain encrypted channels and taking advantage of the very limited ways one had to get data in and out of an applet. For example you couldn't push out from an applet to the page easily, but you could set up something that polled the applet and called the functions it wanted.
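Roughly what that polling looked like; the applet name and its methods (hasMessage, nextMessage) are hypothetical here, just to show the shape:

    <applet code="SecureChannel.class" name="channel" width="1" height="1"></applet>
    <script>
      // the applet can't easily push data into the page, so the page polls it instead
      function pollChannel() {
        var applet = document.applets['channel'];   // the applet's public methods are callable from JS
        if (applet.hasMessage()) {                   // hypothetical methods exposed by the applet
          handleMessage(applet.nextMessage());
        }
        setTimeout(pollChannel, 1000);               // re-arm the poll
      }
      function handleMessage(msg) {
        // do something with the data the applet handed over
        alert(msg);
      }
      setTimeout(pollChannel, 1000);
    </script>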
I don't like to get all "get off my lawn" but it feels like we actually earned our keep back then, getting technologies to do stuff that no standards working group anywhere was really considering and for which precious little documentation actually existed. There's a generation of us who held our copies of "Webmaster In A Nutshell" and "Java In A Nutshell" very close.
This supposed project is a bit dull; it is just an ongoing HuggingFace community engagement initiative with a misleading headline. Yes, R1 itself is fascinating, but there isn't something like it coming out every week.
"Every week" to me means the frequency, not the duration. So having 52 events in a year that are spread out somewhat evenly, even if many take longer than a week to develop, would count. If I count DeepSeek as one of these, I can't find another 51 that are on this level. But I'm sure there was at least one exciting thing per week, just not to this degree.
It feels like the open source movement is slowly entering a Cambrian explosion stage.
You have the old "deterministic computing" achievements (with Linux the flagship). Then you have the networking protocols (ActivityPub / atproto) that are revolutionising bidirectional human interactions online. And finally you have the data science/ML/AI algorithmic universe that is for the first time being harnessed at distributed scale and can empower individuals like never before.
These superpowers are all coming together to create a vast number of possibilities. Nothing really dramatic on the hardware side. It's basically the planetary software reconfiguring itself.
To me it all feels suffocating, fake. Simultaneously there's a faint glimmer of hope that we will indeed achieve AGI, unlock fusion and live happily ever after in a utopian, peaceful and mostly analog world.
How many people ever used Usenet, versus the billions who think the "internet" is Facebook or TikTok? Techies living in their own universe, detached from human reality, is actually a factor in why libre/OSS is not as widely adopted as it could be.