I keep seeing breathless claims about this, but I don't see how: ChatGPT doesn't index websites, which is what I use Google for. It doesn't manage my calendar or provide me email, which are services I actually pay Google for.
At best, ChatGPT seems to be well suited for the "infobox" value-add thingies they put on search results. But those are infamously inaccurate and/or misleading, and it's unclear to me that ChatGPT will perform any better on that front (or that the majority of users even care about that feature).
> At best, ChatGPT seems to be well suited for the "infobox" value-add thingies they put on search results. But those are infamously inaccurate and/or misleading, and it's unclear to me that ChatGPT will perform any better on that front (or that the majority of users even care about that feature).
Also agree. My use of Google and the Internet in general isn't "ask a question and take the first thing told to me as the absolute truth". It's much more about getting different bits of information and assembling a view for myself.
Exactly this. ChatGPT cannot answer 90% of the questions I ask it. The reason? The information I’m looking for exists only after 2021. I have no idea how this would ever replace a search engine that has real-time results.
In fairness, I'm not sure what changed but frankly Google doesn't answer my questions anymore either. It just shows me random spam websites, number 7 will surprise me.
I know your point is about product features not accuracy, but these things may be connected. It might take an AI to sift through the shit we created via SEO.
That said, I tried using ChatGPT today as a Stack Exchange replacement and it was like a smart 7-year-old: it sounded super impressive but wasn't actually correct.
Yeah, there's a race between Google becoming totally useless and ML technologies becoming just good enough to beat it. If Google keeps lowering the bar, someone may be able to disrupt it. Although it isn't going to be ChatGPT; it'll be an ML-indexed and ML-driven search index over the internet.
Until that happens, and it works better than Google, I don’t see a path to disruption. For example, I can type “NBA schedule” and Google will give me the exact thing I’m looking for. A chronological list of games, their times, if they’re in progress (with real-time scores if they are, and even possession). I think ChatGPT supporters downplay how important instant real-time results are.
And there is nothing right now to suggest that ChatGPT can’t do any of that. Just because the creators limited its information doesn’t mean it can’t be opened up and fed real-time data.
A continuously learning ChatGPT would probably be absolutely insane indeed.
Yet it remains that ChatGPT has no notion of "truth" or information quality, whereas Google has built a whole system for ranking reliable sources of truth for every possible subject. I’m not sure a language model can reach that goal, but I may be completely wrong. It’s going to be fantastically interesting!
You’re not seeing it because you’re not in the business of taking market from Google.
Let’s take email.
Yes, you can’t beat Gmail for basic email inbox services, but if you were to add a GPT-3-based assistant to your inbox which “learns” as you use it, suddenly your basic email service has a USP. Will it be big enough to destroy Gmail? Maybe not on day 1, but regardless, an alternative now exists and will only get stronger.
Calendaring is as commodity as it gets and can be bolted on to literally anything else a competitor creates.
Document management and editing aka Google docs/drive could be an area where GPT3 doesn’t encroach but O365 is already there waiting to take your money and give you comparable service.
YouTube and other human generated content could be google’s moat but if they’re defending YouTube, it means the textual search market has been lost.
A lot of that sounds like add-ons to services rather than the services themselves, but it would be pretty great if gmail could predict how I would respond to every email and give me the option to send or edit (similar to grammarly).
That would be pretty great, wouldn’t it?! An AI which looks at your SENT emails, infers your style, and drafts responses in that style for your approval.
Not trying to incite a flamewar but this reads like a “I want faster horses” type comment.
People want a reliable SMTP/IMAP service only until they notice that others get more mileage out of their email inbox and reliability has become table stakes.
There is a lot of longstanding pent up demand for a good email solution - hey.com 2021 launch comes to mind and I still remember joining Inbox’s waiting list (circa 2013-14?).
As a direct replacement for internet search, or even general knowledge queries, I definitely agree with you. When it does work, though, it's hands-down a better experience. I imagine the 80/20 problem of making it work for the long tail of difficult queries people will throw at it will be the hard part. You can't just have a bot spewing out blatantly incorrect data without sourcing half the time. I think it's a coin flip whether those issues will be resolved anytime soon.
Where I do see it being handy is with mundane tasks or those "tip of my tongue" queries. I was doing some tedious mapping of enums to other enums today, and that's one of those problems where you can script it out in vim/sed, but then need to spend a while double checking it against the spec anyways. I'd love to ask a bot to do that for me. Similarly, the easy stuff like "how do I do [common git operation] again?", it seems to do great on, since there are canonical answers to that stuff.
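For the enum-mapping chore described above, a small script can do the mechanical part and flag what still needs a human check against the spec. This is only a sketch; WireCode and ApiStatus are made-up stand-ins for whatever two enums are actually being mapped:

```python
from enum import Enum

# Hypothetical enums standing in for the two specs being mapped.
class WireCode(Enum):
    OK = 0
    NOT_FOUND = 1
    TIMEOUT = 2

class ApiStatus(Enum):
    OK = 200
    NOT_FOUND = 404
    TIMEOUT = 504

def map_by_name(src, dst):
    """Pair members of two enums by name, reporting any leftovers
    that still need a human decision against the spec."""
    mapping = {}
    unmatched = []
    for member in src:
        if member.name in dst.__members__:
            mapping[member] = dst[member.name]
        else:
            unmatched.append(member)
    return mapping, unmatched

mapping, unmatched = map_by_name(WireCode, ApiStatus)
print({k.name: v.value for k, v in mapping.items()})
print([m.name for m in unmatched])
```

The point of returning the unmatched list is exactly the "double checking against the spec" step: the script handles the obvious name matches, and anything left over is the part worth a human's attention.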
I've been generally uninterested in the ChatGPT craze, but I did check out perplexity.ai when it popped up on HN a few days ago. It did seem genuinely useful for these types of queries, and it sources its answers with web links. I could see a space for that type of tool, as long as the hairier parts of the chatbot can be filtered away.
I suppose they fear for Bing and other Microsoft products integrating it. Even with a limited dataset and no knowledge of internet articles (ie. it can't search the web), it still does very well in answering questions, at least when it doesn't confidently tell you incorrect information. If Bing feeds in "Here is the article content of the top 3 results: ... Please summarize and answer the query '<user question>'" and displays the output, it could quickly outpace Google in usefulness.
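The "feed the top results into the model" idea above is essentially prompt assembly. A minimal illustration under assumptions: the search-API call and the LLM completion call are left out entirely, and the result list is hand-written:

```python
# Sketch of the "summarize the top results" pipeline described above.
# Fetching results and calling the LLM are out of scope; the result
# list here is a hand-written stand-in.

def build_prompt(query, results):
    """Assemble a retrieval-augmented prompt from fetched articles."""
    articles = "\n\n".join(
        f"[{i + 1}] {r['title']}\n{r['text']}" for i, r in enumerate(results)
    )
    return (
        "Here is the article content of the top results:\n\n"
        f"{articles}\n\n"
        f"Please summarize and answer the query: '{query}'"
    )

query = "when was the transistor invented"
results = [
    {"title": "History of the transistor",
     "text": "The first working transistor was demonstrated in 1947..."},
]
prompt = build_prompt(query, results)
print(prompt)
```

The numbered `[1]`-style labels are one hypothetical way to let the model cite which fetched page an answer came from.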
Back when Bing did the challenge where you decided whether you liked its results or Google's, Bing won for me hands down, but I still didn't start using it. I doubt less-stupid autosummaries will work either.
ChatGPT and similar devices are poised to become narrative machines which fill the infoboxes of search engines with whatever "truth" their keepers want them to push. The agility with which these things can bend language around a desired narrative makes them an ideal fit for this purpose.
Most people use Google for answers, not to find websites. We who do are in the minority.
I think the real threat from ChatGPT comes from the fact that Microsoft is involved. Google is their main competitor so it's clear that after years of Bing being a not-quite-as-good-as-google search engine, they see a completely different direction opening up to dethrone them.
This technology is only going to get better, and it will do so quickly. I think the era of sifting through millions of crappy, overly SEO'd sites to skim around for an answer is coming to an end.
Imagine Microsoft puts ChatGPT into Microsoft 365. You're a Fortune 500 company, you have zillions of SharePoint sites over many departments. Your employees can now ask ChatGPT about them. To find documents by vague search terms, to summarize them, to write in the style of one. Remember how it used to be a thing where non-technical people would ask Google queries as if it was a person, and technical people would say 'no no, think about the keywords'. Imagine you can do this chatty back and forth narrowing down with vague terms, searching your work's SharePoint:
Prompt: What was the film with the silly detective and like a Purple Lion or whatever?
ChatGPT: The film you're thinking of might be "Scooby-Doo," which features a silly detective named Shaggy and his talking Great Dane named Scooby-Doo, who often dress up in costumes to solve mysteries involving supposedly supernatural creatures.
Prompt: no with real actors
ChatGPT: You might be referring to "Scooby-Doo" (2002) starring Freddie Prinze Jr., Sarah Michelle Gellar, Linda Cardellini, Matthew Lillard and Rowan Atkinson, which was a live-action adaptation of the classic cartoon.
Prompt: no the coloured animal was in the name
ChatGPT: The movie you're thinking of might be "The Pink Panther" (1963) starring Peter Sellers as the bumbling French police detective Jacques Clouseau.
Only it's about "What was that report from someone in the sales team, with the white paper about plants?" or "Where's that HR document about the training website with the health and safety stuff?" or "Who manages the team who deal with XYZ customer?" or etc.
> It doesn't seem like an especially hard lift to get ChatGPT to start indexing websites.
That is a very bold claim unless you've seen their architecture. Maybe you have, or read something to that effect, but citation is needed.
* poor coverage for post-2021 data means it's not trivial to update the models regularly
* Wikipedia: "ChatGPT ... can’t currently rank sites for reliability, quality or trustworthiness" - being able to train on all kinds of web data requires a proper ranking system, which basically means rebuilding something like PageRank (or an alternative) from scratch and fine-tuning it for decades to filter out garbage. According to this quote, they don't have that.
* incrementally updating the web-scale data on a daily-ish basis is a big undertaking, Google had to invent multiple systems from scratch just to do that (e.g., had to create Percolator based on BigTable and MapReduce, all three invented by them just to do indexing) - and that's just a ranked index, not a machine learning model
Possibly, but is an LLM geared towards interpolating between content likely to even be particularly well-suited to relevance ranking of results that require no interpolation?
And if it is, what stops Google from just ... incorporating an LLM into their existing search offering instead of having their lunch eaten by this hypothetical IndexGPT? It's not like Google lacks expertise in LLMs.
This seems like an incoherent objection – they would presumably prefer to be "Google + LLM" and make less money than to let the hypothetical IndexGPT eat their lunch and make even less. It also implies that the business model for IndexGPT is poor, which raises the question of why it would be pursued in the first place.
You've also not addressed the more fundamental question of why an LLM would even be good for this.
> This seems like an incoherent objection – they would presumably prefer to be "Google + LLM" and make less money than let the hypothetical IndexGPT eat their lunch
You'd think that, but it's the classic innovator's dilemma. Google would have to modify their product to generate less revenue now to maintain market share. Except that market share will decline regardless as competitors rise, so they'd be cutting revenue just to slow the decline of market share. Alternately they could release a new AI product that cannibalizes their own search profits.
Either way, for a large multinational obsessed with quarterly profits and the stock price, it's very hard to overcome internal resistance to do either of those things.
Quite frankly, if ChatGPT destroyed Google, at least we'd have one positive contribution it has made to the world 8-/
In search applications, maybe it would be better at interpreting the structure of language in the phrasing of search queries, rather than just keyword matching, which often leads to abysmal results.
Prior proposals for structured HTML, encoding the semantics of website contents rather than just the words they contain, seemed like the best solution to this problem. But apparently companies making billions of dollars keeping people roped to their search engine sites weren't interested in this...
There are workarounds for these issues, but they are societal and institutional, and right now the threat from (and rewards for) creating and disseminating misinformation and disinformation continues to rise steadily.
1 - there is so much money tied up in the search -> clicks for ads business model. Content providers were happy to let Google scrape their pages because indexing in Google leads to clicks. They won’t be happy to let OpenAI scrape their pages because it will lead to many fewer clicks, even if OpenAI can solve the source attribution problem. As a result, there will be many more walled gardens if ChatGPT becomes the dominant method of search as content creators see their typical stream of revenue dry up. Eventually ChatGPT will become useless as its training data becomes more and more obsolete.
2 - Google News already got sued into oblivion by content creators for summarizing news articles. (Google worked around it by removing the article snippets). ChatGPT is just doing the same thing but for every query. Their legal exposure is going to be huge for the exact same reasons.
3 - Lastly, how will OpenAI make money on ChatGPT? Advertisers won’t pay to “maybe” show up in query results. A monthly subscription? Unless the results are drastically better than Google that will be a big pass from most people.
Could OpenAI become the dominant player in search? Sure, there is a path there, but I would probably bet against it right now.
1. Almost all high value content on the web exists on far more than one site. Any individual website which opts in will get more clicks with ChatGPT than without. Given this, most websites will let ChatGPT index them in order to drive more clicks.
3. It seems like a pay per click model would work. Basically get ChatGPT to weigh paid content higher than unpaid content.
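A crude sketch of how "weigh paid content higher" might look: boost the relevance score of paid results before deciding what to surface. Everything here (the tuple layout, the boost value, the URLs) is invented for illustration:

```python
# Hypothetical re-ranking step: paid results get a fixed score boost
# before the top answer source is selected.

def rerank(results, paid_boost=0.2):
    """results: list of (relevance, is_paid, url) tuples,
    returned sorted by boosted score, highest first."""
    return sorted(
        results,
        key=lambda r: r[0] + (paid_boost if r[1] else 0.0),
        reverse=True,
    )

results = [
    (0.70, False, "organic.example/answer"),
    (0.60, True, "sponsor.example/answer"),
]
print(rerank(results)[0][2])
```

With these made-up numbers, the sponsored result (0.60 + 0.20 = 0.80) edges out the organic one (0.70), which is exactly the trade-off the pay-per-click idea implies.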
1. Does it? There are a bunch of sites that mirror high quality content but I don’t see many direct replacements for StackOverflow.
3. But most people are asking ChatGPT queries that don’t require you to click on anything. Why would I click through to see the code that ChatGPT copied from if it got the answer right in the first place? The click-through rate will be a small fraction of what it is today. Why would advertisers buy into that model?
Because advertisers usually buy per click, not per impression. I agree the total revenue generated by ChatGPT-enabled search will be less than classic search. That's one of the reasons Google hasn't done this yet, but it could be a win for Microsoft.
> Any individual website which opts in will get more clicks with ChatGPT than without.
Not necessarily, especially if ChatGPT means the user no longer needs to even visit the website to get the content. Remember, Google drives visits; ChatGPT aims to provide the answer.
I think there is a > 50% chance that Microsoft and OpenAI throwing a bunch of incredibly bright researchers at the problem can figure out how to do proper attribution.
It is not clear to me why people keep saying this. Google has a competitor to ChatGPT internally: the thing that that Blake Lemoine guy was playing with and decided had selfhood. I played with an earlier version of it when I was there... Google is not slouching in this area.
There's nothing that OpenAI can do that Google couldn't also do.
If they feel the need to integrate it into search, they will do so.
If they need to roll out a new product, they will do so.
But LLMs are not a replacement for search. For one... they "lie." For two, from my understanding they don't really accommodate real-time / recent information very well.
Can they augment search? Almost certainly. And in that regard, I suspect Google is better positioned than most to take advantage of that kind of fusion.
If anything, I suspect Google has more of an unfair advantage in this kind of domain than most. Huge amount of computing power, boatloads of AI expertise on staff, and access to probably the biggest corpus of learning and search material out there.
I couldn't care less about Google or its business, but here is how I think this is short-sighted in at least one way: if there is no search engine and people can't benefit from being found, they will be less likely to produce content.
So ChatGPT will need to incorporate sources into its results, but if any result is a combination of multiple pages then I don't know how it would show that. And without attribution, content creation will likely not die but shift, and who knows what that'll look like.
I imagine even ChatGPT's output will change in time; it can't stay like this.
In the end someone needs to do the original research, to go out and take pictures of stuff. If these people are no longer entitled to the sweat of their brow - not even attribution - why would they bother?
I'm one of those people. If it comes to that, I'll have to find another line of work. I won't just start doing it for free.
The internet was filled with people posting useful content and photos long before the ad industry showed up and shat on everything. People love to share their passions and knowledge with others and will happily do it for free. Who is paying you to post comments here?
An internet where clicks and ad pushes don't drive content might look different, but it certainly wouldn't be devoid of content.
so... given that ChatGPT currently works on a finite, curated set of datasets (not the entire web, like Google) and doesn't credit the authors of its information whatsoever, you are saying that if it becomes prominent and destroys all traditional search traffic to content authors, they will just happily keep posting quality content into the void, even though nobody except a mindless digester robot will read it? (Since they will get no search traffic and will never get any credit of any kind for their work.)
Probably? I mean, it's not as if only the AI will read it if that AI is passing along the information to others. The AI would just be the interface people use to access the information.
I do think a lot of people put stuff online just to be helpful. What amounts to "credit" is often very abstract. For the early web, counting page views and watching visitor counters increment was somewhat satisfying, but a lot of communication online is pseudo-anonymous. What's the "credit" we get here? Karma awarded to a pseudonym? If I leave a review on Amazon, no one seeing it (but Amazon) will ever know it was me, yet it's nice to see others mark the review as having been helpful to them. An AI interface to information found online could provide something similar. For most people at least, I don't think a lack of personal recognition or reward invalidates the satisfaction they get from even the smallest acknowledgement that they've contributed to something.
It also seems like much of the absolute worst things online exist only to optimize page views and monetary reward. Being freed of so much of that trash would be pretty nice.
I put helpful content online for free on my personal blog. The karma I get comes from the occasional thankful email, and coffee with strangers with similar interests.
I don't even measure traffic. I just like to know that people land there and find what they need.
If this corpus, which took countless hours to create, were distilled by an AI, it would feel completely pointless to extend it.
Imagine writing a book knowing that no one will see it, but that an in-house writer at the publishing house will read it once and write its own version. They'll get credited and paid, but you won't.
Some people are okay with that, but probably not the people whose words you read all day, either to solve a problem or entertain yourself.
I'd keep writing comments and the occasional blog post, but I might not devote my life to documenting German bureaucracy, as I currently do. That's by far my most valuable contribution.
It's a bit like how I love to cook for friends on the weekends, but don't volunteer as a line cook in restaurants. Without any reward, I'd have to get a job and I'd have far less time to write.
You can't run everything on sheer passion. Some valuable things depend on creators getting rewarded for their work.
With an internet that didn't run on ads we'd lose some things for sure. Some of them useful or entertaining, but we'd also lose a lot of the SEO/contentfarm garbage and it'd be a huge win for people's privacy.
How successful have you been with getting tips and how does that compare to the income ads generate for you?
You dismiss most of the internet so casually that you seem to argue in bad faith.
Most of the useful internet monetises its traffic one way or another, and not just with privacy-invading ads. My website looks and works the same with an ad blocker, and it's as far from blogspam as it gets.
I'll leave it at that, and invite you to flex your imagination a bit. Think of all the websites you depend on that would not survive having zero revenue.
I've primarily used ChatGPT for additional explanations of programming terms (e.g. C++ lambdas & capturing local variables, validated against my code and other sources). The results from Google for the same query are a bunch of overly long, SEO'd crap, covered in ads.
There are plenty of other things I've searched for recently where ChatGPT simply doesn't make sense. One example is looking for trusted reviews of kitchen appliances, or things to do on vacation. I need to be careful with those as well, given the incentives around human reviews. I'm skeptical that market will be disrupted by a language model, while also being the most relevant for advertisers.
> There are plenty of other things I've searched for recently where ChatGPT simply doesn't make sense. One example is looking for trusted reviews of kitchen appliances, or things to do on vacation.
Right now, ChatGPT doesn't provide good answers to questions of the form "What is a good source for information of type X?". But that is really a limitation of the source data (text from a current web crawl will, superficially, tend to reflect information rather than meta-analysis) rather than of the model as such, and transforming PageRank-style graph data into a verbalized form suitable for an LLM to be trained on ought to be possible.
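As a toy illustration of that verbalization idea, one could turn graph-derived trust scores into sentences for a training corpus. The sites, thresholds, and phrasing below are all invented for the sketch:

```python
# Toy illustration of verbalizing link-graph signals into sentences an
# LLM could train on; scores and wording are made up for this sketch.

def verbalize_rank(site, score, topic):
    """Turn a numeric trust score into a natural-language statement."""
    if score > 0.8:
        quality = "a widely linked, highly trusted source"
    elif score > 0.5:
        quality = "a moderately trusted source"
    else:
        quality = "a rarely cited source"
    return f"{site} is {quality} for {topic}."

graph_scores = [
    ("example-med-journal.org", 0.9, "medical research"),
    ("random-seo-blog.example", 0.2, "medical research"),
]
for site, score, topic in graph_scores:
    print(verbalize_rank(site, score, topic))
```

The model would then absorb source-quality information the same way it absorbs everything else: as text.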
Something equivalent has already been done for knowledge graphs, for example:
Just seems like evolution. Like how you don't make money selling albums anymore. The world will look different. Here's one possible world...
Everything is paywalled. And you basically pay to subscribe to services that have information, news, stories, etc., and those get fed into "your GPT". You can also add your emails and chats. Just one thought.
The premise seems to be: Google makes money from ads on search pages. ChatGPT will replace search, so no money for Google.
Assuming Google has the tech for it, they can provide a ChatGPT-like thing that peppers context relevant ads into the response. Heck, you could go to Google.com as it is today and instead of the search bar chat with the AI. Why would their primary source of income crumble?
If I search for "car dealerships" how is ChatGPT's response going to make Google irrelevant?
ChatGPT: "A car dealership is a retail business that sells new and/or used vehicles to customers."
Google: Shows ads for local car dealerships, then shows me a map of nearby dealerships, then some car dealership websites, then some more businesses, then defines what a car dealership is, then shows car dealership podcasts, then car dealership books
Google is miles ahead of ChatGPT for this query.
Google doesn't mess around with existential threats to its search business model.
We're talking about the company that, when it saw the iPhone, recognized the mobile app space as a threat to search (why search if people can open an app from a walled garden?) and immediately went to work on Android, even though it had never been in the phone business. They reacted faster and better than everyone else, and the incumbents of the phone space as it was went bankrupt (Nokia, Blackberry, et al.).
I have no doubt in my mind that Google has ChatGPT as a top-priority threat, and they'll come up with something to counter it. Google already invests heavily in AI; they're not completely off guard.
If someone somehow invented a car today that threatened Google's search-ad business, Google would be competing with Ford, GM, etc. within two years.
> Google acquired Android in 2005, years before the iPhone was released.
IIRC: That "Android" was nothing like what we refer to as Android today. I believe it was aiming to be a common OS for digital cameras or something. The iPhone caused them to pivot the project hard in a different direction.
I’ll counter that the Google that built the Android business is not the Google we have today. Google has been on the long tail of financialization for a long time.
This is the first time they have had any serious threat (beyond legal) since the reins were handed over.
I don’t think Google has completely lost control to investors, but I suspect they don’t have the agility, focus, and risk tolerance they once had.
Funny how in making this prediction, the author of gmail seems to forget that google also has.. gmail and a gazillion other products which bring in revenue.
ChatGPT can't find videos or images or company home pages or the tax office website or the electrical utilities, etc.
It could, but that would turn it into a search engine and I think it's reasonable to assume that Google would come up with its own version of ChatGPT sooner than OpenAI would build a search engine.
I'm seeing a lot of skepticism about these claims of GPT models becoming important in search, especially on here. What people seem to find remarkable about ChatGPT is that it answers general questions fairly well, but what I find remarkable is that it makes you feel like it understands what you're trying to tell it. I sometimes make a half-assed prompt, not even spelling correctly (and not caring, because the GPT will pick it up anyway), and the result from the model seems to repeat back what I wanted from it.
Then it sometimes gives a wrong answer, but it legitimately seems to correctly repeat what I asked of it. I think this might be the real killer feature: it isn't (currently) giving accurate answers, but it is giving people the feeling that they're being understood. Humans crave that, and I think that might be the main reason for its popularity. The issue of providing the 'correct' information might just be an engineering problem.
That is a good point. I asked ChatGPT "What was the book with the guy stuck on Venus or wherever who had to grow food in poop?". ChatGPT replied "The book you're referring to is likely "The Martian" by Andy Weir." which makes me feel like it understood me even though I deliberately got the planet wrong and was vague about the rest. A human who knows the story would likely get it right because one of the most grossly memorable scenes is him growing potatoes using poop for fertiliser.
To the same prompt, Google's top result was a Wikipedia page for a Ray Bradbury book, and told me the result was "missing poop". It's always frustrating that Google ignores words I typed in, and here it's one of the key words in the query. Yes, ChatGPT makes me feel like I'm talking to a helpful assistant and Google, Bing, Siri, Cortana, OK Google, Alexa, don't.
(I think Google used to feel like that, years ago when its results were good?)
I think Apple is one of the few companies that tries to maintain a level of focus that Google simply does not have. Just compare the number of cancelled Google products/services vs Apple.
I think the difference in the number of canceled projects is that Apple cancels them internally before customers see them. Google seems to like to try things out more to see if it might work, whereas at Apple they have to clearly see how this is going to work out before they let it out.
Apple certainly has a list of failed projects/products, but they certainly try to keep the public ones to a minimum.
An interesting counter-example to this would be the satellite communications that they have just released. If you look into the details, Apple is covering the costs for the emergency feature for the first 2 years (unclear to me if that is first two years of phone ownership, or of the program). And there was some talk in the keynote about expanding this to non-emergency calls (e.g.: look at this beautiful photo from the middle of nowhere). This looks to me like Apple is not sure what they are going to do with this system, but are seeing how it plays out.
They secured exclusive licensing rights to GPT3 and its spinoffs. And yes, they also have an arrangement where MS provides funding and resources (Azure compute) in exchange for such licensing rights (see GitHub Copilot).
The whole “in two years” completely destroys the credibility of the article. I think it took longer for Google to overtake AltaVista and that was with a decidedly superior product in the same category. ChatGPT is amazing, but really isn’t a replacement for search.
I'm very curious about this. I'm starting to develop an irrational hatred of all things AI which obviously clouds my judgement, so I'm not sure if I see it failing because I want it to fail so much, or because it actually cannot work.
But there is a fundamental difference between Google search and ChatGPT. Google search (or any search, so far) just point to sources of information, and we can then evaluate those for reputation, reliability, accuracy, and in the end form an opinion. ChatGPT speaks in its own name and affirms things, without ever sourcing them, that are sometimes wildly incorrect.
The sourcing problem is itself huge, because it makes it so easy to poison the well without a trace.
ChatGPT is amazing. It can do things that Web/Location/etc. search cannot do. I love that it can do Q/A, summarize and assist in tasks that have to do with writing. BUT, web search will always be needed and ChatGPT will co-exist with Search.
If ChatGPT ever makes inroads into real-time information, handling queries like "restaurants in my area" and "latest news", then Google Search will be destroyed. At that point, I don't know what else Google Search would be useful for, other than finding websites, which is useful to me. ChatGPT is a long way from that, though.
Google makes money from advertising, so it will deploy a ChatGPT version with conversational ads. The challenge ChatGPT presents is that it is the only technology in 25 years that appears to threaten Google's search dominance (and thus, ad revenue). It's very wise and appropriate of them to treat this as "code red".
MSFT is foaming at the mouth for the opportunity to finally be able to compete with Google search.
I think Google comes out unscathed. When their identity is at stake, they'll have no troubles executing.
Google has been investing in AI for over a decade. I seriously doubt they don't have their own ChatGPT and haven't experimented with it to replace search.
But I bet (probably well-founded) fears of drawing anti-trust or copyright infringement legal action has them hesitant to push that functionality on something as prominent as the Search page.
A relatively small company with no pre-existing monopoly position summarizing pages so you don't need to visit them is one thing. Google doing it (too much, and too well) is practically guaranteed to draw one or more credible challenges (again).
This may leave a gap in which to challenge Google and win. But even the little company is likely to see some risky lawsuits as they monetize and grow. A lot's gonna hinge on whether telling a computer to do some math on content launders copyright as well as having a human learn something, then explain it, or synthesize something "new". And actually, it'll need to launder it much better, in the eyes of the courts, or else still be hopelessly vulnerable to legal attack—it's not like humans don't unintentionally infringe copyright, as the courts see it, pretty often, so if "AI" is held to the same standard it'll just be a mass-lawsuit-magnet. Uber for getting yourself sued.
"I'm Feeling Lucky" never worked, and I doubt ChatGPT alone, without a list of links (i.e., a page of results), will work either. However good it is, most of the time I want choices. Where I can see Google being impacted is new competition: new players offering search powered by these technologies.
You also want the ability to evaluate the trustworthiness of a fact or opinion within the context of its publication.
Not delivered opaquely outside of its original context, wrapped up by a centralized giant that can create any lever it wants to massage the info for advertising purposes or worse.
If people are able to run ChatGPT (or similar) locally and can either buy access to large training datasets or generate them on their own then Google and others are probably in trouble - there's no business opportunity to jam paid messages in there.
So how will they (OpenAI/Microsoft/whoever else comes up with something similar) monetize this? They can charge for access or turn ChatGPT into SalesGPT. I've worked with a lot of great sales people and they're a lot of fun to hang out with when they're not working - but I think users would eventually get tired of their Chatbot trying to convince them to buy something when they just wanted answers.
The key here is that it can produce output that is at a glance reasonable to educated humans.
Many people are saying that ChatGPT is "going to get big and change the world" in the sense that it will be useful. I'm not yet onboard with that interpretation. I do think that it will change the world, but because it has the potential to decrease the reliability of information online by a few orders of magnitude.
> Many people are saying that ChatGPT is "going to get big and change the world"
I see many people saying that AI, like ChatGPT, is going to be big and change the world. ChatGPT and StableDiffusion are the public's first exposure to these models, and they look like a crazy step function in capability. I believe they're indicators that change is coming, but most people's estimate of the rate of change is probably a little off.
> chatgpt increasingly looks like the "bro this is going to get big and change the world" crap that comes in every few years.
A lot of techies believe in science fiction and over-eagerly look for signs of the advent of worlds like those in their TV shows and books.
They also seem to always imagine themselves as the protagonists in those works, so even cheer on the idea of some pretty shitty sci-fi worlds (e.g. cyberpunk dystopias).
> It's only fiction until it's real. Most everything we have today was science fiction at one point.
Eh, not so much. IMHO, very little science fiction has actually become real, and that which has mainly gets credit through very generous interpretation.
> very little science fiction has actually become real
I can't comprehend this perspective, so you'll have to help me out. Look at the fiction from the '20s and '30s. People went to the moon. Robots walked on two legs. People had computers on their wrists. Machines made body parts for people.
Do you have some examples of "very generous interpretations"?
>> very little science fiction has actually become real
> People went to the moon.
That's actually a fairly interesting example. People went to the moon, so (narrowly) you could count that as "sci-fi becoming real." However, more broadly, science fiction is about more than technology, and the world with that technology hasn't turned out anything like the sci-fi zeitgeist depicted.
Sci-fi can throw a lot of ideas about technology at the wall, and some will stick, especially vague/easy ones like "smaller computers," but it tends to get everything else wrong. When people "believe in sci-fi," I think they believe in that "everything else."
I have met quite a few Googlers who confidentially share that "things are a lot more precarious than people outside know." My guess was always that this was some sort of internal motivational speech at Google, and not an actual observation.
One thing I'm sure of: despite all the papers and theoretical advances Google has in AI, the translation into Google products and user experience is dismal.
I use business GChat and recently I've been graced with TL;DR summaries of threads. They are comically bad.