I actually experienced this the other day. Bought the new Baldur's Gate and was wondering what items to keep or sell (don't judge me, I'm a pack rat in games!)
I had found some silver ingots. The top search result for "bg3 silver ingot" is a content farm article that very confidently claims you can use them at a workbench in Act 3 to upgrade your weapons.
Except this is a complete fabrication: silver ingots exist only to sell, and there is no workbench. There is no mechanic (short of mods) that allows you to change a weapon's stats.
I'm pretty sure an LLM "helped" write the article, because that's a lot of trouble to go through just to be straight-up wrong - if you're a low-effort content farm, why in the world would you go to the trouble of fabricating an entire game mechanic instead of taking the low-effort "they exist only to be sold" road?
This experience has caused me to start checking the date of search results: if it's 2022 and before, at least it's written by a human. If it's 2023 and on, I dust off my 90's "everything on the World Wide Web is wrong" glasses.
I had a similar experience recently looking up some stuff for Starfield. Content farms are obviously switching to LLM-generated (mostly hallucinated) articles, and Google seems to be ranking them pretty highly atm.
Kagi's ability to manually downrank/remove those kinds of results from your searches (and their return to flat rate pricing) finally tipped the scales for me for subscribing/switching search.
> In Baldur's Gate 3, silver ingots are a common miscellaneous item that can be found in various locations such as chests, shops, and dropped by enemies.[1] Each silver ingot can be exchanged for 50 gold at merchants or traders.[2] While silver ingots do not have any crafting or upgrade uses currently in early access, they provide a reliable source of income early in the game before other money-making options become available.[3]
I actually found my first AI YouTube channel today looking for Starfield videos. I noticed the narrator sounded like text-to-speech. I went to the channel and it's all just random videos for tons of different games with no real theme. It was a surreal experience.
I had mine the other day when I bought an audiobook and noticed the price was way cheaper than usual. Sure enough, it was a very robotic voice with uncanny pauses reading it. Refunded instantly.
Finding out about their flat-rate pricing makes it a complete no-brainer for me to switch if I ever hit >100 searches per month (haven't gotten there... yet).
Probably, in the future, people will only trust sources of info that can't be monetised. If you want the answer to a game question, you just go to the relevant subreddit or Discord and ask, since there's no point autogenerating crap for Discord when you can't put ads next to it and the mods can remove you.
Platforms will be happy to run bots they can portray as real humans to bolster engagement, make spaces seem popular and dynamic, trick advertisers and investors, etc.
Similarly, if it costs basically nothing to work your way into communities to astroturf with bots, it'll happen. You don't have to post about great sites to get free Viagra right away, you can build reputation and subtly astroturf. And you can use additional bots to build/portray consensus.
Reddit is already a problem because of actual humans doing the latter. It'll just get worse when it's automated further.
Yes, but it's Reddit monetizing, not the users. The users can be astroturfing, but there's no point astroturfing some types of content, like game guides, I hope.
Some users monetize it by selling their karma-rich accounts to those astroturfers you mentioned - or to spammers.
They currently manufacture these karma rich accounts by reposting popular posts and comments. LLMs will soon be (or already are) another way to karma farm.
Discord has raised so much money and is among the inflated-valuation unicorns that I sadly wouldn't be surprised if they eventually get forced into ads.
It's only a matter of time before things change with Discord monetization. On reddit there is an incentive to create LLM-powered fake accounts with high karma and sell them. It's true that on Discord this incentive doesn't exist right now because no karma equivalent is associated with Discord accounts, but eventually that's going to change as Discord, as a company, will try to monetize their user data in various ways.
It's the typical overvalued VC-backed company dilemma that needs investor returns. Quora, Medium, and so on.
I guess we need to again (and again) establish that having good information isn't necessarily cheap: could be a high quality vendor, policing a forum, maintaining search engine integrity, etc.
It's funny that one argument OpenAI used to keep their models closed and centralized is that they could prevent things like this. And yet they're doing basically nothing to stop it (and letting the web deteriorate) now that profit has come into play.
Not saying they should, but if they wanted to they could have an API that allows you to check whether some text was generated by them or not. Then Google would be able to check search results and downrank.
It's not that simple. Originally OpenAI released a model to try and detect whether some content was generated by an LLM or not. They later dropped the service as it wasn't accurate. Today's models are so good at text generation it's not possible in most cases to differentiate between a human and machine generated text.
Well they could just not allow prompts that seem to participate in blogspam. If they wanted to stop it they definitely could.
Their argument is that since it's centralized, things like that are possible (while with Llama 2 you can't), and they do "patch" things all the time. But since blogspam is contributing to paying back the billions Microsoft expects, they're not going to.
It would be easy to work around using other open-source models. You could use GPT-4 to generate content and then Llama 2 or something else to change the style slightly.
Also, it would require OpenAI to store the history of everything its API has produced. That would conflict with their privacy policy and privacy protections.
If it's a straightforward hash, that's easy to evade by modifying the output slightly (even programmatically).
If it's a perceptual hash, that's easy to _exploit_: just ask the AI to repeat something back to you, and it typically does so with few errors. Now you can mark anything you like as "AI-generated". (Compare to Schneier's take on SmartWater at https://www.schneier.com/blog/archives/2008/03/the_security_...).
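A toy sketch of why the plain-hash approach fails, assuming a hypothetical registry of exact output hashes (the `registry`, `record_output`, and `was_generated` names are made up for illustration, not any real OpenAI API):

```python
import hashlib

# Hypothetical sketch: the provider records an exact hash of every
# generated output, and a checker looks hashes up later.
registry = set()

def record_output(text: str) -> None:
    """Store a SHA-256 hash of a generated output."""
    registry.add(hashlib.sha256(text.encode()).hexdigest())

def was_generated(text: str) -> bool:
    """Check whether this exact text was previously generated."""
    return hashlib.sha256(text.encode()).hexdigest() in registry

original = "Silver ingots can be sold to merchants for gold."
record_output(original)

print(was_generated(original))                          # exact copy: caught
print(was_generated(original.replace("gold", "coin")))  # one word swapped: evaded
```

Swapping a single word produces a completely different hash, so a spammer only needs a trivial rewrite pass to defeat the lookup.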
10 years ago I was a huge fan of GameFAQs, which, for any major game, had at least one and often several highly detailed documents describing a game, not just a walkthrough but a set of tables where you would find items.
Back then I was playing Hyperdimension Neptunia and almost tried “applying AI” in the old sense to the problem of “What do I have to do to craft item X?”. Those games all had good FAQs and extracting a knowledge graph and feeding it into some engine that could resolve dependencies wouldn’t have been too hard.
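The "old AI" idea above can be sketched in a few lines: encode the recipes as a small knowledge graph and recursively expand an item into its base materials. The recipes and item names here are invented for illustration, not taken from any actual game data:

```python
from collections import Counter

# Hypothetical crafting knowledge graph. None marks a base material
# that is gathered rather than crafted; dicts map ingredient -> quantity.
RECIPES = {
    "Healing Grass": None,
    "Pure Water": None,
    "Supplement": {"Healing Grass": 2},
    "Healing Salve": {"Supplement": 1, "Pure Water": 2},
}

def base_materials(item: str, count: int = 1) -> Counter:
    """Recursively expand a craftable item into the base materials it needs."""
    needed = Counter()
    recipe = RECIPES[item]
    if recipe is None:
        needed[item] += count
    else:
        for ingredient, qty in recipe.items():
            needed += base_materials(ingredient, qty * count)
    return needed

# One Healing Salve bottoms out in 2 Healing Grass and 2 Pure Water.
print(base_materials("Healing Salve"))
```

With the graph extracted from a good FAQ, answering "what do I have to do to craft item X?" reduces to a dependency walk like this.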
Today I am playing Atelier Sophie, which has the same mechanic but is very cozy and doesn't pose complex dependency problems, and the FAQs for this game are atrocious, consisting of a walkthrough that is way too prescriptive. If you ask a question like "Where do I get a Night Crystal?" on Google, it will likely turn up a question/answer pair on a forum, which isn't quite as good as having the structured FAQ.
YouTube walkthroughs really seem to have killed text walkthroughs. Sometimes these are better (like when there is a jump in a dungeon that doesn't look like you could make it but you can), but sometimes they are much worse (a walkthrough might be 75 hour-long videos; you have to find that the answer is in video #33 and then seek to 20:55).
Maybe the proliferation of trash sites will motivate the creation of high quality FAQs but you’d better believe that the creator of these FAQs will be horribly afraid of being ripped off.
To be fair, it was already the case before the LLM boom, to the point that I was obliged to add "reddit" or, worse, "youtube" to my queries. Of course, LLMs make it easier and faster, so I guess the web search engines will have to be smarter if they want to keep being used.
Back in the 90s, when the Internet became a thing, it was common knowledge that, because normal people made websites, you should take things with a grain of salt. There was a bit of an overreaction to this, as the general feeling at the time was to trust nothing on the Internet.
In the 00s and 10s, the quality of discoverable content improved: Reddit and StackExchange had experts (at a higher rate than the rest of the net, at least). It was the era where search was good, Google wasn't evil (good results that separated out the ads are entirely why it won against AskJeeves and Yahoo), and SEO was still gestating in Adam Smith's wastebasket.
Now Google and Bing are polluted with SEO-optimized content farms designed to waste your time and show you as many ads as possible. They hunger for new content (for the SEO gods demand regular posts), and the cheapest way to do this is an underpaid "author" spewing out a GPT-created firehose of content.
SEO has ruined search, and content farms have made what few usable results there are even less trustworthy.
So yes, the Internet has fundamentally changed in the last 9 months.
Reddit never had experts. Maybe for half a minute. It became an echo chamber fast: fake internet points to be gained for saying what got upvoted last week, or to be lost for saying anything different.
If all you do is browse (default) home or all, then sure, it's just a stupid echo chamber obsessed with hating the things it's cool to hate. That's not where the value is on Reddit, though, and it's not what people are referring to when they say to search Reddit for answers. If you're looking for product reviews, it's not perfect, but it's tough to find anywhere better unless you happen to know of exactly the right hidden gem of a forum for your particular subtopic (and the link to that hidden gem is probably easier to find on the relevant subreddit than it is on Google).
Reddit may not have experts per se, but in the right subreddits it definitely has enthusiasts. In ages gone by, you'd find the same people on message boards or forums, talking up and comparing the minute details of this or that. There's obviously the same risk of cargo culting that there's always been, but there's genuinely useful information available from people who spend way more time than the common man on their area of interest.
I think, at least on programming language subreddits, there are people who deserve to be labeled experts. r/cpp has some frequent users who work on standards proposals or compiler features. There are also subreddits dedicated just to communicating with experts, like r/askdocs.
Two things can be true at the same time: "Reddit is prone to karma-driven bullshittery" and "Reddit content is generally significantly higher-quality than SEO content farms".
With Reddit you might get inane arguments and bandwagoning about what the best game strategy is, but you're exceedingly unlikely to read about a game mechanic that was straight-up hallucinated by a LLM.
Some subreddits (like askscience) at least asked for a copy of your diploma (in a science field) if you wanted flair. It was actually an awesome subreddit.
Prior to LLMs, generating plausible-sounding misinformation took actual effort - not much effort, but the marginal cost was reasonably above free. With LLMs making the cost of bullshit vanishingly close to free we're going to tip into an era where uncurated LLM confabulation is going to dominate free information.
There's "one loony had a blog" levels of wrong, and then there's "industrial scale bullshit" levels of wrong and we are not prepared for the latter.
Because hiring humans to write misinformation costs more money than $20/mo? Like what are you even trying to say?
Before LLMs, a $3000 camera had fake reviews on Amazon, and you got fake news about politicians. But you could safely assume "bg3 silver ingot" information was likely real, since hiring someone to make up silver ingot lore would never make the money back.
GP is literally reminding you of a time when online misinformation was rampant, but before search engines (temporarily) did a better job than overwhelmed curators.