From this Hacker News title I definitely thought it was saying that if you give AI agents some form of self-reflection, maybe by adding an internal monologue loop, they unlock emergent animal-like exploration behavior.
But that's not what happened. Instead, some guys told AI agents to explore in the way the guys think animals explore: "Stanford researchers invented the “curious replay” training method based on studying mice to help AI agents"
Author here, a key thing is that we didn't prescribe that the mechanism of exploration was the same, but rather we found that the AI agent explored poorly (i.e. unlike animals) until we included Curious Replay. Interestingly, we found that the benefits of Curious Replay also led to state of the art performance on Crafter.
OK, here is the arXiv paper, "Curious Replay for Model-based Adaptation": https://arxiv.org/abs/2306.15934. From the abstract: "we present Curious Replay -- a form of prioritized experience replay tailored to model-based agents through use of a curiosity-based priority signal" and "DreamerV3 with Curious Replay surpasses state-of-the-art performance on Crafter". Here is the Crafter benchmark: https://github.com/danijar/crafter, though the baselines at the bottom of that page appear to be out of date.
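If it helps, the idea in that abstract maps onto a fairly small amount of code. Here is a rough sketch of a curiosity-prioritized replay buffer, assuming the priority is just the world model's prediction error on each transition; the paper's actual priority signal may be different, and this is not the authors' implementation:

```python
import numpy as np

class CuriousReplayBuffer:
    """Toy prioritized replay buffer where the priority is a curiosity signal.

    Illustrative sketch only: priorities here are world-model prediction errors,
    one common curiosity proxy. The paper's actual signal may combine other terms.
    """

    def __init__(self, capacity, alpha=0.7, eps=1e-3):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.eps = eps          # keeps every transition sampleable
        self.data, self.priorities = [], []

    def add(self, transition, model_error):
        # New transitions are prioritized by how badly the world model predicted them.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((model_error + self.eps) ** self.alpha)

    def sample(self, batch_size):
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        return [self.data[i] for i in idx], idx

    def update_priorities(self, idx, new_errors):
        # After training the world model on a batch, refresh those priorities.
        for i, err in zip(idx, new_errors):
            self.priorities[i] = (err + self.eps) ** self.alpha
```

The only structural difference from standard prioritized experience replay is where the priority number comes from: model surprise rather than TD error.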
That arxiv stuff looks perfectly normal but I kind of hate how it got more and more caricatured as it went through the university press office and hacker news clickbait pipeline.
That’s standard. Others in my PhD cohort and I have had experiences where the copy contained so many minor inaccuracies that we only fixed the things that were flat-out wrong; otherwise we’d have had to rewrite the whole article. It’s the result of a combination of non-experts having a 30-minute conversation with you and then writing from their notes a week later, and the fact that their job is to hype up research so it gets more attention from a broader audience. Everyone I knew said they wouldn’t let that happen to them when the press office called, but insisting on rewriting someone’s whole article because you feel they missed the nuances is a hard stance to take, especially as an early-career researcher.
I've been wondering for a while what the next steps in adding 'inefficiencies' to AI processing would look like, commenting the other day to a friend that what's needed in the next 18 months is getting AI to be able to replicate the Eureka moments in the shower where latent information is reconstructed in parallel to processing tangential topics.
Going from "attention is all you need" to "attention and curiosity is what you need" seems like a great next step!
> getting AI to be able to replicate the Eureka moments in the shower where latent information is reconstructed in parallel to processing tangential topics.
I've been playing with this part specifically and it's really amazing stuff.
The idea is to have the model concurrently produce an internal monologue and the output for a task, while allowing the internal monologue to be as focused or unfocused as the model sees fit.
You end up with situations where it's working on a naming task, for example, and the model starts imagining the warmth of a coffee cup on the desk, or traffic building up outside ahead of an appointment with a non-existent person, and then returns to the task at hand with non-obvious tangents it probably never would have uncovered if it were only predicting tokens related to the original goal of naming something.
It gets even more interesting when you inject variability into the process via the API (for example, telling it to use certain letters pulled from an RNG inside the next iteration of internal monologue).
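Roughly, the loop looks something like the sketch below. It's bare-bones and purely illustrative: `complete()` is a hypothetical stand-in for whatever completion API you're calling, and the prompts are just examples.

```python
import random
import string

def complete(prompt: str) -> str:
    """Hypothetical LLM call -- swap in whichever chat/completions API you use."""
    raise NotImplementedError

def monologue_loop(task: str, steps: int = 5) -> str:
    monologue = ""
    answer = ""
    for _ in range(steps):
        # Inject variability: a few random letters the model must weave into its
        # next stretch of inner monologue (the "RNG via the API" trick above).
        seeds = "".join(random.choices(string.ascii_lowercase, k=3))
        monologue += "\n" + complete(
            f"Task: {task}\n"
            f"Inner monologue so far:\n{monologue}\n"
            f"Continue the inner monologue. It may wander off-topic if it wants, "
            f"but must use the letters '{seeds}' somewhere.\n"
        )
        # Separately ask for the model's current best answer, given both the task
        # and whatever tangents the monologue has drifted into.
        answer = complete(
            f"Task: {task}\n"
            f"Inner monologue so far:\n{monologue}\n"
            f"Give your current best answer to the task only.\n"
        )
    return answer
```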
This sounds fascinating. How do you make an internal monologue? Do you have any reading or samples to look at? Sorry for the ignorance, I’m a dev but not in the AI space.
Maybe I'm missing something (I only did a quick read), but aren't you explicitly telling the model to re-explore low-density regions of the action space? Essentially turning up exploration (and turning down exploitation) with a weighting toward low-density regions?
As someone who isn't an RL person (I'm in generative modeling), have people not tried re-increasing the exploration variable after the model has been initially trained? It seems natural to vary that exploration-exploitation trade-off. A toy sketch of what I mean is below.
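By "exploration variable" I mean something like the epsilon in an epsilon-greedy policy. Re-increasing it after initial training might look like this toy schedule; it's purely illustrative and not what the paper does, since Curious Replay changes which experiences get replayed rather than how actions are chosen.

```python
from typing import Optional

def epsilon_schedule(step: int,
                     decay_steps: int = 50_000,
                     eps_start: float = 1.0,
                     eps_min: float = 0.05,
                     eps_boost: float = 0.5,
                     env_changed_at: Optional[int] = None) -> float:
    """Toy epsilon-greedy schedule: decay exploration, then re-open it on change."""
    if env_changed_at is not None and step >= env_changed_at:
        # Re-increase exploration once the environment is known to have changed,
        # then decay again from the boosted value.
        step -= env_changed_at
        eps_start = eps_boost
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_min - eps_start)
```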
Is there a possible Crafter benchmark that is too high for safety? For instance, a number beyond which it would be dangerous to release a well equipped agent into meatspace with the goal of maximizing paperclips?
You just got hit by a spam filter. I've turned that off now. But please send questions like this to hn@ycombinator.com, as the site guidelines ask (https://news.ycombinator.com/newsguidelines.html). They're off topic in the threads.
I don't like the misleading titles either, but honestly, if you want the real titles you probably want some kind of arXiv feed. The paper title is "Curious Replay for Model-based Adaptation", which is too dry for social media, or whatever Hacker News is, or for the audience of the Stanford university press office. You have to expect juicier (and therefore somewhat misleading or sensationalized) titles if you don't get your news straight from an arXiv feed.
“Patronizing” seems to be a matter of taste. I’ve never considered it to be patronizing; indeed, that’s often much unlike articles which have their title changed.
As far as titles simply differing, much of the time it's because a character limit was hit. I've seen many posts with a comment from the poster calling out their edit to the title, and the character limit is usually cited.
It would be especially difficult to keep the character limit (I think there are legitimate design reasons for this) while also requiring that the title matches the submission as closely as possible. Who decides what words are omitted without it potentially being any of: patronizing, inaccurate, or misleading?