Wouldn't the implication of them being "opposite" be that in some sense they are mutually exclusive? I don't really see evidence of that. Your example of sensory input vs world model weight is a bit flawed, because both of those are extremely multifaceted. One can have extreme weight in sensory input in one sense but not others, as well as extreme weight on world model for certain aspects of life.
You can use RC liberally to avoid thinking about memory, though. The only memory problem left to think about is circular refs, which GC languages also don't fully avoid.
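As an illustration (a contrived TypeScript sketch, since JS itself has no user-facing RC; every name here is invented), here's why a cycle never hits zero:

```ts
// Contrived sketch of reference counting, purely to show why cycles defeat RC.
class RcBox<T> {
  count = 1;
  constructor(public value: T, private onFree: (v: T) => void) {}
  retain(): this { this.count++; return this; }
  release(): void {
    if (--this.count === 0) this.onFree(this.value);
  }
}

interface Cell { next?: RcBox<Cell>; }

const a = new RcBox<Cell>({}, () => console.log("freed a"));
const b = new RcBox<Cell>({}, () => console.log("freed b"));
a.value.next = b.retain(); // a keeps b alive
b.value.next = a.retain(); // b keeps a alive: a reference cycle

a.release(); // 2 -> 1, never reaches 0
b.release(); // 2 -> 1, never reaches 0: neither "freed" line ever prints
```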
I am more shocked by the origin story than by the acquisition.
> Almost five years ago, I was building a Minecraft-y voxel game in the browser. The codebase got kind of large, and the iteration cycle time took 45 seconds to test if changes worked. Most of that time was spent waiting for the Next.js dev server to hot reload.
Why in the hell would anyone be using Next.js to make a 3D game... Jarred has always seemed pretty smart, but this makes no sense. He could've saved so much time and avoided building a whole new runtime by simply not using the completely wrong tool for the job.
That’s just what the Javascript ecosystem has been missing! A runtime built on an unstable pre-1.0 language to go with the npm dependency churn. It’s been way too long since I’ve had to waste a week debugging what turns out to be a compiler/interpreter bug.
These are completely different. Agents (aside from the model inference) are not CPU bound. You gain much more by having a wider user base than whatever marginal CPU cycles you would gain in Rust/Go.
My thinking is that they're trying to capture that market for JavaScript before another AI company does. To put it bluntly, they want to capture the revenue generated by writing JavaScript code, which is currently captured by independent JavaScript developers. The reason for JavaScript is that it's the most ubiquitous language, and I'd guess there are more jobs available for JS/Node than any other language.
Of course, as a JavaScript developer, this may just be my paranoia. <sweats profusely>
I’m guessing he was probably a JavaScript developer who wanted to make a game. He began building it using what he knew, and then he hit that stack's limitations. Rather than switching to something else, he tried to figure out why fast compile times weren't possible, determined that they were, and started building a solution.
This take is interesting given we're all here congratulating Jarred for seeing that there was no tool to solve x, so he made one, and he's now enjoying a likely nice payday. Be the change you want to see in the world?
It kinda reads like a case of survivorship bias. He is the one in a million who reached the good ending despite starting with the wrong choice; though in this case, the wrong choice is what put him on the road to the good ending.
Now the real question is: does the game load significantly better now, or does the performance still suck? In which case it might be more an excessive case of yak-shaving. And if yes, when can we expect the release?
If the output of his attempts at making a better game is a much faster Node runtime, then even if the game is still not usable, does that matter? The end result is still an improvement over something that existed before. The game was just a catalyst.
Isn’t every success story really an example of survivorship bias?
> Isn’t every success story really an example of survivorship bias?
No, survivorship bias (in this context) means wrongly treating a small surviving subgroup as if it were the majority. But the successful subgroup is not always a minority, nor is it always mislabelled.
That’s kind of the opposite though. I guess if you’re saying that there’s an art to building things using the least efficient means possible just as there’s an art to being maximally efficient (like the 4k demo scene) then your point stands.
That is not a good analogy. Games are built using programming languages. JavaScript is a programming language.
Cars are built using metals (usually steel). A better analogy would be trying to build a car out of iron, a really heavy metal, since JS/Node is very resource-heavy, requiring transpilation and so on.
It's not a perfect analogy, but none of my comments are directed at the use of JS for a game; that's a fine choice. It's the use of Next.js that's the issue: it's a framework for server-side rendering of HTML. It serves no benefit if your goal is to make a 3D game; it only adds overhead. If he had not been using it, he would have realised there are a few bundlers out there that are far better than what the Next.js dev server provided at the time.
That's super strange, since React by its nature assumes that components are stateless - which games definitely are not. If you render your game inside a canvas and React then decides it wants to recreate your component, your whole game restarts.
He may have been serving a game in a canvas hosted in a Next.js app, but doing all the actual game work (rendering, simulation, etc.) in something else. That’s a decent approach - Next can handle the header of the webpage and the marketing blog or whatever just fine.
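A minimal sketch of that pattern (the component and startGame names are my inventions): mount the canvas once with React, then keep React entirely out of the game loop.

```tsx
import { useEffect, useRef } from "react";

// Hypothetical game entry point that lives entirely outside React;
// it returns a cleanup function that stops the game loop.
declare function startGame(canvas: HTMLCanvasElement): () => void;

export default function GameCanvas() {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    // Runs once after mount; React never re-renders the canvas contents.
    const stop = startGame(canvasRef.current!);
    return stop; // tears the game down only if React unmounts the component
  }, []);

  return <canvas ref={canvasRef} width={1280} height={720} />;
}
```

The empty dependency array is the key design choice: it keeps React from re-running the effect and restarting the game on re-renders.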
But like... so can an index.html with a script tag? Am I missing something? Where did you read that there was a lot of work involving the header or an attached marketing blog?
My point isn’t that you absolutely need that, just that the negative effects on your game development are pretty minimal if you’re not leaning on the SPA framework for anything related to the game. If your game is going to be embedded into an otherwise normal-ish website, this isn’t a terrible way to go (I’ve done it personally with a game mostly written in Rust and compiled to WASM). You can get gains by splitting your game and web site bundles and loading the former from the latter explicitly (see the sketch below), but they’re not massive if your bundler was already reasonably incremental (or was already esbuild).
Thanks for assuming I “read” about bundlers somewhere, though. I’ve been using (and configuring) them since they existed.
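To illustrate the splitting point above, a sketch of loading the game chunk explicitly from the site code (the "./game" module path and startGame export are assumptions on my part):

```ts
// The site bundle stays small; the game chunk is only fetched when
// the player actually starts it.
const playButton = document.querySelector<HTMLButtonElement>("#play")!;

playButton.addEventListener("click", async () => {
  // Bundlers with code splitting (esbuild with splitting enabled,
  // webpack, etc.) emit this as a separate chunk loaded on demand.
  const { startGame } = await import("./game");
  startGame(document.querySelector("canvas")!);
});
```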
An index.html with script files would still benefit from a bundler. You can have a very minimal React footprint and still want to use React build tools just for bundling.
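For example, a minimal standalone build script sketch using esbuild's JS API (the entry and output paths here are assumptions); this works just as well for a plain index.html + script tag page:

```ts
// build.ts - run once (or with a watcher) to produce the script the page loads.
import * as esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["src/main.ts"], // your app or game entry
  bundle: true,                 // inline node_modules imports into one file
  minify: true,
  sourcemap: true,
  outfile: "dist/main.js",      // referenced from index.html's <script> tag
});
```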
What effect do you imagine Next.js has on a bunch of code manipulating an HTML canvas? For vanilla code directly using browser APIs it’s basically just a bundler configuration, and while it’s not optimally configured for that use case (and annoying for other reasons) it’s probably better than what someone who has never configured webpack before would get doing it themselves.
Okay, but it’s a web game. Those will make up less than 0.1% of the downloaded bytes required to render the first frame of the game. One image asset will dwarf the entire gzip/brotli Next.js/React framework.
What is the use case for bundling next.js with the web game? Just the layout of the page surrounding the game canvas? It just seems unnecessary, that's all. Traditionally, software development in general and game development in particular has tried to avoid unnecessary overhead if it doesn't provide enough value to the finished product.
It's obvious why he didn't write the game in x86 assembly. It's also obvious why he didn't burn the game to CD-ROM and ship it to toy stores in big box format. Instead he developed it for the web, saving money and shortening the iteration time. The same question could be asked about next.js and especially about taking the time to develop Bun rather than just scrapping next.js for his game and going about his day. It's excellent for him that he did go this route of course, but in my opinion it was a strange path towards building this product.
Why would he stress about a theoretical inefficiency that has very little effect on the finished product or development process? Especially one that could be rectified in a weekend if needed? The game industry is usually pretty practical about what it focuses on from a performance perspective with good reason. They don’t build games like they’re demosceners min-maxing numbers for fun, and there’s a reason for that.
I also wonder how many people who sing the praises of an HTML file with a script tag hosted by Nginx or whatever have ever built a significant website that way. There’s a lot of frustrating things about the modern JS landscape, but I promise you the end results were not better nor was it easier back before bundlers and React.
Bun and Deno's goals seem quite different, I don't expect that to change. Bun is a one stop shop with an ever increasing number of built-in high-level APIs. Deno is focused on low level APIs, security, and building out a standard lib/ecosystem that (mostly) supports all JS environments.
People who like Bun for what it is are probably still going to, and same goes for Deno.
That being said, I don't see how Anthropic is really adding long-term stability to Bun.
You could just ask the user to add you to their contact list. Like:
Emails will be sent from feed@example.com. If you're not seeing any email, please check your spam folder and add this address to your contact list, ...[rest of notice].
I think the important part in that statement is the "most useful information", the size itself is pretty subjective because it's such an abstract notion.
Evolution gave us very good spatial understanding/prediction capabilities, good value functions, dexterity (both mental and physical), memory, communication, etc.
> It's pretty obvious that artificially created models don't have synthetic datasets of the quality even remotely comparable to what we're able to use.
This might be controversial, but I don't think the quality or amount of data matters as much as people think, if we had systems capable of learning in a way similar enough to how humans and other animals do. Much of human knowledge has accumulated in a short time span, and independent discovery of knowledge is quite common. It's obvious that the corpus of human knowledge is not a prerequisite for general intelligence, yet this corpus is what's chosen to train on.
I found Enderal so compelling that I watched a full playthrough, it's the only game I've ever done that with. I wanted to play it all the way through myself, but my PC at the time had very strange audio cracking issues with Skyrim so I wasn't able to get far.
There's a lot of concern in the comments here about what this means for ARC. The size of this investment, while large, isn't enough to warrant jeopardizing ARC, though. Intel has a responsibility to all shareholders, and diminishing ARC would be a bad move for overall shareholder value.
If Nvidia did try to exert any pressure to scrap ARC, that would be both a huge financial and a geopolitical scandal. It's in the best interest of the US to not only support Intel's local manufacturing but also its GPU tech.
Not TS, but there are plans to allow something similar with special type annotation syntax. The runtimes still won't do anything with that extra info though, it'll get stripped just like comments.
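As a sketch of what that stripping behaves like in practice - Node's --experimental-strip-types flag (and the TC39 type annotations proposal) treats everything type-related below as inert:

```ts
// Run with: node --experimental-strip-types greet.ts
// All annotations are discarded before execution, exactly like comments;
// nothing is checked at runtime.
function greet(name: string): string {
  return `Hello, ${name.toUpperCase()}`;
}

const input: unknown = "world";
console.log(greet(input as string)); // the `as string` assertion is also
                                     // stripped; no runtime cast happens
```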