It seems like bun caches the manifest responses. PNPM, for example, resolves all package versions when installing (without a lockfile), which is slower. The registry does have a 300 second cache time, so not faulting you there, but it means your benchmark is on the fully cached path, which you'd only hit when installing something for the first time. Subsequent installs would use the lockfile and bun and PNPM seem fast* in that case.
If I install a simple nextjs app, then remove node_modules, the lockfile, and the ~/.bun/install/cache/*.npm files (i.e. keep the contents, remove the manifests) and then install, bun takes around ~3-4s. PNPM is consistently faster for me at around ~2-3s.
I'm not familiar with bun's internals so I may be doing something wrong.
One piece of feedback, having the lockfile be binary is a HUGE turn off for me. Impossible to diff. Is there another format?
* I will mention that even in the best case scenario with PNPM (i.e. lockfile and node_modules) it still takes 400ms to start up, which, yes, is quite slow. So every action APART from the initial install is much MUCH faster with bun. I still feel 400ms is good enough for a package manager which is invoked sporadically. Compare that to esbuild which is something you invoke constantly, and having that be fast is such a godsend.
> It seems like the main thing that bun does to stay ahead is cache the manifest responses. PNPM, for example, resolves all package versions when installing (without a lockfile), which is slower.
This isn't the main optimization. The main optimization is the system calls used to copy/link files. To see the difference, compare `bun install --backend=copyfile` with `bun install --backend=hardlink` (hardlink should be the default). The other big optimization is the binary formats for both the lockfile and the manifest. npm clients waste a lot of time parsing JSON.
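For intuition, here is a minimal sketch (using Node's fs API as a stand-in, not Bun's actual code, and with made-up paths) of what the two backends boil down to per installed file:

```ts
import { linkSync, copyFileSync } from "node:fs";

// Hypothetical paths, for illustration only.
const cached = "/home/user/.bun/install/cache/left-pad/index.js";
const target = "node_modules/left-pad/index.js";

// --backend=hardlink: one metadata-only syscall; no file data is copied,
// node_modules just gets another directory entry pointing at the cached inode.
linkSync(cached, target);

// --backend=copyfile: every byte of the file is read and written again.
// copyFileSync(cached, target);
```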
The more minor optimizations have to do with reducing memory usage. The binary lockfile format interns the strings (very repetitive strings). However, many of these strings are tiny, so it's actually more expensive to store a hash and a length separately from the string itself. Instead, Bun stores the string as 8 bytes, and one bit says whether the entire string is contained inside those 8 bytes or whether it's a memory offset into the lockfile's string buffer (since 64-bit pointers don't use the full address space and bun currently only targets 64-bit CPUs, this works).
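If it helps to picture it, here's a rough sketch of that kind of tagged 8-byte string slot (my guess at the shape, purely illustrative; Bun's real layout lives in its Zig source):

```ts
// Hypothetical layout: if the high bit of byte 7 is set, the slot holds the string
// inline (up to 7 bytes); otherwise it holds a 32-bit offset and a 31-bit length
// into the lockfile's shared, deduplicated string buffer.
const INLINE_FLAG = 0x80;

function readInternedString(slot: DataView, stringBuffer: Uint8Array): string {
  const decoder = new TextDecoder();
  if (slot.getUint8(7) & INLINE_FLAG) {
    // Inline case: no separate hash or length is stored; the bytes are the string.
    let end = 0;
    while (end < 7 && slot.getUint8(end) !== 0) end++;
    return decoder.decode(new Uint8Array(slot.buffer, slot.byteOffset, end));
  }
  // Out-of-line case: point into one big buffer so each unique string exists once.
  const offset = slot.getUint32(0, true);
  const length = slot.getUint32(4, true) & 0x7fffffff;
  return decoder.decode(stringBuffer.subarray(offset, offset + length));
}
```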
yarn also caches the manifest responses.
> If I install a simple nextjs app, then remove node_modules, the lockfile, and the ~/.bun/install/cache/*.npm files (i.e. keep the contents, remove the manifests) and then install, bun takes around ~3-4s. PNPM is consistently faster for me at around ~2-3s.
This sounds like a concurrency bug with scheduling tasks from the main thread to the HTTP thread. I would love someone to help review the code for the thread pool & async io.
> One piece of feedback, having the lockfile be binary is a HUGE turn off for me. Impossible to diff. Is there another format?
If you do `bun install -y`, it will output as a yarn v1 lockfile.
Of course, I can't say for sure that he looked at the fastest possible way to parse json here, but my intuition would be that if he didn't, it's because he had an educated guess that it'd still be slower.
You don't need to go straight to simdjson et al.; something like Rust's serde, which deserializes to typed structs with data like strings borrowed from the input, can be very fast.
Nobody is arguing that JSON is as performant as binary formats. What the others are saying is that the amount of JSON in your average lock file should be small enough that parsing it is negligible.
If you were dealing with a multi-gigabyte lock file then it would be a different matter but frankly I agree with their point that parsing a lock file which is only a few KB shouldn’t be a differentiator (and if it is, then the JSON parser is the issue, and fixing that should be the priority rather than changing to a binary format).
Moreover the earlier comment about lock files needing to be human readable is correct. Being able to read, diff and edit them is absolutely a feature worth preserving even if it costs you a fraction of a second in execution time.
> I agree with their point that parsing a lock file which is only a few KB
You mean a few MB? NPM projects typically have thousands of dependencies. A 10MB lock file wouldn't be atypical and parse time for a 10MB JSON file can absolutely be significant. Especially if you have to do it multiple times.
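If you want a concrete number for your own project, a quick measurement like this (filename is just an example) makes the cost visible; on a multi-megabyte lockfile a single parse typically lands in the tens of milliseconds:

```ts
import { readFileSync } from "node:fs";

// Time a single JSON.parse of an existing lockfile. Run it a few times in a loop
// to see how quickly repeated parses add up.
const text = readFileSync("package-lock.json", "utf8");
const start = performance.now();
JSON.parse(text);
const elapsed = performance.now() - start;
console.log(`${(text.length / 1e6).toFixed(1)} MB parsed in ${elapsed.toFixed(1)} ms`);
```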
> Being able to read, diff and edit them is absolutely a feature worth preserving even if it costs you a fraction of a second in execution time.
You can read and edit a SQLite file way easier than a huge JSON file.
GitHub (disclosure: where I work) does respect some directives in a repo’s .gitattributes file. For example, you can use them to override language detection or mark files as generated or vendored to change diff presentation. You can also improve the diff hunk headers we generate by default by specifying e.g. `*.rb diff=ruby` (although come to think of it I don’t know why that’s necessary since we already know the filetype — I’ll look into it)
In principle there’s no reason we couldn’t extend our existing rich diff support used for diffing things like images to enhance the presentation of lockfile diffs. There’s not a huge benefit for text-based lock files, but for binary ones (if such a scheme were to take off) it would be a lot more useful.
Any way to use `.gitattributes` to specify a file is _not_ generated? I work on a repo with a build/ directory with build scripts, which is unfortunately excluded by default from GitHub's file search or quick-file selection (T).
Does this really work for jump to file? (We're not talking language statistics or suppressing diffs on PRs, which is mostly what the linguist readme covers.)
> File finder results exclude some directories like build, log, tmp, and vendor. To search for files within these directories, use the filename code search qualifier.
(The inability of quick jumping to files from /build/ folder with `T` has been driving me crazy for YEARS!)
Correct me if I'm wrong, but checking those two files:
I don't see `/build` matching anything there. So to me this `/build` suppression from search results seems like controlled by some other piece of software at GitHub :/
I checked and you're right: The endpoint that returns the file list has a hardcoded set of excludes and pays no attention to `.gitattributes`.
I think it's reasonable to respect the linguist overrides here so I'll open a PR to remove entries from the exclude if the repo has a `-linguist-generated` or `-linguist-vendored` gitattribute for that directory [1]. So in your case you can add
build/** -linguist-generated
to `.gitattributes` and once my PR lands files under `build` should be findable in file-finder.
Thanks for pointing this out! Feel free to DM me on twitter (@cbrasic) if you have more questions.
On Linux, not yet. I don't have a machine that supports reflinks right now and I am hesitant to push code for this without manually testing it works. That being said, it does use copy_file_range if --backend=copyfile, which can use reflinks.
Still don't understand why we even need all these inodes. The repo is centrally accessible (and should be read-only, btw). Resolving that shouldn't be a problem. It's been more than a decade and npm is still a mess.
I'm ultra excited about Bun being finally open sourced, congrats on the amazing progress here Jarred!
Since JSC is actually compilable to Wasm [1] and Zig supports WASI compilation, I wonder how easy it would be to get it running fully in WAPM with WASI. Any thoughts on how feasible that would be?
Congratulations on the release! You are doing impressive work with bun. I find the built-in sqlite particularly exciting, and I cannot wait to move all my projects to bun. Selfishly speaking (my 2012 mbp doesn't support AVX2 instructions), I hope that now that the project is public, since you are going to get a lot of issue reports about the failure on install, you will find some time to get back to issue#67. Thank you, and keep up the excellent work.
Yeah, the install functions of npm/yarn/pnpm are all incredibly slow. And they also seem to get slower super-linearly with the number of dependencies. I have one project where it can take minutes (on my 2015 MacBook - admittedly it’s quicker on my brand new machine) just to add one dependency and re-resolve the lock file. If that can be solved by a reliable tool I’d definitely switch!
This is one of, if not the most, insane things in web dev at the moment. Git can diff thousands of files between two commits in less time than it takes to render the result on screen. But somehow it can take actual minutes to find out where to place a dependency in a simple tree with npm. God, why?
> it can take actual minutes to find out where to place a dependency in a simple tree with npm. God, why?
npm is famous for a lot of things, but none of them is "being well engineered".
To this day, npm still runs the `preinstall` script after dependencies have actually been downloaded to your disk. It modifies a `yarn.lock` file if you have one on disk when running `npm install`. With lots of things like these, it's hardly surprising that installs are slow.
I don't know exactly since when, but recently I was caught off guard after issuing `npm i` by mistake in a yarn project. It modified `yarn.lock`, changing some, if not all, of the registry entries from the yarn package registry to the npm package registry.
People building language tooling often use the language itself, even if it is not very suitable for the task at hand.
This happens because the tooling often requires domain knowledge which they have and if they set out to write tooling for a language they tend to be experienced in that language.
> Yarn v2 is backwards compatible though. You just need to use the node_modules "linker" (not the default one) and it's ready to go.
Last I checked, not quite. Yarn 2+ patches some dependencies to support PnP, even if you don’t use PnP. I discovered this while trying out both Yarn 2 and a pre-release version of TypeScript which failed to install—because the patch for it wasn’t available for that pre-release. I would have thought using node_modules would bypass that patching logic, but no such luck.
I have just discovered yarn2 / yarn3. The main advantages over npm / pnpm seem to be the Zero Install philosophy [1] and the Plug'n'Play architecture [2]. Do you have any feedback about these features?
By the way, the yarn2 / yarn3 project is hosted on a distinct repository [3].
The benchmark source code links on the homepage are "Not found".
Also a few questions:
What do you attribute the performance advantage to? How much of it is JavaScriptCore instead of V8, versus optimized glue binding implementations in the runtime? If the latter, what are you doing to improve performance?
Similarly for the npm client: how much is just that bun is the only tool in a compiled, GC-free language, versus special optimization work?
How does Zig's special support for custom allocators factor in?
I've been following Jarred's progress on this on Twitter for the last several months and the various performance benchmarks he's shared are really impressive:
Easy to dismiss this as "another JS build tool thing why do we need more of this eye roll" but I think this is worth a proper look — really interesting to follow how much effort has gone into the performance.
As far as I know, one guy made this (https://twitter.com/jarredsumner) working 80h-90h on it a week. His twitter has some cool performance insights into JS and JS engines. It's the biggest Zig codebase too I think.
If there was one comment or tweet that led me to follow him, it was this [1]
"1ms is an eternity for computers"
When was the last time you heard a web developer, frontend, backend or web tooling dev, state that? [2] It is always "oh, the network latency dominates" or "the DB response time dominates". It is only one tenth of a second (100ms), it doesn't matter. It is fast enough™.
It's because the web developer is so far removed from the hardware; we work in Typescript, which is transpiled to Javascript, which is minified, optimized and compressed before it's sent to the browser, where it's run in a JS engine in a browser in an operating system where there's a few more steps to the actual hardware.
And that's plain JS, most people don't even work in plain JS but in a framework where they don't just say "var x = y" but "publish this event into my Observable stack which triggers a state update which will eventually, somehow, update my component which will then eventually update the DOM in the browser", after which the browser's own whole stack of renders takes over.
Meanwhile in video games you can say `frame[0][0] = 255` to set a pixel to white, or whatever.
I think it's a matter of levels of abstraction; with webapps, I'd argue they've gone even further than Java enterprise applications in the 90's / 2000's.
More web developers need to try out game development, it's a similar enough domain in terms of rendering UI and being responsive, but there's absolutely no tolerance for the kind of time wastage that web developers seem to accept as a given.
I'm really excited about bun – it represents an opportunity to deeply speed up the server side javascript runtime and squeeze major performance gains for common, real-world use cases such as React SSR.
Also, Jarred's twitter is a rich follow full of benchmarks and research. If nothing else, this project will prove to be beneficial to the entire ecosystem. Give it some love.
Faster startup -> lower serverless bills for a lot of shops. Shaving 25ms off a request might not seem like it's important, but if that saves you a million serverless seconds in aggregate, you just earned a bonus.
The open source JS ecosystem, though, has soooooo many folks working on it that you can almost always find something you need. Certainly compared to something like Java, which has a couple "800 pound gorilla" projects, but then it falls off pretty quickly.
Of course, this is very much a double-edged sword, with all the "leftpad" type dependency nightmares and security issues, as well as the "Hot new JS framework of the month" issues. Still, I think the dependency issues are solvable and dependency tooling is getting better, and the frameworks issue has calmed down a bit such that it's easy to stick to a couple major projects (e.g. React, Express) if so desired, but more experimental, cool stuff is always out there.
Yeah, definitely a double-edged sword. So much of the JS ecosystem is crap written by keen beginners who don't know what they're doing.
You might say "so what they're doing it for free, you don't have to use their stuff", but often you do because the existence of a sort-of-working solution means that other people are much less likely to write a robust solution for the same problem. So everyone ends up using the crap one.
My point was that JS has its 800 pound gorillas too, and so if you only want to use that, you can.
But it has so much of a broader ecosystem of other tools that if you're willing to take that risk it's an option. Java basically just doesn't have that.
Well those gorillas are really hard to maintain. Suppose you found that it wasn't quite what you wanted. Would you poke your head into the Kafka codebase?
Java Gorillas are far, far easier to maintain than JS mayflies. Static typing, top-notch refactoring tools, excellent IDE's with deep language analysis, stable ecosystems, minimal dependency chains, better performance, better profiling tools, trustable repositories with well-supported choices for self-hosting - the list goes on and on.
PS: I have already poked my head into the Kafka codebase in the past. Not the best written project and also confusing because of the Scala mix, but far more readable than several I have seen. And Java makes it easily navigable. Can even auto-gen diagrams to grok it better.
At the last company I worked at I only used C# + WPF (it was horrible). A couple jobs ago I only used R. There are companies with entire divisions that never have to touch javascript, and I'm certain there are companies that never use it. There's a very large world of programmers working for insurance/Healthcare/embedded/military/government that is nothing like modern web dev.
Definitely beta, but really awesome to see someone working on this stuff.
"I spent most of the day on a memory leak. Probably going to just ship with the memory leak since I need to invest more in docs. It starts to matter after serving about 1,000,000 HTTP requests from web streams"
I only added the additional note after the , to clarify that it is still cool stuff, even if it is beta and has memory leaks.
At least in my usecase, I do about 35m hits / day... so this would fall over in less than an hour. 1m isn't that large of a number and the author is willing to shrug that off until after launching.
One of the things that makes frontend dev a nightmare is having to wrangle so many different tools and configurations to build a site, even a simple one. The only next step to improve the situation is all-in-one tools. We're lucky that people are trying to tackle this hard problem.
parcel is great, I always use it for personal projects and it works pretty well
Normally I start with something else (webpack, rollup, whatever happened to be there with the example I'm starting from), then when I hit some roadblocks I just parcel index.html and I have something working.
Parcel is great, until it isn’t. I regularly try to switch to Parcel and it’s just impossible on larger projects. If you start with parcel, great! (maybe, as long as you just need to bundle 2 modules and that’s it)
You don't use them in production though. Your code is built in a pipeline somewhere using those tools, typically CI/CD, and then the artifacts from that process are what gets deployed to production. If you're actually running Webpack on a production server then you're doing something very unusual.
That's not what anyone means when they say "in production" though. When people talk about things being "in production" they mean "on a production server".
No, what most people mean when they say "in production" is "in a real project" in the sense that it is or will be published and used. It means something that is more than a demo / PoC / etc. You're being so ridiculously pedantic it's shocking.
No, what most people mean when they say "in production" is "in a real project" in the sense that it is or will be published and used.
Most companies have things that run "in dev", "in staging", "in CI", and "in production". These map directly to some tools - for example, React has "development" and "production" modes. When someone says a server or a database or a tool is used "in production" they're usually referring to the live environment that users access. Most people use tools like Webpack locally to run code when they're doing dev work, and in CI to build things. If someone said to me "We're running Webpack in production" then I would have questions.
If you use "in production" to mean "anywhere in a project", then how do you differentiate between a staging environment and a production environment? Do you talk about "the staging environment that's in production"? That would be confusing...
I want to love this so much, kinda sad that Windows support is non-existent outside of WSL (which I try to use as a last-resort option).
I love the bundling of most tasks in one app, especially in an environment where I had friends refuse to interact because of the "framework of the month" problem.
I just wish it didn't rep Zig this much. I'm hyped for Zig as much as the next guy, but the website mentions it twice back to back and I really think we should stop going "it's better cause it's written in X".
It's not enough to build something great if nobody knows about it. Marketing is just as important to the success of a project as engineering. By giving such a generous shoutout to Zig, complete with a call for donations, Jarred has effectively created a symbiotic relationship with the Zig project, a win-win situation where both projects are boosted up by each others' spotlights.
A shoutout is very appreciated and putting Zig in the title line definitely is what caught my interest (so yes Zig is being used to promote bun probably more than bun is promoting Zig at this point) but the writing specifically on the website is a bit on the nose, don't you think? "Why is Bun fast? Because of Zig"
Rust is already somewhat infamous for this ("Rewrite it in Rust" is a meme) and has caused it to develop a bit of a stigma, at least in my circles.
I'm still rooting for Zig to get its place among the big ones (and bun seems definitely a nice way to push for it) I just hope that happens without creating the annoying cult-like behaviors that plagued the crab language.
Bun relies on Zig and Zig is an unfinished product that relies on donations to survive. A shout-out to Zig is not only a nice gesture but also a smart one, since it's in Bun's best interest for Zig to survive at least until v1.
The general concern about "written in Zig" being annoying is fair. I think it's a different beast when paired with a call to donate but regardless, if RIIZ is what worries you most, then you can sleep safe because our motto is "Maintain It With Zig", a conscious rejection of "Rewrite It in *".
Purely from better ergonomics[1], not a compile-time guarantee like Rust has. If your tools for concurrency are better, you're less likely to make a mistake.
Sometimes it is better if written in X. Tools that are themselves written in JS tend to be slow, compared to something like esbuild or swc, which, being written in compiled languages, are very fast.
This is super cool and if I was working on more personal projects I would be tempted. In an enterprise context, moving to a reimplementation of core Node APIs is a terrifying prospect. There are infinite possible subtle problems that can appear and debugging even within official Node has been a challenge in the past. I don't know how this concern can be alleviated without seeing proven usage and broader community acceptance.
It is scary, but it already seems more stable and backward-compatible than Deno. With some testing and further stabilization, I have a feeling bun might be a much more feasible and beneficial move.
It's not backwards compatible, it's just Node compatible. Deno is not, and it's stated as much clearly. Thus the suggestion that bun is more backward-compatible than Deno is incorrect and speaks to a fundamental misunderstanding of the two projects.
Sure, a poor choice of words. Node compatibility is what matters to me (and likely many others) practically as a node developer. Deno's benefits don't outweigh the costs of migrating to it and adopting it more broadly, in my opinion.
Tried deno briefly the other day, was a real pain. Just wanted to hash some files. They had an std hash lib but dropped it in favor of web crypto, which bizarrely doesn't have a hash.update method for streaming?? Not very good. Prefer node's impl.
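For reference, the gap being described looks roughly like this (my illustration, comparing Node's streaming createHash with the Web Crypto one-shot digest):

```ts
import { createHash } from "node:crypto";
import { createReadStream } from "node:fs";

// Node-style: hash.update() accepts chunks, so a large file can be hashed in
// constant memory while streaming it from disk.
async function sha256Streaming(path: string): Promise<string> {
  const hash = createHash("sha256");
  for await (const chunk of createReadStream(path)) hash.update(chunk as Buffer);
  return hash.digest("hex");
}

// Web Crypto: crypto.subtle.digest() is one-shot, so the whole input has to be
// buffered in memory first -- there is no update()/streaming equivalent.
async function sha256OneShot(bytes: Uint8Array): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join("");
}
```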
Performance is one thing (the benchmarks are probably wrong though), but will it solve any of the headaches you get with NodeJS? I for instance have 7 different NodeJS versions installed just to compile all the projects I use. The oldest version is currently 6(!). The NPM dependency hell is best in class. NodeJS LTS is nothing but a tag. Compliance with ECMAScript updates has not been great. It's still a mess with NodeJS to figure out if a package will work on a specific version. Still poor performance in comparison to other tech. And so on...
There will always be incompatibilities between versions. Ideally the tool would manage its own version, maybe with vendoring. Yarn does this optionally and it works well
At least with this you wouldn’t need to manage versions of the package manager and runtime separately
I've been following this project for a while now, and it's incredibly ambitious. It will take a while to reach its full potential but Jarred is doing an extraordinary amount of very good work on it, and I think there is a very good chance this will turn out to be a pretty big deal.
Impressive work here. Congrats Jarred on the launch of Bun!
I'm eager to see how the different runtimes will shine on the edge side. Bun is clearly positioned as one option and the focus on this is stated on the main page.
I believe the traction for Bun will depend a lot on adoption. I see the point of having the fastest™ runtime on Linux and macOS, but that comes from using specific syscalls. If we move to the edge, it's not clear to me how this will be implemented. Maybe those syscalls are not available, or sandboxing makes things more difficult. The requirements for making it compatible may reduce its performance.
Hey, excited to see more players in this space and more alternatives which I believe is a win for users.
If there is anything Node core can do better for you to allow better interop ping us. Also - I'm not sure if you're involved with the WinterCG stuff (I _think_ maybe?) but if you're not please come by!
Note that the bun install script [1] seems to be hosted as an HTML file, not as a text file. I'm not sure to what extent that causes issues, but it seems atypical.
Just took it for a spin w/ SQLite. Pretty nice that it's got TypeScript support and SQLite support out of the box with no dependencies. The same project in Node pulled in over 100 dependencies and was about 1/3 as fast (super basic HTTP endpoint serving 2 rows of data). Bun performed roughly on par with Go. Again, a trivial test, but pretty exciting, nonetheless.
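For anyone curious what that looks like, here's a sketch along those lines (my own minimal version; the schema and names are made up) using Bun's built-in SQLite driver and HTTP server, with zero npm dependencies:

```ts
import { Database } from "bun:sqlite";

// Open (or create) a database file and make sure there's something to serve.
const db = new Database("data.sqlite");
db.run("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)");

const listItems = db.query("SELECT id, name FROM items LIMIT 2");

// Serve the two rows as JSON on every request.
Bun.serve({
  port: 3000,
  fetch() {
    return new Response(JSON.stringify(listItems.all()), {
      headers: { "Content-Type": "application/json" },
    });
  },
});
```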
I'm not really sure what your argument is. It sounds like you're complaining that he used a library to handle the actual javascript parsing/evaluation. There are still platform-specific things that need to be implemented and plugged in to the JS environment to be able to do anything useful (for instance, starting up a server and binding to a port isn't something that the OOTB JavaScriptCore sandbox is going to let you do - you have to implement that separately and plug it in yourself). The transpiler and npm client are completely separate things.
If you read "runtime" as "extensions that allow you to do actually useful stuff" instead of "parser/evaluator", I don't see an issue.
I just pointed out that there is a different interpretation of the sentence that makes it work. It's a little arrogant to assume that not only is your interpretation the "correct" one, but that it's the _only_ one.
There is precedent for this too -- for instance, the ".NET runtime" is not only the JIT/AOT compiler for CIL, but also the libraries providing the .net api + standards/mechanisms for loading other libraries, etc.
> Runtime describes software/instructions that are executed while your program is running, especially those instructions that you did not write explicitly, but are necessary for the proper execution of your code. [...] Runtime code is specifically the code required to implement the features of the language itself.
HatTip [1] just added preliminary Bun support [2].
(HatTip: write universal server code that runs anywhere: Node.js, Edge, Deno, Bun, ... Also for library authors who want to provide a middleware that works anywhere, instead of locking their users into a single JS server runtime.)
@Jarred: We'd be curious to know what you think of HatTip!
The benchmark numbers for React server-side rendering are really impressive. How does bun manage to be so much faster, especially considering that React SSR is actually fairly compute-intensive (building up a DOM tree and all)?
Gaming benchmarks is pretty easy if you have a very repetitive task. A good compiler developer should be able to make his or her compiler best everyone else on one benchmark.
Do you just want a wish list? How about configurable garbage collectors, cacheable JIT outputs, or a standardized bytecode format to facilitate language interop?
Those are great points. I asked because I have seen people get excited about new JS runtimes a lot when Deno came out, and not understanding that it was using V8 under the hood. So many people thought they wanted a "new TypeScript runtime" that would "compile to WASM" without really thinking through how that would work or what it would be. Spoilers: it would work the same as V8.
But thanks for those specific wishlist items, which are much more sensible! Isn't your last item WASM though, with its interface types proposal?
esbuild is much more mature than bun. The author of esbuild cares a lot about compatibility with other bundlers and stability. Moreover, it is already insanely fast. I am not sure there is any interest in switching from esbuild to bun for bundling or transpiling code.
By the way, I think that bun does not apply the patches made to esbuild since the date the code was ported.
From the page: “Why is Bun fast? An enormous amount of time spent profiling, benchmarking and optimizing things. The answer is different for every part of Bun, but one general theme: Zig's low-level control over memory and lack of hidden control flow makes it much simpler to write fast software.”
I did a quick comparison for my own purposes, using them as an "absolutely stupid runner" which boots a fresh VM, runs some JavaScript that converts a piece of JSON, and gets out (this is likely mostly measuring VM boot and cleanup only). Bun was crazy fast: a factor of 2 over libmozjs, and a factor of 3 over nodejs.
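Something along these lines (my illustration, not the actual script, written against Node-style APIs) is enough to make startup and teardown dominate the measurement, since the actual work on a small file takes well under a millisecond:

```ts
import { readFileSync, writeFileSync } from "node:fs";

// Read a small JSON file, transform it trivially, write it back out, exit.
// Invoked as e.g. `bun convert.ts` or `node convert.js`, so nearly all of the
// measured wall-clock time is runtime boot and cleanup.
const input = JSON.parse(readFileSync("data.json", "utf8")) as Record<string, unknown>;
const output = Object.fromEntries(
  Object.entries(input).map(([key, value]) => [key.toUpperCase(), value]),
);
writeFileSync("out.json", JSON.stringify(output, null, 2));
```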
Bun won my heart because:
- it uses Zig and not Rust
- it uses JavaScriptCore and not V8
- it has built-in support for my favorite database: sqlite
- it is all-in-one
However, the fact that it is NOT using V8 is quite a refreshing change. We need diversity in the JS ecosystem other than Google's V8. Didn't realize JavaScriptCore is faster than V8.
This is the same weak argument that's being used to sell native compilation on Java. Tons of graphics showing startup time improvement, never a word on what happens after. But the only cases where this metric makes a real difference are CLI utilities and Serverless (if you know others, please tell me)
That’s not true, as far as I've read. I feel it is spelled out pretty thoroughly: Startup time and memory footprint are way lower, but peak performance is also lower: JIT outperforms static compilation due to it having more information. There are some tools to gather such information by running the program for a while, gathering stats, but JIT still outperforms.
True. I do allow myself some leeway when I get deeper in the conversation tree but it's still noise I'm adding since HN is much flatter than other forums. I really need an outlet that will fulfill my need for expression but without going full Reddit.
Or maybe I just need physical colleagues and coffee break humor.
While true, I think there's a lot of "this is better because it's been written in Rust" software around these days. The emphasis is on the language, not the problem being solved.
Memory leaks in Rust might be less common than in a fully GC'd language, but saying Rust prevents them or makes them less common than e.g. Zig/C++ seems like a stretch.
Leaks are absolutely considered safe by Rust. Just look at Box::leak. And if we're comparing it to GC'd "leaks", those are typically hashmaps growing out of control, and that can also happen in Rust.
That's mostly a thing teenagers are doing to learn CS, when it's not it's because the software is vulnerable or slow and benefits from a rewrite in a safe language.
It is effectively bait, irrespective of intent. Why would you say "this pastry is great: it's blueberry and not chocolate"? Is the implication that 50% of the enjoyment is on account of what it is not? Just weird.
This is a very interesting difference too! Can you please tell me what you like more about Zig over Rust and JavaScriptCore over V8? Very interesting, cause I've always thought V8 is superior to JSC (cause V8 is Chrome and JSC = Safari), and Zig is very interesting too.
Also some things are wayyy slower on v8 compared to other engines. (I've only checked spidermonkey.) In my experience v8 is faster at running WASM and spidermonkey is faster at running optimized js (not quite asm.js but using some tricks like |0) and starting workers.
Every JS engine has interesting design choices. V8 has more documentation than the others (the V8 blog is a great resource [1]). This makes it easier to get an overview of V8 internals. It would be nice to have more internals overviews from other engines, in particular SpiderMonkey. It is hard to find up-to-date info.
Unrelated question, maybe someone knows: each time I've opened a JavaScriptCore or WebKit source file, I've almost never seen any comments. Is it just me and my bad sampling, or does this code have almost no documentation at all? That's really uncanny.
JavaScriptCore is a constituent part of WebKit, which is pretty much available on washer-dryers these days. But the macOS/iOS/etcOS framework part of JavaScriptCore that adds Objective-C/Swift layers is only available for those platforms, yes: https://developer.apple.com/documentation/javascriptcore
Congrats! Cool to see a new class of JS runtimes springing up. Lots to be excited about here, but cold start time seems like a game changer for building at the edge.
> An enormous amount of time spent profiling, benchmarking and optimizing things. The answer is different for every part of Bun, but one general theme: Zig's low-level control over memory and lack of hidden control flow makes it much simpler to write fast software.
With modern hardware, accessing memory efficiently is key to writing fast software. Programs written in programming languages that don't let the developer control how data is laid out in memory usually face an uphill battle to be fast.
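To make that concrete with a small, Bun-unrelated illustration of what control over layout means: the same data as an array of objects versus flat typed arrays, which is about as close as JavaScript gets to choosing its own memory layout.

```ts
// Array of objects: each point is a separate heap allocation reached through a
// pointer, and the engine decides how fields are laid out -- cache-unfriendly
// when scanning millions of elements.
type Point = { x: number; y: number };
const points: Point[] = Array.from({ length: 1_000_000 }, (_, i) => ({ x: i, y: i * 2 }));

// Struct-of-arrays with typed arrays: values sit contiguously in memory, so a
// linear scan touches predictable, densely packed cache lines.
const xs = new Float64Array(1_000_000);
const ys = new Float64Array(1_000_000);
for (let i = 0; i < xs.length; i++) {
  xs[i] = i;
  ys[i] = i * 2;
}
```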
This sounds extremely impressive! One question: Most of my nodejs code relies heavily on a custom wrapper I built around node-mysql ... for some rather complicated historical reasons, not node-mysql2. In general, what database modules are supported out of the box (besides sqlite3)?
heh. I understand. No one uses mine either. I wrote it around the core to cache server-side prepared statements and old PDO style bindings (WHERE `field`=:var)
I will check yours out if you post a link! or is it just node-mysql3?
Congratulations! This is one of those epic projects you only get to see a few times in your career. Potential/hopeful game changer. Like jQuery or Node.
I'd like to know more about the bundled .bun files. What are they? How are they used? Are they usable in the browser too?
I wonder if bun also has a different approach to security when it comes to installing packages and running their scripts and/or using the file system at runtime, e.g. not giving access to the whole machine's file system like Node does?
It's a shame proprietary Discord is their only communication option, and proprietary GitHub their only Git mirror. They're even advertising the Discord in the CLI (https://github.com/Jarred-Sumner/bun/blob/e4fb84275715bb4de4...). Also the shorthand syntax `bun create github-user/repo-name destination` is favoring users choosing GitHub above others instead of not favoring any specific Git forge (the best path I've seen is how nixpkgs supports github:u/repo gitlab:u/repo sourcehut:~u/repo etc. to shorthand popular options but not favoring any, while still being flexible enough to continue extending).
> Why is a proprietary communication tools is a problem?
You're signaling to all contributors that you don't value their freedom or privacy.
Not everyone wants to give their data to a corporation. Some users have accessibility needs that simply aren't met by Discord's clients, and Discord sends cease-and-desists to everyone who tries to make a better or safer alternative client experience free of charge. There are free and libre alternatives, and choosing not to use, or at least support, one alongside Discord shows your project's priorities (see: Libera.Chat, mailing lists, Matrix, Zulip, Fediverse, RSS/Atom feeds, hosting Discourse, et al.).
> What's next? It's gonna be a shameful practice to use windows for development?
It's a cost-benefit analysis, like everything else. Supporting only GitHub more than covers the Pareto share of the need for the feature, as does Discord for realtime communication. None of the alternatives you mentioned to Discord, for example, are likely to already have a client installed for the vast majority of developers; with that as a litmus test, the choices are effectively Discord or Slack. When you're doing the hard calculus of how to spend your most precious resource (time) in a FOSS project, you have to weigh the costs and rewards. Only supporting GitHub likely makes no statistically significant difference in the likelihood of contributions. Similarly, hosting conversations on an unusual platform most users are not already using increases the friction of their contributions, so you choose the most popular platform.
I'm sure this project's community would welcome a contribution to mirror their git in a read-only state somewhere else, because why wouldn't they? Similarly, I'm sure they'd be fine with collaborating on setting up bidirectional chat bots so you can communicate with them as you want.
But to expect these things from a nascent project seems ridiculous. We're not talking about React or Spring here, we're talking about a brand new project who should be investing as much time as possible making their software work, not catering to every potential communications niche.
If you've decided that contributing to someone else's code on Github violates your sense of ethics or privacy, that's well within your rights and I respect you for it, but you must have enough self-awareness to recognize that that puts you in the far extreme of digital ethicists. And that shouldn't come with an expectation that your ethics have been catered to.
It reminds me of every parent's lesson: if everyone's jumping off a bridge, should you too? And as stewards of OSS, we should shepherd users onto these FOSS platforms.
Instead, you bifurcate your community, separating the part that is passionate about FOSS and privacy from the part that isn't.
> None of the alternatives you mentioned to Discord, for example, are likely to already have a client installed for the vast majority of developers
You seem to have a very distorted view of developers, the vast majority of free software developers are going to have either an IRC client or a Matrix client installed already.
I've never seen Matrix discussed outside of HackerNews. In the last five years or so, the reaction to people finding out I still use IRC is either "What is IRC?" or "People still use that? Brings me back..."
I don't know a single person in any professional context that doesn't have one of Slack, Teams, or Discord installed.
I may be overstating how much smaller your pool is, but to say that choosing Discord or Slack doesn't grossly expand your reach is just naive.
Yeah, what happens if, a few years down the road, the company decides to close the free accounts? To me, for an open source project we need a way to archive the discussion and make it public, not tied to any company.
Nobody wants their data stuck - people assess the likelihood of that as low and the impact as low, so rationally don’t care about it.
If one of my personal projects was unilaterally deleted right now by GitHub it’d be annoying to lose my issues but I could recover ok. And I don’t think it’s likely anyway, so why worry?
People only have so much energy to spend worrying about things. Most would spend it building instead.
> You're signaling to all contributors that you don't value their freedom or privacy.
Disagree strongly with this. It signals to me that the developer cares more about building a good thing than standing up FOSS tools to appease the zealots. Seems very pragmatic.
The Vim plugin communities are notorious about the GitHub bias too with almost everything just assuming GitHub.
npm supports shorthands for some Git forges (though no SourceHut or Codeberg), but without the `forge:` syntax you get GitHub as the default, which is also favoritism (no surprise with Microsoft owning GitHub and npm though).
The worst offender IMO though is Elm, which ties its entire package management ecosystem to GitHub: both your identity and the ability to upload a package require a GitHub account, hosting must be there too, and downloading requires that GitHub's servers are actually up (with the community needing hacks to deal with the not-so-uncommon likelihood of GitHub being down, and no way to point to a mirror).
Was trying to benchmark bun, but after 2 minutes it got stuck at typescript [1342/1364]. Running on an M1 Mac, Next.js project. I think I'm still going to stick with pnpm.
That's great, but PTC is not quite the same thing as TCO (as the blog post says). Too bad it doesn't seem like any big players in JavaScript are moving to implement that.
But there are many capable languages for developing memory-safe web applications (Haskell, Go, Rust, Java, Python, TypeScript, Elm, Clojure, ...), so why would one choose one that's not safe?
For starters, Go is not even memory safe. That aside, there are many reasons to choose a different language, memory safety is only one of very many tradeoffs that you make when choosing your tooling. In Bun's example, extreme performance has been achieved which no comparable alternative provides. Neither performance nor memory safety are "better" or "worse" by themselves, it's a matter of which tradeoffs you choose.
Not to be a cynic, but I wonder how much of the motivation to create a competing runtime in recent months is in response to the eye-watering (I know, I know, it's all relative) tens of millions in funding Deno just raised.
I don’t intend this as a knock on this project. Competition is good, and this space, unlike the rest of JavaScript, could do with more players. There are some promising numbers and claims here. I hope it works out.
I’m genuinely posing this intellectual question of financial incentive as a theory for JavaScript fatigue as a whole.
High profile threads on JavaScript fatigue trend on HackerNews multiple times a week. The wide range of theories about why web developers reinvent the wheel strangely leave out the financial incentives.
Everyone claims their framework is fast, powerful, light, batteries-included, modern, retro-futuristic, extensible, opinionated, configurable, zero-config, minimal (3kb, gzipped and minified, of course). The list goes on. A few days ago, I was chatting with someone about how all these JavaScript libraries make these words lose their meaning. To demonstrate, I screenshared a bunch of landing pages. At this point, I haven’t exhaustively, in one sitting, cross-referenced these landing pages, but 90% of the libraries shared the same layout: 3 columns / cards with one of those hyperbolic words.
Previously I thought it was pretentious and weird that Svelte marketed itself as “cybernetically enhanced web apps”. What does that even mean? Then again, none of the descriptor words like light, dynamic, and ergonomic mean much. At least Svelte was memorable.
Occasionally, one of these libraries would describe their interface as being ergonomically designed. As if other developers designed their interfaces to not be ergonomic. It’s like how we’d all like to think we’re nice, good, decent people. The majority of people would not do bad stuff if they perceived it as such.
I do think most JavaScript developers have good intentions. Then I’ve also seen DevRel / Evangelist types who shill their library, with disingenuous benchmarks, knowing full well there are better solutions, or that they could help make existing solutions better, to everyone’s benefit. The spoils include consulting, training, speaking fees, books, swag, collecting Patreon (there are some controversial projects which come to mind), resume building, and GitHub activity social capital (I’ve talked to some recruiters who were fixated on the idea that you publish code on GitHub, specifically GitHub, because to them, VCS=Git=GitHub, or it doesn’t exist).
I'm also pretty cynical of most JS rebuild/reinvention projects. I'm very tentatively excited by this one _because_ it looks like all it does is incrementally improve. Having something that is a drop-in API compat replacement for yarn 1/npm and node makes it potentially really easy to get the benefits of incremental perf improvements _without_ needing to reinvent the wheel like yarn 2 or deno.
100% this. Being compatible with nodejs API makes it possibly useful, unlike some other projects which want to throw away the huge npm ecosystem. Why on earth would anyone use JS (or even TS) server side if not to benefit from the ecosystem? Unlike on the web, there are plenty of better languages to use if you don't want npm.
For me and possibly other full stack devs, because I don't want to stay current in two different languages. Using python and JavaScript sucked. Switching to Node and JS was much better.
These are legitimate concerns. Looking at the history of bun and its current homepage the main point is that it offers a dramatic speed improvement (plus misc. quality of life stuff).
At this point the ball is in your (and everyone else's) court to put these claims to the test. It should not be terribly hard to see if the speedup is worth your while or not, JS surely doesn't lack bloated projects that you can try to build. My own personal blog is a bloated Gatsby mess that takes half a minute to build.
That's the one true part of the experience that nobody can falsify.
> Replace npm run with bun run and save 160ms on every run
Maybe you can’t falsify this, but it’s a question of risk vs reward.
It’s currently at a 0.1 release. Chances are it has a much higher chance of breaking. And when that happens, it would likely occupy way more time to debug than the hundreds of ms saved.
Also, by being new, it means it has not had a chance to cover all the cases yet. That’s the devil. It’s fast now, but it’s an apples-to-oranges comparison until Bun is at a stable release.
Yes, making up your own mind with first-hand experience requires investing time and effort, that's why people like to have other people tell them what to think.
Jarred's been working on bun for over a year. I don't get the sense that it's a reaction to anything recent at all, Jarred is just super passionate about building the best JavaScript runtime.
The fact that this project uses Zig suggests to me the developer is talented, passionate, and willing to challenge the status quo.
When you choose a lesser known language like that to tackle a hard problem, chances are you are confident in yourself and the language.
The problem I pointed out with the JavaScript ecosystem as a whole is that it’s low-hanging fruit. It’s not that there aren’t financial opportunities elsewhere, outside JavaScript. There definitely are. But the perception of financial incentives is the low-hanging fruit, plus high reward.
In this case, it will boil down to how much Bun innovates versus just being a thin wrapper around existing solutions. And again, I don’t doubt this. Skeptical, in general, but not ruling Bun out.
Deno showed there is space for not-Node, and that developers would be receptive to this.
And yes, Deno is just one player in edge, but you can agree there is much more money involved with all those other players you listed.
It’s going to be a battle of eyeballs from those edge providers then wouldn’t it? Whether that’s consulting or licensing fees, or just an acquisition / acquihire player.
Maybe you’re suggesting these players would build a runtime themselves. From my experience, only a fraction of companies, rarely, tackle ambitious projects like this. It’d be hard to justify to management who need quarterly results. Instead, they’ll fork an existing technology and make it better, because you can show incremental progress but keep your baseline. For example, Facebook didn’t rewrite their PHP code right away. They wrote a faster interpreter.
I wouldn't agree that Deno showed that, as I said many companies are making a lot of money from non-Node JS runtimes.
The players I mention have built their own runtime, they're mostly all built on V8 isolates (including Deno Deploy).
This is why I struggle to see where Bun fits in the edge JS world: as far as I understand it, JSC has no Isolate primitive, meaning Bun would have to write this from scratch (or salvage the other parts of WebKit that offer isolation). Otherwise Bun will be limited to using Linux containers on the edge, at which point you re-introduce the startup time you gained by switching from node in the first place.
Someone suggested that Deno Deploy might not be using isolates as the isolation boundary per account but processes instead (though Deno Deploy may be using isolates to run different functions belonging to the same account).
I think some creators pursue products as ways to fund their passions. For example, was deno deploy always the endgame or is it just a way to fund working on deno? Aside from motivation, how it's implemented and how it affects the community matters.
This part, very early on in the Bun page, stood out. That’s a monetizable product, even if the code is open source. That to me felt like positioning itself as a potential drop-in replacement for Deno Deploy / Edge Functions.
There was serverless, and now the next trend is with edge computing. It’s already happening, but now specifically about runtimes on that edge.
It actually seems more vendor agnostic in description. It's an overall advantage of bun, but there isn't any first-class bun service pitched (at least at this point).
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
Why? If it's over TLS you can trust it's being served by the owner of the website. You're having to trust the person who wrote the script anyway. And before anyone says "I'm going to inspect the shell script before I run it", do also note that its purpose is to download and run binaries that you are never going to inspect.
The other thing to be careful about is that the script should be written in a way that a truncated download has no effect. This is because sh will execute stdin line by line as it reads it, so if the download fails in the middle of a line, the downloaded fragment of that line will be executed.
It can be the difference between `rm -rf ~/.cache/foo` and `rm -rf ~/`
The standard way to solve this problem is to put the entire script inside a function, invoke that function in the last line of the script (as the only other top-level statement other than the function definition), and name it in such a way that substrings of the name are unlikely to map to anything that already exists. Note that the bun.sh script does not do this, but also from a quick glance the only thing that could noticeably go wrong is this line:
rmdir $bin_dir/bun-${target}
A truncated download could make it run `rmdir $bin_dir` instead, in which case it'll end up deleting ~/.bun instead of ~/.bun/bin/bun-${target}, which is probably not a big deal.
IMO the main problem is that it isn't clear how updates will work. Some of the curl-to-bash scripts don't do anything in regards to updates at all, some add a PPA/similar on ubuntu/debian/fedora/etc.
It'd be nice to know what and how I should manage updates.
Even if you do compile the kernel yourself, do you really own your OS if you haven't compiled the compiler yourself? Did you use a pre-built compiler binary to compile the compiler?
Now we're getting to the real questions in life. :)
(Incidentally, this is probably the most fundamental software supply chain attack vector - manipulate the compiler binary used to compile the compiler used to compile the kernel and userspace. The attack payload would never appear in any sources, but would always be present in binaries.)
You could add additional security to the process by first validating some cryptographic signature or verifying that the downloaded content's hash matches one that the author published.
Both of those just push the overall security a bit down the line, but both are ultimately not completely safe. The only truly safe action to take is to not download it at all.
Someone demonstrated a while back how, based on user agent, you could serve innocuous code to a browser checking the code first, and then a different malicious payload to curl.
But this technique lets you serve malicious code to a small number of people using curl|bash, rather than hosting obviously-bad binaries that anyone can inspect and call you out on. It also lets you target the attack to specific users or IP blocks.
I’m always flabbergasted how often projects, including and especially the ones dealing with web technologies (JavaScript), fail to write a responsive website.
You could be more impressed that some web pages manage to guess that the browser you use, which presents itself as a desktop browser, is in fact a mobile browser :)
One of the things I'm excited about is bun install.
On Linux, it installs dependencies for a simple Next.js app about 20x faster than any other npm client available today.