Wasm only gets additive changes - the binary format can't change in a way that breaks any previously existing programs, because that would break the Web. So, you just have to add more opcodes to your implementation.
Aside from code size, the primary benefit on the Web is that the GC provided to Wasm is the same one used by the outer JavaScript engine, so an object from Wasm can stay alive or get collected based on whether JS keeps references to it. So it’s not really about providing a GC for a single Wasm module (program), it’s about participating in one cooperatively with other programs.
Yes, every Wasm program that uses linear memory (which includes all those created by LLVM toolchains) must ship with its own allocator. You only get to use the allocator provided by Wasm GC if your program is using the GC types, which can’t be stored in a linear memory.
No, the component model proposal is not part of the Wasm 3.0 release. Proposals only make it into a Wasm point release once they reach phase 5, and the component model is still under development and so is not trying to move through the phases yet.
Unlike the proposals which became part of Wasm 3.0, the component model does not make any changes to the core Wasm module encoding or its semantics. Instead, it’s designed as a new encoding container that contains core Wasm modules, and adds extra information alongside each module describing its interface types and how to instantiate and link those modules. By keeping all of these additions outside of core Wasm, we can build implementations out of any plain old Wasm engine, plus extra code that instantiates and links those modules and converts between the core Wasm ABI and higher-level interface types. The Jco project https://github.com/bytecodealliance/jco does exactly that using the common JS interface exposed by every web engine’s Wasm implementation. So, we can ship the component model on the web without web engines putting in any work of their own, which isn’t possible with proposals that add to or change core Wasm.
https://github.com/WebAssembly/custom-page-sizes is a proposal championed by my colleague to add single-byte granularity to Wasm page sizes, motivated by embedded systems and many other use cases where 64 KiB is excessive. It is implemented in Wasmtime, and a Firefox implementation is in progress.
It's not about the whole microcontroller having less than 64 KiB of memory - it's that any Wasm module that uses linear memory needs at least one 64 KiB page, regardless of how much it actually requires. Also, if you need 65 KiB of memory, you now have to reserve 2 pages, meaning your app now needs 128 KiB of memory!
We're working on WASM for embedded over at atym.io if you're interested.
For a runtime that accepts Wasm modules using a large fraction of the functionality, there is going to be a RAM requirement of a few KiB to a few tens of KiB. There seems to be a branch or fork of Wasm3 for Arduino (https://github.com/wasm3/wasm3-arduino).
If you are willing to do, e.g. Wasm -> AVR AOT compilation, then the runtime can be quite small. That basically implies that compilation does not happen on device, but at deployment time.
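Wasmtime supports the same split on bigger targets, for what it's worth. Here's a rough Rust sketch of the compile-at-deploy-time idea (the file names are placeholders, and this is only illustrative - the AVR case above would go through a different AOT toolchain):

```rust
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> wasmtime::Result<()> {
    let engine = Engine::new(&Config::new())?;

    // Deployment machine: compile the module ahead of time and ship the artifact.
    let wasm = std::fs::read("app.wasm")?; // placeholder path
    let precompiled = engine.precompile_module(&wasm)?;
    std::fs::write("app.cwasm", &precompiled)?;

    // Device: load the precompiled artifact without running the compiler at all.
    // `deserialize` is unsafe because the bytes must come from a trusted
    // `precompile_module` run with a compatible Wasmtime version.
    let module = unsafe { Module::deserialize(&engine, &precompiled)? };
    let mut store = Store::new(&engine, ());
    let _instance = Instance::new(&mut store, &module, &[])?;
    Ok(())
}
```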
Wasmtime's epoch system was designed specifically to have a much, much lower performance impact than fuel metering, at the cost of being nondeterministic. Since different embeddings have different needs there, wasmtime provides both mechanisms. Turning epochs on should be trivial if your system provides any sort of concurrency: https://github.com/bytecodealliance/wasmtime/blob/main/examp...
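For anyone curious what turning epochs on looks like, here's a minimal Rust sketch assuming a threaded host; the guest path, tick interval, and exported `run` function are made up:

```rust
use std::{thread, time::Duration};
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> wasmtime::Result<()> {
    let mut config = Config::new();
    config.epoch_interruption(true);
    let engine = Engine::new(&config)?;

    // A background thread bumps the epoch on a fixed tick.
    let ticker = engine.clone();
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(10));
        ticker.increment_epoch();
    });

    let module = Module::from_file(&engine, "guest.wasm")?; // placeholder path
    let mut store = Store::new(&engine, ());
    // Trap the guest once the epoch advances past this deadline.
    store.set_epoch_deadline(1);

    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?; // assumed export
    run.call(&mut store, ())?; // traps if the guest runs longer than ~one tick
    Ok(())
}
```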
Wasmtime maintainer here - curious to hear what went wrong, I and several other users of wasmtime have production embeddings under no_std, so it should do everything you need, including building out WASI preview 2 support. You can find me on the bytecode alliance zulip if you need help.
I think I was a bit spooked by the examples (https://github.com/bytecodealliance/wasmtime/tree/main/examp...), and the need to implement platform dependencies in C code (which would have complicated the build process). Makes sense since it's a more complex and mature project, but Wasmi on the other hand was just a pure Rust dependency that only required a single line in the Cargo.toml. So in short I went the lazy route :)
All of the C primitives there are implemented in (unsafe) Rust, but we built that example for an audience that already had some platform elements in C. We'll try to improve the example so that both integrating with C and using pure Rust are covered.
I'm not the OP, but I have a similar experience with Motor OS: wasmi compiles and works "out of the box", while wasmtime has a bunch of dependencies (e.g. target-lexicon) that won't compile on custom targets even if all features are turned off in wasmtime.
Not sure how much I can help with only this much information, but I've built and run wasmtime on some pretty squalid architectures (xtensa and riscv32 microcontrollers among others); the right collection of features might not be obvious, though. We can help you find the right configuration on the Bytecode Alliance Zulip or the wasmtime issue tracker if you need it.
I guess not much can be done at the moment: dependencies are often the primary obstacle when porting crates to new targets, and just comparing the dependency lists of wasmtime vs. wasmi gives a pretty good indication of which crate is a bit more careful in this regard.
Wasmtime has many capabilities that wasmi does not, and therefore has more optional dependencies, but the required set of dependencies has been portable to every platform I've targeted so far. If anything does present a concrete issue we are eager to address it. For example, you could file an issue on target-lexicon describing how to reproduce your issue.
> If anything does present a concrete issue we are eager to address it.
That's great to hear! I think it is a bit too early to spend extra effort on porting Wasmtime to Motor OS at the moment, as there are a couple of more pressing issues to sort out (e.g. FS performance is not yet where it should be), but in a couple of months I may reach out!
There’s now an interpreter in wasmtime called Pulley. It’s an optimizing interpreter based on Cranelift: it generates interpreter opcodes that are more efficient to traverse than directly interpreting the Wasm binary.
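If I remember the knobs right (worth double-checking against the Wasmtime docs), you opt into Pulley by selecting it as the compilation target, roughly like this:

```rust
use wasmtime::{Config, Engine};

fn main() -> wasmtime::Result<()> {
    let mut config = Config::new();
    // Select the Pulley bytecode interpreter instead of native Cranelift
    // codegen. "pulley64" is the 64-bit little-endian variant; adjust for
    // your host (assumed target name - check the docs for your version).
    config.target("pulley64")?;
    let engine = Engine::new(&config)?;
    // ... compile and run modules as usual; execution goes through Pulley.
    let _ = engine;
    Ok(())
}
```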
I have run wasmtime on the esp32 microcontrollers with plenty of ram to spare, but I don’t have a measurement handy.
Wasmtime, being an optimizing JIT, usually is ~10 times faster than Wasmi during execution.
However, execution is just one metric that might be of importance.
For example, Wasmi's lazy startup time is much better (~100-1000x) since it does not have to produce machine code. This can result in cases where Wasmi is done executing while Wasmtime is still generating machine code.
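To illustrate that tradeoff, here's roughly what enabling Wasmi's lazy compilation looks like; this is from memory of its API, so treat the exact names as assumptions:

```rust
use wasmi::{CompilationMode, Config, Engine};

fn main() {
    let mut config = Config::default();
    // Translate function bodies only when they are first called, which is
    // what keeps Wasmi's startup so much cheaper than ahead-of-time codegen.
    config.compilation_mode(CompilationMode::Lazy);
    let engine = Engine::new(&config);
    // ... Module::new(&engine, wasm_bytes) and instantiation as usual.
    let _ = engine;
}
```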
Thanks for using wasmtime! I worked on the component bindings generator you’re using and it’s really nice to see it out in the wild.
To elaborate a bit further: wasmtime ships a [bindings generator proc macro](https://docs.wasmtime.dev/api/wasmtime/component/macro.bindg...) that takes a WIT definition and emits all the code wasmtime requires to load a component and use it through those WIT interfaces. It doesn’t just check the loaded component for the string names present: it also type checks that all of the types in the component match those given by the WIT. So, when you call the export functions above, you can be quite sure that the bindings for their arguments, and any functions and types they import, all match up to Rust types. And your component can be implemented in any language!
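As a rough sketch of what that looks like from the host side - the `calculator` world, the ./wit path, and the `add` export are all invented for illustration:

```rust
use wasmtime::component::{bindgen, Component, Linker};
use wasmtime::{Config, Engine, Store};

// Generates a `Calculator` struct plus typed Rust bindings for every import
// and export in the hypothetical `calculator` world found in ./wit.
bindgen!("calculator" in "./wit");

fn main() -> wasmtime::Result<()> {
    let mut config = Config::new();
    config.wasm_component_model(true);
    let engine = Engine::new(&config)?;

    // The component can be written in any guest language that targets the
    // component model; it is type checked against the WIT when loaded.
    let component = Component::from_file(&engine, "calculator.wasm")?;
    let linker = Linker::new(&engine);
    let mut store = Store::new(&engine, ());

    let calculator = Calculator::instantiate(&mut store, &component, &linker)?;
    let sum = calculator.call_add(&mut store, 1, 2)?; // assumed `add` export
    println!("1 + 2 = {sum}");
    Ok(())
}
```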
Thanks so much for your work on this! So far it's been a good pattern for adding hot-reloadable logic to my Rust projects.
Next up I'm hoping to integrate this stuff into a game so you can have hot-reloadable game logic, with the potential for loading and sandboxing user-created mods.
Would wasmtime be a good general purpose scripting extension layer for a C program in the style of embedded Lua but faster and more generic? Is that a reasonable use-case?
Wasmtime's C bindings for components are still a work in progress. In general, C programming is always going to require more work than Rust programming, due to the limitations of C abstractions, but at the moment it’s also held back by how much energy has gone into the C bindings, which is much less than has gone into the Rust bindings, which many of the maintainers, myself included, created and use in production at our day jobs.
This platform is also based on wasmtime and wasmtime's wasi-http implementation, which I authored, so I'm really proud to see it reused here!
One more plug: I've been working on https://github.com/yoshuawuyts/wstd/ with Yosh and Dan Gohman (creator of WASI) as a nice way to write wasi-http guests in Rust.