WebAssembly Text Format (wat) is fine to use. You declare functions that run imperative code over primitive i32/i64/f32/f64 values, and write to a block of memory. Many algorithms are easy enough to port, and LLMs are pretty great at generating wat now.
I made Orb as a DSL over raw WebAssembly in Elixir. This gives you extra niceties like |> piping, macros so you can add language features like arenas or tuples, and code reuse via modules (you can even publish to the package manager Hex). By manipulating the raw WebAssembly instructions it lets you compile to kilobytes instead of megabytes.
I’m tinkering on the project over at: https://github.com/RoyalIcing/Orb
> Which brands do people trust? - Which people do people of power trust?
These are often at odds with each other. So many times engineers (people) prefer the tool that actually does the job, but the PMs (people of power) prefer shiny tools that are the "best practice" in the industry.
Example: Claude Code is great and I use it with Codex models, but people of power would rather use "Codex with ChatGPT Pro subscription" or "CC with Claude subscription" because those are what their colleagues have chosen.
This is why Steve Jobs demoed software. Watch when he unveils Aqua: there are a couple of slides of the lickable visuals, and then he sits down and demos it. He clicks and taps and shows it working. Because that’s what you, the user, will do.
He’ll show boring things like resizing windows because those things matter to you when you try it, and if he cares about resizing windows to this degree, then imagine what else this product has.
Apple today hides behind slick motion graphics introductions that promise ideal software. That’s setting them up to fail because no one can live up to a fantasy. Steve showed working software that was good enough to demo and then got his team to ship it.
If you use something long enough, you'll get used to its idiosyncrasies. Jobs would have clicked and dragged 10px away from the rounded corner here instinctively. This is why the owner of an old car can turn it on and drive away in a blink while his son has trouble: hold the accelerator 10% down, jiggle the key a little while turning it, pull the wheel a bit... it all comes naturally to the owner.
Yes, and Mac owners will do the same thing. I don't use MacOS but people will just figure out the new behavior, be briefly annoyed by it, and then get used to it and move on. Apple could have done better here but users acclimate to much worse UX than this.
I'm working on a compiler for WebAssembly. The idea is you use the raw wasm instructions like you’d use JSX in React, so you can make reusable components and compose them into higher abstractions. Inlining is just a function call.
It’s implemented in Elixir and uses its powerful macro system. This is paired with a philosophy of static & bump allocation, so I’m trying to find a happy medium: a simple but powerful-enough paradigm that still generates simple, compact code.
Not really "goto statements" so much as the go-to arbitrary control flow semantic aka jump.
C's goto is a housecat to the full-blown jump's tiger. No doubt an angry housecat is a nuisance, but the tiger is much more dangerous.
C goto won't let you jump straight into the middle of unrelated code, for example, but the jump instruction has no such limit and neither did the feature Dijkstra was discussing.
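To make the contrast concrete, here's a minimal sketch (the function and names are made up for illustration). C's goto can only target a label inside the same function, which is what keeps the common cleanup-on-error pattern tame; a raw jump instruction, and the goto Dijkstra was writing about, can land anywhere in the program.

    #include <stdio.h>
    #include <stdlib.h>

    /* C's goto can only jump to a label in the same function, so every
     * possible landing site is visible at a glance. This is the classic
     * cleanup-on-error use that even goto sceptics tend to tolerate. */
    int load_config(const char *path) {
        FILE *f = NULL;
        char *buf = NULL;
        int rc = -1;

        f = fopen(path, "r");
        if (!f)
            goto done;

        buf = malloc(4096);
        if (!buf)
            goto done;

        /* ... read and parse ... */
        rc = 0;

    done:
        free(buf);          /* free(NULL) is a harmless no-op */
        if (f)
            fclose(f);
        return rc;
    }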
A language community which so prizes the linked list is in no position to go throwing such stones.
Linux lucked out: when you're doing tricky wait-free concurrent algorithms, that intrusive linked list you hand-designed was a good choice. But over in userland you'll find another hand-rolled list in somebody's single-threaded file parser, and oh, a growable array would be fifty times faster; shame the C programmer doesn't have one in their toolbox.
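For contrast, a growable array in C fits in about twenty lines. A hypothetical minimal sketch (IntVec and intvec_push are made-up names):

    #include <stdlib.h>

    /* A minimal growable array of ints: append is amortised O(1)
     * because the capacity doubles whenever it runs out. */
    typedef struct {
        int    *data;
        size_t  len;
        size_t  cap;
    } IntVec;

    static int intvec_push(IntVec *v, int value) {
        if (v->len == v->cap) {
            size_t new_cap = v->cap ? v->cap * 2 : 8;
            int *p = realloc(v->data, new_cap * sizeof *p);
            if (!p)
                return -1;  /* allocation failed; old buffer stays valid */
            v->data = p;
            v->cap  = new_cap;
        }
        v->data[v->len++] = value;
        return 0;
    }

Zero-initialise an IntVec, push into it, and free(v.data) when done.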
You do you. Most people don't care about software that much in general. The most important thing is that it does the job and it does it securely.
C won't help you with bugs in any shape or form (in fact it's famously bug-friendly), so it often makes more sense to use a tech stack that either helps with those or lowers the cost on the developer side.
People do care about performance. There are numerous studies showing, for instance, a direct correlation between how fast a page loads and conversion rate. Chrome's initial pitch was almost entirely about performance, and it delivered; they only became complacent once they got their majority market share.
It makes sense to use a tech stack that lowers the cost on the developer side in the same way that it makes sense to make junk food. Why produce good, tasty food when there is more money to be made by just selling cheap stuff? It does the most important thing: give people calories without poisoning them (in the short term).
Yeah but we're mentioning the performance of the language.
People do have a baseline level of accepted performance, but this is about perceived performance, and if software feels slow, most of the time it's because of some dumb design decision. Like showing an animated request to sign up for the newsletter on the first visit. Or loading 20 high-quality images in a grid at the top of the page. Or, in general, choosing animations that feel slow even though they're hitting the FPS target perfectly without hiccups.
Get rid of those dumb decisions and it could have been pure JS and be 100% fine.
C has no value here. The slow performance of JS is not harmful here.
Discord is fast enough although it's Electron. VS Code is also fast enough.
But I'd also like to respond to the food analogy, since it's funny.
Let's say that going full untyped scripting language would be the fast food. You get things fast, it does the job, but is unhealthy. You can write only so much bash before throwing up.
Developing in C is like cooking for those equally dumb, expensive, unsustainable restaurants which give you "an experience" instead of a full, healthy meal.
Sure, the result uses the best ingredients and it's incredibly tasty, but there's way too little food for too much cost. It's bad for the economy (the money should've been spent elsewhere), bad for the customer (same thing about the money, plus he's going to be hungry!) and bad for the cook (if he'd chosen a different job, he'd contribute to society in better ways!) :D
Just go for something in the middle. Eat some C# or something.
externalising developer cost onto runtime performance only makes sense if humans will spend more time writing than running (in aggregate).
Essentially you’re telling me that the software being made is not useful to many people, because a handful of developers will spend more time writing it than their userbase will spend running it.
Otherwise you’re inflicting something on humanity.
Dumping toxic waste in a river is much cheaper than properly disposing of it too; yet we understand that we are causing harm to the environment and prosecute people who do that.
Slow software is fine in low volumes (think: shitting in the woods), but dumping it on huge numbers of users by default is honestly ridiculous (Teams, I’m looking at you, with your expectation to run always and on everyone’s machine!)
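To put rough, made-up numbers on the writing-vs-running point above: say the cheaper stack saves 5 developers 6 months each, roughly 5 × 1,000 = 5,000 working hours. If the result wastes just 2 seconds per day for 1,000,000 users, that's 2,000,000 seconds ≈ 555 hours of human time burned every single day, so the one-off development saving is paid back in user time in about nine days.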
> Most people don't care about software that much in general.
This is an example of not caring about the software per se, but only about the outcome.
> [C is] in fact famously bug-friendly
Yes, but as a user I like that. I have a game that, judging from the user experience, seems to have tons of use-after-free bugs. You can see it as a user: strings shown in the UI suddenly turn to garbage and then change very fast. Even with such fatal bugs, the program continues to work, which I like as a user, since I just want to play the game; I don't care if the program is correct. When I want to get rid of the garbage text, I simply close the in-game window, reopen it, and everything is fine.
On the other side there are games written in Pascal or Java, which might not have as many bugs, but every single null pointer exception is fatal. That led to me not playing those games anymore, because doing well and then having the program crash is so frustrating. I'd rather have it keep running a bit longer with silent corruption.
Sure, but this is perceived performance and it's 100% unrelated to the language.
It's bugs, I/O, telemetry, updates, ads, other unnecessary background things, or just dumb design (e.g. showing onedrive locations first when trying to save a file in Word) in general.
C won't help with any of that. Unless the cost of development using it will scare away management which requests those dumb features. Fair enough then :)
> or just dumb design (e.g. showing onedrive locations first when trying to save a file in Word)
Your example is not one of 'dumb' design; it is a deliberate 'dark pattern' --> pushing you to use OneDrive as much as possible so as to earn more money.
Maybe it's not 'slow' but more 'generalized for a wide range of use-cases'? Is it really slow for what it does, or simply slower compared to a specialized implementation? (It's like calling a regular car slow compared to an F1 car... sure, the F1 is fast, but good luck taking your kids on holiday or doing the weekly shopping run in it.)
It only matters when your threads allocate with such a high frequency that they run into contention.
Too-high access frequency to a shared resource is not a "general case" but simply poorly designed multithreaded code. (Besides, a high allocation frequency through the system allocator is also poor design for single-threaded code; application code simply should not assume any specific performance behaviour from the system allocator.)
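One common way to keep per-allocation work off the shared allocator entirely is a per-thread bump arena. A hypothetical sketch (names are made up for illustration):

    #include <stdlib.h>
    #include <stddef.h>

    /* Each thread grabs one large block from the system allocator and
     * then serves its own allocations with a pointer bump, so the shared
     * allocator (and whatever locking lives inside it) is touched once
     * per block instead of once per allocation. */
    typedef struct {
        char   *base;
        size_t  used;
        size_t  cap;
    } Arena;

    static int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);
        a->used = 0;
        a->cap  = cap;
        return a->base ? 0 : -1;
    }

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;     /* keep 16-byte alignment */
        if (a->used + n > a->cap)
            return NULL;                /* arena exhausted */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }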
Well, what is "such a high frequency"? Different allocators have different breaking points, and the musl's one is apparently very low.
> application code simply should not assume any specific performance behaviour from the system allocator
Technically, yes. Practically, no; that's why, e.g., the C++ standard mandates the time complexity of its containers. If you can't assume any specific performance from your system, you have to prepare for every piece of system-provided functionality to be arbitrarily slow, and obviously you can't do that.
Take, for instance, the JSON parser in GTA V [0]: apparently, sscanf(buffer, "%d", &n) calls strlen(buffer) internally, so using it to parse numbers in a hot loop on 2 MiB-long JSON craters your performance. On one hand, sure, one can argue that glibc/musl developers are within their right to implement sscanf however inefficiently they want, and the application developers should not expect any performance targets from it, and therefore, probably should not use it. On the other hand, what is even the point of the standard library if you're not supposed to use it for anything practical? Or, for that matter, why waste your time writing an implementation that no-one should use for anything practical anyhow, due to its abysmal performance?
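A sketch of the two patterns, assuming a large whitespace-separated list of integers (the function names are made up): the first is the shape that bit GTA V, the second is the usual fix.

    #include <stdio.h>
    #include <stdlib.h>

    /* The problematic shape: on libcs where sscanf runs strlen over the
     * whole buffer on every call, this loop is O(count * length) on a
     * 2 MiB input instead of O(length). */
    static long sum_with_sscanf(const char *buf) {
        long total = 0;
        int value, consumed;
        while (sscanf(buf, "%d%n", &value, &consumed) == 1) {
            total += value;
            buf += consumed;            /* skip past the number just read */
        }
        return total;
    }

    /* The usual fix: strtol advances its own end pointer and never needs
     * to know where the string terminates, so the whole scan stays linear. */
    static long sum_with_strtol(const char *buf) {
        long total = 0;
        char *end;
        for (;;) {
            long value = strtol(buf, &end, 10);
            if (end == buf)
                break;                  /* no more digits to parse */
            total += value;
            buf = end;
        }
        return total;
    }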
It's because that link redirects to the subs link (presumably unless you have a subscription).
I think you can override it by changing the link shortly after submitting? But of course everyone who isn't a FT subscriber will get redirected anyway.
Yes, I’m also working on a SQLite parser; mine is in raw WebAssembly. Is yours open source too? This tool will be so useful. I have basic page reading and parsing of the CREATE TABLE schema: https://github.com/RoyalIcing/SilverOrb/blob/9dacad0ce521b0d...
My plan is to create a miniature .wasm module for reading .sqlite files that works in the browser. It will be in the tens of kilobytes rather than the ~1 megabyte of the official (and fantastic) sqlite.wasm. The reduced download means that even on 3G you ought to be able to load it within a few seconds. You could then use SQLite files as your network payloads, and perhaps even as the working mutable state synced between server and clients.
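The first step of any such reader is tiny. Here is a minimal sketch written in C just to show the header layout (the actual module is raw WebAssembly): validate the 16-byte magic string and read the page size from the 100-byte database header, per the published SQLite file format.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Check the magic string and read the page size, a big-endian
     * 16-bit value at offset 16 of the 100-byte header.
     * A stored value of 1 means a 65536-byte page. */
    int read_sqlite_header(const char *path, uint32_t *page_size) {
        unsigned char header[100];
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        size_t got = fread(header, 1, sizeof header, f);
        fclose(f);
        if (got != sizeof header)
            return -1;
        if (memcmp(header, "SQLite format 3", 16) != 0)
            return -1;                  /* not a SQLite database file */
        uint32_t ps = ((uint32_t)header[16] << 8) | header[17];
        *page_size = (ps == 1) ? 65536u : ps;
        return 0;
    }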
Mine is in TypeScript and for the purpose of parsing a file that happens to use sqlite, so I don't think I'll bother parsing the CREATE TABLE schema unless I have to. It's not currently posted anywhere but will be open source. It works, but the code isn't particularly great :)
I agree, and I really like the concrete examples here. I tried relating it to the concept of “surprise” from information theory: if what the LLM is producing is low-surprise to you, you have a high chance of success, as you can compare it to the version you wrote in your experienced head.
If it’s high surprise then there’s a greater chance that you can’t tell right code from wrong code. I try to reframe this in a more positive light by calling it “exploration”, where you can ask follow up questions and hopefully learn about a subject you started knowing little about. But it’s important for you to realize which mode you are in, whether you are in familiar or unfamiliar waters.
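For reference, the information-theoretic definition being borrowed here: the surprisal of an outcome x with probability p(x) is I(x) = -log p(x). The less likely a piece of generated code is under your own mental model, the more of it you have to verify from scratch rather than simply recognise.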
The other benefit an experienced developer can bring is using test-driven development to guide and constrain the generated code. It’s like a contract that must be fulfilled, and TDD lets you switch between using an LLM or hand crafting code depending on how you feel or the AI’s competency at the task. If you have a workflow of writing a test beforehand it helps with either path.
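A tiny, hypothetical sketch of the test-as-contract idea (slugify and its behaviour are made up for illustration): the test is the fixed part, and whichever path produced the implementation, it only has to make this pass.

    #include <assert.h>
    #include <ctype.h>
    #include <string.h>

    /* The implementation under test: could have been written by hand or
     * generated by an LLM; the test below doesn't care which. */
    static void slugify(const char *in, char *out, size_t out_len) {
        size_t j = 0;
        for (size_t i = 0; in[i] && j + 1 < out_len; i++) {
            if (isalnum((unsigned char)in[i]))
                out[j++] = (char)tolower((unsigned char)in[i]);
            else if (j > 0 && out[j - 1] != '-')
                out[j++] = '-';
        }
        while (j > 0 && out[j - 1] == '-')
            j--;                        /* trim a trailing separator */
        out[j] = '\0';
    }

    /* The contract, written before (and independent of) the implementation. */
    int main(void) {
        char out[64];
        slugify("Hello, World!", out, sizeof out);
        assert(strcmp(out, "hello-world") == 0);
        return 0;
    }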