Mox is written in Go. What advantages does this provide in terms of performance and security compared to traditional email servers, which are often written in C?
No buffer overflows, no use-after-free and no double-free issues. There is a garbage collector which stops the world here and there to clean up, but for anything that isn't constantly busy, like a small mail server, this is not noticeable.
The biggest differentiator between Robyn and frameworks like FastAPI or Sanic seems to be the Rust runtime. But is Python's async actually the bottleneck that Rust is solving here?
The actual bottleneck in these examples is SQLAlchemy, easily 10x overhead. I find it quite funny. For real apps there is no real difference between web frameworks in terms of overhead.
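To make that concrete, here's a rough sketch of the kind of gap I mean: fetching the same rows through the SQLAlchemy ORM versus the raw sqlite3 driver. The table, row count and file name are all invented for illustration, and the exact ratio will vary with the workload.

    # Rough sketch: ORM object construction vs. raw driver fetch.
    # Everything here (the table, the 10k rows, bench.db) is made up for illustration.
    import os
    import sqlite3
    import time

    from sqlalchemy import Integer, String, create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        name: Mapped[str] = mapped_column(String)

    # Start from a fresh file so reruns don't duplicate rows.
    if os.path.exists("bench.db"):
        os.remove("bench.db")
    engine = create_engine("sqlite:///bench.db")
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add_all([User(name=f"user{i}") for i in range(10_000)])
        session.commit()

    t0 = time.perf_counter()
    with Session(engine) as session:
        users = session.scalars(select(User)).all()  # materialises 10k mapped objects
    t_orm = time.perf_counter() - t0

    t0 = time.perf_counter()
    rows = sqlite3.connect("bench.db").execute("SELECT id, name FROM users").fetchall()
    t_raw = time.perf_counter() - t0

    print(f"ORM: {t_orm * 1000:.1f} ms, raw driver: {t_raw * 1000:.1f} ms")

The overhead scales with how many mapped objects you build per request; plain Core selects tend to sit somewhere in between.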
I hear this sort of thinking repeated often, and it makes sense at an intuitive level.
However, my experience is that this isn’t always the case. My company has a medium-large Python web app in production, and having viewed performance profiles, I can say definitively that Python code execution occupies a non-trivial amount of processing time. Of course the single largest time sink is the LLM requests we make, but if we could magically nullify the Python processing time, our latency would significantly improve.
The culprit in this case may be specific to FastAPI and Pydantic rather than Python generally, but the point still holds that not all web frameworks and resultant apps are equivalent.
That said, I’ve been dreaming of rewriting large chunks of it in golang. I’ve always known Python was relatively slow compared to other languages, but I saw a recent benchmark measuring golang as 50x faster than Python. Perhaps the benchmark won’t translate to real-world performance, but still, 50x is difficult to ignore.
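For what it's worth, on the FastAPI/Pydantic suspicion: one quick, back-of-the-envelope check is to time validation and serialization of a response model directly. The model shape and payload size below are invented for illustration; profiling the real app (e.g. with py-spy) is the way to see the actual split.

    # Rough sketch: per-call cost of validating and serializing a nested
    # Pydantic (v2) response model. Item/Page and the 200-row payload are hypothetical.
    import timeit

    from pydantic import BaseModel

    class Item(BaseModel):
        id: int
        name: str
        tags: list[str]

    class Page(BaseModel):
        total: int
        items: list[Item]

    payload = {
        "total": 200,
        "items": [{"id": i, "name": f"item{i}", "tags": ["a", "b"]} for i in range(200)],
    }

    n = 1_000
    t = timeit.timeit(lambda: Page.model_validate(payload).model_dump_json(), number=n)
    print(f"{t / n * 1000:.2f} ms per validate + serialize")

A fraction of a millisecond per response sounds small, but across large payloads and chained internal calls it adds up to exactly the kind of non-trivial processing time that shows up in a profile.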
Returning a static "hello world" response takes well under 1ms even in the slowest Python framework (for fast frameworks it's 0.1ms or lower). Good luck getting the rest of your app anywhere near that once a database or the network is involved.
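As a rough sanity check of those numbers (skipping the network and the WSGI/ASGI server entirely), you can time a bare hello-world callable in-process; the handler below is a made-up minimal example, so it's a floor for framework overhead, not a benchmark of any particular framework.

    # Rough sketch: per-call cost of a minimal WSGI "hello world", no sockets involved.
    import timeit

    def app(environ, start_response):
        # Minimal WSGI app returning a static body.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello world"]

    def call_once():
        # Fake just enough of the WSGI interface to invoke the app directly.
        environ = {"REQUEST_METHOD": "GET", "PATH_INFO": "/"}
        body = b"".join(app(environ, lambda status, headers: None))
        assert body == b"hello world"

    n = 100_000
    total = timeit.timeit(call_once, number=n)
    print(f"{total / n * 1000:.4f} ms per call")  # typically far below 1 ms

Once routing, middleware and a database round-trip are layered on top, the Python part is usually still small next to the I/O, which is the point.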
It's fascinating that the name 'Linux' was actually chosen by Ari Lemmke, not Linus himself. Details like this highlight the role of community and serendipity in the history of open-source software.
Quoting Wikipedia [1], which provides a bit more detail (as of 2025-03-03):
Linus Torvalds had wanted to call his invention Freax, a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, he stored the files under the name "Freax" for about half of a year. Torvalds had already considered the name "Linux", but initially dismissed it as too egotistical. [2]
In order to facilitate development, the files were uploaded to the FTP server (ftp.funet.fi) of FUNET in September 1991. Ari Lemmke at Helsinki University of Technology (HUT), who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name. Therefore, he named the project "Linux" on the server without consulting Torvalds. [2] Later, however, Torvalds consented to "Linux".
To demonstrate how the word "Linux" should be pronounced ([ˈliːnɵks]), Torvalds included an audio guide with the kernel source code. [3]
[2] Torvalds, Linus; Diamond, David (2001). Just for Fun: The Story of an Accidental Revolutionary. New York: HarperBusiness. p. 84. ISBN 0-06-662072-4.
This fly-and-roll robot sounds interesting, but I wonder how effective it would actually be on Mars. The thin atmosphere is definitely a major challenge for the flying mechanism.
The rapid evolution of JavaScript frameworks is both a blessing and a curse. While innovations like React Server Components promise improved performance, they also introduce complexities that can be overwhelming. How can developers balance the need to stay current with the risk of 'JavaScript fatigue'?
I just feel zero need to stay current. New frameworks may well be better, but incrementally, not radically. I stay "current" by keeping an eye on HN, and when a name becomes so familiar that it seems entrenched, I'll give it a try on my next greenfield project.
Not everyone can work like that; it's a side effect of the domain I work in. But I chose that in part because I'd rather get work done than chase new things.
I do the same: stay current by hanging out on HN. When a stack or concept appears to become mainstream or important for my career, I’ll do a small personal project with it. I don’t try to become an expert; I just want enough exposure that, if I had to, I could become productive in short order. I consider it employment insurance.
Zapier’s breach shows that even big SaaS companies can accidentally expose customer data in code repos. If they got hit due to a 2FA misconfiguration, how many other companies are at similar risk without knowing?
"An Amazon Web Services (AWS) engineer last week inadvertently made public almost a gigabyte’s worth of sensitive data, including their own personal documents as well as passwords and cryptographic keys to various AWS environments."
DeepSeek-R1 is an open-source alternative to ChatGPT Plus, but frequent 'Server is busy' errors and some security concerns make it less reliable for consistent use.
This is yet another example of how security-focused OSes like GrapheneOS are outpacing stock Android when it comes to real-world threat mitigation. If Cellebrite and similar forensics tools can still exploit mainstream Android devices, how much trust should we place in Google's security model? Does this suggest that hardened user-controlled OSes are the only real solution for privacy-conscious users?
Meanwhile, apps are starting to block GrapheneOS (and others), because GrapheneOS doesn't pass the Play Integrity "security" test. At the same time, stock devices that haven't received security updates for 5 years (and are vulnerable to known exploits) do pass Play Integrity.
Google is basically using Play Integrity to limit the functionality of Android variants like GrapheneOS (while pretending it's about security), and app developers are, for some reason, happily going along with it.
Your bank loses money when keyloggers/etc are used. Play integrity claims to avoid this problem, and a rooted device with an alternative ROM is a big red flag.
There are nuances here and there are probably ways app developers could support legit Graphene but not a malicious fork, but that’s a lot of work and expertise to reach an extra 0.01% of the population.
> Your bank loses money when keyloggers/etc are used. Play integrity claims to avoid this problem, and a rooted device with an alternative ROM is a big red flag.
Well... no? Maybe a device with things running as root is a yellow flag, but malware isn't going to replace the ROM or install a normal user-controlled root solution. Or put differently, if your "security" system flags the latest version of Lineage OS but doesn't flag a year-old stock ROM with known CVEs, then your system is actively fighting against security.
I understand the idea, but GrapheneOS doesn't even provide any kind of official support for rooting. It's not that they're checking to see if the device is rooted, they're just checking to see if it has Google's checkmark.
Meanwhile, rooted devices can pass Play Integrity (since they can spoof all the necessary bits via root).
You're not likely to pirate your own banking app, but thieves would love to patch the APK (or OS itself) so that it works "normally" except for skimming off credentials etc. in the background. This is functionally equivalent to what happens in app piracy (or game cheating, etc.), and all the same mechanisms are used to try to prevent it.
I don't blame the banking apps, it makes sense given their incentives and constraints. But if Play Integrity existed to serve the user, it would answer only to the user, and not "snitch" the device integrity status to apps and their backends.
Because it says “security” and looks good on paper. For some inexplicable reason, many banks don’t seem to care about actual security, even the modern ones. (Shame on you, Revolut)
If your app can be manipulated to your disadvantage by the user, then you are putting undue trust in client-side software, and you need to rearchitect your service.
Or, sure, you can stick with your lazy, slipshod design and try to paper it over with (unreliable and stupid) attempts to disempower the user.
Security isn’t binary; it’s about layers and probabilities.
A client that attests play integrity is less likely to be an attacker than one that isn’t. That doesn’t mean it’s impossible to use an attested device in an attack, or that all non-attested devices are malicious.
A good heuristic is that if you find yourself using absolutes and dramatic, insulting language… you do not have a valuable insight about security.
> A comfortable, yet completely unjustified, assumption.
I agree. Knowledgeable users are passing Play Integrity on rooted devices. The idea that these knowledgeable users who are willing to bypass local security checks are more trustworthy than users who honestly fail the security test feels like a strange stance on security.
If you want assurance that your code won't run on modified devices, don't let users run the code at all. Have them come into the bank branch and use hardware that you control.
Security, from an app developer perspective, isn't necessarily about the user getting exploited. It's also about protecting the app's interests.
Phones rooted by malware will fail Play Integrity, if checked well (hardware verification). For apps that are on the hook for paying for users' mistakes (banking apps, for instance), that makes a lot of sense.
Games care more about preventing cheaters than about users getting their Bluetooth stack exploited.
Remote attestation like Play Integrity can be used for legitimate security purposes, like in managed MDM settings, but it's not just about security.
This is very well said. Every time the subject of smartphones comes up, people will claim there are workarounds to the privacy, security, repair-ability, or usability issues. These arguments have some merit, but these workarounds are almost always very fragile, tenuous, and completely out of reach for most normal users. If you come to rely on these workarounds, you'll be playing a game of cat and mouse for years as Google and Apple find different ways to indirectly ruin your workflow.
This delay makes me wonder if Intel is struggling more than we think. With TSMC expanding in Arizona (despite their own delays), and Nvidia dominating the AI chip market, can Intel afford to push back its U.S. expansion? If the CHIPS Act incentives aren't enough to keep them on schedule, what's the real bottleneck—funding, demand, or execution?