Money can't just "go" somewhere; it needs a reason first, at least for book-keeping. I mean, VCs can get their invested capital back, but on top of that, how would that money be transferred? $20B is a lot, and for sure the VCs will not just write an invoice of $18B for consulting services.
When can we be done with these cheap comments? It has really become tiring to have a comment tree on every HN post for people who don't know what the article is about. As the author often didn't submit their own article, it is just a complaint with no possible resolution. Instead of taking a few seconds to find out what the article is about, and maybe even clarifying it for your fellow readers, you are taking that time to write a comment that only detracts from a possible conversation.
If you can't bring yourself to search for 5 seconds and find out what an article is about, maybe you just close it and move on.
The tone taken by people like the parent commenter is disgraceful. I’m sorry this is hacker news and hackers know that BEAM is the Erlang VM, no introduction or explanation needed. It is respected and admired as a great piece of engineering to be studied by all hackers.
He wasn’t an ass about it. And the people who don’t know what BEAM is can easily google their way to more information than they’d ever be able to get from a comment here.
Edit: I will say that’s a better take. It’s at least blatant about being the personal attack you intended it to be, instead of passive-aggressively couching it behind the pretense of caring about community. If you want to be shitty to people, just do it. I’d prefer honest assholes to weaselly manipulators.
You're looking at where I told someone not to be rude, and you're interpreting that as a personal attack from me? Because I "want to be shitty"? No.
And the mention of community is not a pretense. I want them to recognize that a lot of qualified people don't in fact already know that specific thing. (This is separate from how easy it is to google.)
Also, if you're calling out personal attacks, the way they used the word "disgraceful" is much more personal than anything I said.
> You're looking at where I told someone not to be rude
That’s just it, you didn’t tell them not to be rude. You hid behind nebulous shunning of “gatekeeping”.
> I want them to recognize that a lot of qualified people don't in fact already know that specific thing
Again, you didn’t actually do that though, did you? Instead you used the term colloquially meant to signal they are creating a “hostile” environment.
> Also, if you're calling out personal attacks, the way they used the word "disgraceful"
I’m distinctly not calling out personal attacks; I literally just gave you a thumbs up for doing it instead of taking the virtue-signaling, finger-wagging approach you previously used.
> Again, you didn’t actually do that though, did you? Instead you used the term colloquially meant to signal they are creating a “hostile” environment
> nebulous
I think you have an incorrect idea of what the word gatekeeping means.
It's not nebulous. It's saying they're doing a bad thing (being rude/synonym) by applying incorrect and artificial criteria about who is in a group (which I quoted).
Both versions of my comment had the same meaning. I wasn't hiding behind anything, just using different phrasing.
And no, my word choice was not to virtue signal. The word fit the situation so I used it.
No, I understand clearly: it’s a weasel word and I dislike it. You use it to mask a personal attack (calling him an “ass”), shun behavior, and avoid the actual conflict of directly addressing what was said. It’s annoying, passive-aggressive bullshit verbiage.
Calling out “gatekeeping” is virtue signaling at its very core.
"I’m sorry this is hacker news and hackers know that BEAM is the Erlang VM, no introduction or explanation needed." is being an ass to a lot of people.
Their tone was disgraceful; let me explain by giving you an example of how such posts should be made.
“Hey! This looks interesting, quick search on Google didn’t explain what the BEAM is as well as I would like, can someone let me know what this is about in layman’s terms?”
This is inviting people to talk about the topic at hand. It puts the responsibility of knowing something squarely on the person who wants that information and it’s generally pleasant.
How the parent decided to phrase his desire to be spoon-fed information was in fact disgraceful.
I still remember 14 years ago or so, when Applied Science posted his DIY electron microscope build and a handful of top comments were low-effort nerd snipes and criticisms.
Nothing to do about it, I don't think. It's the warty culture here.
> I was always fascinated with BEAM (Bogdan Erlang Abstract Machine, a VM for languages like Erlang and Gleam) and how it allowed easy spawning of processes that didn’t share state, allowed for sending and selectively receiving messages, and linking to each other thus enabling creation of supervision trees.
That's all it takes. When you're writing about a niche topic (and nearly everything and anything interesting is a niche topic) then explain your jargon. It's considerate, reminds people who are familiar but might have forgotten, and introduces people unfamiliar with it to what your topic is.
Sometimes people want to understand what they're reading about and not have to play a little "guess what this is about" game. Clarity is a quality of good writing.
You do have a point there. I did forget about the [-].
However, I read the comments in hopes they are interesting. If we have a culture where the "I don't know what this thing is"-type comment is popular, people will post those comments more and more. This leads other people to spend their time replying to it, instead of engaging with the content of the article. In other words, it distracts other commenters, who might otherwise have contributed something good.
Second, I think having low value comments is undesirable by itself. We could all start posting "First!" on articles, and everyone who hates that can simply minimize them. I think you can see why that would not be great. We can argue whether this is a low-value comment but I already did that in my original reply: it is not addressed to the article author (in this case they happened to show up but generally they don't) so the complaint doesn't lead to anything, and the comment complains about having to read about a concept they are not familiar with, but does the exact same thing itself.
You can use Tailscale to connect services together (not just someone's laptop to a service, replacing OpenVPN), but what if Tailscale has an outage? Will my services not be able to find each other anymore?
The Tailscale login servers had an issue last week. My local network had an issue at the same time and all connections dropped. Then none of my stuff could reconnect because I couldn't connect to Tailscale :(
Looking into setting up my own headscale instance now. This is the first issue I’ve had with Tailscale, but it seems dumb that my local LAN devices couldn’t even talk to each other.
(Tailscalar here) We're taking this kind of outage very seriously. In particular this outage meant newly connected devices couldn't reliably reach our control plane and couldn't get the latest network state. IMO that's not okay.
One of Tailscale's fundamental promises is that we try, as much as possible, to keep our control plane and infrastructure out of the way of your connectivity paths, while still using our infra to "assist" when there are connectivity issues (like difficult-to-traverse NAT), maintain trust across the network, and keep everything up to date.
It's a tough balance, and this year we're dedicating resources to making sure even small blips in our control plane don't mean temporary losses of connectivity, even for your newly woken-up devices. In particular, we're taking a multi-pronged approach right now: we're working in parallel to increase client tolerance of control outages (in response to cracks shown in this incident) and have an ongoing effort to make the control plane more resilient and available.
Your definition of what counts as an ethical issue is reductive. It means the issue involves ethics, and they are obviously involved. Even if society at large would ultimately benefit from the disappearance of certain jobs, that can still create suffering for hundreds of thousands of people.
"The Liquid Glass effects are not expensive and anyone claiming they are has no idea how modern GPUs and animation work. Anyone saying it is is either just parroting or is an idiot."
I'm inclined to believe what I have experienced. I have never before experienced my 2020 iPad Pro to be remotely slow. I use it for some web browsing and YouTube viewing, so I really don't need a lot of computing power.
Now that I'm running the iOS 26 beta, I frequently feel animations going slowly or hitching. It's not terrible, but for the first time, I have the feeling that my experience using my iPad would be noticeably improved if I bought a new and more powerful one.
But I guess this makes me an idiot according to Mitchell?
Beta versions are always slow and sluggish. Just install the latest beta of iPadOS 18; it will be sluggish too. The reason is that beta versions run a lot of logging and reporting in the background, which cannot be disabled.
We will see. It feels worse than earlier betas; I have always put the public betas on my iPad, and this is the first time we're this late in the cycle and my iPad feels too slow. But nothing would make me happier than if this all just goes away when iOS 26 is released properly and all animations run at a smooth 120 FPS again.
1) that is not a paradox
2) the ending of the sentence (which you left out) gives context: iOS 18 beta is sluggish as well, so ‘being sluggish’ is not a liquid glass exclusive
I experience this basically any time I upgrade my phone OS. There's never anything new that makes me happy; it's always that they removed something I used or made something uglier, and it's always 2-3x slower than it used to be.
Same thing with Windows. If they had just stopped touching it 20 years ago, it would be 50x more responsive now.
Just turn them all off, along with transparency and whatnot, in the vision-impaired settings. I believe there's also a setting for scrolling or how pages move back and forth (seems to be faster to me).
I always do this with all phones, as it saves battery life and feels way snappier to me than some random animation between windows.
Ehh, more like spellcheckers aren’t something you only get in a word processor anymore, and autocorrect doesn’t help either. I’m getting the impression that there are many more malapropisms on the Internet (and far, far fewer outright typos and spelling errors) than there were, say, a decade ago, and I strongly suspect spellcheckers are to blame.
(Proofreading in professional publishing is, indeed and to that industry’s great shame, much less of a thing than it used to be, but that’s a different story.)
If that's so, can't he explain it ELI5 style instead of calling people idiots?
I have a hard time believing that the GPU is somehow magically energy efficient, so that computing this glass stuff uses barely any energy (talking about battery drain here, not "unused cycles").
Here's an attempt at that: The GPU is responsible for blending layers of the interface together. Liquid glass adds a distortion effect on top of the effects currently used, so that when the GPU combines layers, it takes values from (x + n, y + m) rather than just (x, y). Energy efficiency depends on how much data is read and written, but a distortion only changes _which_ values are read, not how many.
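To make that concrete, here's a rough CPU-side sketch in NumPy. It's purely illustrative (the offsets, sizes, and blend are made up), not how Apple's compositor actually works, but it shows that a displacement changes which texel is read, not how many:

    import numpy as np

    # Composite a semi-transparent "glass" layer over a background with an
    # optional per-pixel distortion offset (n, m). The distortion only changes
    # WHICH background texel is read per output pixel, not HOW MANY.
    H, W = 256, 256
    background = np.random.rand(H, W, 3).astype(np.float32)
    ys, xs = np.mgrid[0:H, 0:W]

    # Hypothetical lens-like offsets; a real shader would derive these from the
    # glass shape. Still exactly one background read per output pixel.
    n = (4 * np.sin(xs / 16.0)).astype(int)
    m = (4 * np.cos(ys / 16.0)).astype(int)
    src = background[np.clip(ys + m, 0, H - 1), np.clip(xs + n, 0, W - 1)]

    glass_alpha = 0.3
    out = glass_alpha * 0.9 + (1.0 - glass_alpha) * src  # blend with a flat tint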
It needs to read more than one value; otherwise blurring cannot happen. That's automatically more work. And also, considering the effect is physics-based, even in your example, were it correct, calculating what n and m are is not trivial.
These UI elements (including the keyboard!) already blur their background, so that’s not a new cost. My 5 year old phone handles those fine. The distortion looks fancy, but since the shape of the UI elements is mostly-constant I’d expect them to be able to pre-calculate a lot of that. We’ll see when it ships!
My generous interpretation is that he means the GPU is magically energy efficient compared to the CPU. I wouldn't dispute that.
But Apple went down that xPU-taxing path a long time ago when they added the blur to views beneath other views (I don't remember what that was called).
The translucency goes all the way back to the original Aqua interface in Mac OS X. I believe the compositing started getting some GPU acceleration (Quartz Extreme) in Mac OS X 10.2 Jaguar all the way back in 2002.
Gaussian blurs are some of the most expensive operations you can run, and Apple has been using them for a long time. They’re almost always downscaled because of this.
The first retina iPad (the iPad 3 if I recall) had an awfully underpowered GPU relative to the number of pixels it had to push. Since then, the processors have consistently outpaced pixels.
Your device is easily wasting more time on redundant layout or some other inefficiency than on Liquid Glass. Software does get slower and more bloated over time, often faster than the hardware improves, and not in the ways you might expect.
The GPU has so much headroom that they fit language models in there!
The problem with these kinds of blur effects is not the cost of a Gaussian blur (this isn't a Gaussian blur anyway, as it has a lens effect near the edges). It's damage propagation and pipeline stalls.
When you have a frosted glass overlay, any pixel change anywhere near the overlay (not just directly underneath) requires the whole overlay to be redrawn, and that redraw is stalled waiting for the entire previous render pass to complete so the pixels are valid to read.
The GPU isn't busy in any of this. But it has to stay awake notably longer, which is the worst possible sin when it comes to power efficiency and heat management.
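To make the damage-propagation point concrete, here's a minimal sketch in Python (hypothetical compositor logic, not any real implementation): a dirty rectangle has to be expanded by the blur radius before testing whether a blurred overlay needs redrawing.

    # With a blur of radius r, a change at (x, y) can affect blurred output up
    # to r pixels away, so the compositor must treat a correspondingly larger
    # region as dirty and redraw any blurred overlay it touches.

    def expand_damage(rect, blur_radius):
        """rect = (x, y, w, h); returns the region whose blurred output may change."""
        x, y, w, h = rect
        r = blur_radius
        return (x - r, y - r, w + 2 * r, h + 2 * r)

    def overlay_needs_redraw(damage_rect, overlay_rect, blur_radius):
        dx, dy, dw, dh = expand_damage(damage_rect, blur_radius)
        ox, oy, ow, oh = overlay_rect
        # Simple axis-aligned overlap test.
        return not (dx + dw <= ox or ox + ow <= dx or dy + dh <= oy or oy + oh <= dy)

    # A 1x1 pixel change 20px outside a frosted panel still dirties it if r >= 20.
    print(overlay_needs_redraw((80, 100, 1, 1), (100, 100, 200, 50), blur_radius=24))  # True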
Yes, that all makes sense! My understanding is that the damage propagation gets worse with depth (no limit) in addition to breadth (screen size). If the compositor has N layers, a blur layer, N more layers, another blur layer, etc. then there are a lot of "offscreen render passes" where you have to composite arbitrary sets of layers exclusively for the purpose of blurring them.
It's true that GPU is itself not busy during a lot of this because it's waiting on pixels, but whatever is preparing the pixels (copying memory) is super busy.
Downscaling is a win not just for the blurring, but primarily the compositing. KDE describes the primary constraint as the number of windows and how many of them need to be blended:
> The performance impact of the blur effect depends on the number of open and translucent windows
As long as the lower blur layers are not fully occluded by opaque content, then yes - they all need to be evaluated, and sequentially due to their dependency. This is also true if there is transparency without blur for that matter, but then you're "just" blending.
Note that there are some differences when it's the display server that has to blur general output content on behalf of an app not allowed to see the result, vs. an app that is just blurring its own otherwise opaque content, but it's costly regardless.
(There isn't really anything like on-screen vs. off-screen, just buffers you render to and consume. Updating window content is a matter of submitting a new buffer to show, updating screen content is a matter of giving the display device a new buffer to show. For mostly hysterical raisins, these APIs tend to still have platform abstractions for window/surface management, but underneath these are just mini-toolkits that manage the buffers and hand them off for you.)
It's not an uncommon terminology in the WSI portion of graphics APIs, I was just pointing out that it doesn't actually mean anything to the hardware/lower stack. There are only buffers.
(There can be restrictions on which buffer formats and layouts can be used for certain things, scanout being particularly picky, but a regular window can be textured from pretty much any buffer.)
Gaussian blur isn't the most efficient way of doing a frosted-glass blur effect, though. IIRC the current state of the art is the Dual Kawase blur, which is what KDE uses for its blurred transparency effect; I've never observed performance issues with it running on my machine.
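For illustration, here's the rough cost argument behind downsample-based blurs like Dual Kawase. This Python sketch is not the actual Kawase kernel (a real implementation leans on the GPU's bilinear filtering in both the down and up passes); it just shows why filtering at reduced resolution is cheap.

    import numpy as np

    # Most of the work touches 1/4, 1/16, ... of the pixels, then scales back
    # up. The crude nearest-neighbour upsample here is only to keep the sketch
    # short; bilinear filtering is what makes the real thing look smooth.

    def downsample_2x(img):
        img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
        return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                       + img[0::2, 1::2] + img[1::2, 1::2])

    def upsample_2x(img):
        return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    def cheap_wide_blur(img, levels=3):
        small = img
        for _ in range(levels):
            small = downsample_2x(small)   # each level: 1/4 the pixels
        for _ in range(levels):
            small = upsample_2x(small)
        return small

    blurred = cheap_wide_blur(np.random.rand(256, 256))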
A Gaussian blur is separable, making it far more efficient than many other convolutional filters, and convolutions are hardly the most expensive sorts of operations you could run.
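As a minimal illustration of the separability point (NumPy sketch, not production code): applying a K-tap 1D Gaussian horizontally and then vertically costs 2K taps per pixel instead of K^2 for the equivalent full 2D kernel.

    import numpy as np

    def gaussian_kernel_1d(radius, sigma):
        x = np.arange(-radius, radius + 1, dtype=np.float64)
        k = np.exp(-(x ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    def separable_blur(img, radius, sigma):
        k = gaussian_kernel_1d(radius, sigma)
        # Horizontal pass, then vertical pass ('same' keeps the image size).
        tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, tmp)

    img = np.random.rand(512, 512)
    blurred = separable_blur(img, radius=8, sigma=3.0)
    # Full 2D kernel: (2*8+1)^2 = 289 taps per pixel; separable: 2*(2*8+1) = 34.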
Coupled with the reports of sluggish performance from the early betas, it’s understandable people would reach the conclusion that the new design pushes the hardware significantly more than before.
a.) Compute-cycles: Some added passes to apply additional shading on top of the completed render, or
b.) Power-consumption: Some added delay in putting components to sleep (reducing CPU/GPU-clock) on every screen update.
Deferred sleep for a portable, battery-powered device because of a longer UI-rendering pipeline can easily add up over time.
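For a sense of scale, here's a purely illustrative back-of-the-envelope calculation; both numbers are assumptions, not measurements of anything that ships:

    # Assumed numbers for illustration only: if a longer effect pipeline keeps
    # the GPU awake an extra 1 ms per frame during an active UI animation at
    # 120 Hz, that adds up quickly.
    extra_awake_per_frame_ms = 1.0   # assumption, not a measurement
    frames_per_second = 120          # ProMotion refresh rate during animation
    extra_ms_per_second = extra_awake_per_frame_ms * frames_per_second
    duty_cycle_increase_pct = extra_ms_per_second / 1000 * 100  # ms per 1000 ms -> %
    print(f"{extra_ms_per_second:.0f} ms extra GPU-awake time per animated second "
          f"(~{duty_cycle_increase_pct:.0f}% higher GPU-on duty cycle)")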
--
I'd be quite interested to see some technical analysis on this (although more out of technical curiosity than the assumption that there is something huge to be uncovered here...).
There's also the aspect of iOS prioritizing GUI-rendering over other processing to maintain touch-responsiveness, fluidity, etc. Spending more xPU-time on the GUI potentially means less/later availability for other processes.
For sure, non-native apps trying to emulate this look (e.g. Flutter) will have a significantly higher impact on the power profile of a device than a native app.
I think that list applies more to Eric. He is definitely in the 'conspiracy of nefarious forces are aligned against me' camp.
Sabine, I think, was just referring to how institutions can become calcified around certain ideas: the old concept that 'new' ideas need to wait for the founders of old ideas to die off (can't remember the exact quote).
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it"
Given that they are probably at least partly on Azure, this makes it less surprising because Azure has the worst IPv6 implementation of the 3 large cloud providers.
I’ve gone on long rants about it before right here on HN but I can’t be bothered digging up the old post…
… the quick and dirty bullet points are:
- Enabling IPv6 in one virtual network could break managed PaaS services in other peered networks.
- Up until very recently none of the PaaS services could be configured with IPv6 firewall rules.
- Most core managed network components were IPv4 only. Firewalls, gateways, VPNs, etc… support is still spotty.
- They NAT IPv6, which is just gibbering eldritch madness.
- IPv6 addresses are handed out in tiny pools of 16 addresses at a time. No, not a /16 or anything like that.
Etc…
The IPv6 networking in Azure feels like it was implemented by offshore contractors that did as they were told and never stopped to think if any of it made sense.
- You STILL can't use PostgreSQL with IPv6: "Even if the subnet for the Postgres Flexible Server doesn't have any IPv6 addresses assigned, it cannot be deployed if there are IPv6 addresses in the VNet." -- that's just bonkers.
- Just... oh my god:
"Azure Virtual WAN currently supports IPv4 traffic only."
"Azure Route Server currently supports IPv4 traffic only."
"Azure Firewall doesn't currently support IPv6"
"You can't add IPv6 ranges to a virtual network that has existing resource in use."
People say this a lot: "please the board." But why would so many boards be hype-driven and CEOs be rational? It might just as well be the C-suite themselves who are the source of it.
Keep in mind that though it is often claimed that SQLite is DO-178B certified, they do not claim that themselves; they merely say DO-178B-inspired: https://www.sqlite.org/hirely.html
And while Airbus confirmed they are using SQLite, they did not claim they are using it in safety critical parts, which D. Richard Hipp confirms here: https://news.ycombinator.com/item?id=18039303