"suspend" was always fragile, and "hibernation" a liability at times.
Closing the lid to shut down... should be the default behavior, as SSDs/NVMe can boot a system so fast now that it no longer makes sense to risk some fussy software glitching on resume. =3
With Debian 13 on a ThinkPad X1, hibernation is very reliable. Resuming from it is not instant, but it still only takes about 40 seconds. So I configured my laptop to sleep for 15 minutes on lid close and then hibernate. This way, if I just step into another room the wake-up is instant, while longer pauses shut the laptop down, removing any security keys from memory.
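For anyone who wants the same behavior, this is roughly the systemd configuration involved (a sketch; the option names are from systemd-logind and systemd-sleep, and exact behavior depends on your systemd version):

    # /etc/systemd/logind.conf
    [Login]
    HandleLidSwitch=suspend-then-hibernate

    # /etc/systemd/sleep.conf
    [Sleep]
    HibernateDelaySec=15min

With that, closing the lid suspends to RAM first, and systemd wakes the machine after 15 minutes to hibernate it to disk.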
Nice, they know better. But it also makes me wonder, because they're saying "but what if you need to run another app". For things like load balancers, for example, I'd expect you'd only run one app per server on the data plane, with the user-space stack handling that, while the OS/services use a separate control-plane NIC with the kernel stack, so that boxes stay reachable even under link saturation, DDoS, etc.
It also makes me wonder, why is tcp/ip special? The kernel should expose a raw network device. I get physical or layer 2 configuration happening in the kernel, but if it is supposed to do IP, then why stop there, why not TLS as well? Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process? It sounds like a "that's just the way it's always been done" type of scenario.
AFAIK Cloudflare runs their whole stack on every machine. I guess that gives them flexibility and maybe better load balancing. They also seem to use only one NIC.
why is tcp/ip special? The kernel should expose a raw network device. ... Why run a complex network protocol stack in the kernel when you can just expose a configured layer 2 device to a user space process?
Check out the MIT Exokernel project and Solarflare OpenOnload that used this approach. It never really caught on because the old school way is good enough for almost everyone.
why stop there, why not TLS as well?
kTLS is a thing now (mostly used by Netflix). Back in the day we also had kernel-mode Web servers to save every cycle.
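For reference, switching an established connection over to kTLS on Linux looks roughly like this (a sketch; the TLS handshake still happens in userspace, only record encryption moves into the kernel, and the key material below is dummy data):

    /* Enable kernel TLS on the TX side of a connected TCP socket.
       Assumes a TLS 1.2 / AES-128-GCM session negotiated in userspace. */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/tcp.h>
    #include <linux/tls.h>

    int enable_ktls_tx(int sock)
    {
        struct tls12_crypto_info_aes_gcm_128 ci;

        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        /* In real code these come out of the userspace handshake. */
        memset(ci.key, 0x11, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memset(ci.iv, 0x22, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memset(ci.salt, 0x33, TLS_CIPHER_AES_GCM_128_SALT_SIZE);

        /* Attach the "tls" upper-layer protocol, then install the TX keys.
           Afterwards plain write()/sendfile() on the socket emits TLS records. */
        if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;
        if (setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci)) < 0)
            return -1;
        return 0;
    }

The sendfile() path is the main draw: encrypted data can go from the page cache to the NIC without a round trip through userspace.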
You can do that if you want, but I think part of why tcp/ip is a useful layer of abstraction is that it allows more robust boundaries between applications that may be running on the same machine. If you're just at layer 2, you are basically acting on behalf of the whole box.
TCP/IP is, in theory (AFAIK all experiments related to this fizzled out a decade or two ago), a global resource when you start factoring in congestion control. TLS is less obviously something you would want kernel involvement from, give or take the idea of outsourcing crypto to the kernel or some small efficiency gains for some workloads by skipping userspace handoffs, with more gains possible with NIC support.
DNS resolution is a shared resource. The DNS client is typically a user-space OS service that resolves and caches DNS requests. What is resolved by one application is cached and reused by another. But at the app level, there is no deconflicting happening the way there is with transport-layer protocols. However, the same can be said about IP: IP addresses, like name servers, are configured system-wide and shared by all apps.
It can be shared access to a cache, but this is an implementation detail for performance reasons. There is no problem with having different processes resolve DNS with different code. There is a problem if two processes want to control the same IP address, or manage the same TCP port.
Yeah, but there is still no reason why an "ip_stack" process can't ensure a different IP isn't used, and a "gnu_tcp" or whatever process can't ensure TCP ports are assigned to only one calling process. An exclusive lock on the raw layer 2 device is what you're looking for, I think. I mean, right now applications can just open a raw socket and use a conflicting TCP port. I've done this to kill TCP connections matching some criteria by sending the remote end an RST while pretending to be the real process (a legit use case). Which approach is more performant, secure, and resilient? That's what I'm asking here.
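To make that last point concrete, this is roughly what "just open a raw socket" means today (a sketch; it needs root or CAP_NET_RAW, and actually forging a segment means building the IP and TCP headers yourself):

    /* Open a raw IPv4 socket. With IPPROTO_RAW the kernel expects us to
       supply the full IP header, so nothing ties what we send to any port
       the kernel thinks some other process "owns". */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
        if (s < 0) {
            perror("socket");   /* EPERM without CAP_NET_RAW */
            return 1;
        }
        printf("raw socket open; the kernel will not police ports on it\n");
        return 0;
    }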
You do want to offload crypto to dedicated hardware; otherwise your transport will get stuck at a paltry 40-50 Gb/s per core. However, you do not need more than block decryption; you can leave all of the crypto protocol management in userspace with no material performance impact.
There is no relationship between n and N, of course. N is some fixed constant.
Log(n) is unbounded, yes, but if a Turing machine halts in sub-linear time then there is some length input for which that TM halts before it reads every cell of the input tape. Therefore, it doesn’t matter how the size of the input grows, that TM must halt in constant time. Make sense?
I’m sorry but no. Maybe it’s my ignorance of TM’s, but O(log n) doesn’t read all input by definition. It doesn’t follow that it is therefore _independent_ of the input size.
What makes a/your TM special that this is the case?
I don’t mean to take too much of your time though, maybe I’m too dumb for this.
Edit: I can sort of see how O(log n) is impossible or at least O(n) in a TM, but to reduce it to O(1) makes no sense to me
Let me try to explain the proof in more detail. Assume the TM is sublinear. Then there exists some fixed N such that on every input of size n <= N, the TM halts in fewer than N steps.
Now consider any input I, and let I_N be the prefix of I that fits in N characters (i.e. I_N is the same as I except with all characters past the Nth replaced by blanks).
Then by our earlier statement defining N, the TM must halt on I_N after fewer than N steps. But the TM’s behavior on I for the first N steps must be the same as its behavior on I_N, because it can’t possibly have read any different characters (since in N steps it can only have read at most the first N characters, and those two inputs are the same on that interval). Thus the machine must halt in less than N steps on I. But I was chosen arbitrarily, so we’ve shown that the machine halts in less than N steps on any input, and recall that N was a fixed constant.
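If it helps, here is the same argument in symbols (a sketch; $t_M$ is just my shorthand for the running time of the machine $M$, not notation from the thread above):

    Suppose $M$ runs in time $t_M(n) = o(n)$, so there is an $n_0$ with
    $t_M(n) < n$ for all $n \ge n_0$. Let
    $N = \max\bigl(n_0,\ 1 + \max_{n < n_0} t_M(n)\bigr)$;
    then $t_M(n) < N$ for every length $n \le N$.
    Given any input $I$, let $I_N$ agree with $I$ on the first $N$ cells and
    be blank afterwards, so $|I_N| \le N$ and $M$ halts on $I_N$ in fewer
    than $N$ steps. In fewer than $N$ steps the head never moves past cell
    $N$, and $I$ and $I_N$ agree on cells $1,\dots,N$, so the two runs are
    identical and $M$ also halts on $I$ in fewer than $N$ steps. Since $I$
    was arbitrary, $t_M(n) < N$ for all $n$, i.e. $M$ runs in constant time.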
Remember that TMs read their input left to right. Therefore, if there is some fixed number N such that the TM halts within N steps on all length-N inputs, then it would halt in the same number of steps if the input were length N+1, because the TM halts before it can read the additional input.
The supply and demand for something will change regardless of how it's distributed. If you're spending the money on food and housing, demand for better food and better housing will increase.
If it's all going to a bunch of really wealthy people, then it will be focused on the types of assets that they would spend money on (businesses, yachts, whatever).
No matter the supply of money, there’s a limited supply of everything else.
There is nothing wrong with apt and dpkg. It’s just that Ubuntu infected apt with their poison by making ‘apt install firefox’ install a snap package and they’re poised to do it with more packages (maybe they already have).
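If you're stuck on Ubuntu and want 'apt install firefox' to stop pulling in the snap, the usual workaround is a pin in /etc/apt/preferences.d/ plus a non-snap repo for the real deb (a sketch; the origin value, and where you get the deb from instead, e.g. Mozilla's own apt repo, are assumptions about your setup):

    Package: firefox*
    Pin: release o=Ubuntu
    Pin-Priority: -1

A negative priority stops apt from ever installing Ubuntu's transitional firefox package, so the deb from the other repo wins.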
I personally can’t think of anything software related that Ubuntu provides over Debian for normal desktop users. Only the Ubuntu 6-month release schedule can be a bit nicer.
It is a good idea to install security updates from unstable, since they take extra time to reach testing and the security team only releases updates to unstable.
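For what it's worth, the usual way to do that while tracking testing is pinning, e.g. a file in /etc/apt/preferences.d/ like this (a sketch; assumes both testing and unstable are present in your sources.list):

    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 100

Everything then tracks testing by default, and you cherry-pick a fixed package with 'apt install -t unstable <package>'.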