Because constant attentiveness is inherently so difficult to maintain, most relevant authorities say a lifeguard should not be on duty for more than an hour at a time, with at least a 15-minute break between those hour-long shifts.
Even with those limits, minds are going to wander (or there will be distractions: people asking for directions, etc.), but there are multiple failsafes. There is generally more than one lifeguard. Being distracted for a few seconds is not a failure: a lifeguard has to scan a significant area, so they don't have 100% awareness of 100% of their zone at once, which means there is always a potentially significant delay between something going wrong for someone and a lifeguard seeing it. And the time to catastrophic failure is measured in (contextually) significant amounts of time.
The issue with self-driving cars, as they are currently set up, is that the manufacturers say "the car will do everything", and it does, but they then say "however, the driver is still in control of the vehicle, so if a crash happens it was the driver's fault for not paying sufficient attention".
In the pilot case: there are periods of flight where the pilot is doing very little for extended stretches, but those are all at altitude, and the time from "something went wrong" to "it is irrecoverable" (in non-aircraft-failure modes) is remarkably large (at least to me; my mental model was always "something went wrong, it's seconds to crashing" until I binged air crash documentaries, and even if they're trying, it takes a long time to go from cruising altitude to 0). There are also modes where the pilots must react immediately, whether they were distracted or focused on a completely different task, but those modes are all close to "this alert occurs -> reflexively do a specific action before you even know why".
Attentiveness is a real problem for long-haul train traffic, and multiple accidents have occurred because of its loss. Railroads have tried many things to prevent the exact same problem that self-driving cars introduce, and they simply do not work. At least for trains you can, in principle (though the US seemingly does not), use safety cutoffs such that a train that is not responding correctly to signals is halted automatically, regardless of the engineers and operators. What companies frequently try instead is adding variations of dead-man's switches (similar to the eye tracking in "self-driving" cars), but for the same reason that attentiveness is an issue over multiple hours of no operation, those switches get circumvented (brains don't like focusing on a single thing while not actually doing anything for hours, and muscles don't like being held at a single stress point for hours).
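For illustration only, here's a minimal sketch of how such a vigilance/dead-man device works in principle: the operator must acknowledge within a fixed window or the brakes are applied automatically. The class name, the 30-second window, and the brake callback are made-up stand-ins, not any real railroad's system; the point is that the device checks for a button press, not for attention, which is exactly why it gets circumvented.

```python
import time
import threading

ACK_WINDOW_SECONDS = 30  # illustrative value; real systems vary


class VigilanceControl:
    """Hypothetical dead-man / vigilance timer sketch.

    The operator must call acknowledge() at least once per window;
    otherwise the emergency-brake callback is triggered automatically.
    """

    def __init__(self, apply_emergency_brake):
        self._apply_emergency_brake = apply_emergency_brake
        self._last_ack = time.monotonic()
        self._lock = threading.Lock()

    def acknowledge(self):
        # Called whenever the operator presses/releases the pedal or button.
        with self._lock:
            self._last_ack = time.monotonic()

    def run(self):
        # Watchdog loop: if no acknowledgement arrives in time, brake.
        while True:
            time.sleep(1)
            with self._lock:
                elapsed = time.monotonic() - self._last_ack
            if elapsed > ACK_WINDOW_SECONDS:
                self._apply_emergency_brake()
                return


# The weakness discussed above: a weight taped to the pedal (or anything
# that triggers acknowledge() on a timer) defeats the check entirely,
# because the device measures inputs, not actual attention.
```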
> Filesystem bugs that silently corrupt your data will also get synced and backed up
This is a very easy problem to solve: don't do incremental backups. Or have N backups and rotate, which isn't as good but still gives you more time to notice. Hard drives are cheap.
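As a concrete illustration of the "N full backups, rotated" idea, here's a minimal sketch; the paths, the KEEP count, and the naming scheme are all hypothetical, and a real setup would use whatever backup tool you already have:

```python
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/data")          # hypothetical source directory
BACKUP_ROOT = Path("/backups")  # hypothetical backup destination
KEEP = 7                        # keep the last N independent full copies


def rotate_full_backup():
    """Make a new full copy, then delete the oldest copies beyond KEEP.

    Because each copy is independent, a filesystem bug that silently
    corrupts the source today only poisons the newest copy; the older
    copies give you a window in which to notice the damage.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, BACKUP_ROOT / f"full-{stamp}")

    # Timestamped names sort chronologically, so lexical order works here.
    backups = sorted(p for p in BACKUP_ROOT.iterdir() if p.name.startswith("full-"))
    for old in backups[:-KEEP]:
        shutil.rmtree(old)


if __name__ == "__main__":
    rotate_full_backup()
```

The trade-off mentioned above is just disk space: N full copies cost N times the storage of one incremental chain, but each copy stands on its own.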
My guess: Microsoft wasn't excited about the company structure, with the for-profit portion subject to the non-profit's mission. Microsoft and Altman structured the deal with OpenAI in a way that cements Microsoft's access regardless of the non-profit's wishes. Altman may not have shared those details with the board, and they freaked out and fired him. They didn't disclose it to Microsoft ahead of time because Microsoft was part of the problem.