1. The NTSB and FAA have both found repeatedly, as has pretty much every other study - academic or otherwise - that people _cannot_ stay focused on monitoring a task without actually being involved in or doing the task. Any "self driving" model that requires the non-operator to remain engaged in driving is demanding something that is simply not humanly possible. So the only reason for that stated rule is that the manufacturers know these systems are unsafe, but nonetheless want to avoid liability for their faulty products.
2. The only other "vehicle completely controls itself" system in practice is aircraft flight, where the pilots are necessarily engaged at all critical times, and for the remainder of the time the systems either give sufficient warning before anything goes wrong (e.g. when the autopilot disengages on an aircraft you have a substantial amount of time before impact - not the <=2s warning Tesla gives people), or the aircraft systems have specific, extremely heavily trained signals for which there is only a single action and no immediate requirement to regain situational awareness (stall, ACAS, TCAS, etc). The time these "self driving" systems provide to regain situational awareness and then take appropriate action is less than what we expect of trained pilots, yet it's somehow acceptable for random drivers.
The current self driving stance of "you are expected to be in charge of the vehicle at all times" is entirely liability shifting: manufacturers are knowingly selling unsafe products (especially Tesla), they are lying about the capabilities of those products, and then they are saying it is the driver's fault when those products fail.
What manufacturers are doing is no different from selling a car with an ABS system that fails and saying "the driver is responsible for identifying that ABS has failed and pumping the brakes if it does; if they fail to do so we do not accept liability for the ABS failing".
Aircraft autopilot flight is much more like cruise control in a car. The pilot flying is actively monitoring instruments and adjusting settings to make the autopilot do what's needed, not playing games on their phone.
Especially in the terminal area there is no such thing as the autopilot flying the plane with no pilot involvement. There are heading, speed, and altitude adjustments given as input to the autopilot, just as a car driver remains actively involved when driving with normal cruise control.
The whole point is that they are not _constantly_ monitoring, because again it’s not something people are physiologically capable of.
The difference with aircraft autopilots is that if you take 5 or 10 or even 30 seconds to notice an issue, you can still correct. Whereas with “self driving” you have 2 seconds or less, and are expected to have had complete situational awareness for the entire time. [edit: this is for noticing non-critical problems; failures that need an immediate response have significant repetitive training behind them to produce the immediate correct reaction - stall, TCAS, MCAS, etc]
As in, the defined “correct” way to use self driving cars is to maintain as much focus, awareness, and concentration as you use while actually driving, only without actually doing anything at all. Which, again, is not something any person is capable of doing. This is established even in the context of pilots.
As I understand it, in commercial aviation the autopilot is often deliberately disengaged precisely to keep the pilots engaged (in addition to the standard goal of not letting hand-flying skills get rusty).
And again, if the autopilot fails in flight, pilots - who are trained specifically to deal with the myriad issues and complexities of failures and autopilot disconnects - have literally orders of magnitude more time to react than untrained occupants of “self driving” cars.
I was referring to the autopilot not being used in the terminal area or during takeoff/landing (though I understand ILS or similar can handle a normal landing?). Those are times when the pilots are actively engaged in operating the aircraft (even with ILS).
The issue with autopilot/self driving is not what happens while operating in an environment that requires active engagement by the operator (pilot or driver), nor what happens at points of engagement (start or end of travel, making adjustments to travel); it is that humans are physiologically incapable of maintaining engagement in an activity when they are not actually involved in that activity. Again, this has been established in study after study, and it applies to extremely well trained pilots responsible for hundreds of lives just as much as it does to a random driver.
These cars' “self driving” systems require the occupant of the car (they are not driving) to have greater control and engagement, and faster reaction times, than a trained pilot, in a system that provides even less cause for engagement.
You referenced how autopilot in an aircraft requires adjustments (which as far as I understand are not being made constantly, but no matter, let's assume they are); “self driving” cars do not even have that. The argument “aircraft autopilot requires engagement from the pilot” does not make my point about “self driving” cars incorrect: it in fact further demonstrates how unsafe “self driving” cars are - pilots in this model do have actual engagement with the operation of the self-flying plane, in addition to more training, significantly more time to react, and significantly better response options when no-delay responses are needed.
Self driving cars demand that the occupant have greater engagement in driving the self driving car, while doing less, having less training, having less time to gain full situational awareness, having less time to respond when the system fails, and having no clear response options or training for events that require an immediate response.
I’m genuinely curious, and I’m sure the NTSB or FAA must have published some statistics on the amount of time pilots need to notice and respond correctly to different failure modes.
The inherent problem that constant attentiveness is extremely challenging is why most relevant authorities say a lifeguard should not be on duty for more than an hour at a time, and should have at least a 15 minute break between those hour-long shifts.
Even with those limits, minds are going to wander (or there will be distractions - people asking for directions, etc), but there are multiple failsafes: there's generally more than one lifeguard; being distracted for a few seconds is not a failure (a lifeguard has to scan a significant area, so they don't have 100% awareness of 100% of their zone at once, meaning there is always a potentially significant delay between something going wrong for someone and a lifeguard seeing it); and the time to catastrophic failure is measured in (contextually) significant amounts of time.
The issue with self driving cars, as they are currently set up, is that they say "the car will do everything" - and it does - but they then say "however the driver is still in control of the vehicle, so if a crash happens it was the driver's fault for not paying sufficient attention".
In the pilot case: there are periods of flight where the pilot is doing very little for extended stretches, but those are all at altitude, and the time from "something went wrong" to "it is irrecoverable" (in non-aircraft-failure modes) is remarkably large (at least to me - my mental model was always 'something went wrong, it's seconds to crashing' until I binged air crash documentaries, and even if they're trying it takes a long time to go from cruising altitude to 0). There are also modes where the pilots must always react immediately, whether or not they were distracted, or even if they were focused on a completely different task, but those modes are all close to "this alert occurs -> reflexively do a specific action before you even know why".
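To put "remarkably large" into rough numbers, here is a hedged back-of-envelope sketch; the altitude, glide ratio, and speed are illustrative assumptions of mine, not figures from any report or aircraft manual:

```python
# Back-of-envelope only: all numbers below are assumed, illustrative values.
cruise_altitude_ft = 35_000   # assumed typical airliner cruise altitude
glide_ratio = 15              # assumed engine-out glide ratio (~15:1)
glide_speed_kts = 250         # assumed ground speed while gliding, in knots

# Convert altitude to nautical miles, then apply the glide ratio.
glide_distance_nm = (cruise_altitude_ft / 6076) * glide_ratio
time_to_ground_min = glide_distance_nm / glide_speed_kts * 60

print(f"glide distance: ~{glide_distance_nm:.0f} nm")
print(f"time from cruise to the ground: ~{time_to_ground_min:.0f} minutes")
# ~86 nm and ~21 minutes even with total engine failure -
# versus the ~2 seconds of warning described above for "self driving" cars.
```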
Attentiveness is a real problem for long haul train traffic, and multiple accidents have occurred because of the loss of it. There are many things operators have tried in order to prevent the exact same problem that self driving cars introduce, and they simply do not work. At least for trains you can in principle (though the US seemingly does not) use safety cutoffs such that a train that is not responding correctly to signals is halted automatically, regardless of the engineers and operators. What companies frequently try instead is to add variations of dead man's switches (similar to the eye tracking in "self driving" cars), but for the same reason that attentiveness is an issue over multiple hours of not operating anything, those switches get circumvented (brains don't like focusing on a single thing while not actually doing anything for hours; muscles don't like being held at a single stress point for hours).
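To make that distinction concrete, here is a minimal toy sketch (my own illustration, not code from any real train protection system) of the difference between a vigilance prompt that still depends on the human and an automatic cutoff that does not:

```python
import time

ACK_TIMEOUT_S = 30.0   # assumed: operator must acknowledge a prompt every 30 seconds
last_ack = time.monotonic()

def operator_acknowledged() -> None:
    """Called whenever the operator presses the vigilance button."""
    global last_ack
    last_ack = time.monotonic()

def vigilance_ok() -> bool:
    """Dead man's switch style check: depends entirely on the human, and in
    practice gets defeated by exactly the attentiveness problem described above."""
    return (time.monotonic() - last_ack) < ACK_TIMEOUT_S

def automatic_protection_ok(signal_is_stop: bool, speed_kmh: float) -> bool:
    """Cutoff style check: trips on the train's behaviour relative to the signal,
    regardless of what the operator is or isn't doing."""
    return not (signal_is_stop and speed_kmh > 0.0)

# Either check failing triggers the emergency brakes.
if not vigilance_ok() or not automatic_protection_ok(signal_is_stop=True, speed_kmh=80.0):
    print("apply emergency brakes")
```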