Airbus is not immune to design & manufacturing issues with fatal consequences; they’re just not top-of-mind these days. A similar issue seems to have ‘cropped up’ on this flight: https://en.wikipedia.org/wiki/Qantas_Flight_72
> Temporary inconsistency between the measured speeds, likely as a result of the obstruction of the pitot tubes by ice crystals, caused autopilot disconnection and [flight control mode] reconfiguration to "alternate law (ALT)".
- The crew made inappropriate control inputs that destabilized the flight path.
- The crew failed to follow appropriate procedure for loss of displayed airspeed information.
- The crew were late in identifying and correcting the deviation from the flight path.
- The crew lacked understanding of the approach to stall.
- The crew failed to recognize the aircraft had stalled, and consequently did not make inputs that would have made recovering from the stall possible.
It's often easy to blame the humans in the loop, but if the UX is poor or the procedures too complicated, then it's a systems fault even if the humans technically didn't "follow procedure".
Both unsophisticated lay observers and capital/owners tend to fault operators ... for different reasons.
Accident studies and, in particular, books like _Normal Accidents_[1] push back on these assumptions:
"... It made the case for examining technological failures as the product of highly interacting systems, and highlighted organizational and management factors as the main causes of failures. Technological disasters could no longer be ascribed to isolated equipment malfunction, operator error, or acts of God."
It is well accepted - and I believe - that there were a multitude of operator errors during the Air France 447 flight, but none of them were unpredictable or exotic, and the system the crew were tasked with operating was poorly designed and unhelpfully hid layers of complexity that suddenly re-emerged under tremendous "production pressure".
But don't take my word for it - I appeal to authority[2]:
"Automation dependent pilots allowed their airplanes to get much closer to the edge of the envelope than they should have ..."[3].
or:
@ 14:15: "... we see automation dependent crews, lacking confidence in their own ability to fly an airplane, are turning to their autopilot ..."[4].
The relief second officer basically pulled up when the stall protection had been disabled, and by the time the other pilot and the captain realized what was happening, it was too late to save the plane.
There is a design flaw though: the sidesticks in modern Airbus planes are independent, so the other pilot didn’t get any tactile feedback when the second officer was pulling back.
You do get an audible "DUAL INPUT DUAL INPUT" warning and some lights though [1]. It is never allowable to make sidestick inputs unless you are the single designated "pilot flying", but people can sometimes break down under stress of course.
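For the curious, here is roughly how the dual-input handling is usually described. This is my own toy sketch in Python, not anything from actual Airbus avionics, and the deadband and limits are invented: the two sidestick commands are algebraically summed and clamped to the single-stick limit, and the DUAL INPUT callout fires when both sticks are deflected at once.

```python
# Toy sketch of the commonly described dual-sidestick behavior.
# NOT real avionics code; the 0.05 deadband is an assumption.

MAX_DEFLECTION = 1.0  # normalized full stick travel

def combined_pitch_command(capt_input: float, fo_input: float) -> tuple[float, bool]:
    """Return (effective pitch command, dual-input warning active)."""
    dual_input = abs(capt_input) > 0.05 and abs(fo_input) > 0.05
    command = max(-MAX_DEFLECTION, min(MAX_DEFLECTION, capt_input + fo_input))
    return command, dual_input

# One pilot pushes half forward while the other holds full back:
# the net command is still nose-up, and neither stick physically moves
# to show the other pilot what is happening.
print(combined_pitch_command(-0.5, 1.0))  # (0.5, True)
```

The point of the sketch is the last comment: the aural warning is the only cue, because the sticks themselves give no cross-feedback.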
The reality is that CRM is still the most important factor required to have a reasonable chance of turning what would otherwise be a catastrophic aviation incident into something that people walk away from. Systems do fail, when they do it's up to the crew to enact memory items as quickly as possible and communicate with each other like they are trained to.
Unfortunately, sometimes they also fail in ways from which even a trained crew isn't able to recover the aircraft. That could be a failure that wasn't anticipated, training that was inadequate, design flaws, the human element, you name it. The crew's actions being put in an accident report isn't an assignment of blame, it's a statement of facts - the recommendations that come from those facts are all that matters.
This is one of those situations where I think it'd be fun to be a flight simulator "operator": finding new ways to make pilots figure out how to overcome whatever the plane is doing to them. Any pilot who ever comes out of a simulator thinking "like that would ever happen" instead of "that was an interesting situation to keep in mind as a possibility" should have their wings clipped.
Take it with a grain of salt since it's from a movie, but part of what let Sully set the plane down in the river was his experience with not just the aircraft itself but also the situational awareness to realize he was too low to safely divert to an airport. He instinctively "skipped" several steps in the procedures to engage the APU, which turned out to be pretty key. The implication being that the procedure was so long that they might not have gotten to the APU in time going step-by-step.
Faulting the crew is a common thing in almost all air incidents. In this case the crew absolutely could have saved the plane, but the plane did not help them at all.
Part of the sales pitch of the Airbus is that the computer does A LOT of handholding for the pilots. In many configurations, including the one that the plane was flying in at the start of the incident, the inputs that caused the crash would have been harmless.
In that incident the airspeed feed was lost to the computer and it literally changed the flight controls and turned off the safety limits, and none of the three people in the cockpit noticed. When an Airbus changes flight control modes, the same inputs no longer produce the same behavior. Something harmless under one set of "laws" could crash the plane under another set of laws. In this case, what the pilot with the working control stick was doing would not have caused a crash, except that the computer had taken off the training wheels without anyone noticing.
As a result of changing the primary controls one pilot was able to unintentionally place the plane in an unrecoverable state without the other pilots even noticing that he was making control inputs.
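To make the "same input, different outcome" point concrete, here is a deliberately crude sketch. Every number in it is invented and it looks nothing like the real control laws; it only illustrates the shape of the problem: in normal law a sustained full back stick is capped by angle-of-attack protection, while in alternate law that cap is gone, so the identical input can carry the aircraft past the stall.

```python
# Crude illustration of why the same stick input is harmless in one
# control law and dangerous in another. All values are made up.

ALPHA_MAX_DEG = 12.0    # hypothetical AOA protection limit (normal law only)
ALPHA_STALL_DEG = 16.0  # hypothetical aerodynamic stall AOA

def resulting_aoa(stick_back_fraction: float, law: str) -> float:
    """Map a sustained back-stick command to a resulting angle of attack."""
    requested = stick_back_fraction * 20.0    # arbitrary scaling for the sketch
    if law == "normal":
        return min(requested, ALPHA_MAX_DEG)  # protection clamps the request
    return requested                          # alternate law: no clamp

for law in ("normal", "alternate"):
    aoa = resulting_aoa(1.0, law)
    print(law, aoa, "STALL" if aoa > ALPHA_STALL_DEG else "protected")
# normal 12.0 protected
# alternate 20.0 STALL
```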
Tack on that the computer intentionally disregarded the stall warning emanating from the AOA sensor as erroneous at a certain point and did not alert the pilots that the plane was stalled. You are taught from day one of flight training that if you hear the stall alarm you push the power in, and push the nose down until the alarm stops. In this case the stall warning came on, and then as the stall got worse, it turned itself off, with the computer under the mistaken belief that the plane could not actually be that far stalled. So the one alarm that they are trained to respond to in a certain way to recover the plane from a stall was silenced. If I was flying and I heard the stall alarm, then heard it stop, I would assume that I was no longer stalled, not that the plane was so far stalled that the stall alarm was convinced it had broken itself.
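Here is a stripped-down sketch of that warning behavior as I understand it from the BEA report. It is my own approximation with invented thresholds, not actual flight software: below a validity speed the AOA data is treated as unusable and the warning is silenced, even though the aircraft is deeply stalled.

```python
# My approximation of the logic described in the BEA report, with invented
# thresholds; not actual flight software.

STALL_AOA_DEG = 10.0        # hypothetical stall-warning threshold
AOA_VALID_SPEED_KT = 60.0   # below this, AOA data is treated as invalid

def stall_warning(aoa_deg: float, indicated_airspeed_kt: float) -> bool:
    if indicated_airspeed_kt < AOA_VALID_SPEED_KT:
        return False  # warning inhibited, even in a deep stall
    return aoa_deg > STALL_AOA_DEG

# Deeply stalled with the airspeed reading collapsed: no alarm.
print(stall_warning(40.0, 50.0))   # False
# Push the nose down, airspeed comes back up: the alarm returns.
print(stall_warning(40.0, 120.0))  # True
```

Which is reportedly the perverse behavior on AF447: nose-down inputs (the correct recovery action) brought the warning back, and pulling up again made it stop.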
So yes, the pilots flew the aircraft into the ground, but the computer suffered a partial failure and then changed how the primary flight controls operated.
Imagine if the brake pedal, steering wheel, and accelerator all started responding to inputs differently when your car had a sensor issue, and that sensor issue caused the cruise control to fail. Add in that the cruise control failure turns off ABS, auto-brakes, lane assist, and stability control for some reason. Oh yeah, there's a steering control on the other side of the car on the armrest, and the person sitting there can now make steering inputs, but those won't give any feedback through your steering wheel; your steering wheel can still be moved while the other guy is steering, but it is completely disconnected from the tires. All of the controls are also more sensitive now, and allow you to do things that wouldn't have been possible a few seconds ago. Also, it's a storm in the middle of the night, so you don't have a good visual reference for speed.

So now your car is slipping, at night, in a storm, lights are flashing everywhere, and nothing makes sense since the instruments are not reading correctly. However, the car is working exactly as described in the manual. When the car ends up in a ditch, the investigation will find that the cause of the crash was driver error, since the car was operating exactly as it was designed.
Worth noting that Boeing (and just about every other aircraft manufacturer on earth) links the flight controls between the two pilots' positions so they always behave in exactly the same way, so this type of failure could never have happened on a 737, for example.
At the end of the day, this was pilot error, but more in a "You're holding it wrong, I didn't design it wrong" kind of way. After all, there were three people with a combined 20k flying hours, including thousands of hours in that design.
If three extremely qualified pilots that have literal years of experience in that cockpit, who are rigorously trained and tested on a regular basis for emergencies in that cockpit, can fly the thing into the ground due to a cascade from a single human error... maybe the design of the user interface needs a look.
You also conveniently skipped over the parts of the Wikipedia article where the manufacturer was charged with manslaughter, where dozens of similar incidents are documented, and the entire section outlining the human-computer interface concerns.
It's not clear that concentrating so much money on a single government project yields more benefit than having the money go to a variety of other projects. It is easy to see the items developed by something like the Mercury/Gemini/Apollo program, but we often miss the innovations created by private industry when you compare a similar total spend (Bastiat's "seen and unseen"). One example of the alternative is how (private) demand for cellular telephony has triggered many innovations, from powerful and efficient microcontrollers to bright LCD screens and fast-charging lithium-ion batteries.
If a company is unwilling to jump through its self-imposed barriers to paying for things it wants, then it obviously doesn't value those features/items. This is definitely a case of 'voting with [one's] dollars'.
As someone with a little experience on the 'advertiser side' of Google, they also push junk to their paying clients, using every opportunity to sell terrible, worthless placements to advertisers. Which is to say that the problem is not that 'searchers' are the product; the problem is that Google is not focused on creating value for its counter-parties.
That’s not strictly true, at least for “old media” advertising.
Advertisers are selected based on being palatable to the content and the audience. It’s common for content licensing deals to have stipulations about which advertisements are acceptable. Virtually all platforms — even Google Search — have rules about the types of advertisements you see. There are, of course, laws that prohibit the advertising of certain types of products in certain places.
Even if these scams had the money for a full-page ad in the NYT, they wouldn’t have gotten it.
I agree that Google is benefiting from being the dominant player in a two-sided marketplace (which makes it harder to compete), but we can always choose not to use it, both as advertisers and as searchers. Google’s exploitation of its counter-parties has definitely caused me to use alternatives more and more often.
Google is my third choice for searches. I try Ecosia first, but their indexing is garbage, so I typically then go to Brave. If Brave doesn’t have it, then I submit to the evil overlords at Google. Thankfully Brave’s indexing is pretty good, so it’s had a measurable impact on the number of searches I actually put through Google.
It looks like these laptops are usually sold with Windows; are you saying that every manufacturer should be obligated to develop drivers for every piece of software that is theoretically compatible with them? Or are you just saying that we need even more caveats in the interminable EULAs we all just click through?
Maybe the obligation should be to provide adequate information about the hardware, so anyone could make a driver for their own software if they so desire.
This used to be the case, but bonds have been positively correlated with stocks in recent years, so they have not been an effective hedge. Additionally, it seems possible/likely that we are headed into the long-predicted COVID stagflation, where growth is slow, so interest rates are low, but inflation remains high, which makes bonds unappealing.
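To put a number on why the correlation matters, here is a toy two-asset calculation; the weights, volatilities, and correlations are all invented for illustration, not market data. With negative stock/bond correlation the bonds noticeably dampen portfolio swings; with positive correlation much of that hedge disappears.

```python
# Toy two-asset volatility calculation; every number here is invented.
import math

def portfolio_vol(w_stock, w_bond, vol_stock, vol_bond, rho):
    # Standard two-asset variance: w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2
    var = (w_stock * vol_stock) ** 2 + (w_bond * vol_bond) ** 2 \
          + 2 * w_stock * w_bond * rho * vol_stock * vol_bond
    return math.sqrt(var)

# Hypothetical 60/40 portfolio, 15% stock vol, 5% bond vol:
print(portfolio_vol(0.6, 0.4, 0.15, 0.05, -0.3))  # ~0.086: bonds dampen swings
print(portfolio_vol(0.6, 0.4, 0.15, 0.05, +0.6))  # ~0.103: hedge mostly gone
```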
Keiretsu are a way to hedge against loss: you form interlocking relationships that spread both risk and success around. In this case no one is sending actual money; they're sharing obligations with each other.
>"Zvi and the Cato institute both have lengthy pieces about why the Jones act is bad [1] [2], and whether or not you believe that has entrenched our shipbuilders, the US essentially manufactures no ships compared to South Korea and China."
One issue is that naval ships are very different from commercial vessels, and at least in the USA, almost no shipyards have shared facilities and staff between the two product lines since WWII. Interestingly, most other countries do not build most of their naval tonnage (destroyers and frigates) to the same standards that the USA does (European countries are notable for using commercial hull standards for these ships).
On a related note, the Odd Lots podcast had a (relatively) recent Jones Act debate episode, which is worth a listen if you're interested in the subject.
There was a television show (episode) about another design issue (which was fatal) some time ago: https://en.wikipedia.org/wiki/Air_France_Flight_447