There just seems to be a misunderstanding of the intention of a thought experiment here. There is a subtle implication that because the thought experiment is "contrived", it's less valuable.
A core part of philosophy is to reduce concepts, ideas, or beliefs into abstractions (a "spirit" or "essence" of what their intention is), and the thought experiment presents itself as a perfectly useful tool to help challenge those concepts.
Ethics is not about absolutes, but really about sussing out where gray areas exist in what some might believe are black and white situations. Additionally, most of the scenarios are presented as a thought experiment, because conducting a real experiment with those conditions would be wildly _un_ethical.
How are we supposed to determine the nuance of valuing human life if we were bound to doing actual experiments? Also, who in their right mind would conduct such an experiment?
Zimbardo faced enough flak testing the limits of obedience and authority in the Stanford Prison Experiment.
Why does every comment like this, criticising an article, always claim the author doesn't understand the concept they're talking about? You can just disagree with something without using cheap tactics like this to imply the author is ignorant.
I see no evidence of this claimed misunderstanding in the linked article. And there's nothing "subtle" about it saying thought experiments lack value because they're contrived; that is the central point it's making.
Do you not see the irony of then taking the position that my tactics are “cheap”?
Having read the same article, my takeaway from their characterization of thought experiments was that they misunderstood their real purpose. Pitting thought experiments against what real professionals would do in particular scenarios really does undermine the point of presenting them. They’re used to challenge our notions about the choices we make and to think about how we shape our moral code.
They ultimately are intended to make you think more about why you think a particular way about something.
As I see it, it's just a tool to try and unravel principles. They aren't trying to solve specific situations. Thought experiments often have counter thought experiments. With the trolley problem, if pulling the lever seems more ethical, consider the case where you have 4 patients who will die without an organ transplant and 1 patient who is healthy, whose organs could be harvested to save the 4. It starts becoming harder to work out what the critical principles are. Just like in the article on killing vs. letting die, the assassin example is an alternative thought experiment. This is more important if you have to encode these things in laws / policy, where you need a more generic guide to what is acceptable. But you wouldn't rely just on thought experiments to do this; they are just tools to help shake the "thought tree" to see what falls out.
You're hitting on exactly how I would frame thought experiments. They're a tool that helps further a conversation and discussion, to challenge what might be a preconceived notion.
Once you shake out enough thoughts from the thought tree, you can hopefully discover what they're all pointing to in order to find what might _actually_ be driving an ethical decision.
Taking ethics out of context squashes the very life out of it. The related context is the whole ball game. Further, pretending one thing is like another (adult violinist dependent on your kidney; foetus dependent upon mother) erases several critical ethical points in an obvious attempt to argue toward a conclusion. It's disingenuous.
A simpler example:
Killing a convicted murderer is murder again! Oh, except the murderer has a whole different ethical context than, say, a child: the murderer is again a threat to more people; the murderer had agency and could have decided not to murder; the act has societal benefit instead of harm; and on and on.
And if you don't like my argument, then voila! Taking your ethical dilemma and recasting it as mine seems "not fair!" to you. Which ironically is my point and QED.
Nowhere in that statement was "I" stated or implied. The actor can be anything you want - a group, the state as an apparatus, a quantum-triggered death box, etc.
> philosophy is to reduce concepts, ideas, or beliefs into abstractions
This in itself can be seen as problematic, as one can say that trying to reduce "concepts, ideas, or beliefs into abstractions" is futile, as those "things" are not black and white; they do not conform to the Aristotelian logic of one thing/concept being either "true" or "false". There's a "continuum" (for lack of a better word) when trying to define each and every one of those terms.
In other words, one cannot be moral or not-moral (to go back to the trolley problem); there's a moral "continuum" which even varies with time (I know I'm less "moral" before I've had the chance to drink my coffee early in the morning).
But even on the continuum, there will usually be areas. If you consider the trolley problem and you're dealing with a threshold deontologist, having zero persons on the track will likely result in a different answer ("I don't throw the guy over the bridge", hopefully) than having one person on the track, and at some point the answer will shift again. We only need to reduce the problem to find the points of change (or areas, as few people will probably say "well, at 99? no, but for 100 lives, yeah, I'll throw him over"); we don't need to keep a 1:1 mapping of all possible input values to the output values.
To use things, we usually reduce away the individual parts and use abstractions to deal with what's important (to us in that context). You don't deal with individual atoms; you look at clusters and clusters of clusters, etc., and understand and use them altogether as a pen, just as you've used similar pens before, even though they were completely different if you look closely enough.
I agree, but also disagree, because many people take this as a literal problem. People think "how can we have self driving cars until they can solve the trolley problem in a way I find agreeable?" but that's not really how the world works. Just try not to hit people. And not just practically, in the sense that literal trolley problems never occur. If you find yourself in a situation of needing to choose the lesser of two evils, you're probably just best off believing that absolute morality doesn't exist and going off your gut, because that's how you, as an individual human, work.
I mostly agree. And it's certainly way overblown in the context of self-driving.
On the other hand, we can imagine that at some point there may be some broad principles that engineers will have to consider when programming responses to an impending accident. Perhaps slamming on the brakes is essentially always the optimum solution from the point of view of driver safety if several people unexpectedly appear in front of the car, even though braking will be insufficient to avoid hitting them, albeit at a reduced speed. On the other hand, there's an unquantifiable but known risk (to the driver) in swerving to avoid the otherwise certain collision.
That a human would pick one or the other approach in a split second without really having time to think about it doesn't really inform what the programmed response of a vehicle that does have time to deliberately pick from several options should be.
Added: It is, of course, pretty much an academic exercise today in that cars can't reliably even be prevented from running into highway dividers. But, in a few decades or whenever, one can imagine being at a point where the technology is good enough that some framework is needed for making rules. Maximizing driver safety at all costs vs. trying to do the least harm overall may well not result in the same rules being programmed.
I don't think the author's objection is to thought experiments in principle but rather, to the way they are composed and used.
After all, a thought experiment that only holds true given an impossible premise would have no rhetorical power if we weren't willing to extrapolate our conclusions beyond the premise, to the possible. Which we are - that's why the trolley problem is taught to people who aren't railway signalmen.
There's a risk that some arguments seem to convince us because they have isolated a pure abstraction, when actually it's the false dilemma in the premise that convinced us while the argument distracted us.
I believe that's why you don't use the most extreme thought experiment and then go "okay, thanks, we're done here". You want to be able to zero in on different factors. E.g. save the musician by giving five minutes instead of nine months? Lower the risk of death of a part of society by accepting some restrictions on your freedoms?
I view it like a black box where you can define inputs and observe the output and your goal is to understand what calculations are made inside. Thought experiments are essentially fuzzing it to understand what's happening. "What if it's a fat guy? What if it's a smoker? What if it's a politician? What if it's a homeless person? What if there are 10 people, what if there are a million?" and you're iterating over lots of options to get closer to understanding the formula.
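To make that fuzzing framing concrete, here's a toy sketch. Everything in it is invented for illustration: the scenario dimensions and the judge() function stand in for a real person (or model) answering trolley-style prompts, not for anything in the article.

```python
# Toy sketch of "fuzzing" a moral black box. The rule inside judge() is purely
# hypothetical, just so the sketch runs end to end.
from itertools import product

# Dimensions we vary, mirroring "what if it's a fat guy / a smoker / a
# politician / a million people?" from the comment above.
who_on_side_track = ["stranger", "smoker", "politician", "homeless person", "friend"]
people_on_main_track = [1, 5, 10, 1_000_000]

def judge(who: str, count: int) -> bool:
    """Placeholder black box: would this judge pull the lever in this variant?"""
    if who == "friend":
        return False          # spare people you know, no matter the count
    return count >= 5         # otherwise, pull once enough lives are at stake

# Iterate over the scenario space and watch where the answers flip; the flip
# points are the interesting part, not any single answer.
for who, count in product(who_on_side_track, people_on_main_track):
    print(f"{who:>16} vs {count:>9} on the main track -> pull lever: {judge(who, count)}")
```

The output of any single run isn't the point; the point is that varying one dimension at a time exposes where (and why) the hidden "formula" changes its answer.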
If one could train a chicken to pull or not pull a lever to save x amount of chicken lives, does it weigh on the conscience of the trainer that they’ve compelled a self-aware creature to kill one of its own kind?
(I’m not asking this to refute your point, it’s well met, I just found the opportunity to poke at the imaginary ethical elephant too much to resist, and no, I’m not proud of it; those were good hens)
There are animals people have deep affection for, and you’ll find great resistance to these experiments; nevertheless, I think you can get valid data from experimenting on “lower” animals.
The whole point of using a thought experiment is to gain intuitions about the "but what if it wasn't a person, but a chicken instead? Would that be equivalent? Why?" Ie, the value of the experiment is in the experiential learning of the attempt to arrive at the answer. At least, imho
Sure, you can do things that you consider evil just to see how you would really react to being presented with such an unpleasant binary choice, but if you chose to do it, was it in fact not evil to you, are you just pathological, or are you perhaps in toddlerhood, where you don't yet really have an intuition for your own relationship with causality, such as it is?
That said, the experiment does deny the always-present uncertainty (or hope, depending on what you call it). Arguably the most ethical course of action would be to do everything to prevent any deaths. This might be futile, but it's impossible to know at the time.
It’s impossible to know the net results after hundreds of years. The assassination of Archduke Ferdinand might be a net positive for people currently alive today or a net negative, but it really seemed like a bad thing for the next several decades.
Essentially, people can’t make judgements with total information.
I am saying it’s such a large change that we can’t tell if it is currently benefiting us. It’s one of those events that had such a large impact that it’s impossible to say what the world would be like without it.
It seems like it was responsible for WWI, WWII, the Spanish flu, etc., but would worse things have happened instead? I doubt it, but maybe it indirectly prevented total nuclear war.
Which when you get down to it is true of everything, we need to act with imperfect information.
It’s literally a non-argument designed to not even consider my position and to talk over it with flights of fancy. Every decision ever made is based on incomplete information. Your argument is an argument against the very state of knowing. It’s Descartes, but with even less understanding of a causal system.
Your argument: we can’t possibly know everything, therefore assume nothing is correct. Except reality is a persistent illusion, so we work in that reality. Your fantasy world where there is a god that can see all futures is fantasy. Don’t confuse the imagination with reality.
> Whom? Well, the greatest number of people possible. All time frames. All scopes. You philosophers always trying to confuse the simple.
Your philosophy is literally dependent on information that’s impossible to know which is a real limitation. In the real world we can estimate harms and benefits, but it’s the unknown factor which makes sacrificing someone ‘for the greater good’ so abhorrent.
If the rule is to minimize total probable harm, then you need to consider second order effects of choices. You further need to carefully consider/model repercussions rather than assuming each choice has obvious costs. Simply shooting infected people may reduce the risks of spreading a plague in the short term, but it’s got huge long term issues as people then try and hide their infection etc.
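As a toy illustration of that accounting (all the numbers below are made-up placeholders, not estimates of anything real):

```python
# Toy illustration only: the values are invented; the point is the structure of
# the accounting, not the numbers themselves.

def expected_harm(first_order: float, second_order_prob: float,
                  second_order_harm: float) -> float:
    """Expected total harm = immediate harm + probability-weighted downstream harm."""
    return first_order + second_order_prob * second_order_harm

# Policy A: drastic containment ("shoot the infected"): little immediate spread,
# but a high chance people later hide infections, causing a much larger outbreak.
policy_a = expected_harm(first_order=10, second_order_prob=0.8, second_order_harm=500)

# Policy B: milder containment: more immediate spread, little incentive to hide.
policy_b = expected_harm(first_order=60, second_order_prob=0.1, second_order_harm=100)

print(policy_a, policy_b)  # 410.0 vs 70.0: A's short-term advantage evaporates
```

A policy that looks best on first-order harm can lose badly once the downstream term is modeled at all, which is the whole argument for considering second-order effects.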
It’s not just philosophy. So much of economics education is getting students to narrow their vision down to the thing that somebody else wants them to see. Force the assumptions on people and then tell them how smart they are when they draw the inevitable conclusions. It’s a trap. Normal people forget about the assumptions and walk away believing the conclusions. It’s a technique for reshaping people’s intuitions.
haha, yes! Why is everyone relevant in philosophy dead?
The answer to the trolley problem is that it doesn't matter if you pull the lever or not. You should figure out who or what created the situation and truly eliminate the problem at its source.
The Trolley Problem poses this question: if a trolley is about to run over and kill five people standing on the tracks, and you can shunt it to a different track where it would only kill one person, should you do it? Or what if you could save those five by throwing a person near you onto the tracks; should you do it?
Many people feel intuitively that it would be wrong to throw that person onto the tracks, and an argument attributes this to a supposed essential difference between killing someone and deciding to let that person die.
I disagree. I too believe it would be wrong to throw that person onto the tracks, in real life, but not in the hypothetical trolley problems. There is no ethically significant difference between killing a person and letting the person die, if (as supposed in the trolley problems) there is no doubt that the death will occur.
The reason, in real life, why killing someone is ethically different from letting someone die is that real life is full of surprises: the person might not really die. If you kill him, his death is pretty certain (though not totally; just recently a man was hanged in Iran and survived). If you merely don't take action to save him, he might survive anyway. He might jump off the track, for instance, or someone might pull him off. All sorts of things might happen. Likewise, throwing the one person onto the track might not succeed in saving the other five; how could you possibly be sure it would? You might find that you had done nothing but cause one additional death. Thus, in real life it is a good principle to avoid actively killing someone now, even if that might result in other deaths later.
The trolley problems invalidate the principle because of the unlikely certainty that they assume. Precisely for that reason, they are not a useful moral guide for most real situations. In general, difficult artificial moral conundrums are not very useful guides for real-life conduct. It's much more useful to think about real situations. In the free software movement I have often decided not to propose an answer to a general question until I had some real cases to think about.
For real driverless cars, the trolley problem never arises: the right thing to do, whenever there is a danger of collision, is to brake as fast as possible.
More generally, the goal is to make sure to avoid any trolley problem.
> The trolley problems invalidate the principle because of the unlikely certainty that they assume.
We're living the trolley problem problem in real life, right now, because human beings are bad at estimating risk. A lot of people are operating with a certainty that IF you are EXPOSED to the virus, you WILL contract it, and it WILL be terrible, and PROBABLY fatal. Therefore, no amount of safekeeping -- masks, distancing, wiping down delivered groceries, baking mail and newspapers -- can be too aggressive. For such people, diverting the "trolley" to make long-running, terrible financial hardships for a third of the population is an easy call, when they feel they are literally saving the life of everyone on the forward track.
I don't see how this could be framed as a trolley problem. A trolley problem needs to compare two equivalent goods, where intentional 'action' is likely to cause less overall loss but at a direct cost to a 3rd party. Neither side regarding lockdown views this problem as such:
People who support aggressive measures don't view their actions as killing the bystander. 'Financial hardship' is far from letting people die (i.e. no equivalent good from their PoV) and they almost always support measures to alleviate economic damage.
People who oppose lockdown have differing arguments which don't map onto a trolley either. An argument about lack of efficiency isn't relevant. Others' argument that governments' action is taking away liberty for security is closer, but no trolley either. For that, they'd have to weigh the liberty of 'bystanders' against the liberty of those that will get sick, and it's clear their concept of liberty does not accept the latter condition as a loss of liberty (no equivalent good either).
You might wish to drop my suggested equivalent good criteria. I'm not so sure. If diverting the trolley merely slightly inconveniences the bystander, are we still talking about a trolley problem?
I wasn't arguing that the virus response was a trolley problem, but that it was a "trolley problem problem." Your response is almost too good to be true.
Ouch. I guess I fell into the trap where one reads a word and doesn't notice it repeats twice (trolley problem problem).
But I think I still got the gist of your argument: that people supporting lockdown are applying trolley problem logic ("hurt the economical bystanders to save sick people").
What I wrote in response applies. That isn't trolley logic since the goods are noncomparable. I'll go further: financial harms in general aren't a good fit for a trolley problem, since one can always compensate later.
> the right thing to do, whenever there is a danger of collision, is to brake as fast as possible.
This actually isn't true in every scenario. What if there is a car following close behind? What if the car behind has more passengers than the car in front of you? What if swerving will kill pedestrians, but less than the passengers in the car you're about to hit?
There are a lot of variables, and it's really not as straightforward as you make it sound
That's the entire point of Stallman's essay. Those what-ifs (EDIT: well, #2 and #3) aren't things the self-driving car (or a human driver, in most cases) will compute in the milliseconds it takes to make a decision.
> What if there is a car following close behind?
They may be able to stop fast enough. Whereas if you intentionally don't brake hard enough and strike the car in front of you, you (the computer) have traded a possibility for a certainty. You control your braking and following parameters, they control theirs. (EDIT: Yeah, if there's a lot of distance in front of you, then the car can make use of that. It doesn't need to screech to a halt in every case. But this isn't trolley-car territory, it's just the standard way adaptive cruise and emergency braking already works)
> What if the car behind has more passengers than the car in front of you?
Irrelevant. It is not the car's job to be counting the number of occupants in every other car on the highway.
> What if swerving will kill pedestrians, but less than the passengers in the car you're about to hit?
I want the self-driving car to not make that choice. The passengers in the car, however numerous, are much more likely to survive impact than unprotected pedestrians.
It really is straightforward. Don't burden self-driving cars with silly edge-cases. Apply the most effective mitigation known in the general sense, and see that everyone is happy because they can anticipate the car's choices a bit better.
> That's the entire point of Stallman's essay. Those what-ifs (EDIT: well, #2 and #3) aren't things the self-driving car (or a human driver, in most cases) will compute in the milliseconds it takes to make a decision.
It could be computed in the seconds, or minutes which lead up to the incident. A smart driver, human or AI, has a ready, already computed backup plan.
You count the number of occupants in the car behind you, and you factor that into your braking strategy? Or, do you count occupants in the car ahead, and prepare to do the pedestrian math if a quick avoidance is needed?
No, of course not. If I'm about to rear-end another car, even if it's a 1973 Ford Pinto with 4 occupants, I'm not going to swerve toward sidewalk pedestrians in lieu of slamming on the brakes. Even if I have all the information ahead of time on that car's faults and occupancy level. Nor would I want a computer to make that decision either.
You're not wrong. These are things a computer could do. But I'd say that they're not something we want them to be doing. We don't need programmers debating trolley-problem scenarios. They can focus on just reducing or eliminating impacts directly caused by the system they're building. In other words, just hit the brakes.
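For what it's worth, the policy being argued for here fits in a few lines. The interface below (CollisionRisk, Car) is invented purely for illustration; no real AV stack looks like this.

```python
# Minimal sketch of the "just hit the brakes" policy argued for above.
from dataclasses import dataclass

@dataclass
class CollisionRisk:
    imminent: bool            # is a collision predicted within the stopping envelope?
    time_to_impact_s: float   # predicted time to impact, for logging only

class Car:
    def apply_max_braking(self) -> None:
        print("braking at maximum safe deceleration")

    def hold_lane(self) -> None:
        print("no evasive swerve: hold the lane")

def emergency_response(car: Car, risk: CollisionRisk) -> None:
    """No occupant counting, no swerve-vs-pedestrian math: if a collision is
    imminent, brake as hard as possible and stay predictable."""
    if risk.imminent:
        car.apply_max_braking()
    car.hold_lane()

emergency_response(Car(), CollisionRisk(imminent=True, time_to_impact_s=1.2))
```

The design choice is exactly the one in the comment: keep the car's behavior simple and predictable so everyone around it can anticipate what it will do.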
I agree. I’m just asserting that the point made by Stallman about there not being enough computational capacity in the milliseconds before a crash to compute the trolley problem is a straw man.
Have you worked in AV? “Save Driver” vs “Save Pedestrian” isn’t actually a problem that comes up in the real world. There’s never a situation where you’d use a pedestrian as a meat bag to absorb a collision or fling a driver over a median to save a pedestrian. The dilemma is completely a philosophical construct, no self driving car will ever need to solve it. In reality, saving drivers and saving pedestrians are completely orthogonal problems.
That's not the typical problem in AV's trolley problem, nobody argues (publicly) "the driver should always be first". You don't even need the driver to play a role, you can use an AV truck, train or drone.
"Your brakes are gone, you cannot decrease speed. You are on a collision course with 10 people who are crossing the street. You can slightly alter your direction to run over one person instead."
You can alter that as required, put a dog in the other lane. Or an old person. If you want to go deeper: put a light car in the other lane with the assumption that the driver of that car has a 10% chance to survive.
Give http://moralmachine.mit.edu/ a try, they've set up a few different scenarios. "None of this will ever happen, the car will simply learn to fly and avoid the accident" is not an option unfortunately.
What I am saying is that for both meatbag and AI drivers, the “brakes failed, headed towards a crowd” situation is so rare that the driver has neither the time nor the experience/training data to make any kind of well-reasoned ethical decision, and basically ends up performing an action that is close to random; and the situation is so rare that the impact isn’t relevant when policymaking/regulating AVs. It’s like the difference between deaths from drunk drivers and deaths from drivers being stung by bees (which do exist, but not enough to make rules like every car needs a bee repellent).
Essentially: it's rare, we shouldn't worry about it?
I don't think that's a valid argument. A driver may not have a lot of time to ponder the consequences of his actions (but you can create a hypothetical that would give him a few seconds, of course, but hypothetical situations are an issue in this thread), but that doesn't mean the AV that's not as limited in recognition and calculations shouldn't calculate probable outcomes and make a decision, even if it's rare. And once you agree that it should if it can, there's no sense in saying "but we shouldn't talk about what it should optimize for".
And of course, we can generally expect that if it is possible, that it will be done and Mercedes will ask: who's paying us, the person in the car or the person outside the car, so they will gravitate towards saving their customer. At that point you have made the decision, but you've avoided having a debate over ethics.
Essentially: the situation only exists in the contrived world of ethicists, so the problem may be important for ethicists to solve but isn't important for AV engineers to solve.
It's like asking a parent the question, "If both your kids were falling off a cliff at the exact same time and you could only save one of them, which one would you save." The parent says "well whichever one is closer to me." The ethicist says "No they are the exact same distance from you, with the one near your non-dominant arm exactly the amount closer to you to cancel out the advantage of your dominant arm." Well, Okay. It's a stressful question to be asked and to try to answer, but the answer in the actual real world is that the situation doesn't actually exist, and even if you were to replay the universe 5 quintillion times until the situation came up, the flesh and blood parent would panic in the moment and save neither one. So ethicists can debate the situation ad infinitum and entertain themselves doing so, but the situation only exists in the contrived world, so the answer doesn't matter.
You absolutely cannot imagine a situation where a car will most likely cause damage but the driver (be that a computer or a human) can influence where it hits? And if it can be influenced, why wouldn't you want the least amount of damage vs some random amount of damage?
I'm not talking about Descartes and his all-powerful god that may fool him. "Which of your kids do you send to college if you don't have enough money to send both?" is a similar question, and that may be unthinkable and totally unrealistic for a SWE, but it's a pretty normal situation for lots of people.
I was in just such a conversation the other day, where we were discussing the implications of the second amendment with regards to tyranny in the US. A participant would not allow the conversation to continue without a description of how tyranny would take hold, and no hypothetical was realistic enough. They could not have brought the conversation to a halt faster if it were deliberate. Ultimately, they didn't agree with the point (which was sound) because they believe that our "democracy" is sufficient to prevent tyranny now and forever.
The point of discussions about scenarios whose parameters cannot be known is to find a useful way to elide the unknowns and come to conclusions that we can agree to and understand. Failing to see the forest for the trees is not a problem with the exercise; it's a problem with the participant.
The problem is that there's nothing inherent about a hypothetical that elides irrelevant details; it elides only whatever the author of the hypothetical wants to elide. Hypotheticals can let us agree on one aspect of a situation, but they can't make us agree on how relevant that aspect is to the situation we're actually interested in. This is why the trolley problem as applied to autonomous driving is so problematic: whilst it's an interesting thought experiment, it's totally irrelevant to the person actually designing a self-driving car. It's a perfectly valid position to take that a situation is complex enough that hypotheticals aren't ever going to be helpful in reasoning about it.
It is sad that I am far more willing to believe those events plausible. There is a saying about attributing to stupidity before malice, and I think that also applies to the foundation of how such an outcome could occur.
One challenge that I have always had with the trolley problem is that in general solutions are not symmetrical. By this, I mean that as an observer, I might conclude that several different choices made by the subject of the experiment were ethical. For instance, if the subject threw the switch and caused one person to be killed (instead of multiple), I would see that as ethical. I would also see it as ethical if the subject did nothing, either out of shock or out of refusal to actively participate in anyone’s death. So I guess the problem is that I’m not convinced that the thought experiment generates objectively ethical outcomes, only subjective ones.
> So I guess the problem is that I’m not convinced that the thought experiment generates objectively ethical outcomes, only subjective ones.
But that's also literally what it's supposed to do. It's a device to poke around and figure out your positions, e.g. are you into consequentialism or do you prefer deontology, are there circumstances that can change your position etc.
I often find people (not you, people in general) rejecting thought experiments, because they are not comfortable with their intuitive decisions and they feel that it will be exposed when they're forced to apply it to a hypothetical situation that does not leave them an easy out ("this wouldn't happen, I don't walk near train tracks, ever").
Absolutely bizarre to see a philosopher writing at some length that some questions are bad because they are hard to answer cleanly. Does he know what his profession is?
It's also poor form to sling broad accusations about what "some" philosophers do without citing any examples.
>Absolutely bizarre to see a philosopher writing at some length that some questions are bad because they are hard to answer cleanly.
It's hardly bizarre. Wittgenstein, one of the most famous philosophers of the 20th century, wrote that most philosophy was incorrect because philosophers failed to clearly define their terms.
"The right method of philosophy would be this. To say nothing except what can be said, i.e. the propositions of natural science, i.e. something that has nothing to do with philosophy: and then always, when someone else wished to say something metaphysical, to demonstrate to him that he had given no meaning to certain signs in his propositions. This method would be unsatisfying to the other — he would not have the feeling that we were teaching him philosophy — but it would be the only strictly correct method".
Why? Why is questioning presumptions bad? If the accusations end up weak, then it proves philosophy still has it right. On the other hand, it could cause a rethinking and perhaps find some merit in the criticism.
I dislike these types of thought experiments because they always involve a preexisting problem where a random bystander has to become a hero that has to save everyone. There are a lot of situations in which fate is just playing out and you can't do anything about it and struggling will only make everything worse. In reality, the hero doesn't actually know which lever will actually save lives and he also doesn't know if the people he is saving are actually in danger.
If the audience cannot enter into the problem, then its value as a tool is diminished.
Case studies are also fraught with peril, but the analysis of where the particulars end and the general principles begin seems the bulk of the exercise anyway.
More than that, they reduce too much of the problem away. It reminds me of the two capacitor paradox [1] which (spoiler alert) only arises because that configuration isn't actually possible to realize in terms of ideal circuit elements.
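For anyone who hasn't run into the two-capacitor paradox, the standard bookkeeping (my summary, not from the linked article) goes like this:

```latex
% Two identical ideal capacitors of capacitance C: one charged to voltage V,
% one empty, connected by ideal (zero-resistance, zero-inductance) wires.
\begin{align*}
  Q_{\text{tot}} &= CV &\Rightarrow\quad V_f &= \frac{Q_{\text{tot}}}{2C} = \frac{V}{2} \\
  E_i &= \tfrac{1}{2} C V^2 & E_f &= 2 \cdot \tfrac{1}{2} C \left(\tfrac{V}{2}\right)^2
        = \tfrac{1}{4} C V^2 = \tfrac{E_i}{2}
\end{align*}
```

Half the energy vanishes even though charge is conserved, and the resolution is that the all-ideal circuit simply can't be built; the "paradox" lives entirely in the idealization, which is the parallel being drawn to the trolley problem's guaranteed outcomes.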
In particular, we spend a lot of time thinking about what would have to be, in most formulations of the problem, a quickly-decided act (otherwise, why not simply untie the people?) and we have absolute certainty given us as to the consequences, vs. all the uncertainty in real life. The problem is used to get rid of those elements as distractions, but they're essential features of people's reasoning (and reasoning ability) so they're not so easily discarded.
The trolley problem is a non-problem. I don't know about you guys, but nobody I know is ever going to buy a car that might decide to kill them to save some stranger(s).
The trolley problem in regards to autonomous cars is a total sideshow when the cars we do have can't reliably tell people from bicycles from other cars.
It's often used as a sort of backdoor to squeeze in the argument that we should let X people die to autonomous cars because in the future they will definitely be safer, for sure. (some very shaky priors there)
These arguments are usually introduced with massive assumptions about the applicability and reliability of statistics gathered from cars that basically give up when even slightly challenged with anything remotely ambiguous or difficult (how many people live where there is perfect weather almost every day?), and then are applied to the entire gamut of circumstances human drivers deal with every day, by the millions.
Don’t know why you were modded down; you’re absolutely right. WRT self driving cars, I would never buy or knowingly ride in one which had programming which did not in all cases act to maximize the safety of the occupants. ML is stupid; the last thing I want is for a computer to try to make split second ethical and value judgments about people outside my car when deciding how best to save my life. I don’t want a car driving me off a cliff to avoid hitting a school bus, for instance. Are those kids or dwarves? How many kid lives add up to more value than my own? What if my kid is in the car with me? It quickly becomes obvious that the problem isn’t relevant to self driving cars.
That seems overblown. Would you ever ride in a car with a driver that might sacrifice the lives of everyone in the car to save 1,000 people?
I suspect that lots of people have actively made decisions while driving their own car that endangers their own life in order to potentially save the life of someone else.
Well, I would not want to get into the car of someone with a driving habit of risking the lives of 1000 people in the first place.
The real solution to this is to avoid situations where such choice must be made, by keeping safe distances and speed.
Good drivers do this naturally. AI must do this too.
That’s never part of the thought experiment. The trolley problem isn’t asking whether you should risk one life or risk 100 lives. It’s asking what you would do in a situation that you were placed into.
I'm not sure I'd make that assumption. I'm sure lots of people have made a split-second reflexive decision to avoid hitting a person/people/deer in a collision that still carries some danger to themselves. I wonder how many have made a deliberate decision to drive off the road to considerable hazard (e.g. off a cliff) in order to avoid hitting someone at no danger to themselves.
Very close analogues to the trolley problem exist. I remember reading about a hospital which replaced the staff that measured out drug prescriptions with a computerized system. It made far fewer mistakes but, critically, it did allow for some mistakes which human operators would have caught and corrected. When I read the article I remember being annoyed that they didn’t mention the trolley problem at all because it was an almost perfect analogue.
I read about a system like that, and the thing is that it not only caused different mistakes, but the mistakes are scary because they are potentially unbounded as well as novel and extreme.
When the machine/program seems perfect, then people get conditioned not to question it. There was a case where a UI issue caused the selection of an adjacent dosage to what was intended, and a nurse gave a patient something like 100 pills.
There were several checks in theory, but everybody was like "gee, this seems weird, but if the computer says so, it must be right".
Automation inherently multiplies actions, so the fear is that bugs don't just cause a single error, but many, until someone notices and has a chance to react.
You face the trolley problem every day. You keep far more money than you need, knowing that people will die today without it. You would categorically dismiss a law requiring healthy people to donate organs for those who need them.
The point isn't to change your mind about that. The point is to consider an example that has some similarities and ask why you reach an opposite conclusion. The insights gained from that help clarify the human moral instinct. The project is to understand how we think and feel, not to dictate what we should think or feel.
Could this be due to the rise of pure mathematics in influencing philosophers? In pure math, counterexamples play a very important role: show one counterexample, disprove an entire claim.
The violinist in the article is clearly trying to be a pure-math-like counterexample, but to me, it misses the emotional aspect of pregnancy (at the least): a fetus is not a random person, it is (partly) you, and furthermore, many would say society (and the species) depend on having babies, whereas society and the species do not depend on violin players.
Not saying I agree with these arguments only that the real world is substantially messier than pure math, and thus, pure math thinking may stumble when applied to real world problems.
Tl;dr: humans are not rational in our beliefs, and the continued attempts in all disciplines to assume we are are, well, irrational.
I've rarely heard someone make the counter-point that a fetus is technically half "you", which really does fundamentally change the violinist example.
The violinist experiment does have a lot of holes in it, but I think that one in particular almost turns it on its head, particularly because it circumvents the crux of the problem around consent (which I think is at the core of the problem).
It would seem you would have to change the thought experiment to have the violinist actually be related to you (say a sibling). Now, would a person feel _as_ upset that they had to allow their sibling to use their kidneys for nine months in order to stay alive? That really changes it.
A fetus is half another person, whose "selfish genes" would benefit by the fetus taking an excessive share of resources. So there's an inherent tension that isn't determined by culture or moral theories.
But civilisation doesn’t depend on maximising births, and would actually probably benefit from more of the resource-intensive-to-produce artists, musicians, dancers, etc. - that is the other side of that argument.
> Had the context been one in which a hitman was preparing to take a hidden shot at a target, and the target then died of a sudden cardiac arrest as the hitman remained out of sight, it’s far from clear that killing and letting die would be equally bad.
But the two situations are not equivalent. The hitman is almost certainly not in a position to save his intended victim so he is not in any meaningful sense letting him die.
I think it's a silly response to the problem. Yes, if you were near the lever and you happened to know one of the groups, then you might argue for pulling the lever one way or the other.
But is that simply you justifying your actions?
The problem isn't absurd when posed to humans, because consider the scenario where you don't know anyone on either track and you need to make a decision in a matter of seconds. That's the point.
If my mother was on one of the tracks, I might justify my actions to pull the lever and have it veer into the other group, but that isn't necessarily a morally correct decision.
The author in the article says that the problem is absurd when posed to humans, then goes on to apply the same principles of the problem to AI and cars, but does not say that the problem is also absurd with regards to AI, despite coming to the same conclusion.
This is where the problem is: life is way too complex and nuanced to distill down to such a simplistic scenario. Measuring morality is not an instantaneous action.
Ethics thought experiments are interesting and sometimes fun and you can't deny the value of getting people thinking about choices and behavior.
But it's always seemed to me that they, along with ethics in general, miss the point. I don't think absolute right and wrong, or even absolute "lesser of two evils" is a particularly useful goal.
It's the contextual framework that matters... Values, priorities, fears, desires, needs, the things that comprise a person's identity and worldview. Those things are going to win over ethics every time when real world decisions are being made.
IMO there's a lot more value in exploring those things, as opposed to ethics, if the ultimate goal is to impact the behavior of individuals in a society.
The irony here is you just pigeonholed "ethics" and then made several ethical arguments yourself while saying you don't care about ethics!
To add some formal language:
> I don't think absolute right and wrong, or even absolute "lesser of two evils" is a particularly useful goal.
This sounds like a metaethical argument - what underpins ethics is a very important question that a lot of people miss, but that argument is actually the base of ethical stances. The formal metaethical belief here is objectivism. Some other options are things like subjectivism, cultural relativism, error theory, and non-cognitivism.[1]
> It's the contextual framework that matters... Values, priorities, fears, desires, needs, the things that comprise a person's identity and worldview. Those things are going to win over ethics every time when real world decisions are being made.
Ethics (once you get past metaethics) is almost always built around a framework, and focuses exactly on everything you listed. Aristotle explicitly focused on values to build his ethical framework. Foucault talks a lot about fear and power. Most of consequentialism and utilitarianism focuses on needs and desires. Rawls and egalitarianism is an example of talking about priorities.
To me, it sounds like you have a gripe with the impracticality of philosophers talking about ethics but care quite a lot about ethics itself. If so, I'd be with you strongly on both counts.
> Those things are going to win over ethics every time when real world decisions are being made.
And the point of the thought experiment is to uncover those things, and they make up that person's personal ethics. Maybe their ethics are "I'll always save my family, fuck everybody else", maybe their ethics are "I only save people who share my skin color", or maybe they are "I always maximize the damage, I hate people", using hypothetical situations allows you to figure that out without taking that person on daily walks and sacrificing lots of people on the train tracks to find out.
The problem here is reminiscent of the classic short story "The Cold Equations": A young woman is fooling around near a space ship and ends up accidentally stowing away on a craft needed to move serum to a colony in the grips of disease. The mass on the ship is accounted for to the gram (the gramme, even!) so her excess mass means the ship no longer has the fuel to make it to the colony. The dashing space hero has to jettison the innocent young woman in order to save untold numbers of people.
OK, what's blindly, blisteringly wrong here? First, the idea that a space ship small enough its fuel would be rationed out by the gram would have enough room in the crew compartment for someone to hide in is ludicrous. Second, launching without a checklist? Are you out of your tiny little mind? Third, allowing unknown people to bring unknown contaminants into a space ship? Having a "KEEP OUT" sign doesn't save the colonists from another plague, now, does it?
The moral of the story is that it's hard to keep your mind on the supposed lesson when the flaws jump up and down and yell at you.
Taking a different tack, by thinking too hard about the consequences of the thought experiment instead of the lead-up to it, there's the Jew in the attic. You know how it goes: A Jewish family is hidden in your attic or guest bedroom or someplace and the Nazis come knocking. Do you lie to save the Jews? "Of course", you say, and come up with a nice logical argument for why your ethical system demands you lie in this instance. All functional ethical systems can come up with such an argument with minimal fuss. However: The Nazis were not very nice, you know. If they thought a town was holding out on them, they'd initiate reprisals. They'd kill a whole town in a fit of fascist pique. Saving a half-dozen Jews could doom a few thousand innocents, likely including the original Jews. But that's out of scope for the thought experiment.
SpaceX just launched a shuttle into orbit that had a 1 second window for launch. They had zero extra fuel on board. It had 2 passengers and room for 2 more.
It's not ridiculous at all to think that they'd fuel a supply ship with exactly the fuel it needed and not more, and that that supply ship might have some empty space in it, especially if it were very, very large.
Nitpick: it had excess fuel on board. Even beyond any institutional safety margins, the first stage landed on the drone ship. That portion of the mission required the stage carry fuel in excess of the requirements for its primary mission.
That's a famous story I've heard of, not sure if I ever read it...but I feel like it could be compared and contrasted with "The Only Neat Thing To Do", kind of a different take on a young woman looking for adventure who ends up paying the ultimate price for the good of humanity.
Edit:
Haha, I clicked on a link to goodreads and the second comment compares it to "The Cold Equations".
The story of Lidice doesn't precisely fit the scenario, but is close enough in spirit that I'd call it a match:
The Lidice massacre was the complete destruction of the village of Lidice, in the Protectorate of Bohemia and Moravia, now the Czech Republic, in June 1942 on orders from Adolf Hitler and Reichsführer-SS Heinrich Himmler.
In reprisal for the assassination of Reich Protector Reinhard Heydrich in the late spring of 1942, all 173 males from the village who were over 15 years of age were executed on 10 June 1942. A further 11 men from the village but who were not present at the time, were later arrested and executed soon afterwards, along with several others who were already under arrest.[2] The 184 women and 88 children were deported to concentration camps; a few children who were considered racially suitable and thus eligible for Germanisation were handed over to SS families and the rest were sent to the Chełmno extermination camp where they were gassed.
Operation Anthropoid was the assassination of Reinhard Heydrich.
> More than 13,000 people were arrested, including Jan Kubiš' girlfriend Anna Malinová, who died in the Mauthausen-Gusen concentration camp. First Lieutenant Adolf Opálka's aunt Marie Opálková was executed in the Mauthausen camp on 24 October 1942; his father Viktor Jarolím was also killed.[52][53] According to one estimate, 5,000 people were murdered in the reprisals.[54]
> In February 1944, the SS Division Das Reich was stationed in the Southern French town of Valence-d'Agen,[1] north of Toulouse, waiting to be resupplied with new equipment and fresh troops. Following the Allied Normandy landings in June 1944, the division was ordered north to help stop the Allied advance. One of its units was the 4th SS Panzer Grenadier Regiment ("Der Führer"). Its staff included regimental commander SS-Standartenführer Sylvester Stadler, SS-Sturmbannführer Adolf Diekmann commanding the 1st Battalion and SS-Sturmbannführer Otto Weidinger, Stadler's designated successor who was with the regiment for familiarisation. Command passed to Weidinger on 14 June.[2]
> Early on the morning of 10 June 1944, Diekmann informed Weidinger that he had been approached by two members of the Milice, a collaborator paramilitary force of the Vichy Regime. They claimed that a Waffen-SS officer was being held prisoner by the Resistance in Oradour-sur-Vayres, a nearby village. The captured officer was claimed to be SS-Sturmbannführer Helmut Kämpfe, commander of the 2nd SS Panzer Reconnaissance Battalion (also part of the Das Reich division). He may have been captured by the Maquis du Limousin the day before.
> On 10 June, Diekmann's battalion sealed off Oradour-sur-Glane and ordered everyone within to assemble in the village square to have their identity papers examined. This included six non-residents who happened to be bicycling through the village when the SS unit arrived. The women and children were locked in the church, and the village was looted. The men were led to six barns and sheds, where machine guns were already in place.
[snip]
> Several days later, the survivors were allowed to bury the 642 dead inhabitants of Oradour-sur-Glane who had been killed in just a few hours. Adolf Diekmann said the atrocity was in retaliation for the partisan activity in nearby Tulle and the kidnapping of an SS commander, Helmut Kämpfe.
You reject science fiction for being too unrealistic in its imperfect space travel engineering, meanwhile NASA blew up two crewed Space Shuttles by launching shuttles that they knew had failed components.
The point is less that I find it unimaginable that space organizations have flaws, and more that those flaws spoil the supposed lesson by diffusing the responsibility: The story only works if the young woman is entirely responsible for her own death, by placing herself in a position where the laws of physics and morality dictate she has to die to save many others. If she died because the NASA-equivalent was run by buffoons, the moral lesson is lost and she becomes a murder victim as opposed to an inadvertent suicide.
I'm stunned that this article was published just a few days ago, because recent events have illustrated just why it can be important to think about strange, unrealistic hypotheticals. How much easier would lockdown debates have been, if we had the conceptual frameworks in place to talk frankly about how many lives must be saved to justify suspending certain freedoms?
The value in these things is being able to consistently extrapolate from weightings, and to understand all of the consequences of different weights. Most people just have vague moral intuitions that lack consistency.
I've been thinking about the Trolley Problem these days as well. It's not just "How many lives must be saved to justify suspending certain freedoms?". More importantly, it's "How many lives must be forfeit to save someone from dying of COVID-19?". Based on the collateral damage estimates I'm seeing, we might be forfeiting five or ten for each save. And if one switches to quality-life-years as a measure, the picture might be even worse.