Philosophy Experiments
  • EvilRedEye
    Vela wrote:
    The seed people scenario didn't require you to care for them until they turn 18, get a job and move out though. It was really only focussed on the 9 month gestational period 'inconvenience' aspect.

    IIRC the first question involving seed people implied or outright stated you'd be responsible for the seed person while the second one just didn't mention it.
    "ERE's like Mr. Muscle, he loves the things he hates"
  • See maybe that's a problem of association on my part. I've been playing Pikmin recently, and, well, some losses are necessary.
    "Sometimes it's better to light a flamethrower than curse the darkness." ― Terry Pratchett
  • While I can sympathise that people have a problem accepting that surreal scenarios are a valid test for their reasoning, consider this:

    You have two dragons. They mate and give you another dragon. How many dragons do you now have?

    Get the point?
    It makes no difference that this scenario could never happen, the thing being tested is your grasp of mathematical logic regardless of whether what you're applying it to is imaginary or real.

    Impossible thought experiments have been used as a tool to investigate the consequences of theories for millennia. Einstein had some famous thought experiments that involved being able to move at the speed of light and observe what an adjacent light beam was doing. These experiments are physical impossibilities in reality, but because the principles being investigated are not corrupted by hypothetical light-speed observers, the experiments are legitimate tests for the ideas being applied. It doesn't matter if you pretend to be a magical dragon moving at light speed, able to observe how light behaves.

    Also, just because the details of a situation are unknowable does not mean they cannot exist.
    Let us take the fat man example.

    Scenario -A-:
    The fat man claims to have planted a bomb that will explode in 24 hours killing 1M people. Is it ok to torture him to find out?

    Scenario -B-:
    Same scenario as -A- but you know that the bomb is real and the fat man knows where it is. Now is it ok to torture him to find out?

    Scenario -C-:
    Same scenario as -B- but you also know that the fat man cannot be tricked into revealing the location of the bomb, nor is it possible to appeal to his better nature, nor is it possible to persuade him that he was wrong to plant the bomb in the first place, and you happen to know that the fat man is likely to give up the location if tortured. Now is it ok to torture him to find out?

    Scenario -B- is just as absurd as scenario -C-: there is no way you could know that the fat man is telling the truth about the bomb in the first place, yet I don't recall anyone objecting to it. What you need to understand is that all of the details in scenario -C- might be true in scenario -A-, you just don't know about them. Also, it doesn't matter whether you think that torture is Always/Sometimes/Never good, because the questions are there to decipher whether you are making those decisions consistently, and on what grounds, if any, you are changing them.

    So long as the principles being examined are analogous enough to reality that the mechanisms of thought that guide your answers can be tested for consistency, then there is no problem.
  • It's nothing to do with the scenarios being fantasies - you can make them about dragons or whatever. It's to do with artificially narrowing courses of action. If all the things you would actually really do were made arbitrarily impossible, what would you do then? Who cares? How is anyone supposed to say what they would do if they couldn't do the things that they would do? There's no logic there.
  • EvilRedEye
    so you dont agree with abortion, ok. now imagine that you're in a parallel world, an abortion world, and everyone is aborted and all there is to do is abortions and all you can do is abortions, would you do an abortion? lol ur inconsistent now
  • JonB wrote:
    It's nothing to do with the scenarios being fantasies - you can make them about dragons or whatever. It's to do with artificially narrowing courses of action. If all the things you would actually really do were made arbitrarily impossible, what would you do then? Who cares? How is anyone supposed to say what they would do if they couldn't do the things that they would do? There's no logic there.
    It matters because it is only by isolating the parameters that determine your actions that one can understand how you are going about making moral decisions.
    For example:
    -Let's assume someone states that they think torture is morally unacceptable in any REAL circumstance
    -Let's assume that given scenario -A- (as I stated it in my last post) they think that torture would be morally unacceptable because they cannot know if the fat man is telling the truth.

    -Let's assume that given scenario -B- (as I stated it in my last post) they think that torture would be morally unacceptable because even though they know the fat man is responsible for a real bomb, they cannot know if the fat man is likely to give up the information under torture.  Immediately we have discovered that this person's view of morality is consistent with their claim that torture is morally unacceptable, that they base their morality on the fact that there are unknown factors both in reality and in the hypothetical, and that they don't think that responsibility for putting people in danger is enough of a reason to torture someone.

    -Let's assume that given scenario -C- (as I stated it in my last post) they think that torture would be morally acceptable because they know it is now likely to work and save lives.  Immediately we have discovered that this person's view of morality is still consistent with their claim that torture is morally unacceptable in the real world, but that they think morality is contingent upon knowledge we don't necessarily have access to about the likelihood of success of torture.

    Now we have established a benchmark for consistency, because when a new set of questions arises that specifically appeals to moral assessments of torture based on responsibility of the culprit, or likelihood of success we can see whether the test subject makes decisions based on the same contingencies.  We can also put those contingencies into scenarios that better reflect reality and see if the test subject remains consistent.

    It's not about the Dragons.
  • Some_Guy wrote:
    -Let's assume that given scenario -C- (as I stated it in my last post) they think that torture would be morally acceptable because they know it is now likely to work and save lives.  Immediately we have discovered that this persons view of morality is still consistent with their claim that torture is morally unacceptable in the real world, but that they think morality is contingent upon knowledge we don't necessarily have access to about the likelihood of success of torture.
    All you've discovered is that morality is contingent on the way things actually work in reality, which isn't much of a discovery. You could still just skip this stage and go straight to scenarios that better reflect reality without losing anything.
    Perhaps this is true sometimes, but people have all sorts of biases that kick into effect when they are only dealing with scenarios they can relate to.  The whole point of a surreal hypothetical is to isolate the mode of reasoning in a restricted abstract reality.  This also establishes a baseline for consistency of reasoning, because if the test subject thinks that the morality of torture is based on the likelihood of success in an abstract reality, but when a real-life scenario is given that relies upon that same contingency they suddenly change their reasoning, then they are inconsistent in their reasoning.
  • Some_guy wrote:
    It matters because it is only by isolating the parameters that determine your actions that one can understand how you are going about making moral decisions.

    This is exactly the point. People are thinking of them as "tests" with right and wrong answers. They are designed to make you think about the moral choices you're making and what constitutes morality and how context can define that.

    The most basic example I can think of is that most people would agree cannibalism is wrong. But given a life or death scenario, would you resort to it to survive?

    If you say yes, you're admitting that there are certain situations whereby your held moral beliefs can be changed.

    The questions are designed to be as morally neutral as possible to try and obfuscate common scenarios whereby moral beliefs can be challenged. So instead of saying "A woman comes in for an abortion and oh, by the way you have to do it. Do you?" it asks you about people seeds.

    Instead of saying “do you torture this guy, knowing what you know about torture being not so effective?” it asks you “Do you torture this guy knowing it will definitely work and everyone will be saved?”

    You need to be removed from some sense of reality in order to get to the meat of the question.
  • +1

    You may have a Dragon.
  • Some_Guy wrote:
    Perhaps this is true sometimes, but people have all sorts of biases that kick into effect when they are only dealing with scenarios they can relate to.  The whole point of a surreal hypothetical is to isolate the mode of reasoning in an abstract reality.  This also establishes a baseline for consistency of reasoning, because if the test subject thinks that the morality of torture is based on the likelihood of success in an abstract reality, but when a real-life scenario is given that relies upon that same contingency they suddenly change their reasoning, then they are inconsistent in their reasoning.
    But... you haven't established that the likelihood of success is what is contingent - it could be the fact that you eliminated all other options. And... there is no real life scenario that relies upon the same contingency. If there were you wouldn't have needed the surreal scenario in the first place.
    Eliminating all other options is the point.  If the test subject changed their mind about whether torture is morally acceptable because we eliminated all options other than 'likelihood of success', then we have established that their concept of morality is, on this point, contingent upon it (unless they are lying in their answers).  If they didn't change their mind, then we have established that their sense of morality is not contingent upon 'likelihood of success'.

    Now we can test to see how consistent they are with their reasoning by comparing how they answered other questions that rely upon the same variable.  For the purposes of testing consistency of reasoning it is irrelevant if the scenario could happen or can only be imagined to happen in a restricted abstract reality.  Either one's reasoning is consistent, or it is not.

    Also, if one's sense of morality is contingent upon 'likelihood of success' in one scenario, then we can test to see if their morality is always contingent upon the same variable, because if it isn't, then they are basing their morality on inconsistent grounds.

    Furthermore, I think it's worth adding that all concepts of reality are abstract.  Our brains take in perceptions about reality and abstract them into a model which we use to navigate our way around it.  Conceptualizing 'people seeds' is just as abstracted as conceptualizing abortion, which is something almost no member of this forum could even experience for themselves - and yet every male to post so far has had a moral opinion on it and hasn't objected that it is too abstract an idea.  Both scenarios are asking you to evaluate a concept because we understand all of reality as a concept, and it is how we go about rationalizing our evaluations of it that is being tested.
  • Some_Guy wrote:
    Eliminating all other options is the point.  If the test subject changed their mind about whether torture is morally acceptable because we eliminated all options other than 'likelihood of success', then we have established that their concept of morality is, on this point, contingent upon it (unless they are lying in their answers).  If they didn't change their mind, then we have established that their sense of morality is not contingent upon 'likelihood of success'.

    ...

    Also, if one's sense of morality is contingent upon 'likelihood of success' in one scenario, then we can test to see if their morality is always contingent upon the same variable, because if it isn't, then they are basing their morality on inconsistent grounds.
    I could possibly answer yes to using torture in that scenario, without it necessarily meaning my sense of morality regarding torture is contingent on likelihood of success. As I said in the last post, it could be contingent on having far more preferable options. It's not that it's likely to succeed, it's that in that scenario it's the only possible course of action other than doing nothing.

    If someone still says no to torture, you can infer their sense of morality is not contingent on likelihood of success. But it does not automatically follow that if they say yes to torture in that scenario their morality is contingent on likelihood of success. You've made that jump in the two sentences bolded in the quote.
  • People's morality in thought experiments doesn't always match up with reality though.

    Everyone says they would help a stranger in trouble, yet we have the bystander effect where dozens will walk past an injured person and not even acknowledge them.
  • ffs. Had a longish post zapped. will try again later.
    I'm still great and you still love it.
  • Anyhoo, the gist of the long post I wrote was:
    People's morality in thought experiments doesn't always match up with reality though.

    Why not? That's the crux of things. If these tests, as a collection, not as individual entities, do a good job of stripping away fluff/biases, then inconsistencies may well be an issue that lies with the subject. And they may want to think about why it is that they're being inconsistent.

    However, I'm not out of sympathy with what Jon is saying above. I think there are holes/poorly written bits in some of these tests, and individually they can butt up against other biases (eg moto seems to have beef with the more abstract/fantasy/SF nature of some of these experiments; in that sense, I don't think the setting of the scenario is doing its job, though I'm not sure that's anyone's fault). But I think SG has laid out pretty well how and why they can be useful and important, and how the methodology should work.

    If one more person points out that "life is more complicated than that" I may have to punch them in the fuck.

  • JonB wrote:
    I could possibly answer yes to using torture in that scenario, without it necessarily meaning my sense of morality regarding torture is contingent on likelihood of success. As I said in the last post, it could be contingent on having far more preferable options. It's not that it's likely to succeed, it's that in that scenario it's the only possible course of action other than doing nothing. If someone still says no to torture, you can infer their sense of morality is not contingent on likelihood of success. But it does not automatically follow that if they say yes to torture in that scenario their morality is contingent on likelihood of success. You've made that jump in the two sentences bolded in the quote.

    That's disingenuous. If you choose to torture in a scenario where the only course of action that might yield information from the fat man is torture, then you are torturing him because there is some likelihood of success, are you not? You would not be torturing him if there were no likelihood of success, because then the scenario would literally offer no course of action that might succeed. Also, doing nothing is a real alternative.

    The argument “I'd torture someone if I knew it was likely to succeed and it was my only option” is clearly contingent upon likelihood of success when all other options have been eliminated – but we cannot know that it is only because all other options have been eliminated. In order to get that information we would need to ask another hypothetical question that has a likelihood of success for torture and a bunch of other non-torture options. And even if it turned out that their willingness to torture was because all other options were eliminated, the fact remains that they value saving lives more than they value the idea that we ought not to torture people responsible for putting those lives in danger, in order to save them; because torturing him might succeed.
    Success is still the contingency, but only in a specified context.

    Not necessarily to do with you Jon, but in the broader spectrum of objections in this thread I think that this is a common theme:

    “This scenario is not consistent with apparent reality therefore my rationalisations don't have to be consistent with ones I would make in reality”

    The implication being that any inconsistency in their thinking is forced by the scenario and not their thinking. This is equivocating the “consistency to reality”, which they are not responsible for, with the “consistency of their own thinking”, which they are responsible for. One cannot shirk responsibility for one's own thoughts just because a scenario has changed. It doesn't matter if a scenario is completely analogous to reality, or deliberately restricted in its analogy – providing the scenario is self-coherent, you are responsible for your rationalisations. Logic does not cease to function just because we are entertaining hypotheticals; in fact logic is used to measure the consistency of any hypothetical. That is its function. Any self-coherent scenario might be true, but not all self-coherent scenarios are true. If your reasoning is inconsistent, then you are responsible for its inconsistency.
  • Morality is contingent on reality. If reality is different so is morality. I wouldn't be surprised to find inconsistency in such conditions and I wouldn't read much into it either.
  • I think there is more to be said about these hypotheticals from the perspective of psychology than there is from the perspective of philosophy. Not so much a matter of what would you do in scenario a, b, or c but rather why you would make decisions x, y and z and what factors are under consideration. There are rather large grey areas in each scenario, and in the real world it's going to be decided by all sorts of factors these thought experiments cannot account for (brain chemistry, what you had for breakfast, alert levels and who exactly is at risk in each scenario, and whether they are people you care about).

    I think the closest analogy I can make is the difference between string theory and observational astronomy. No matter what string theorists assume supernova brightness should be at a given distance, observation and experiment will show them to be wrong every single time.
  • JonB wrote:
    Morality is contingent on reality. If reality is different so is morality. I wouldn't be surprised to find inconsistency in such conditions and I wouldn't read much into it either.

    And how do you know that your perceptions of reality actually reflect reality accurately? Considering that all concepts of reality are abstract and your own ideas about what reality is have likely changed in your own lifetime, and yet has what is real changed? And have you not based decisions on your own faulty abstractions?  Why then would it be any less a legitimate test of your rationalization to see how it holds up to yet another abstraction of reality?  Either you are consistent in your approach to determining what you think is moral, or you are not.
    I think in a real situation, if they were holding the pliers or electrical terminal (or even perhaps doing this by proxy), people would be squeamish at the thought of putting another person through torture. However, it is similarly likely that if you asked that same squeamish person whether it was okay for torture to occur to save a million people, they might say yes.

    The questions help find the line where the decisions are based on logical thought or on instinct and feeling.
  • I think it all hinges on the probability factor.

    The scenario says the man has a 75% chance of revealing the location of the bomb if he is tortured.
    The quiz opens with a question whether or not you think torture is ever morally acceptable.

    The obvious contradiction (for the quiz taker, not the quiz) is when someone starts thinking along the lines of probability.

    If there was a 5% chance he would tell the truth about where the bomb is whilst under torture, few people would go ahead with it.

    If there was a 95% chance he would reveal the location, I suspect most people would say torture is justified in that case.

    The interesting bit to me is where the threshold lies at which we go from a minority of people saying it's ok to torture him to a majority.
  • Some_Guy wrote:
    Morality is contingent on reality. If reality is different so is morality. I wouldn't be surprised to find inconsistency in such conditions and I wouldn't read much into it either.
    And how do you know that your perceptions of reality actually reflect reality accurately? Considering that all concepts of reality are abstract and your own ideas about what reality is have likely changed in your own lifetime, and yet has what is real changed? And have you not based decisions on your own faulty abstractions?  Why then would it be any less a legitimate test of your rationalization to see how it holds up to yet another abstraction of reality?  Either you are consistent in your approach to determining what you think is moral, or you are not.
    To use the seed people example again, what I'm saying is that in a world where such a phenomenon existed moral norms would probably differ reflecting that phenomenon. It could be that an abundance of these seeds floating around planting themselves in your house would be commonly viewed as little more than a pest, while at the same time normal human pregnancies still have the same value as they do now. Thus, putting yourself in that situation, it is not inconsistent to say you wouldn't allow those seeds to grow but you think abortion is wrong.

    Same with any of these scenarios. The footballer thing would seem different to how it seems to us now if it actually happened because that would mean we exist in a world where such a blood infection is possible that can only be cured by having another person attached to you for 9 months. If that were so it would be factored in to social norms, and if someone felt the need to kidnap you to force the transfusion that could be because the legal norms did not cater for the needs of the infected. You might then have more sympathy for that person and support their right to kidnap you. Whatever you decide, it's not an analogous scenario to abortion.

    It's the opposite of thinking my perceptions reflect reality accurately - it's an admission that my perceptions are formed (to a great extent) by social circumstances that are already an abstraction of whatever reality exists. Therefore, if I were in a different reality, my perceptions would also change, and with them my moral standards (or at least the application of those standards).
  • Somewhere there's an alternate reality where you can answer hypothetical questions with ease.
  • JonB wrote:
    ...It's the opposite of thinking my perceptions reflect reality accurately - it's an admission that my perceptions are formed (to a great extent) by social circumstances that are already an abstraction of whatever reality exists. Therefore, if I were in a different reality, my perceptions would also change, and with them my moral standards (or at least the application of those standards).

    Are you telling me that you don't think being forced to mentally re-frame your idea of reality does not shed light on how you go about forming moral judgements?

    Just because you recognize that another abstraction of reality is unlikely to ever synchronize with your current view does not mean that your rational mechanisms for discerning moral judgements don't reflect the ones you use in your current view of reality.
  • Some_Guy wrote:
    Are you telling me that you don't think being forced to mentally re-frame your idea of reality does not shed light on how you go about forming moral judgements?
    That I don't think it does not? I think the scenarios we're talking about here fail to take into account how your idea of reality would change. They assume your thinking based on your current social reality would transfer directly to the new reality.
    Just because you recognize that another abstraction of reality is unlikely to ever synchronize with your current view does not mean that your rational mechanisms for discerning moral judgements don't reflect the ones you use in your current view of reality.
    The same rational mechanisms applied in that different reality could lead to results that look inconsistent because they are unavoidably evaluated from the standards of our actual social reality. You cannot extract someone's rational mechanisms from the answers given.

    Say I were to ask if you agreed with slavery, and you answered no. Then I were to ask if you lived on a plantation in the South of the US a few hundred years ago, would you keep slaves, the honest answer would be probably yes - because if you lived then it would be normal, and you would probably be able to rationalise having slaves using the same rational mechanisms that today tell you it's wrong. Back then it was possible, in a way that seems abhorrent now, to live a morally consistent life - loving your family, friends, being part of the community etc - and keep slaves. Then again, you may say yes for some other reason, but I wouldn't know. If I were then to evaluate this answer it would seem inconsistent with your first answer. But in fact there's no way of telling whether it is or not, and it doesn't tell me anything. I cannot evaluate your second answer from its own context.
  • JonB wrote:
    Some_Guy wrote:
    Are you telling me that you don't think being forced to mentally re-frame your idea of reality does not shed light on how you go about forming moral judgements?
    That I don't think it does not? I think the scenarios we're talking about here fail to take into account how your idea of reality would change. They assume your thinking based on your current social reality would transfer directly to the new reality.

    I thought you guys were in danger of talking past each other, but now I think we may have gotten somewhere.

    Surely they "fail" to take into account how your views would change because, yes, they're trying to find out about your thinking within your current social reality?

    I'm loath to accuse anyone in a thread about philosophy of overthinking something, and that mightn't be quite the right phrase, but going into how your mindset might change in a world where there were seed people, for instance, seems like a hypothetical too far.
    Back then it was possible, in a way that seems abhorrent now, to live a morally consistent life - loving your family, friends, being part of the community etc - and keep slaves.

    Surely being able to think it's abhorrent now is effectively the same as being able to work through a hypothetical seed people scenario? If you can make a decision about the morality of slavery 100 years ago, you can make a decision about the morality of seed people in a hypothetical, and your answer will tell you something about your moral opinions in this reality.

    Don't know if I've explained myself at all well, but my spidey sense tingled when I read that para about slavery, and I think the bit I bolded is what is setting it off.

  • Facewon wrote:
    If you can make a decision about the morality of slavery 100 years ago, you can make a decision about the morality about seed people in a hypothetical, and your answer will tell you something about your moral opinions in this reality.
    But you already know about those. You don't need the fantasy scenario to get to them. Have you really learned anything because I think slavery is bad today and I also think it was bad 100 (or a bit more than that) years ago? Or, more importantly, what have you learned if I say slavery is bad but I would have kept slaves in the past? Unless I was changing my thinking on a whim I must have found some way to rationalise that so it doesn't seem contradictory to me, but you have no access to that reasoning. You shouldn't therefore jump to any conclusions about inconsistency or whatever.

    To put it another way, if people weren't 'overthinking' it, it would be surprising to have any 'inconsistencies' at all.
  • JonB wrote:
    ... Or, more importantly, what have you learned if I say slavery is bad but I would have kept slaves in the past? ...
    TBH, my instinctive reaction is "quite a lot actually".
