What would be the harm?
November 27, 2018 at 9:58 am
In Sam Harris' book, The Moral Landscape, he posits that the necessary goal of morals is human well-being. This has been proposed as an objective basis for morals because it is in some sense quantifiable, even if imprecisely. Harris has been criticized for arbitrarily positing human well-being as the ultimate value, but I want to focus on a different aspect of such theories today. If one accepts that harm is the inverse of well-being, and that well-being is the metric by which we measure the good, then harm is definitely bad.
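To make the idea of a quantifiable well-being metric concrete, here is a minimal sketch. It is purely illustrative and entirely my own construction, not anything Harris specifies: the per-person numeric scores and the choice of simple summation are assumptions.

Code:
# Toy model (my assumption, not Harris's actual proposal): treat each
# person's well-being as a single number, and define the harm of an
# action as the drop in aggregate well-being it causes.

def aggregate_wellbeing(scores):
    """Sum per-person well-being scores into one aggregate value."""
    return sum(scores)

def harm(before, after):
    """Harm is the decrease in aggregate well-being; a negative result
    means the action actually increased well-being."""
    return aggregate_wellbeing(before) - aggregate_wellbeing(after)

# An action that lowers one person's score from 5 to 2 causes harm 3.
print(harm([5, 7, 6], [2, 7, 6]))  # -> 3

On this picture, "harm is the inverse of well-being" just means that harm is measured as a decrease on the same scale.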
Many people suggest that our moral responses evolved to serve specific functions which correlate with improved survival of our species, and thus need no explanation in their own terms. Those moral responses which enhanced survival flourished and became dominant, and those that didn't died out and are no longer represented in the gene pool. One rather salient moral that is believed to be explained this way is the taboo against incest. Not only do we feel that a sexual relationship between two siblings is morally wrong, we feel a certain revulsion in merely thinking about it. It is hypothesized that we developed these two reactions to discourage sibling incest because such pairings have a much higher probability of yielding genetically handicapped offspring. Those who lacked these reactions bred less fit children, who were outcompeted by the children of parents who did feel this revulsion and moral aversion to sexually pairing with siblings.
Enter Jonathan Haidt, a prominent moral psychologist. He suggests that morals are essentially intuitive: while we may develop reasons supporting the appropriateness of this or that specific moral rule or prohibition, the actual foundation of morals is not rational but intuitive, and as a result we can think up scenarios in which the two, reason and intuition, are at odds with one another. In particular, he tells of a story he employed in an experiment aimed at demonstrating his theory. The story goes as follows.
Quote:
Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other.
Haidt asked his subjects whether it was okay for Julie and Mark to have sex. Most said that it was not, but when Haidt asked them why, they offered reasons that were already excluded by the story—for example, the dangers of inbreeding, or the risk that their relationship would suffer. When Haidt pointed out to his subjects that the reasons they had offered did not apply to the case, they often responded: “I can’t explain it, I just know it’s wrong.” Haidt refers to this as “moral dumbfounding.”
On Haidt's interpretation, we believe that reasons and rationality underpin our moral judgements, but a story such as this one, in which the reasons are systematically excluded, undermines that belief: ostensibly, there are no reasons left for objecting to Julie and Mark's behavior.
I was talking to some people about the story last night, and it occurred to me that all was not necessarily as Haidt had suggested. While there are no harmful consequences in the specific case of Julie and Mark, if their behavior were to become an accepted norm, harm would occur in cases unlike Julie and Mark's, where the harmful consequences are not so carefully avoided. We may be able to think of specific cases in which a moral rule is not actually averting any harm, but these are the exception; suspending the rule as a regular practice would result in harm.
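To put the point in toy form: the numbers below are invented, and the functions are my own illustration rather than any standard formalism, but they show how the same harm metric can license an act evaluated in isolation while condemning the rule that permits it.

Code:
# Toy comparison (invented numbers, my own illustration): evaluate an
# act case by case versus evaluating the rule that permits it across
# all the cases the rule would license.

def act_harm(case):
    """Harm produced by this one case, taken in isolation."""
    return case["harm"]

def rule_harm(cases):
    """Total harm if the act is permitted as a general norm, summed
    over every case the rule would license."""
    return sum(c["harm"] for c in cases)

# Julie and Mark's case is engineered to be harmless, but most cases
# licensed by the same permissive rule would not be.
cases = [
    {"name": "Julie and Mark", "harm": 0},
    {"name": "typical case 1", "harm": 4},
    {"name": "typical case 2", "harm": 6},
]

print(act_harm(cases[0]))  # -> 0: the single case looks permissible
print(rule_harm(cases))    # -> 10: the general norm causes harm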
So it appears that Haidt's conclusions are not as sound as they may at first seem. I bring this up because it raises a number of issues about using harm or well-being as a standard for morals. First of all, it seems to offer no direction as to whether the harm standard should be applied on a case-by-case basis, or whether aggregate cases should also be considered. This resembles the classical tension between consequentialism and deontological ethics: if moral rules are themselves effective at securing good outcomes, then arguably the rules should take precedence over the consequences of any specific case. There is also the concern in utilitarianism that the good of others justifies limits on my own good. The greatest good for the greatest number seems a laudable goal, but having posited that goal, it becomes difficult to justify the claim that behaving consistently with it is necessarily moral. This is similar to the problem Harris has in positing well-being as the goal.
These are all interesting complications and worthy of discussion in their own right, but I want to raise another issue which I think is brought out by problems such as those with Haidt's conclusions about the Julie and Mark story. Viewed from this perspective, it's clear that deciding what is moral in the case of sibling incest depends upon who we consider to be the necessary beneficiary of these morals. It has been suggested that, as a result of global warming, we may not be around much longer, so enacting rules that benefit future generations may end up benefiting no one. At minimum, there appears to be a tension between what seems an obvious good, personal liberty, and harm-based moral standards which impinge upon that personal liberty for the sake of others. In the case of Julie and Mark, if they refrain for the benefit of others who might be harmed were their behavior to become the norm, they must forego something which is, to them, a tangible and unproblematic good. The problem I see is that, even if you posit harm or well-being as the metric of morals in order to objectively ground them, there are still questions needing to be settled which do not appeal to that objective grounding and can only be resolved subjectively or arbitrarily. It does no good to have a sound foundation if the building you construct on top of it is unsound. Do these concerns undermine the possibility of using harm or well-being as a foundation for morals?
A side question which is frequently brought up is the "rightness" of evolved morals. In the case of Julie and Mark, and cases of sibling incest in general, we may eventually have a world in which readily available genetic counseling makes such concerns moot. It may at that point become desirable to genetically engineer humans to no longer feel revulsion and moral disgust at the prospect of incest. Such ideas open up a Pandora's box of questions.

Take the case of animal cruelty. I have argued that we have an aversion to animal cruelty because, if we lacked the empathy to avoid cruelty to animals, we would lack the empathy to avoid cruelty to people, who are the actual beneficiaries of such feelings and behaviors. But then there is the question of what an appropriate amount of empathy would be. Should we be empathetic toward cows, but not toward sea slugs? If we could engineer humans so that, in the aggregate, we have more or less empathy, would it be ethical to do so? The immediate objection to reducing the amount of empathy is that it would lead to more tangible harm and suffering. But the reason we consider that harm morally relevant is our empathy. If we felt less empathy, we would perceive less harm being done, and would judge a situation in which there was more harm to be satisfactorily moral, because our metric is based on the very empathy we have genetically altered. Is a psychopath wrong because he lacks empathy? That doesn't seem very fair to the psychopath, who is oppressed simply because we have something he doesn't; it reduces to a might-makes-right argument, which is clearly immoral. If we did have the ability to genetically alter the amount of empathy we have as a species, what would the "right" amount of empathy be, and on what basis could you decide it?
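The circularity can be made explicit with one more toy model, again entirely my own construction: suppose the harm our moral metric registers is actual harm scaled by an empathy coefficient. Engineering empathy downward then lowers the very measurement we would use to judge whether the change was an improvement.

Code:
# Toy model of the circularity (my own construction): perceived harm
# is actual harm scaled by an empathy coefficient, and the moral
# metric only ever sees perceived harm.

def perceived_harm(actual_harm, empathy):
    """Harm as registered by the moral metric, scaled by empathy."""
    return empathy * actual_harm

# Before engineering: full empathy, actual harm 10 registers as 10.
print(perceived_harm(10, 1.0))  # -> 10.0

# After halving empathy, even greater actual harm (14) registers as
# less, so the altered metric reports a moral improvement.
print(perceived_harm(14, 0.5))  # -> 7.0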