What would be the harm?
#1
What would be the harm?
In Sam Harris' book, The Moral Landscape, he posits that the necessary goal of morals is human well-being. This has been proposed as an objective basis for morals because it is in some sense quantifiable, even if imprecisely. Harris has been criticized for simply positing human well-being as the ultimate value arbitrarily, but I want to focus on a different aspect of such theories today. If one accepts that harm is the opposite of promoting well-being, and that well-being is the metric by which we measure the good, then harm is, by definition, bad.

Many people suggest that our moral responses evolved to serve specific functions which correlate with improved survival of our species and thus need no explanation in their own terms. Those moral responses which enhanced survival flourished and became dominant, and those that didn't died out and are no longer represented in the gene pool. One rather salient moral that is believed to be explained this way is the taboo against incest. Not only do we feel that a sexual relationship between two siblings is morally wrong, we feel a certain revulsion at the very thought of it. It is hypothesized that we developed these two reactions to discourage sibling incest because such pairings have a much higher probability of yielding genetically handicapped offspring. Those who lacked these reactions bred less fit children, and such children were outcompeted by the children of parents who felt this revulsion and moral inclination against pairing sexually with siblings.
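
A toy sketch of that selection story in Python, purely for illustration; the population size, the chance of a sibling pairing, and the fitness cost are all assumed numbers, not data from any study:

Code:
# Toy haploid selection model: an "incest aversion" allele spreads because
# carriers avoid sibling matings, whose offspring have reduced viability.
# Every number here is invented for illustration only.
import random

POP_SIZE = 10_000
GENERATIONS = 200
P_SIB_MATING = 0.10    # chance a non-averse individual pairs with a sibling (assumed)
INBREEDING_COST = 0.5  # relative fitness loss of offspring from sibling pairings (assumed)

def next_frequency(freq_aversion: float) -> float:
    """One generation of selection plus binomial drift on the aversion allele."""
    w_aversion = 1.0                                    # carriers avoid the costly matings
    w_no_aversion = 1.0 - P_SIB_MATING * INBREEDING_COST
    mean_w = freq_aversion * w_aversion + (1 - freq_aversion) * w_no_aversion
    expected = freq_aversion * w_aversion / mean_w      # deterministic selection step
    survivors = sum(random.random() < expected for _ in range(POP_SIZE))
    return survivors / POP_SIZE                         # drift from the finite population

freq = 0.01  # the aversion allele starts rare
for gen in range(GENERATIONS):
    freq = next_frequency(freq)
    if gen % 25 == 0:
        print(f"generation {gen:3d}: aversion allele frequency = {freq:.3f}")

Run it and the aversion allele climbs toward fixation, which is the whole of the evolutionary story above: it says nothing about the rightness of the taboo, only about its reproductive payoff.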

Enter Jonathan Haidt, a prominent moral psychologist. He suggests that morals are essentially intuitive: while we may develop reasons supporting the appropriateness of this or that specific moral rule or prohibition, the actual foundation of morals is not rational but intuitive, and as a result we can think up scenarios in which the two, reason and intuition, are at odds with one another. In particular, he tells of a story he employed in an experiment aimed at demonstrating his theory. The story goes as follows.

Quote:Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other.

Haidt asked his subjects whether it was okay for Julie and Mark to have sex. Most said that it was not, but when Haidt asked them why, they offered reasons that were already excluded by the story—for example, the dangers of inbreeding, or the risk that their relationship would suffer. When Haidt pointed out to his subjects that the reasons they had offered did not apply to the case, they often responded: “I can’t explain it, I just know it’s wrong.” Haidt refers to this as “moral dumbfounding.”

On Haidt's interpretation, while our belief is that reasons and rationality underpin our moral judgements, a story such as this one, where the reasons are systematically excluded, undermines that belief, as there are no reasons, ostensibly, for objecting to Julie and Mark's behavior.

I was talking to some people about the story last night, and it occurred to me that all was not necessarily as Haidt had suggested. While there are no harmful consequences in the specific case of Julie and Mark, if their behavior were to become an accepted norm, harm would occur in cases other than those like Julie and Mark's in which the harmful consequences are avoided. We may be able to think of specific cases in which a moral rule is not actually averting any harm, but these are the exception, and suspending the rule as a regular practice would result in harm.

So it appears on the surface of things that Haidt's conclusions are not necessarily as sound as they may at first appear. I bring this up because it raises a number of issues about using harm or well-being as a standard for morals. First of all, it seems to offer no direction as to whether the harm standard should be applied on a case by case basis, or whether aggregate cases should also be considered. This is similar to other classical moral dilemmas, such as the tension between consequentialism and deontological ethics, wherein consequences motivate us toward a moral conclusion, yet moral rules themselves are effective at reaching those conclusions, and so arguably should take precedence over the consequences in a specific case. There is also the concern in utilitarianism that someone else's good may justify limits on my own. The greatest good for the greatest number seems an admirable goal, but having posited that goal, it becomes difficult to justify the claim that behaving consistently with it is necessarily moral. This is similar to the problem Harris has in positing well-being as the goal.

These are all interesting complications and worthy of discussion in their own right, but I want to raise another issue which I think is brought out by problems such as those with Haidt's conclusions about the Julie and Mark story. Viewed from this perspective, it's clear that deciding what is moral in the case of sibling incest depends on who we consider to be the necessary beneficiary of these morals. Suppose, as some do, that we will not be around much longer as a result of global warming; then enacting rules that benefit future generations may end up benefiting no one. At minimum, there appears to be a tension between what seems an obvious good, personal liberty, and harm-based moral standards which impinge upon that personal liberty for the sake of others. In the case of Julie and Mark, if they intend to benefit others who might be harmed by their act, they must forgo something which to them is a tangible and unproblematic good. The problem I see is that, even if you posit harm or well-being as the metric of morals in order to ground morals objectively, there are still questions needing to be settled which do not appeal to that objective grounding and can only be resolved subjectively or arbitrarily. It does no good to have a sound foundation if the building you construct on top of it is unsound. Do these concerns undermine the possibility of using harm or well-being as a foundation for morals?

A side question which is frequently brought up is the question of the "rightness" of evolved morals. In the case of Julie and Mark, and cases of sibling incest in general, we may eventually have a world in which readily available genetic counseling makes such concerns moot. It may at that point become desirable to genetically engineer humans to no longer feel revulsion and moral disgust at the prospect of incest. Such ideas open up a Pandora's box of questions. Take the case of animal cruelty. I have argued that we have an aversion to animal cruelty because, if we lacked the empathy to avoid animal cruelty, we would lack the empathy to avoid cruelty to people, who are the actual beneficiaries of such feelings and behaviors. But then we face the question: what is an appropriate amount of empathy to have? Should we be empathetic toward cows, but not toward sea slugs? If we could engineer humans so that, in the aggregate, we have more or less empathy, would it be ethical to do so? The immediate objection to reducing the amount of empathy would be that it would lead to more tangible harm and suffering. But the reason we consider that harm to be morally relevant is our empathy. If we felt less empathy, we would perceive less harm being done, and consider a situation in which there was more harm as satisfactorily moral, because our metric is based on our empathy, which we have genetically altered. Is a psychopath wrong because he lacks empathy? That doesn't seem very fair to the psychopath, who is oppressed simply because we have something he doesn't. That reduces to a might-makes-right argument, which is clearly immoral. If we did have the ability to genetically alter the amount of empathy we have as a species, what would the "right" amount of empathy be, and on what basis could you decide that?
#2
RE: What would be the harm?
Thus the difference between morality and legality.  Legality reserves for itself the right (and yes..by power or might - no less) to set norms which may, in some cases or instances, be opposed to a moral reading of that same situation. It's often the case that what we find morally questionable is legal, and what we find morally upright...illegal.

Ideally, we seek a legal situation that mirrors our moral appraisals as closely as is prudent for an institution, and prefer permissive institutions and laws over impermissive ones. Democracy is one tool for approaching that state of affairs. Nonetheless, both the tools and our systems are imperfect, and do not (nor likely ever will) have a one-for-one equivalence with our moral appraisals.

As with the specific example, I can see no reason (after the objections are removed) that it would be immoral for Julie and Mark to bump uglies. That doesn't diminish the justification for its status as broadly illegal - and perhaps our laws should contain provisions meant to test for those objections and grant exemptions in specific cases.

I still wouldn't screw Julie.. if I were Mark.

(the reason that we consider harm to be morally relevant, btw, from a realist pov, is -not- because we possess empathy...even though it's true that we do and it's true that our moral intuitions are certainly built upon that - you tied yourself into a knot on that one. Rejecting those evolved intuitions as the sole or deciding basis for an objective morality is how realism purports to escape the naturalistic fallacy. Yes, we have empathy, yes, it evolved for survival, yes, our apparatus is in the same position and yes, our intuitive notions of morality are much the same - but it does not need to be that way just because it evolved for that purpose or with that aim. Our empathy can mislead, what is good for our survival may not be morally good, our intuitions do not always match facts. )

*most of that was for the gallery, lol.
#3
RE: What would be the harm?
I don't think that's the issue at all. As mentioned, the conflict between consequentialist and deontological ethics, as well as many aspects of utilitarianism, causes such problems to surface without ever even glancing at the question of law.
#4
RE: What would be the harm?
They do, yes. Nevertheless, the legal and moral cases for incest are separate affairs.. and when we remove the moral objections one by one, what we are left with is a lingering sense of "ick". This is not an objective position.. though there can be a realist's position lying atop or under it in the general case. The presence of one does not necessarily diminish the foundation of the other.

Another way to put this is that a person has both an objective appraisal -and- an opinion of the act. A person has each in general and in specific cases, and there's no requirement that they all be equivalent (particularly if the individual case is explicitly made to be different from the general case), nor is it a requirement that one influence the other.

Mark and Julie may not be in the wrong, but that doesn't mean that Steve and Sarah aren't. Mark and Julie and Steve and Sarah may all evade moral or legal objections.. but that doesn't mean that I find it any less distasteful, even if I can accept that they are not morally wrong and should not be penalized legally. I still think it's "ick" - and that isn't a moral fact or a moral intuition.
#5
RE: What would be the harm?
This (the idea of an aggregate harm from not enforcing the rules even though there might not be any harm in a specific case) seems similar to the idea of moral hazard in economics: if someone is insured against loss (protected from consequences), they will tend to expose themselves to greater investment risk, counting on their insurance to protect them from losses. What winds up happening is that other people bear the costs of your risky behavior; in your particular case it might never come to that, but more people taking those kinds of risks leads to more aggregate losses.
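
A toy sketch of that mechanism in Python, purely illustrative; the stake, the success probability, and the decision rule are all made-up assumptions, and real insurance markets of course push back with premiums and deductibles:

Code:
# Toy illustration of moral hazard: insured agents take the gamble because
# the shared pool absorbs their losses, so total losses grow with coverage.
# All numbers are invented just to show the shape of the effect.
import random

N_AGENTS = 10_000
STAKE = 1.0            # amount put at risk by the risky choice (assumed)
RISKY_SUCCESS_P = 0.4  # chance the risky choice pays off (assumed)

def total_pool_losses(fraction_insured: float, seed: int = 0) -> float:
    """Losses the shared insurance pool absorbs for a given fraction of insured agents."""
    rng = random.Random(seed)
    pool_losses = 0.0
    for i in range(N_AGENTS):
        insured = i < fraction_insured * N_AGENTS
        # Uninsured agents bear their own downside, so they keep the stake safe;
        # insured agents gamble it, since a failure is reimbursed by the pool.
        takes_risk = insured
        if takes_risk and rng.random() > RISKY_SUCCESS_P:
            pool_losses += STAKE  # the pool reimburses the lost stake
    return pool_losses

for frac in (0.0, 0.25, 0.5, 1.0):
    print(f"{frac:.0%} insured -> losses borne by everyone else: {total_pool_losses(frac):,.0f}")

The more widely the protection extends, the larger the losses everyone else ends up carrying, even though any single insured agent can truthfully say their own case caused little harm - which is the aggregate point being made about Julie and Mark.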
#6
RE: What would be the harm?
(November 27, 2018 at 9:58 am)Jörmungandr Wrote: [...] If we did have the ability to genetically alter the amount of empathy we have as a species, what would the "right" amount of empathy be, and on what basis could you decide that?

If any instinct for the right amount of empathy is genetically influenced, and if the gene(s) exerting that influence were in fact selected primarily because of this influence and not for some other reason (such as effects the same gene(s) may manifest in other areas, or the roulette of sexual recombination or imperfect replication), then the instinct would reflect the particular environments and contingencies that the population bearing the gene experienced over a period sufficient for selection of the gene to take place. The same would hold, for a similar reason, if the gene were selected via genetic engineering rather than natural selection.

This necessarily makes such a genetically driven behavioral influence relatively inflexible and difficult to tailor toward desired outcomes in a complex environment, unless the overall social and survival environment stays relatively static, unchanged from the time when the effect of the behavior this gene influences is assessed.
#7
RE: What would be the harm?
It seems to me that sexual morality is largely imposed by those looking to piggyback their DNA on the acts of others. A father, for example, quite jealously guards his daughter's sexuality, because if she breeds with some dumbass, his gene pool takes a hit, and thus his own genetic fitness has been affected by proxy.

I think this is pretty clearly seen in the differences in response to young girls' and young boys' sexuality-- a girl having sex too often too young is seen in much worse terms than a boy. When I was young and got a girl to spend the night, my father found out and started bragging to his friends-- "So. . . guess what the boy dragged home last night?" *wink wink nudge nudge*

This is the essence-- harm is defined not only rationally, but in terms of our ape instincts, and while we dance around it with various semantics, under the hood those instincts will be there for tens of thousands of years, or until we genetically modify them out of the gene pool.
#8
RE: What would be the harm?
When you get sick of fucking your girlfriend/boyfriend - you simply kick them to the curb and move on.

Now imagine he/she is going to be at Thanksgiving dinner every year for the rest of your life.

See the problem??
#9
RE: What would be the harm?
(November 28, 2018 at 4:33 am)bennyboy Wrote: It seems to me that sexual morality is largely imposed by those looking to piggyback their DNA on the acts of others. [...] Harm is defined not only rationally, but in terms of our ape instincts, and while we dance around it with various semantics, under the hood those instincts will be there for tens of thousands of years, or until we genetically modify them out of the gene pool.

You say that like it's a bad thing!

What's the alternative? And if there is no alternative, is it objective on its own terms?
#10
RE: What would be the harm?
(November 28, 2018 at 8:58 am)Jörmungandr Wrote: You say that like it's a bad thing!

What's the alternative?  And if there is no alternative, is it objective on its own terms?

As I mentioned on my entrance into the previous thread about morality, we could view instinct as a highly variable but nevertheless objective reality. I'd argue that is because those instincts to some degree make our moral agency illusory. Sure, I could theoretically arrive at the idea that since all is just QM particles, it's okay if my daughter gets raped-- but I don't really have the choice of arriving at that idea, because my primate instincts are an expression of perhaps a million years of trial and error, and exercising control over my genetic line is part of that program for sure. What's more likely is that I'll rage about it, that other adult males will sympathise and also rage about it, and that we'll arrive at the idea that this behavior must not be tolerated. I'd say I had my first daughter for about an hour before I realized that I'd be able to kill for her if I had to; it wouldn't even be a hard decision to make. I literally looked at that little baby and thought: "I will protect you until I die, and I pity the fool who thinks that there's a law strong enough, or an army big enough, to save him from me if he harms you."

Ultimately, the sense of harm must be instinctual, since all feelings are instinctual. Yes, we can develop complex layers of ideology or religion over that, but ultimately, they all derive from instincts about survival, about reproduction, and about genetic fitness-- or they serve as a response to knowledge about those instincts.


