Okay, I will refrain from critiquing your position in this post and ask only that you do me the favor of clarifying it on certain points. However, I will quote your post out of order in order to bring out what I believe is your line of reasoning.
(August 17, 2017 at 10:42 am)Khemikal Wrote: Your criticisms of that concept or the tools we employ to leverage it do not speak to the validity of the concept as a moral schema or the unsuitability of the tools, but, rather, your dissatisfaction in how it was we arrived both with the concept, and at possessing apparatus capable of making use of it, in addition to our failure to uniformly leverage it.
What is being leveraged and what is providing the leverage? I take it that, for you, a moral schema (framework for thinking ethically) starts with feelings of empathy and proceeds by applying reason to determine how people should act on those feelings. Is that correct?
(August 17, 2017 at 10:42 am)Khemikal Wrote: The use of empathy as a moral barometer is often combined (either implicitly or explicitly) with a concept known as rational self interest, and with a process known as moral reasoning,…
Based on my understanding above, this means that people reason from feelings of empathy to estimate what would maximize personal benefit.
(August 17, 2017 at 10:42 am)Khemikal Wrote: Yes, we evolved to have an empathetic apparatus (and a rational apparatus..or, at least, an apparatus capable of reason, lol) in order to better our odds at survival and reproduction - add empathy and its associated apparatus as well as our rational apparatus to the long list of things that evolved for one thing, but are now made better use of at another.
Some evolved traits may have ceased to provide a reproductive advantage, or may now be ill-adapted to civilized societies. Similarly, some traits may opportunistically confer advantages on a species in response to changes in its environment.
I’m not sure why you think these facts are relevant.
(August 17, 2017 at 10:42 am)Khemikal Wrote: To the contention that our empathy does not extend uniformly, you're only saying, in this: "sometimes it doesn't work!". Yeah, no shit. You won't be able to find any moral schema which precludes the possibility of moral failure. If there's a way to get something wrong, you can always trust a human being to find it, amiright?
I don’t think you will find anything to the effect of “sometimes it doesn’t work” in my demonstration. Is there a specific premise to which you are referring? You also seem to be saying that people sometimes make bad choices. True, but I do not see the significance: the demonstration concerns itself only with whether reliance on empathy is a truly rational ground for an ethical system. What am I missing?
(August 17, 2017 at 10:42 am)Khemikal Wrote: As to human dignity, from the foundation of rational self interest and in my own assessment, employing that empathetic ability.... the value of your life is -not- contingent upon your instrumental value to me. You -have- no instrumental value to me. It is, however, contingent upon at least one person thinking that at least one life has value (chiefly, me, thinking of my own) - and then rationally extending that valuation to any individual or organism which fits whatever criteria I use for that valuation. If the criteria were as simple as "it's a human life!" then, voila, and hey presto... despite your utter lack of utility to me, you also possess a human life - and so, human dignity.
So are you in essence making the following argument?:
I value my own life.
I am a human being.
Therefore, I value at least one human being.
Other people are human beings like me.
Therefore, I value other people.
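If that is the argument, here is a minimal formalization of its skeleton (a sketch in Lean; Person, Human, Values, and me are my own illustrative labels, not terms from your post). Note that the final step only goes through with an extra universal premise, labeled extend below, which your wording leaves implicit:

```lean
-- Illustrative names only; `Values a b` reads "a values b's life".
variable (Person : Type)
variable (Human : Person → Prop)
variable (Values : Person → Person → Prop)
variable (me : Person)

-- The conclusion needs the implicit premise `extend`: self-valuation
-- extends to every human meeting the same criterion.
example
    (h1 : Values me me)                  -- I value my own life
    (h2 : Human me)                      -- I am a human being
    (extend : ∀ x y, Human x → Human y → Values x x → Values x y)
    (other : Person) (h3 : Human other)  -- other people are human like me
    : Values me other :=                 -- therefore, I value other people
  extend me other h2 h3 h1
```

Without something like extend, "other people are human beings like me" does not by itself license "I value other people." Is that extra premise one you endorse, and if so, on what grounds?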
(August 17, 2017 at 10:42 am)Khemikal Wrote: The ability to put ourselves in the place of another is an informative tool, that provides us insight with which we make moral considerations. We ask ourselves, "How would I feel if someone did that to me?" as a sort of heuristic for determining its moral status....however, you'd be hard pressed to find a person who stops there - who couldn't offer an explanation of why such and such is wrong apart from how it gives them the bad feels.
So are you in essence making the following argument?:
I am a human with emotions.
Some of my emotions feel bad.
Other people are humans and it is reasonable to infer that they have emotions, just like me, some of which feel bad.
I have knowledge of what makes me feel bad.
Therefore, given basic epistemological limits, I have knowledge of what makes other people feel bad.
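Likewise, a minimal sketch of this second argument (again in Lean, with illustrative names): the move from knowledge of my own feelings to knowledge of other people's feelings needs a bridging premise, labeled bridge below, and that premise is where the real epistemological work gets done:

```lean
-- Illustrative names only; `Knows me p` reads "I know that p".
variable (Person Stimulus : Type)
variable (Human : Person → Prop)
variable (FeelsBadFor : Stimulus → Person → Prop)
variable (Knows : Person → Prop → Prop)
variable (me : Person)

-- The conclusion needs the implicit premise `bridge`: what I know
-- feels bad to me, I thereby know feels bad to any other human.
example
    (bridge : ∀ (s : Stimulus) (p : Person),
        Human p → Knows me (FeelsBadFor s me) → Knows me (FeelsBadFor s p))
    (s : Stimulus)
    (h1 : Knows me (FeelsBadFor s me))   -- I know what makes me feel bad
    (other : Person) (h2 : Human other)  -- other people are humans like me
    : Knows me (FeelsBadFor s other) :=  -- therefore, I know what makes
  bridge s other h2 h1                   -- other people feel bad
```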
From there I’m not entirely clear on how you would link what I presume are your two main lines of reasoning into a single coherent argument. I think you are trying to say something to the effect that you value not feeling bad, and that therefore you should value not making other people feel bad. Or something like that.