RE: Anti-Utilitarianism
March 2, 2011 at 4:33 pm
(This post was last modified: March 2, 2011 at 4:55 pm by Edwardo Piet.)
(March 2, 2011 at 2:25 pm)theVOID Wrote: Firstly, this is only applicable to one form of Utilitarianism, Pleasure Utilitarianism.
No, it is also applicable to pain or any utility that requires consciousness.
Quote:Secondly, Why specifically do you think that we cannot evaluate happiness? That we have separate consciousness is NOT an argument.
Experiencing happiness requires individual experiencers of that happiness. Each person experiences happiness separately, because their consciousnesses are separate, and those experiences cannot be aggregated. A thousand people each with 1 point of happiness is just a thousand people separately experiencing a happiness of 1. It is not a thousand times better, because not one of them experiences more than a happiness of 1. There is no additional experienced happiness at all, because happiness requires an individual who is conscious of it. Any aggregated happiness is illusory: no one is there to experience the aggregate; each person experiences only their own.
Quote:Thirdly, It's rather easy to see that you can calculate happiness in simple scenarios (how easy this is in the real world is a different issue) example: Bob, Tim and Sue are in a room, Bob and Sue are good friends, they both hate Tim, in return Tim hates them both equally, one of these people are going to be slapped, If Bob was slapped him and Sue are going to be unhappy, Tim is going to be happy, If Sue is slapped her and bob will be unhappy and Tim will be happy, If Tim is slapped both Bob and Sue will be happy but Tim will be unhappy - Which scenario is going to cause the most happiness?
It depends on how happy or unhappy each individual is. My point is that aggregating the happinesses of different individuals makes no sense, because no one is there to experience such aggregated happiness: the aggregate doesn't exist.
If 10,000 people each experience a pain of 1, say a pinprick, that does not equate to a pain of 10,000, because it makes no sense to sum the 10,000 separate pinprick pains. Every single person experiences a pain of 1; not one of them experiences any more than that. The same goes for aggregating any emotion: it makes no sense across different individuals with separate consciousnesses.
Quote:Fourthly, evaluating collective happiness is in no way treating the people like a single entity, this is the biggest straw man used against Utilitarianism, It is no more treating people like a single entity than evaluating their preferences for ANY issue is treating them like a single entity, after all Happiness is a manifestation of preference.
It is exactly like doing that, because happiness requires consciousness, and consciousness exists only as separate individual consciousnesses. It makes no sense to aggregate them, as I explained above.
Quote:Just like we can meaningfully determine which political policy or fast food 'restaurant' is preferred by the most people we can determine which action will cause the most happiness.
False analogy, since that is not a case where the separateness of consciousness is relevant. Happiness depends on consciousness, consciousness exists only as separate consciousnesses, and therefore happinesses are separate and cannot be added together in any meaningful way.
Quote:Case and point, take a group of 100 people, 80 who like olives and 20 who do not - the 80 who like olives dislike yoghurt and the 20 who dislike olives like yoghurt - they would all like a snack - You have two types of food you can give them, olives and yoghurt, you are only allowed to introduce 1 type of food into the room, which one will increase happiness for the most people?
Pleasing the larger group is simply a matter of diplomacy and practicality; it is irrelevant to the question of aggregating their happinesses. Suppose introducing the olives is guaranteed to quickly kill one person with an olive allergy. Then it would not matter if infinitely many people mildly disliked the yoghurt: it makes no sense to sum their dislike, since not one of them experiences more than mild dislike, while the one person who would die from the olives is the only relevant consideration. We prioritize based on who suffers the most, who is happiest, and what is most practical and diplomatic; aggregating happinesses, preferences, or sufferings makes no sense when each depends on a separate consciousness that cannot actually be combined in reality.
Quote:I maintain that is complete bullshit.
I maintain that your argument is itself the bullshit here: a confusion, full of misunderstanding.
Quote:It's not really an aggregation as we are not trying to find a total, we are simply looking for a proportioned representation of preference.
You are not aggregating quantities of individual utility that depend on separate consciousnesses (e.g. emotions like pleasure and pain)? Then what are you doing?
Quote:I am not against Consequentialism, I am a Consequentialist myself. But despite Utilitarianism's popularity - which is irrelevant anyway, an appeal to popularity is not an appeal to rationality - I see the above objection as undefeatable.
Quote:They're some of the most poorly formed, vague and poorly explained objections to anything in philosophy
Actually, it makes complete sense. What use is there in aggregating utility that cannot be aggregated, because such utility depends on separate consciousnesses?
Quote: they deny basic abilities of statistics to determine how opinions on an issue are proportioned.
Statistics based on an aggregation that makes no sense are meaningless statistics.
Quote: Even if you deny that happiness is in any way a gauge of morality to deny that we can't evaluate happiness is a complete crock.
We can evaluate the happiness of individuals. But adding them up, as if two equally happy individuals were twice as good, makes no sense. Why would it be?