RE: Is evidentialism a dead philosophy?
April 20, 2014 at 10:55 am
(This post was last modified: April 20, 2014 at 11:19 am by Angrboda.)
(April 19, 2014 at 4:44 pm)bennyboy Wrote: That's a very interesting idea of a "right." Should the CyberBen 3000, if it can simulate all my behaviors, be extended all the rights that I enjoy? How about the right to "pursue happiness," even though I suspect the CyberBen doesn't experience qualia, but only seems to?
I'm not gonna go there with you, but I will say a couple of things. First, practically speaking, we'd never be able to ascertain whether a machine shares "all" our behaviors, or even a significant portion of them. Second, our mental behaviors, and their relation to our external behaviors, are themselves behaviors that would need to be duplicated if we're extending rights solely on similarity of behavior. The only way to determine that would be to first determine the causes of our mental behaviors, and to extend similar rights only when those causes have been duplicated. As with many hypotheticals in philosophy of mind, what that will actually look like may be something we can't even imagine at this point. Finally, as per our previous discussions, there may be reasons why we don't extend certain rights to non-humans, whether based on ethical principles or on self-interest. Again, it depends on what rights are involved, how much of our behavior is duplicated, and how those behaviors are duplicated, much of which we simply can't guess at this time.
An interesting thought experiment suggests itself. Suppose the military developed a system similar to Amazon's, except that its purpose is to suggest suitable replacement parts or supplies on military contracts. At first it is used only to make suggestions about substitutions, and a human double-checks every recommendation. Over time, however, they find that the human overrules the machine's suggestion extraordinarily rarely, and that, as often as not, when the human does overrule the machine, it is the human who has made the mistake. So in practice the machine is at least as effective as the human at choosing substitutions. Should we eliminate the human from the loop and let the machine make the decisions directly? Why or why not? And remember, these are military contracts, so lives are at stake.