(April 19, 2014 at 11:14 am)rasetsu Wrote:
(April 18, 2014 at 6:44 pm)bennyboy Wrote: The case of consciousness is a special one, right here, right now. That's because we look to certain behaviors as evidence supporting non-philosophical-zombiism, and because we are approaching the point at which man-made machines will be able to mimic those behaviors more and more effectively. Will this shift in available evidence change what philosophical assumptions we're willing to make? Are we just going to throw up our hands and extend "rights" to every physical system which can tug at our evolved heartstrings?
If you'd actually read my initial post, the argument was based on physical similarity, not behavioral similarity. But in answer to your question, yes, we will likely accord rights based on similarity of behavioral capability. A robot soldier may not be accorded the right not to be shut off, but depending on its demonstrated ability to make appropriate decisions about whom to kill and when, it may be afforded the autonomy to operate unsupervised in a war zone. The Amazon recommendation engine is an example of machine intelligence. We know, in general, how it arrives at its recommendations, but the specifics of its behavior are unknown and unpredictable. Yet the recommendation engine performs its job well enough that Amazon includes its decisions and recommendations alongside human-created promotional material. It has been granted the right to sell products for Amazon, based on its past behavior, even though we can't completely predict its future behavior.
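(For what it's worth, Amazon has publicly described its general approach as item-to-item collaborative filtering in a 2003 IEEE Internet Computing paper by Linden, Smith, and York; the production system itself is proprietary. A minimal sketch of that general idea, with made-up purchase data and illustrative names of my own, would look something like this:

# Minimal item-to-item collaborative filtering sketch (Python).
# The data and function names are illustrative, not Amazon's actual code.
from collections import defaultdict
from math import sqrt

# Hypothetical purchase history: user -> set of purchased item IDs.
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_b", "book_c", "book_d"},
    "carol": {"book_a", "book_c"},
}

def item_similarities(purchases):
    """Cosine-style similarity between items based on co-purchase counts."""
    co_counts = defaultdict(int)    # (item_i, item_j) -> co-purchase count
    item_counts = defaultdict(int)  # item -> number of buyers
    for items in purchases.values():
        for i in items:
            item_counts[i] += 1
            for j in items:
                if i != j:
                    co_counts[(i, j)] += 1
    return {
        pair: count / sqrt(item_counts[pair[0]] * item_counts[pair[1]])
        for pair, count in co_counts.items()
    }

def recommend(user, purchases, sims, top_n=3):
    """Score items the user doesn't own by summed similarity to items they do."""
    owned = purchases[user]
    scores = defaultdict(float)
    for (i, j), s in sims.items():
        if i in owned and j not in owned:
            scores[j] += s
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

sims = item_similarities(purchases)
print(recommend("carol", purchases, sims))  # -> ['book_b', 'book_d']

The similarity rule itself is simple and fully known, but which items actually get recommended depends on the entire, constantly shifting purchase history, which is exactly why we can state how the engine works in general without being able to predict its specific behavior.)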
That's a very interesting idea of a "right." Should the CyberBen 3000, if it can simulate all my behaviors, be extended all the rights that I enjoy? How about the right to "pursue happiness," even though I suspect the CyberBen doesn't experience qualia, but only seems to?