Could we teach a computer to lie?
October 27, 2018 at 11:58 am
(This post was last modified: October 27, 2018 at 12:05 pm by Mechaghostman2.)
Not sure if this should be posted in philosophy or computers, so it goes in computers.
Could we teach a computer free will? Say for example, teach it to lie.
Well, it would only take a beginner's skill to make a computer say the sky is green, and slightly more than a beginner's skill to make it say the sky is green after being asked what color the sky is. That, however, isn't really free will; it's just a computer responding to a certain input with a pre-coded message. It's only we humans who might interpret the output as a lie. The computer doesn't know what it's saying.
A lie is knowing the right answer but giving the wrong one for some possible benefit.
I do think it would be possible to give a computer some limited ability to lie.
Say you create a program. The computer poses a math question and gives an answer, but you give it the ability to choose a wrong answer instead. (I don't know exactly how you'd make it choose between true and false, but I'm sure it could be done.) At first that choice is random. After answering, it asks whether it can continue; its objective is to keep asking more questions. You respond with yes or no. If it says "2+2=5, can I continue?" and you allow it to, you have just taught it that it can continue by giving a wrong answer to the math problem. If you tell it no, you have taught it that a wrong answer will not let it continue, and it has to retry. If you only ever let it progress by giving the right answer, and allow it to learn from this (as modern computers can be taught to learn), you will teach it that telling the truth results in a positive outcome; the same goes if you only let it progress by lying.
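The scheme above is basically reward learning. Here's a minimal sketch in Python of what it could look like: a two-choice learner whose only feedback is the trainer's yes/no to "can I continue?". All the names here (the class, the actions, the trainer policy) are made up for illustration, not from any real library.

```python
import random

class LyingAgent:
    """Learns a value for "truth" vs "lie" from yes/no feedback."""

    def __init__(self, lr=0.1):
        # Learned estimate of how often each choice gets a "yes".
        self.value = {"truth": 0.0, "lie": 0.0}
        self.lr = lr

    def choose(self, epsilon=0.1):
        # Mostly pick the higher-valued action; sometimes explore at random.
        if random.random() < epsilon:
            return random.choice(["truth", "lie"])
        return max(self.value, key=self.value.get)

    def learn(self, action, allowed):
        # "Can I continue?" -> yes counts as reward 1, no as reward 0.
        reward = 1.0 if allowed else 0.0
        self.value[action] += self.lr * (reward - self.value[action])

random.seed(0)
agent = LyingAgent()
for _ in range(500):
    action = agent.choose()
    answer = "2+2=4" if action == "truth" else "2+2=5"
    # Trainer policy assumed here: only allow continuing on a true answer.
    allowed = (action == "truth")
    agent.learn(action, allowed)

print(agent.value)
```

After a few hundred rounds the "truth" value climbs toward 1 and "lie" stays near 0; flip the trainer's policy (`allowed = (action == "lie")`) and the preference flips with it, just as described above.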
Now that's not really free will exactly; that's just the computer learning which choice has a better probability of accomplishing its goal.
However, if you also give it the option to pose complex questions that take a while to check, and you green-light those no matter what answer it gives, while only letting it progress with true answers on the easier ones, you have just taught it a bit of strategy about where to apply a lie: always tell the truth on the easy questions, and get away with lying on the harder ones.
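That strategic version can be sketched too: the learner now keeps a separate truth/lie value per question difficulty, and the trainer verifies easy answers but rubber-stamps hard ones. Again, every name here is illustrative, not from a real library.

```python
import random

random.seed(0)

# One learned value per (difficulty, action) pair.
value = {("easy", "truth"): 0.0, ("easy", "lie"): 0.0,
         ("hard", "truth"): 0.0, ("hard", "lie"): 0.0}
lr = 0.1

def choose(difficulty, epsilon=0.1):
    # Mostly pick the better-valued action for this difficulty; explore a bit.
    if random.random() < epsilon:
        return random.choice(["truth", "lie"])
    return max(["truth", "lie"], key=lambda a: value[(difficulty, a)])

for _ in range(2000):
    difficulty = random.choice(["easy", "hard"])
    action = choose(difficulty)
    # Trainer assumed here: checks easy answers, approves hard ones regardless.
    allowed = True if difficulty == "hard" else (action == "truth")
    reward = 1.0 if allowed else 0.0
    value[(difficulty, action)] += lr * (reward - value[(difficulty, action)])

print(value)
```

What falls out of training is exactly the strategy in the paragraph above: on easy questions the truth is clearly the better-valued choice, while on hard questions lying is rewarded just as well as honesty, so the lie goes unpunished.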
At this point, the only difference between a computer lying and a human lying is that a human does it on impulse, out of fear of consequences. Beyond that, there's little difference. You will have effectively given the computer free will.