(November 23, 2021 at 8:35 pm)Angrboda Wrote:
(November 23, 2021 at 1:09 pm)Ferrocyanide Wrote: You can build an AI to drive a car. The AI is a set of algorithms, but just because it is a set of algorithms doesn’t mean that it doesn’t make good decisions.
It takes input in the form of video, processes the video, makes decisions, and gives an output.
As for morality, it is not really different from being a good car driver. There is a certain set of rules that humans want to live by.
The rules are connected to human emotions.
Again, you will have an input stage ---> which leads to the processing stage ---> which leads to an output.
That's an example of what G. E. Moore calls the naturalistic fallacy. Something that is merely "like" moral good isn't moral good, and never the twain shall meet.
Quote:In philosophical ethics, the naturalistic fallacy is the mistake of explaining something as being good reductively, in terms of natural properties such as pleasant or desirable. The term was introduced by British philosopher G. E. Moore in his 1903 book Principia Ethica.
Moore's naturalistic fallacy is closely related to the is–ought problem, which comes from David Hume's A Treatise of Human Nature (1738–40). However, unlike Hume's view of the is–ought problem, Moore (and other proponents of ethical non-naturalism) did not consider the naturalistic fallacy to be at odds with moral realism.
Wikipedia || Naturalistic fallacy
Where do you see the fallacy?
I admit that
input stage ---> which leads to the processing stage ---> which leads to an output
is not the full algorithm behind morality.
This is why I loosely threw in "The rules are connected to human emotions." there.
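Just to make the analogy concrete, here is a rough sketch of that input ---> processing ---> output loop in Python. This is purely a hypothetical illustration on my part: the function names (perceive, evaluate_rules, act) and the idea of scoring candidate actions against a rule set are my own invention, not how any real driving AI or moral reasoner is actually built.

Code:
# Hypothetical sketch of an input ---> processing ---> output decision loop.
# All names here (perceive, evaluate_rules, act) are made up for illustration.

def perceive(raw_frame):
    """Input stage: turn raw sensor data (e.g. a video frame) into features."""
    return {"obstacle_ahead": raw_frame.get("obstacle", False),
            "speed": raw_frame.get("speed", 0)}

def evaluate_rules(features, rules):
    """Processing stage: score each candidate action against the rule set
    and pick the highest-scoring one."""
    scores = {action: rule(features) for action, rule in rules.items()}
    return max(scores, key=scores.get)

def act(decision):
    """Output stage: emit the chosen action."""
    print("Chosen action:", decision)

# The rules stand in for "the rules humans want to live by";
# in the morality analogy their weights would come from human emotions.
rules = {
    "brake":  lambda f: 1.0 if f["obstacle_ahead"] else 0.0,
    "cruise": lambda f: 0.0 if f["obstacle_ahead"] else 0.5,
}

frame = {"obstacle": True, "speed": 40}
act(evaluate_rules(perceive(frame), rules))

The only point of the sketch is that rules weighted by what humans care about can sit in the processing stage without the whole thing ceasing to be an algorithm.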
Keep in mind that your initial sentence that started this part of the discussion was
“Not having free will would seem to rob our actions of what is traditionally conceived of as moral significance. Such a move would be fatal for a god.”
so I would first need to understand what free will is and how a brain executing free will behaves.