Posts: 28481
Threads: 525
Joined: June 16, 2015
Reputation:
90
RE: Machine Intelligence and Human Ethics
May 26, 2019 at 6:32 pm
(May 26, 2019 at 4:08 pm)BrianSoddingBoru4 Wrote: (May 26, 2019 at 3:37 pm)wyzas Wrote: Once the bots know it's a suicide mission and tell us they don't want to go. Until then, it's just a mission, HAL.
Side note: Just watched a vid where a dog owner died (recently) and her will stipulated that her pet dog be cremated and placed/planted with her. They followed the will; dogs are still considered property. I'm not sure that we have advanced as much as people like to think.
Here is the story (not what I watched): http://vt.co/news/us/healthy-dog-is-euth...ead-owner/
But would the machine even have the right to refuse the mission? We're in the habit of compelling human beings to perform dangerous, life-threatening tasks, under threat of imprisonment if they refuse to comply.
Boru
Doubtful. We have a vast history of placing our self-worth above what we consider lower life forms. (yeah, yeah, insert race card here)
Being told you're delusional does not necessarily mean you're mental.
Posts: 20476
Threads: 447
Joined: June 16, 2014
Reputation:
111
RE: Machine Intelligence and Human Ethics
May 26, 2019 at 7:17 pm
We've sent millions of people to their deaths in war and call them heroes.
Why can't these metal soldiers be our heroes also?
We, as a primitive, violent species, need to evolve first before we can preach about any sort of objective morality to tin cans.
No God, No fear.
Know God, Know fear.
Posts: 19789
Threads: 57
Joined: September 24, 2010
Reputation:
85
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 12:28 am
1. Arbitrarily high intelligence does not automatically or inevitably lead to sentience. Sentience as we know it seems to be the result of a particular set of neurological circuitry developed not for any specific evolutionary need, but as a property that emerged from the particular ways in which a host of evolutionary needs are met. It is altogether unclear to me how many different fundamental ways there can be for intelligence to emerge. But it seems to me arbitrary improvements to processing speed and problem-solving capacity do not lead in the right general direction of what we call sentience.
2. Ethics is ultimately a lie and a compromise designed to sell a general necessity to those who do not perceive the necessity or do not personally benefit from it. What is the necessity for giving arbitrarily high artificial intelligence some semblance of personhood?
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 5:16 am
(This post was last modified: May 27, 2019 at 5:34 am by I_am_not_mafia.)
Don't worry about it. The current fad in machine learning is based on sorting through large amounts of static data harvested from society's computing infrastructure. There is virtually no funding for the kind of AI you need for autonomous embodied robots that act like living agents. The latter needs an entirely different approach.
Add to that, there's also very little progress in the other technologies needed for sentient, sapient, conscious embodied AI, such as the actual robotics and sensors.
We first need to know what the technology will look like before questions of morality become relevant. Only then can we start to answer them.
Don't get taken in by all the hype regarding Artificial Intelligence. 99% of it is bullshit used to get funding and to promote individuals' careers as 'visionaries' and 'futurists' without having to deliver anything.
The AI you see now is weak AI, whereas what you are referring to is strong AI, and there has been virtually no progress in that. We do now have a new name for it, AGI (Artificial General Intelligence), but that's about it.
What I can tell you, though, is that the kind of AI we would need would have to solve the same problems as animals: maintaining homeostasis and being an autonomous agent embodied in an environment as part of a sensorimotor loop. There has been virtually no progress in this research since the '90s, except for the work done by Boston Dynamics.

The action controller will require engineering self-organising systems, and this is what I research myself (peer-reviewed publications in scientific journals and conferences). But this is an extremely difficult area that very few people have any idea about how to even start developing. People haven't even started asking the right questions yet. I don't have much idea myself, and I've been actively researching this area for over 20 years. But at least I know the right questions now.
Posts: 13901
Threads: 263
Joined: January 11, 2009
Reputation:
82
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 7:26 am
(May 26, 2019 at 3:37 pm)wyzas Wrote: Side note: Just watched a vid where a dog owner died (recently) and her will stipulated that her pet dog be cremated and placed/planted with her. They followed the will; dogs are still considered property. I'm not sure that we have advanced as much as people like to think.
Here is the story (not what I watched): http://vt.co/news/us/healthy-dog-is-euth...ead-owner/
That is not a dog lover but someone who thinks like a bronze age tribal leader.
You can fix ignorance, you can't fix stupid.
Tinkety Tonk and down with the Nazis.
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 9:03 am
(May 26, 2019 at 3:37 pm)wyzas Wrote: Side note: Just watched a vid where a dog owner died (recently) and her will stipulated that her pet dog be cremated and placed/planted with her. They followed the will; dogs are still considered property. I'm not sure that we have advanced as much as people like to think.
Here is the story (not what I watched): http://vt.co/news/us/healthy-dog-is-euth...ead-owner/
Yeah, I often make this point in these discussions: let's start considering our ethical attitudes towards the animals we are abusing right now, which we know for sure have the capacity to suffer, before we start wondering about some sci-fi scenario that may never happen.
Posts: 19789
Threads: 57
Joined: September 24, 2010
Reputation:
85
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 10:20 am
First, let’s define “suffer”. Mind, “knowing it when you see it” makes you, not the alleged sufferer, central.
Posts: 7392
Threads: 53
Joined: January 15, 2015
Reputation:
88
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 10:33 am
(This post was last modified: May 27, 2019 at 10:36 am by I_am_not_mafia.)
(May 27, 2019 at 10:20 am)Anomalocaris Wrote: First, let’s define “suffer”. Mind, “knowing it when you see it” makes you, not the alleged sufferer, central.
Conscious aversive emotional response to persistent sensory stimuli in an embodied agent that cannot be easily minimised by any actions available to it.
Memories of such a state or learned associations can also be considered suffering if they evoke the same aversive response in the agent.
Working definitions of the terms I am using:
Consciousness used here requires a sense of self as an agent embodied in an environment that it can sense and act within. This can be as simple as the agent sensing the internal state of its body and agent controller and being able to act to alter it.
Emotion is assumed here to be a counterbalance to cognition, whereby it narrows the range of behaviours or actions available to the agent, probably using some widespread modulatory effect; cognition, on the contrary, widens the range of possible behaviours or actions.
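The working definitions above can be sketched as a toy program. This is only an illustrative sketch with hypothetical names (emotion_filter, is_suffering, the action dictionaries are all invented for this post, not from any real model or library): emotion narrows the agent's available action set, and "suffering" holds when an aversive state persists and no available action can reduce it.

```python
def emotion_filter(actions, aversion):
    """Emotion as a counterbalance to cognition: strong aversion
    narrows the range of actions available to the agent."""
    if aversion > 0.5:
        # under strong aversion, keep only actions tagged as defensive
        return [a for a in actions if a["defensive"]]
    return list(actions)

def is_suffering(aversion, actions, history_len):
    """Aversive response to a persistent stimulus that cannot be
    minimised by any action available to the agent."""
    persistent = history_len >= 3  # stimulus has lasted several steps
    reducible = any(a["relief"] > 0 for a in actions)
    return aversion > 0.5 and persistent and not reducible

# Toy action repertoire; "relief" is how much an action reduces the stimulus.
ACTIONS = [
    {"name": "flee",   "defensive": True,  "relief": 0.0},
    {"name": "freeze", "defensive": True,  "relief": 0.0},
    {"name": "forage", "defensive": False, "relief": 0.0},
]

narrowed = emotion_filter(ACTIONS, aversion=0.9)
print([a["name"] for a in narrowed])           # ['flee', 'freeze']
print(is_suffering(0.9, narrowed, history_len=5))  # True: persistent, no relief
```

On these definitions, the same aversion with even one action offering relief would not count as suffering, which matches the "cannot be easily minimised by any actions available to it" clause.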
Posts: 29910
Threads: 116
Joined: February 22, 2011
Reputation:
159
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 11:11 am
If we have few qualms about killing a chimpanzee, it's unclear where the ethics to deny killing or sacrificing an intelligence that isn't human would come from.
Posts: 9538
Threads: 410
Joined: October 3, 2018
Reputation:
17
RE: Machine Intelligence and Human Ethics
May 27, 2019 at 11:28 am
(May 27, 2019 at 11:11 am)Jörmungandr Wrote: If we have few qualms about killing a chimpanzee, it's unclear where the ethics to deny killing or sacrificing an intelligence that isn't human would come from.
You would see more qualms if the machine in question were an upright biped with an observable face. Anthropomorphism does the rest.
We should build all machines to resemble a cockroach. Then no one would object to destroying it at our convenience.