Posts: 1164
Threads: 7
Joined: January 1, 2014
Reputation:
23
Trolley problem: 2035 version
May 27, 2015 at 9:56 am
(This post was last modified: May 27, 2015 at 10:05 am by JuliaL.)
Hypothetical scenario:
Congratulations! Your DARPA funded project is a complete success!
You have discovered and fully characterized two neural subnets in the Allen catalog of the human brain.
1) A sub-ganglion whose configuration and activity fully characterize and implement gender beliefs and behavior.
2) A network which implements unique religious faith.
Fellow researcher "Bob" has developed nanoscale robots that are able to manipulate neurons at the molecular level. Applying these to the above networks gives you the capability to fully determine future beliefs and behavior. Correctly used, this process is fully controlled and without deleterious side effects (impossible, I know, but this is my hypothetical). For example:
1) You can fully convert homosexual desire and behavior to stereotypically heterosexual.
2) You can convert any true believer in any religious faith to any other or remove any religious bias whatsoever, leaving the person atheist.
Is it ethically acceptable to use the nanobots to make the above modifications?
So how, exactly, does God know that She's NOT a brain in a vat?
Posts: 19644
Threads: 177
Joined: July 31, 2012
Reputation:
92
RE: Trolley problem: 2035 version
May 27, 2015 at 10:40 am
nah.... It's so much more fun to laugh at creationists.
However, I see few people laughing at gays...
Can you also use the same sort of nanobots to cure cancer? That would be soooo much more helpful.
Posts: 5706
Threads: 67
Joined: June 13, 2014
Reputation:
69
RE: Trolley problem: 2035 version
May 27, 2015 at 10:45 am
(This post was last modified: May 27, 2015 at 10:46 am by Jenny A.)
And here I was waiting to decide whether I should deflect the missile set to destroy planet X, with several billion inhabitants, toward spaceship Y with only a thousand people on board. . .
So getting away from the trolley and turning to mind altering bots. . .
No, altering someone's mind to change their beliefs and/or desires is a terrible idea. Using robots makes it no less horrible than if it were done in a "liberation camp." The only moral use I can see for such a bot is voluntary change on the part of the person whose mind is to be altered. And even then, I would think serious counseling would be in order before use.
If there is a god, I want to believe that there is a god. If there is not a god, I want to believe that there is no god.
Posts: 198
Threads: 5
Joined: April 30, 2015
Reputation:
4
RE: Trolley problem: 2035 version
May 27, 2015 at 10:50 am
I do think the world would be a better place without religion, but only because that world would require the people in it to have been enlightened and rational enough to leave their religion behind on their own. Artificially manipulating someone's mind or enforcing a ban on religions would defeat the purpose.
“The larger the group, the more toxic, the more of your beauty as an individual you have to surrender for the sake of group thought. And when you suspend your individual beauty you also give up a lot of your humanity. You will do things in the name of a group that you would never do on your own. Injuring, hurting, killing, drinking are all part of it, because you've lost your identity, because you now owe your allegiance to this thing that's bigger than you are and that controls you.” - George Carlin
Posts: 9147
Threads: 83
Joined: May 22, 2013
Reputation:
46
RE: Trolley problem: 2035 version
May 27, 2015 at 10:51 am
If I make everyone exactly like me, and I still fucking hate humanity, then what will that say?
I'd rather not find out.
Posts: 5706
Threads: 67
Joined: June 13, 2014
Reputation:
69
RE: Trolley problem: 2035 version
May 27, 2015 at 10:51 am
This does raise a question about homosexuality, though. Sexual orientation is not a choice now, but such a bot would make it a choice. That raises no real ethical dilemma for me, as I don't see anything immoral about same-gender sex. But there are people, mostly Christian I imagine, who excuse homosexuality only because it is not a personal choice, or who condemn it because they think it is a personal choice. Such a bot might set gay rights back decades.
If there is a god, I want to believe that there is a god. If there is not a god, I want to believe that there is no god.
Posts: 3541
Threads: 0
Joined: January 20, 2015
Reputation:
35
RE: Trolley problem: 2035 version
May 27, 2015 at 10:52 am
(May 27, 2015 at 9:56 am)JuliaL Wrote: [...]
Is it ethically acceptable to use the nanobots to make the above modifications?
Yup, totally fine. But then again - I'm evil.
"The fact that a believer is happier than a skeptic is no more to the point than the fact that a drunken man is happier than a sober one." - George Bernard Shaw
Posts: 23918
Threads: 300
Joined: June 25, 2011
Reputation:
151
RE: Trolley problem: 2035 version
May 27, 2015 at 10:56 am
(This post was last modified: May 27, 2015 at 10:57 am by Whateverist.)
I'm not clear whether you are asking:
1) Is it permissible for me to do for my own reasons?
2) Is it permissible for me to share the technology with a person who wishes for one of these outcomes?
To 1 I would answer no. To 2 I would want a legally binding disclaimer signed and witnessed as well as a suitable fee for the service.
(Of course, Bob and I would probably both have to sign off on it.)
Posts: 19789
Threads: 57
Joined: September 24, 2010
Reputation:
85
RE: Trolley problem: 2035 version
May 27, 2015 at 11:19 am
(This post was last modified: May 27, 2015 at 11:20 am by Anomalocaris.)
(May 27, 2015 at 9:56 am)JuliaL Wrote: Hypothetical scenario:
[...]
Is it ethically acceptable to use the nanobots to make the above modifications?
In most cases, no. But never say never.
Let's say in 2036, a Mars-sized rogue planet is detected heading toward Earth on a collision course. It is vastly beyond human capacity to avert the big splat. There is nowhere nearby we can escape to. It would take essentially the wholehearted effort of all mankind to build a generation ship that might allow a small human seed population to depart on a long trip and survive. That's humanity's only chance. But half the population would rather pray for salvation at the end times. I would say send in the nanobots.
BTW, there is an answer to a more classical trolley car problem in there somewhere.
Posts: 3395
Threads: 43
Joined: February 8, 2015
Reputation:
33
RE: Trolley problem: 2035 version
May 27, 2015 at 11:35 am
(This post was last modified: May 27, 2015 at 12:14 pm by Pyrrho.)
First of all, this is nothing like the trolley problem, or, at least, is not in any obvious way like the trolley problem.
Second, your problem is really more complicated than people seem to be taking it. The serious replies so far seem to regard using the nanobots as a violation of personal autonomy. [Edit: I see that while I was writing this, someone else has posted something bringing up the issue in this post.] However, there is more at play here.
You don't say how the nanobots are administered, so I will just make up something convenient for explaining why it is more complicated than just a matter of personal autonomy.
Suppose we can just disperse the nanobots into the air, and they will seek out hosts. Now, imagine spraying a bunch of these nanobots over an area with religious strife, like Syria and Iraq. Do you think that if everyone in Syria and Iraq suddenly became atheists, they would kill each other less? Or suppose we got them to Saudi Arabia: do you think religious oppression would stop if everyone there suddenly became an atheist?
My point is, this is not merely an issue of personal autonomy, but also about how these things will affect others. I am not saying (yet) that these things should be done if they were possible, but I am saying that one should consider more than just the issue of personal autonomy. Is, for example, the personal autonomy of a religious murderer more important than the lives of his victims?
So, you really have a hypothetical issue that has a good deal of complexity to it.
"A wise man ... proportions his belief to the evidence."
— David Hume, An Enquiry Concerning Human Understanding, Section X, Part I.