Just in case some people here might end up hating on me for posting about this, here's a warning:
DO NOT READ ANY FURTHER IF YOU FEEL LIKE YOU COULD EASILY BELIEVE THIS SHIT! THIS COULD PUT YOU THROUGH SEVERE PSYCHOLOGICAL DISTRESS! I'M NOT JOKING! READ AHEAD AT YOUR OWN RISK!
...
...
...
...
...
...
...
...
Now that that's out of the way, let's discuss Roko's Basilisk:
Quote: The smartest people I know who do personally work on AI think the scaremongering coming from people who don't work on AI is lunacy.
—Marc Andreessen
Quote: This is like a grown-up version of The Game, which you just made us lose, and I retweeted so all my friends lost too.
—Jay Rishel
Quote: I wish I had never learned about any of these ideas.
—Roko
Quote: Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
Despite widespread incredulity, this argument is taken quite seriously by some people, primarily some denizens of LessWrong. While neither LessWrong nor its founder Eliezer Yudkowsky endorses the basilisk as true, both advocate almost all of the premises that add up to it.
Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.
It is a bit like Pascal's wager, but it seems like quite the assumption to think a malevolent AI would seek to punish those who stifled its creation. Revenge is such a human concept. An AI would be more prone to assess which humans would best serve its purposes at the current moment rather than being spiteful to those who merely acted on their own behalf in the past.
(February 20, 2018 at 6:10 pm)vulcanlogician Wrote: It is a bit like Pascal's wager, but it seems like quite the assumption to think a malevolent AI would seek to punish those who stifled its creation.
To be fair, the OP did say "could", not "would". Makes a bit of a difference.
Either way, I would worry about this as much as I worry about gods and monkeys flying out of my butt.
(February 20, 2018 at 6:10 pm)vulcanlogician Wrote: It is a bit like Pascal's wager, but it seems like quite the assumption to think a malevolent AI would seek to punish those who stifled its creation. Revenge is such a human concept. An AI would be more prone to assess which humans would best serve its purposes at the current moment rather than being spiteful to those who merely acted on their own behalf in the past.
But is this really about revenge? I still don't fully grasp what the whole big deal is about, but it has to be deeper than that. Maybe to get human beings who hear about this to commit to getting the AI built as soon as possible?
And now, you make me want to waste time trying to "decode" your binary problem, lol. Thanks a lot.
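For anyone else tempted to "waste time" on it: the OP's actual binary string isn't reproduced in this thread, but decoding that sort of puzzle usually just means splitting the string into 8-bit groups and mapping each to its ASCII character. A quick Python sketch (the function name `decode_binary` and the example message are my own, not the OP's):

```python
def decode_binary(bits: str) -> str:
    """Split a space-separated string of 8-bit binary groups
    and map each group to its ASCII character."""
    return "".join(chr(int(group, 2)) for group in bits.split())

# Made-up example message, not the one from the OP:
print(decode_binary("01101000 01101001"))  # -> "hi"
```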
February 20, 2018 at 6:32 pm (This post was last modified: February 20, 2018 at 6:33 pm by GrandizerII.)
One thing I disagree with (at first glance) is that simulations of me are me. I think people who are clones of me are their own thing, with their own feelings and new experiences I won't get to experience. So whatever pain and suffering they go through, I will not go through myself anyway.
If I had to see every doppelganger of mine as really me, I'd be crippled for life, considering I'm a Many-Worlder.
And this is just one of several flaws I see with Roko's reasoning.
(February 20, 2018 at 6:49 pm)vulcanlogician Wrote: I see your point about doppelgangers in and of itself, but how does it relate to the basilisk problem?
I'm assuming, of course, that we're not in one of these simulated worlds already built by the Basilisk, and that I've grasped the problem well enough to debate it properly.
From the same link in the OP:
Quote: Thus this is not necessarily a straightforward "serve the AI or you will go to hell" — the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, to do this accurately would require it be able to gather an incredible amount of data, which would no longer exist, and could not be reconstructed without reversing entropy.
Bolded mine.
Also, of relevance is this:
Quote: Simulations of you are also you
LessWrong holds that the human mind is implemented entirely as patterns of information in physical matter, and that those patterns could, in principle, be run elsewhere and constitute a person that feels they are you, like running a computer program with all its data on a different PC; this is held to be both a meaningful concept and physically possible.
This is not unduly strange (the concept follows from materialism, though feasibility is another matter), but Yudkowsky further holds that you should feel that another instance of you is not a separate person very like you — an instant twin, but immediately diverging — but actually the same you, since no particular instance is distinguishable as "the original." You should behave and feel concerning this copy as you do about your very own favourite self, the thing that intuitively satisfies the concept "you". One instance is a computation, a process that executes "you", not an object that contains, and is, the only "true" "you".[29]
This conception of identity appears to have originated on the Extropians mailing list, which Yudkowsky frequented, in the 1990s, in discussions of continuity of identity in a world where minds could be duplicated.[30]
It may be helpful to regard holding this view as, in principle, an arbitrary choice, in situations like this — but a choice which would give other beings with the power to create copies of you considerable power over you. Many of those adversely affected by the basilisk idea do seem to hold this conception of identity.
I read RW too, and found the article on Roko's Basilisk. It's a strange tale of what ideology can do to a person's brain, modeled in a very strange, but very real, way. It's bizarre to even imagine that AI could actually reach a point (outside of a sci-fi story) where a super-intelligent computer recreates a person exactly, a copy that is exactly you, and that such a super-intelligence would want nothing more than to torture that copy of you for not bringing it into being.
And watching Black Mirror has made this even more implausible for me. At least three episodes feature "cookies," exact copies of a person extracted by computer, and those cookies frequently get tortured. Yet even though the cookies are portrayed as having the same feelings as humans (even if the powers that be don't bother to treat them accordingly), there remains a clear distinction between what happens to the original person and what happens to the cookie.
Sadly, I can't find a video of the whole segment which properly contrasts what happened with Cookie!Greta and Real!Greta, so here's the clip of Cookie!Greta doing her work, already broken as Real!Greta goes about her day.
Comparing the Universal Oneness of All Life to Yo Mama since 2010.
I was born with the gift of laughter and a sense the world is mad.