
Roko's Basilisk (Read Warning In Post First)
#1
Roko's Basilisk (Read Warning In Post First)
Just in case some people here might end up hating on me for posting about this, here's a warning:

DO NOT READ ANY FURTHER IF YOU FEEL LIKE YOU COULD EASILY BELIEVE THIS SHIT! THIS COULD PUT YOU THROUGH SEVERE PSYCHOLOGICAL DISTRESS! I'M NOT JOKING! READ AHEAD AT YOUR OWN RISK!

...

...

...

...

...

...

...

...


Reply
#2
RE: Roko's Basilisk (Read Warning In Post First)
It is a bit like Pascal's wager, but it seems like quite the assumption to think a malevolent AI would seek to punish those who stifled its creation. Revenge is such a human concept. An AI would be more prone to assess which humans would best serve its purposes at the current moment rather than being spiteful to those who merely acted on their own behalf in the past.


Reply
#3
RE: Roko's Basilisk (Read Warning In Post First)
(February 20, 2018 at 6:10 pm)vulcanlogician Wrote: It is a bit like Pascal's wager, but it seems like quite the assumption to think a malevolent AI would seek to punish those who stifled its creation.

To be fair, the OP did say "could", not "would". Makes a bit of a difference.

Either way, I would worry about this as much as I worry about gods and monkeys flying out of my butt.
Reply
#4
RE: Roko's Basilisk (Read Warning In Post First)
(February 20, 2018 at 6:10 pm)vulcanlogician Wrote: It is a bit like Pascal's wager, but it seems like quite the assumption to think a malevolent AI would seek to punish those who stifled its creation. Revenge is such a human concept. An AI would be more prone to assess which humans would best serve its purposes at the current moment rather than being spiteful to those who merely acted on their own behalf in the past.



But is this really about revenge? I still don't fully grasp what the whole big deal is about, but it has to be deeper than that. Maybe to get human beings who hear about this to commit to getting the AI built as soon as possible?

And now, you make me want to waste time trying to "decode" your binary problem, lol. Thanks a lot.
Reply
#5
RE: Roko's Basilisk (Read Warning In Post First)
Strikes me as anthropomorphizing AI.

Revenge is a very human quality.
"The first principle is that you must not fool yourself — and you are the easiest person to fool." - Richard P. Feynman
Reply
#6
RE: Roko's Basilisk (Read Warning In Post First)
One thing I disagree with (at first glance) is that simulations of me are me. I think people who are clones of me are their own thing, with their own feelings and new experiences I won't get to experience. So whatever pain and suffering they go through, I will not go through myself anyway.

If I had to see every doppelganger of mine as really me, I'd be crippled for life, considering I'm a Many-Worlder.

And this is just one of several flaws I see with Roko's reasoning.
Reply
#7
RE: Roko's Basilisk (Read Warning In Post First)
I see your point about doppelgangers in and of itself, but how does it relate to the basilisk problem?
Reply
#8
RE: Roko's Basilisk (Read Warning In Post First)
If AI takes on an anthropomorphic character, I would suspect it would draw all its inferences from conversations it can find on the internet.

If that happens, we are fucking doomed, because our new Electronic God will be a goddamned internet troll.
Reply
#9
RE: Roko's Basilisk (Read Warning In Post First)
(February 20, 2018 at 6:49 pm)vulcanlogician Wrote: I see your point about doppelgangers in and of itself, but how does it relate to the basilisk problem?

I'm assuming, of course, that we're not in one of these simulated worlds already built by the Basilisk, and that I've grasped the problem well enough to debate it properly.

From the same link in the OP:

Quote: Thus this is not necessarily a straightforward "serve the AI or you will go to hell" — the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, to do this accurately would require it be able to gather an incredible amount of data, which would no longer exist, and could not be reconstructed without reversing entropy.

Bolded mine.

Also, of relevance is this:

Quote: Simulations of you are also you

LessWrong holds that the human mind is implemented entirely as patterns of information in physical matter, and that those patterns could, in principle, be run elsewhere and constitute a person that feels they are you, like running a computer program with all its data on a different PC; this is held to be both a meaningful concept and physically possible.

This is not unduly strange (the concept follows from materialism, though feasibility is another matter), but Yudkowsky further holds that you should feel that another instance of you is not a separate person very like you — an instant twin, but immediately diverging — but actually the same you, since no particular instance is distinguishable as "the original." You should behave and feel concerning this copy as you do about your very own favourite self, the thing that intuitively satisfies the concept "you". One instance is a computation, a process that executes "you", not an object that contains, and is, the only "true" "you".[29]

This conception of identity appears to have originated on the Extropians mailing list, which Yudkowsky frequented, in the 1990s, in discussions of continuity of identity in a world where minds could be duplicated.[30]

It may be helpful to regard holding this view as, in principle, an arbitrary choice, in situations like this — but a choice which would give other beings with the power to create copies of you considerable power over you. Many of those adversely affected by the basilisk idea do seem to hold this conception of identity.
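The "no particular instance is distinguishable as the original" premise is easy to illustrate with data, if not with minds. Here's a toy Python sketch (purely illustrative, and obviously no claim about actual consciousness) showing that once a pattern of information is duplicated, nothing in the data itself marks one copy as the original:

```python
import copy
import hashlib
import pickle

# A toy "mind state": on the quoted view, just a pattern of information.
original = {
    "memories": ["learned to ride a bike", "read about the basilisk"],
    "preferences": {"coffee": True, "torture": False},
}

# "Run it elsewhere": duplicate the pattern in full.
duplicate = copy.deepcopy(original)

# Serialize both and hash the bytes. The encodings come out identical,
# so nothing intrinsic to the data distinguishes "original" from "copy".
h1 = hashlib.sha256(pickle.dumps(original)).hexdigest()
h2 = hashlib.sha256(pickle.dumps(duplicate)).hexdigest()
print(h1 == h2)  # True: as data, the two instances are indistinguishable
```

Of course, the moment the two instances start accumulating different experiences they diverge, which is exactly the doppelganger objection raised earlier in the thread.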
Reply
#10
RE: Roko's Basilisk (Read Warning In Post First)
I read RW too, and found the article on Roko's Basilisk. It's a strange tale of what ideology can do to a person's brain, modeled in a very strange, but very real, way. It's bizarre to even imagine that our AI could actually reach a point (outside of a sci-fi story) where we create a super-intelligent computer that can recreate a person exactly, a copy that would be the exact same as you, and that such a super-intelligence would want nothing more than to torture this exact copy of you for not bringing it into being.

And watching Black Mirror has made this even more implausible for me. At least three episodes include "cookies," exact copies of a person extracted by computer, which frequently get tortured. But even though the cookies are perceived as having the same feelings as humans (even if the powers that be don't bother to treat them like it), there's still a distinction between what's happening to the original person and what's happening to the cookie.

Sadly, I can't find a video of the whole segment that properly contrasts what happened to Cookie!Greta and Real!Greta, so here's the clip of Cookie!Greta doing her work, already broken, as Real!Greta goes about her day.



Comparing the Universal Oneness of All Life to Yo Mama since 2010.


I was born with the gift of laughter and a sense the world is mad.
Reply


