Just in case some people here end up hating on me for posting this, here's a warning:
DO NOT READ ANY FURTHER IF YOU FEEL LIKE YOU COULD EASILY BELIEVE THIS SHIT! THIS COULD PUT YOU THROUGH SEVERE PSYCHOLOGICAL DISTRESS! I'M NOT JOKING! READ AHEAD AT YOUR OWN RISK!
...
...
...
...
...
...
...
...
Now that that's out of the way, let's discuss Roko's Basilisk:
Quote: The smartest people I know personally who do work on AI think the scaremongering coming from people who don't work on AI is lunacy.
—Marc Andreessen
Quote: This is like a grown up version of The Game, which you just made us lose, and I retweeted so all my friends lost too.
—Jay Rishel
Quote: I wish I had never learned about any of these ideas.
—Roko
Quote: Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.
Despite widespread incredulity, this argument is taken quite seriously by some people, primarily some denizens of LessWrong. While neither LessWrong nor its founder Eliezer Yudkowsky advocates the basilisk as true, they do advocate almost all of the premises that add up to it.
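For the decision-theory-minded: the Pascal's-wager structure mentioned in the quote boils down to a tiny expected-utility comparison. Here's a minimal sketch in Python; every number and name in it is an illustrative assumption of mine, not anything from Roko's post or LessWrong:

Code:
# Sketch of the Pascal's-wager-style comparison behind the basilisk.
# All probabilities and utilities are made-up illustrative values.

P_BASILISK = 1e-9      # assumed probability the punishing AI ever exists
U_PUNISHMENT = -1e15   # assumed disutility of retroactive punishment
U_DONATION = -1e3      # assumed cost of helping bring it about (donating)

# Refuse the wager: accept a tiny chance of a huge punishment.
eu_ignore = P_BASILISK * U_PUNISHMENT   # -1,000,000

# Take the wager: pay the cost up front, avoid punishment entirely.
eu_comply = U_DONATION                  # -1,000

print(f"ignore: {eu_ignore:,.0f}   comply: {eu_comply:,.0f}")

The trick, as with Pascal's original wager, is that an arbitrarily large utility can swamp an arbitrarily small probability. The standard rebuttal applies here too: a rival AI that punishes compliance instead is just as conceivable, and the two calculations cancel out.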
Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch, and the version of you in the winning branch can then use the jackpot to fund the AI's development.
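The lottery fix leans on many-worlds reasoning: every quantum outcome occurs in some branch, so some version of you is guaranteed to win. Purely as a sketch (made-up numbers again), here's why that doesn't change the bet: under many-worlds, decision weight follows branch measure, which behaves exactly like ordinary probability.

Code:
# Why "you win in some quantum branch" doesn't rescue the bet:
# branch measure behaves like ordinary probability, so the
# measure-weighted payoff equals the classical expected value.
# Illustrative numbers only.

TICKET_COST = 2.0
JACKPOT = 1e8
P_WIN = 1e-9   # branch measure (= probability) of the winning outcome

# Payoff averaged over branches, weighted by their measure.
expected_value = P_WIN * JACKPOT + (1 - P_WIN) * 0.0 - TICKET_COST
print(f"expected value per ticket: {expected_value:.2f}")   # ~ -1.90

A guaranteed win in a vanishingly thin branch is still, on any measure-weighted account, a losing ticket.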