(July 4, 2022 at 5:55 pm)The Grand Nudger Wrote: The objection then, it seems, is not whether the victim machine is conscious, but its motivation.
I agree that motivated conscious entities are dangerous. Just look at us.
I'd argue that there's no such thing as danger without qualia. Evolved systems might avoid or approach one another in all sorts of ways, simply because that's how they happened to evolve. But none of it, so far as I can tell, matters without sentience.
AI systems might soon be capable of much more complex processing and behavior than we are, in that sense more evolved than we are. But even if AIs develop tendencies to maintain a "self," and therefore act as though their existence matters, will it really?