Does it make sense to speak of "Universal Consciousness" or "Universal Intell...
May 27, 2014 at 2:13 am
(This post was last modified: May 27, 2014 at 2:37 am by Rampant.A.I..)
(May 27, 2014 at 1:44 am)Chas Wrote: Philosophy does not give us answers, it only helps us ask better questions.
I think this is the most succinct summary of why abstract constructs within the mind do not give us knowledge; they give us hypotheticals.
If the hypotheticals are testable, we can then form knowledge.
Conducting thought experiments is valuable insofar as it trains us to think more clearly about difficult things and form better hypotheses.
So, to the agnostic: while I can appreciate the position being argued for as a possibility, you and I both know the only rational thing to do, lacking data, is to withhold judgment.
The video I posted of Lt. Commander Data was a scene where he begins talking to himself, and only then realizes he's having an experience he has only ever witnessed in other conscious beings.
That, to me, is the definition of self-awareness, and, I believe, the issue people have raised with the term.
A being could be aware of the self, and many animals seem to be. This is reinforced by experimentation.
But human self-awareness is not only awareness of the self: it is also the ability to assign meaning to, and qualify, the experience of selfhood, and to contemplate what it is to be self-aware.
Until we construct or encounter an AI consciousness like Data, we won't know whether metacognition is possible for an artificial brain of sufficient complexity, though there's absolutely no reason to assume it isn't.
I myself believe self-aware consciousness is something of an illusion, produced to provide a narrative that confers the evolutionary advantage of better decision-making. It's a belief I've held for some time, and philosophy of mind, neuroscience, evolutionary psychology, and the You Are Not So Smart podcast affirm it.
In my opinion, building a "Chat Bot" complex enough to pass the Turing test would make it indistinguishable from an AI capable of metacognition, but I have no idea whether this would imply metacognitive ability through sheer complexity, or simply blur the distinction until it ceases to make conceptual sense.
We may make inferences, but without data (pun intended) or even solid theories based on data to back them up, they're shots in the dark.