
Poll: Will artificial intelligence ever achieve true sentience?
This poll is closed.
There are good reasons to think this will never happen. (3 votes, 11.11%)
I can't prove it but absolutely not. The idea of artificial sentience is absurd. (3 votes, 11.11%)
There is no telling what the future may hold. It's a coin flip. (4 votes, 14.81%)
Yes, smart machines are trending in that direction already. (12 votes, 44.44%)
Absolutely yes and I can describe to you the mechanisms which make it possible in principle. (2 votes, 7.41%)
Other. (Please explain.) (3 votes, 11.11%)
Total: 27 vote(s), 100%

Will AI ever = consciousness or sentience?
#51
RE: Will AI ever = consciousness or sentience?
(November 29, 2012 at 3:16 pm)Ryantology Wrote: I think, in a practical sense, we will probably duplicate the effect (or create a convincing facsimile of it) by chance before we understand it in that level of detail, however.

I of course think you are conflating expert processing with something much more subjective in nature. Outward appearances will never provide adequate support for the existence of subjective states. It would be a much easier task to program a machine to fool a human observer than it would be to create the conditions where a contemplation program corresponds to anything near what we ourselves mean by contemplation.

(November 29, 2012 at 3:16 pm)Ryantology Wrote: I think, at the point where an AI tells us that it is conscious, and can convince a majority of people that its thought processes are independent and unique, we have to start giving them the benefit of the doubt (as we do naturally to every other person we encounter) and call them 'conscious'.

Fooling a human observer is beside the point, though an interesting challenge for AI in its own right.
#52
RE: Will AI ever = consciousness or sentience?
I'm too ignorant on the topic to provide an informed opinion, but I've enjoyed tagging along. Something else I will add to my 'needs delving into' list.

I'll just state that I like Z's comment regarding the potential of consciousness being an emergent property of the brain, but think it as yet lacks explanatory power for the purpose of AI and the like.

Regarding the pursuit of framing the question, I'd like to share an article by Massimo Pigliucci in response to an interview about the complexity of the internet and its potential for 'waking up'. I'll paste part of his conclusion. Thought it was apropos.

http://rationallyspeaking.blogspot.com/2...ernet.html

Quote: ...let us stress the point once more: neither complexity per se nor computational ability on its own explain consciousness. Yes, conscious brains are complex, and they are capable of computation, but they are clearly capable of something else (to feel what it is like to be an organism of a particular type), and we still don’t have a good grasp of what is missing in our account of consciousness to explain that something else. The quest continues...
#53
RE: Will AI ever = consciousness or sentience?
(November 29, 2012 at 3:52 pm)whateverist Wrote: Fooling a human observer is beside the point, though an interesting challenge for AI in its own right.

In many ways the human mind seems programmed to spot differences. Even if AI initially seems able to 'fool' an observer, the trick might quickly be discovered. Remember that when this film was first shown, people reportedly ran screaming, convinced the train was about to knock them down.

http://www.youtube.com/watch?v=BxeLacGat-c
#54
RE: Will AI ever = consciousness or sentience?
There's an interesting notion I happened upon recently, that one hemisphere of the brain is good at picking out differences and new things, the other concentrates on preserving the status quo in terms of the mind's representation and understanding of the world. (I don't recall the source offhand.) I'm a bit skeptical of such a global conclusion, but it would add some legs to the concept of cognitive dissonance and simultaneously go a long way toward explaining why perseverance of belief is such a bedrock feature of our cognitive biases. (Above and beyond evolutionary reasons, which I won't go into.)


#55
RE: Will AI ever = consciousness or sentience?
The long post by Apophenia was very good, and raises some serious problems for thinking about non-human, inorganic (AI) consciousness. First:

Quote: Nonetheless, consciousness, in my view, is a result of specific processes and their supporting processes, and that without them, or something functionally equivalent, you will not have a machine possessed of consciousness and sentience, just an intelligent machine.

This boundary between intelligence and consciousness, as pointed out above by others, is a definitional problem. I'm not sure I followed A's discussion later, but the works critiquing introspection suggest that there are problems inherent in using the brain to understand the brain.

Quote: In a nutshell, if consciousness is a physical process of the brain, based on known physical processes, then it cannot have the properties which it thinks it does have.

In that context, the above quote makes sense.

However, "problems" does not mean we have no reliable method for locating consciousness in the brain--or in some combination of the brain and the nervous system. I don't want to put words in A's mouth because I confess I couldn't fully follow the discussion of temporality and location. Most neuroscience I've stumbled my way through recently links most of the key aspects of what we'd recognize as 'consciousness' directly to the material development of that organ behind your eyes--e.g., theory of mind.

What A's post really triggers, in my mind, is a potential problem related to defining consciousness by using the human brain, which we don't fully understand anyway, as a kind of litmus for what a non-human form of it would look like. In a sense, you might end up with "no true Scotsman" fallacies in which nothing ever fits our own anthropocentric views of what consciousness should look like. On top of all this, as I said, there are methodological issues in using the brain to understand the brain.

In a way, identifying AI would be like finding extra-terrestrial life. From a biological standpoint, this problem is a lot easier to understand. If we assume extra-terrestrial life will look like what we already know, we may not find anything anytime soon...

In looking for AI, I just imagine a perpetually moving target in which humans reject all candidates due to their inability to replicate what we think consciousness should be--a standard that, in all probability, we'd fail to meet ourselves if we really understood how our brains work.

Z
I'm always in search of faith-free spaces. Let's make them, enlarge them, and enjoy them!
Bertrand Russell quotes!
Americans United for the Separation of Church and State -- if you haven't joined their Facebook page, do so by all means.
#56
RE: Will AI ever = consciousness or sentience?
(November 28, 2012 at 1:28 am)whateverist Wrote: On another website I visit it seems most people think our machines will be joining us as sentient beings in their own right any day now. I'd like to poll our larger group to find out what is the prevailing opinion here.

Can machines even be said to think? What counts as thinking? If the execution of decision trees is thinking then indeed they already do 'think'. A program that can diagnose diseases strikes me as very intelligent, but its intelligence of course reflects that of its programmer.

I'm not convinced that machines are or ever will be up to all the tasks we might describe as thinking, but I'll concede that at least some 'thinking' tasks can be accomplished by machines.

Even so, is there any reason to think a program will ever experience subjective states or become self aware or have an identity crisis or be said to exhibit wisdom that does not directly reflect that of its programmer? I see that as a very different question than asking whether a machine could be programmed in such a way as to fool us into thinking these things are going on. I'm skeptical to the point of finding the idea absurd.

I can't make an airtight argument against the possibility, but I don't believe it is or ever will be possible. What do you think?

As long as the machine follows the programming laid down by the programmer, I agree, it cannot be considered intelligent or sentient. But, if a machine is created with the capacity to override and write its own programs, then yes, it would become intelligent and sentient and the extent to which it can write its own programs would reflect the level of sentience it has.

For example, consider a diagnostic machine which has the names of all known diseases, their symptoms and treatments fed into it. Now, if the machine becomes capable of adding new entries or reclassifying the previous ones, then it is displaying intelligence or sentience.
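The kind of self-revising diagnostic machine described here can be sketched in a few lines. This is a minimal illustration only; the class name, disease entries and overlap-based matching rule are all hypothetical, not anyone's actual system:

```python
class DiagnosticMachine:
    """Toy diagnostic system: a disease -> symptoms table it can revise itself."""

    def __init__(self, knowledge):
        # knowledge: dict mapping disease name -> set of symptoms
        self.knowledge = {d: set(s) for d, s in knowledge.items()}

    def diagnose(self, symptoms):
        """Return the disease whose known symptoms best overlap the input."""
        symptoms = set(symptoms)
        best, best_overlap = None, 0
        for disease, known in self.knowledge.items():
            overlap = len(symptoms & known)
            if overlap > best_overlap:
                best, best_overlap = disease, overlap
        return best

    def add_entry(self, disease, symptoms):
        """Add a disease the original programmer never entered."""
        self.knowledge[disease] = set(symptoms)

    def reclassify(self, disease, symptoms):
        """Revise an existing entry in light of new observations."""
        self.knowledge[disease] = set(symptoms)


machine = DiagnosticMachine({"flu": {"fever", "cough", "aches"},
                             "cold": {"cough", "sneezing"}})
machine.add_entry("hay fever", {"sneezing", "itchy eyes"})
print(machine.diagnose({"sneezing", "itchy eyes"}))  # → hay fever
```

Whether `add_entry` and `reclassify` are invoked by the machine's own inference rather than by a programmer is, of course, exactly the point disputed in the replies.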
#57
RE: Will AI ever = consciousness or sentience?
(November 30, 2012 at 8:17 pm)genkaus Wrote: As long as the machine follows the programming laid down by the programmer, I agree, it cannot be considered intelligent or sentient. But, if a machine is created with the capacity to override and write its own programs, then yes, it would become intelligent and sentient and the extent to which it can write its own programs would reflect the level of sentience it has.

For example, consider a diagnostic machine which has the names of all known diseases, their symptoms and treatments fed into it. Now, if the machine becomes capable of adding new entries or reclassifying the previous ones, then it is displaying intelligence or sentience.

I think such a program has the capacity to perform the task of medical diagnosis more thoroughly and accurately than any human, and may well be able to access all the latest most relevant statistical data by way of the cloud. So in that sense I would say it is highly intelligent and potentially to a degree exceeding our own for the task for which it has been programmed. It isn't clear to me how its capacity to update and integrate new data, though highly intelligent, would ever amount to sentience.

I suspect I'm more skeptical because I play no computer games and so don't spend much time in virtual environments. Of course, this is a virtual environment, but I'm not the only human here .. or am I?
#58
RE: Will AI ever = consciousness or sentience?
(November 30, 2012 at 8:17 pm)genkaus Wrote: For example, consider a diagnostic machine which has the names of all known diseases, their symptoms and treatments fed into it. Now, if the machine becomes capable of adding new entries or reclassifying the previous ones, then it is displaying intelligence or sentience.

That would not work for me as a definition of sentience. You could set up a programme that grouped objects by various characteristics and could look for new connections and be able to assess new objects.
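A programme of the sort described, one that groups objects by characteristics, finds new connections, and assesses new objects, is indeed easy to build without anything resembling a self. A hypothetical sketch (the feature sets and the overlap threshold are invented for illustration):

```python
def group_by_shared_features(objects, min_shared=2):
    """Group objects whose feature sets overlap in at least `min_shared` ways."""
    groups = []
    for name, feats in objects.items():
        for group in groups:
            if len(feats & group["features"]) >= min_shared:
                group["members"].append(name)
                group["features"] |= feats  # a 'new connection' extends the group
                break
        else:
            groups.append({"members": [name], "features": set(feats)})
    return groups

def assess(groups, new_features, min_shared=2):
    """Assign a new object to the best-matching group, or None if none fits."""
    best = max(groups, key=lambda g: len(new_features & g["features"]))
    return best["members"] if len(new_features & best["features"]) >= min_shared else None

objects = {
    "sparrow": {"wings", "feathers", "flies"},
    "bat":     {"wings", "flies", "fur"},
    "cat":     {"fur", "whiskers", "purrs"},
}
groups = group_by_shared_features(objects)
print([g["members"] for g in groups])       # → [['sparrow', 'bat'], ['cat']]
print(assess(groups, {"fur", "whiskers"}))  # → ['cat']
```

The point stands: nothing in this classification and connection-finding behaviour requires, or evidences, a self.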

As far as I can see the only way of telling if a thing has a self, is seeing whether it is selfish.
#59
RE: Will AI ever = consciousness or sentience?
@ the OP.

Yes.

Why not? If life came from "non-life" (or life that is so lifeless that it's virtually and practically "non-life") then why can't conscious and sentient biological life develop into and become sentient and conscious non-biologically mechanical life (and I don't mean mechanical in a "bad" way... mechanisms are more complicated than that. After all, suppose a paradoxical mechanism developed that allowed a mechanism that was both free and orderly?)? There's sexism, there's racism, and there's species-ism, but how about a new label (not that there will or won't be more to come) for a perhaps currently unlabelled but already existent bias/prejudice/dogmatic attitude... or, to put it less negatively: something that perhaps isn't understood yet and sadly misleads us into unconsciously avoiding our true potential in this(these) world(s)/universe(s)? What should we label this problem that there seems, to me, to be? Should we label it positively, negatively or neutrally? Many people seem (at least in my view) to see neutrality itself as hostile... but is it? How can it be, if it's truly neutral? And can't it just as easily be friendly if it really can be hostile, despite the fact that it's by definition neither? You can't have it both ways and have such balance (unless some supernatural/super-natural miracle (or perhaps logical paradox) was formed (or is forming)).

I would be very happy if some person or persons commented on my point of view, even if I don't get to respond to them, because, frankly and honestly, I really just want to do my bit and make my mark by giving this stuff of thought some thought, and to pass on my message in a realistic way that hopefully moves enough sentient beings close enough to the ideal. And I do hope for more minds trying to connect their own personal interpretations with mine.
#60
RE: Will AI ever = consciousness or sentience?
(December 1, 2012 at 5:46 am)DoubtVsFaith Wrote: @ the OP.

Yes.

Why not? If life came from "non-life" (or life that is so lifeless that it's virtually and practically "non-life") then why can't conscious and sentient biological life develop into and become sentient and conscious non-biologically mechanical life (and I don't mean mechanical in a "bad" way... mechanisms are more complicated than that. After all, suppose a paradoxical mechanism developed that allowed a mechanism that was both free and orderly?)?

Interesting. I certainly believe we have on this planet seen a progression of inorganic to preorganic to organic and, somewhere along the way, sentience. Hardware and software start off as inorganic but what is preventing them from proceeding in the same way? (I'm not sure whether software starts off as inorganic, virtual or both?) I've got nothing in principle to offer against it. So at this level I have to concede the possibility. The transformation -if it comes- would be no more remarkable than the progression of life. You may have just budged me. Let me sleep on it.

(November 29, 2012 at 1:55 pm)apophenia Wrote: First of all, let me say that I'm not suggesting (as has been done elsewhere if not here), that if you have a sufficiently complex system of computation, perhaps performing a specific set of functions that allows it to map inputs to outputs (behaviors) in certain ways that you will have what we call consciousness or sentience. It requires a complex computational machine, yes, but a complex computational machine with the right "program" (much as I hate to use that word in this context, as it misleads). This is a first fundamental distinction which needs to be made. Again, with caveats regarding the chosen metaphor, there are those who believe that consciousness is (largely) the result of the specific "software" that our brains run, and that the hardware is effectively irrelevant, and therefore it can indeed be duplicated on another, non-biological platform. Then there are those, like Searle, who argue that the specific nature of our biological hardware is essential to the genesis of consciousness (though in what way he does not say). There is a further camp which argues that not only is it the specific hardware that is essential, but that consciousness depends on properties of that hardware and their functions which science does not yet understand or appreciate adequately (quantum consciousness, microtubules and Crick's resonance hypothesis being examples).

I believe that consciousness results from a certain configuration of computational processes, but that there is nothing unique to the computational abilities of its putative host, the human brain, which make it uniquely capable of carrying out these processes. Nonetheless, consciousness, in my view, is a result of specific processes and their supporting processes, and that without them, or something functionally equivalent, you will not have a machine possessed of consciousness and sentience, just an intelligent machine. Consciousness and sentience are special in that they are distinct, identifiable kinds, but they are not special in that they require a specific hardware host, or even an exactly equivalent program.

We might actually agree about some things. (I won't tell if you don't.) The second bolding (all mine of course) comes very close to what I would say too. I think the special sauce comes from the supporting systems. I think it is something intrinsic to our organic heritage that gives us an affinity for certain taste and scent sensations, movements, music and visual arts. Associations between understandings play a part but must be grounded in the 'flavor' the sensation has for us. That flavor in the sensations is more than a list of the component chemicals. I don't need to check the label to know how to respond. I respond by grokking the sensation itself. Then associations and thoughts come in to thicken the soup of experience. I suspect our smart machines will always be puzzled by this.


(November 29, 2012 at 1:55 pm)apophenia Wrote: One observation I would make is that consciousness, popularly conceived, and as I experience it myself, does not exist in the physical world.

That's okay. I'm kind of bi- myself when it comes to dualism/monism. It's fine to call them like you see them and let the chips fall where they may.

[I'm not done with this post of yours but am still chewing.]

(November 30, 2012 at 8:38 am)DoktorZ Wrote: This boundary between intelligence and consciousness, as pointed out above by others, is a definitional problem.

Yes but it isn't an arbitrary boundary. Intelligence is the possession of the relevant information with the capacity to apply it appropriately to achieve desired outcomes. As "appropriately" approaches "optimally" we move from smart to smarter. But consciousness includes loads of special sauce. In addition to having the intelligence to perform smartly, to be conscious/sentient we should also care about those outcomes. So long as the programmer determines the preferred outcome .. [Free will alert, free will alert .. abort, abort.] That was close.

I'm appreciating hearing your thoughts on this and have more reaction to this post. But in the interest of having a life that is more than virtual, I need to get outside.