Pleasure and Joy
RE: Pleasure and Joy
(September 5, 2013 at 1:01 am)genkaus Wrote:
(September 5, 2013 at 12:49 am)bennyboy Wrote: Fine. I will refine my statement. The problem is that your criteria don't prove that something is actually experiencing. They only outline the particular behaviors which you are willing to assume indicate actual experience.

Except I'm not "willing to assume" anything - my knowledge here is based on the criteria you set out.

I know that I experience (a position held by you as well, with regards to yourself).
I know that some of my specific behavior is necessarily the result of my experience.
Therefore, I know that such behavior indicates actual experience.

No assumption necessary.
Awwww... you missed my awesome lecture on B.F. Skinner by posting while I was typing! But here are the criteria as I actually set them out.

-I know that I experience.
-I know that based on my experience, I do certain behaviors.
-I know that I have seen other people do similar behaviors.
-I therefore find it pragmatic to assume that those people experience as I do, though there's no way for me to know for sure.

-edit-
We are editing past each other. Hopefully this will get us caught up:

Quote:Yeah, I don't think so. Skinner's basic premise - of using a reward/punishment model - would work only if the entity is capable of subjective experience. I can kick my car when it sputters or I can take it to a car wash when it works fine - neither will affect its future 'behavior'.
No, it's purely a mathematical process; if you think of it as an evolutionary model of behavior, it makes sense. If you're interested, I can outline how it works, or even make a simple program to show how it works.
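Actually, here's a taste of it right now - a bare-bones sketch in Python (purely illustrative; the action names, learning rate and reward values are all made up) of reward and punishment as nothing but arithmetic on weights:

Code:
import random

# Toy sketch: reward/punishment as pure math, no experience required.
# An 'organism' picks actions in proportion to their weights, and
# reinforcement nudges the chosen action's weight up or down.
weights = {"press_lever": 1.0, "peck_key": 1.0, "do_nothing": 1.0}
LEARNING_RATE = 0.2  # arbitrary value, chosen for illustration

def choose_action():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

def reinforce(action, reward):
    # reward > 0 strengthens the behavior; reward < 0 weakens it
    weights[action] = max(0.01, weights[action] + LEARNING_RATE * reward)

for _ in range(1000):
    act = choose_action()
    reinforce(act, 1.0 if act == "press_lever" else -0.1)

print(weights)  # 'press_lever' comes to dominate

Run it and lever-pressing takes over the behavior profile - selection by consequences, with no subjective experience anywhere in the loop.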
RE: Pleasure and Joy
(September 5, 2013 at 1:09 am)bennyboy Wrote: But here are the criteria as I actually set them out.

-I know that I experience.
-I know that based on my experience, I do certain behaviors.
-I know that I have seen other people do similar behaviors.
-I therefore find it pragmatic to assume that those people experience as I do, though there's no way for me to know for sure.

Your last statement is incorrect.

-I know that I experience.
-I know that based on my experience, I do certain behaviors.
-I know that I have seen other people do similar behaviors.
-I therefore know that those people experience as I do.

That would be the correct conclusion, because the first three statements provide sufficient evidence for it.

(September 5, 2013 at 1:09 am)bennyboy Wrote: No, it's purely a mathematical process; if you think of it as an evolutionary model of behavior, it makes sense. If you're interested, I can outline how it works, or even make a simple program to show how it works.

Go ahead then.
RE: Pleasure and Joy
Hmmmm... programming it will take a lot of work and time. I will do it because I think it will be fun. But start with YouTube:

[embedded YouTube video introducing artificial neural networks]
RE: Pleasure and Joy
(September 5, 2013 at 1:36 am)bennyboy Wrote: Hmmmm... programming it will take a lot of work and time. I will do it because I think it will be fun. But start with YouTube:

[embedded YouTube video introducing artificial neural networks]

Great. Now, here are the two relevant points of this explanation as they relate to an entity's capacity to experience.

First of all, the neural network is described as having three distinct components - input nodes, hidden nodes and output nodes. A fair description - once the human neural network is simplified. What it doesn't acknowledge is the complexity created by nodes simultaneously fitting different categories. For example, output nodes may serve as input nodes for another neural network. The chain of Input Node --> Hidden Node A --> Hidden Node B --> Output Node may itself serve as an input node for the same network.
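To make that concrete, here is a minimal sketch (Python; the layer sizes and wiring are invented for illustration) of output nodes doubling as input nodes on the next step - which is all that feedback chain amounts to:

Code:
import numpy as np

# Toy sketch: a network whose output nodes double as input nodes
# on the next step - the feedback loop described above.
rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 4, 8, 3                   # sizes chosen arbitrarily
W_ih = rng.normal(size=(N_HID, N_IN + N_OUT))  # hidden layer sees input AND last output
W_ho = rng.normal(size=(N_OUT, N_HID))

def step(x, prev_out):
    combined = np.concatenate([x, prev_out])   # output fed back as input
    hidden = np.tanh(W_ih @ combined)
    return np.tanh(W_ho @ hidden)

out = np.zeros(N_OUT)
for t in range(5):
    out = step(rng.normal(size=N_IN), out)
    print(t, out)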

Secondly, the 'desirable' outputs have been externally imposed. As of now, based on our subjective experience, we regard certain outputs as 'desirable' and assign weights and mathematical functions for backward propagation accordingly. At this stage, the neural network itself does not have the layer of nodes required to process the output nodes and their associated functions, back-propagate through them, and alter them.
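For concreteness, this is what that external imposition looks like in a bare-bones training loop (a sketch only; note that the target value is supplied by us, the programmers - the network never chooses it):

Code:
import numpy as np

# Toy sketch: the 'desirable' output is handed to the network from
# outside; the update rule only ever moves the weights toward
# OUR chosen target.
rng = np.random.default_rng(1)
W = rng.normal(size=(1, 3))
x = np.array([0.5, -0.2, 0.8])
y_target = np.array([1.0])  # 'desirable' output, imposed by the programmer
LR = 0.1

for _ in range(200):
    y = np.tanh(W @ x)
    error = y - y_target                                 # derivative of squared error
    grad = (error * (1 - y ** 2))[:, None] * x[None, :]  # chain rule through tanh
    W -= LR * grad

print(np.tanh(W @ x))  # converges toward the imposed target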

These two factors would be required for making a sentient Cyberboy 2000. It's that additional level of neural networking, present in humans and absent in the current generation of computers, that makes us, and not them, capable of experience. Should the Cyberboy have that network - as evidenced by it assigning and reassigning preference to different outputs - then it would be capable of experience.

Now, before you go off on a tangent about making assumptions, let's make this clear once more with a simpler explanation.

We have input nodes, a black box/hidden network that processes signals from the input nodes, and output nodes. We know that there are different categories of input and different categories of output - so the logical conclusion is that the black box has subsections in it to process different inputs and provide different outputs. At this level, there is no evidence of subjective awareness or experience. Further, other black boxes are completely inaccessible to this one. The internal functions of other black boxes cannot serve as input to this one - only their outputs can. In the context of 'mind-existent terms', this black box is 'you', the input nodes are 'your senses' and the output nodes are 'your behavior' or 'physiological changes'.

As it happens, the internal working of the black box is not a complete mystery to the black box itself. This need not have been the case - the black box could just as easily have functioned with the preset code of input-processing-output. But as it happens, there is a separate section within the black box that treats its internal working as input, processes it and gives specific output. In 'mind-existent terms', we call this process subjective awareness or sentience or experience. Since it cannot receive the internal working of another black box as input, we cannot experience anyone else's subjective awareness. However, given this input, we can figure out the specific output of this section. And if we see the same output from other black boxes, the reasonable conclusion is that they have a similar section within them. Your counter-argument that this is an assumption does not work if we've identified that section as necessary for that specific output.
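In code terms, that separate section is simply a second stage whose input is the first stage's internal state rather than the external senses. A toy version (the wiring is entirely hypothetical):

Code:
import numpy as np

# Toy sketch: a 'black box' with a separate section that takes the
# main section's internal working (not the external input) as its input.
rng = np.random.default_rng(2)
W_main = rng.normal(size=(6, 4))     # senses -> internal state
W_monitor = rng.normal(size=(2, 6))  # internal state -> report

def black_box(senses):
    internal = np.tanh(W_main @ senses)     # ordinary input-processing
    report = np.tanh(W_monitor @ internal)  # this section's input IS the internal state
    return internal, report

# Another black box can only ever see 'report' (an output), never
# 'internal' - hence no access to anyone else's subjective awareness.
_, report = black_box(rng.normal(size=4))
print(report)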

Specific to the given example, the output of this section is the assignment of the 'desirable' attribute to other outputs, i.e. giving relative weight to other outputs. In the given neural network, the desirable attribute - which results in changes to the neural network - is preset. There is no section of the network that assigns it as an output and can therefore alter it. Since it is preset, that 'preference' would be evident from the start and the network won't be able to assign preference to any unconsidered outputs. But if it is the result of another section of the network, it would be alterable. And this distinction would be the basis for concluding whether or not an entity is capable of experience.
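And the preset-versus-self-assigned distinction can be put in the same toy terms (again, all values invented for illustration):

Code:
import numpy as np

# Toy sketch of the distinction above:
# PRESET - the 'desirable' output is a fixed constant baked in from outside.
# SELF-ASSIGNED - a separate section of the network computes the preference,
# so the preference itself can change as that section changes.
rng = np.random.default_rng(3)

PRESET_TARGET = np.array([1.0, 0.0])  # fixed from the start; never alterable

W_pref = rng.normal(size=(2, 6))      # the 'preference-assigning' section

def self_assigned_target(internal_state):
    # the preference is itself an output of the network - alterable
    return np.tanh(W_pref @ internal_state)

state = rng.normal(size=6)
print("preset:       ", PRESET_TARGET)
print("self-assigned:", self_assigned_target(state))
W_pref += 0.5 * rng.normal(size=(2, 6))               # the section changes...
print("after change: ", self_assigned_target(state))  # ...and so does the preference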
RE: Pleasure and Joy
It's late, but I recognize you put a lot of work into your last post, so let me lead with a couple of comments. Consider this post an aside to the other stuff we've been talking about:

1) As far as I know, there's no "section" of the brain responsible for consciousness, though there are a few that seem to be required. To take a physical perspective on it, it really does seem to be the flow of information in the brain that gets experienced, i.e. there is no "seat" of consciousness. Correct me if I'm wrong, but that's how I remember it.

2) The reason specific outputs are stipulated is that ANNs are very slow, especially if the results have to be matched and punished or rewarded by actual humans. However, if an internet-based project could get enough attention (e.g. a million users judging the artistic merit of a visual output 10 times per day), it might be possible to train a very complex system in a reasonable time. But as far as the mechanism goes, it doesn't "know" if it's outputting visual information, or digitized sound, or whatever. So long as the end user can map the output to any kind of hardware, the training process can result in any desired kind of result.

3) I'm not sure this backward propagation method is the best way to apply weights. I have in mind a simulated genetic model, where you'd spawn 1000 instances of an ANN and, each time, discard (i.e. kill) the 80% of instances with the worst results. There would be no actual learning in that case, though-- just refinement through selection. But the system RUNNING all those simulations could certainly be said to be learning, I think, since thanks to the billions of aborted ANNs it would end up with an output that matched requirements.

In this model, the "survival" of the virtual organism depends on the statistical relationship of its output to its environment, which (and this is the important part) could CHANGE.
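Here's a bare-bones sketch of what I mean (Python; each 'ANN' is reduced to a single number, and TARGET stands in for the environment - every value is made up):

Code:
import random

# Toy sketch of the selection-only model: 1000 candidates, the worst
# 80% culled each generation, survivors copied with random noise.
# No individual 'learns' anything - the population just gets selected.
TARGET = 0.7  # stand-in for 'output that matches the environment'

def fitness(genome):
    return -abs(genome - TARGET)  # closer to the target = fitter

population = [random.uniform(0, 1) for _ in range(1000)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:200]  # discard (i.e. kill) the worst 80%
    population = [g + random.gauss(0, 0.05)  # refill with mutated copies
                  for g in survivors for _ in range(5)]

print(sum(population) / len(population))  # the population drifts toward TARGET

And since TARGET is just a variable, the environment can CHANGE mid-run and the population will simply track it - which is the part that matters.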
RE: Pleasure and Joy
(September 5, 2013 at 6:16 am)bennyboy Wrote: It's late, but I recognize you put a lot of work into your last post, so let me lead with a couple of comments. Consider this post an aside to the other stuff we've been talking about:

1) As far as I know, there's no "section" of the brain responsible for consciousness, though there are a few that seem to be required. To take a physical perspective on it, it really does seem to be the flow of information in the brain that gets experienced, i.e. there is no "seat" of consciousness. Correct me if I'm wrong, but that's how I remember it.

Point of concern - I'm not sure if you're incidentally referring to 'consciousness' here instead of 'experience', or if you see them as synonymous, or if you are actually talking about consciousness - but I do see them as distinct concepts. This would require a deeper explanation of my views.

Awareness is a term which trivially refers to one entity being aware of something else. It can be regarded as something as simple as a thermostat being aware of the temperature. For example, in your ANN example, the network is aware of whatever the inputs represent. I know this would seem very counter-intuitive and nothing like what we talk about when we refer to human awareness, and here's why.

There are many possible levels and types of awareness, from the lowly thermostat to the Venus flytrap being aware of the insect on it - that is environmental awareness. Then there is internal awareness - such as being aware of changes in one's physiology. Subjective awareness is yet another level, where the entity is aware of what goes on inside itself.
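In code terms, the thermostat level of awareness is nothing more than a fixed input-output mapping (a trivial sketch):

Code:
# Toy sketch: 'awareness' at the thermostat level is just a mapping
# from an environmental input to an output - no inner monitoring at all.
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def respond(self, temperature):  # 'aware' of the environment, nothing more
        return "heat_on" if temperature < self.setpoint else "heat_off"

print(Thermostat(20.0).respond(18.5))  # heat_on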

I believe all these forms of awareness combined are what we call 'consciousness'. When we refer to a human being as a conscious being, we assume that he is aware of his external environment, his own physiology, his thoughts, his sensations, his memory, his imagination, his intelligence and so on and on and on. The problem is, we understand consciousness intuitively - which is why we often take it as a given, treat it as one whole entity, and don't even think of figuring out what level of awareness is required for consciousness to exist. As a result we have philosophies ranging from reductive materialism, which states there is no such thing as consciousness, to pan-psychism, which says everything has consciousness. Of course, subjective awareness or experience is a significant part of this consciousness, but there is more to consciousness than just that.

And this would explain why we can't point to a section of the brain responsible for either consciousness or subjective experience. As with consciousness, there are different layers of subjective awareness - from awareness of awareness of a pin prick (not a typo - the repetition is intentional) to awareness of the most complicated collection of ideas. The 'section' here would be determined by function - not location.

(September 5, 2013 at 6:16 am)bennyboy Wrote: 2) The reason specific outputs are stipulated is that ANNs are very slow, especially if the results have to be matched and punished or rewarded by actual humans. However, if an internet-based project could get enough attention (e.g. a million users judging the artistic merit of a visual output 10 times per day), it might be possible to train a very complex system in a reasonable time. But as far as the mechanism goes, it doesn't "know" if it's outputting visual information, or digitized sound, or whatever. So long as the end user can map the output to any kind of hardware, the training process can result in any desired kind of result.

Interestingly enough, I think that recursively treating these outputs as inputs may result in a faster "learning curve".


(September 5, 2013 at 6:16 am)bennyboy Wrote: 3) I'm not sure this backward propagation method is the best way to apply weights. I have in mind a simulated genetic model, where you'd spawn 1000 instances of an ANN and, each time, discard (i.e. kill) the 80% of instances with the worst results. There would be no actual learning in that case, though-- just refinement through selection. But the system RUNNING all those simulations could certainly be said to be learning, I think, since thanks to the billions of aborted ANNs it would end up with an output that matched requirements.

In this model, the "survival" of the virtual organism depends on the statistical relationship of its output to its environment, which (and this is the important part) could CHANGE.

If you are doing that, you might as well add in the capacity to randomly spawn additional neural networks within a particular ANN. Given that, it may not be surprising if it automatically ends up generating an ANN with experiential capacity.
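For what it's worth, a toy version of that 'randomly spawn additional networks' mutation might look like this (hypothetical; real structural-mutation schemes such as NEAT are far more involved):

Code:
import random

# Toy sketch: a structural mutation that occasionally splices a new
# randomly-sized sub-network into an existing genome, alongside
# ordinary weight jitter.
def random_layer(n_in, n_out):
    return [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]

def mutate(genome):
    layers = [[row[:] for row in layer] for layer in genome]  # deep copy
    if random.random() < 0.05:  # rare structural mutation
        i = random.randrange(len(layers))
        width = random.randint(2, 6)
        n_in = len(layers[i][0])
        # replace layer i with two layers sandwiching a new hidden layer,
        # preserving the input/output sizes of the original
        layers[i:i + 1] = [random_layer(n_in, width),
                           random_layer(width, len(layers[i]))]
    for layer in layers:  # ordinary weight jitter on everything
        for row in layer:
            for j in range(len(row)):
                row[j] += random.gauss(0, 0.02)
    return layers

genome = [random_layer(4, 8), random_layer(8, 3)]
child = mutate(genome)
print([(len(layer), len(layer[0])) for layer in child])  # layer shapes

Selection would then decide whether the extra structure earns its keep.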
RE: Pleasure and Joy
(September 5, 2013 at 9:12 am)genkaus Wrote: Interestingly enough, I think that recursively treating these outputs as inputs may result in a faster "learning curve".
I think in the case of colors, that's exactly how it would work. But when it comes to things like recognizing animals and playing 20 questions (my personal standard for AI), I'm not sure how you could do that. Hmmmmm.

Quote:If you are doing that, you might as well add in the capacity to randomly spawn additional neural networks within a particular ANN. Given that, it may not be surprising if it automatically ends up generating an ANN with experiential capacity.
Intuitively, I would imagine that the closer you could come to simulating brain function, the more efficiently the system would be able to learn (and retain learning) in a complex environment. After that, you'd drop the physical constraints of humanity, and end up with something smarter than all humans.

But back to the philosophy-- even if I could program a computer to learn as humans do, and output responses with the same degree of predictability/unpredictability for any context, I'm still not confident that it would really be experiencing the redness of an apple as redness.

I have thought of a way in which I might be convinced, though. If you could map the output of such a device TO the human brain, and end up with an extended awareness, then that could be a start.
RE: Pleasure and Joy
Genkaus's argument in summary:

I am sentient and engage in certain behaviors.
Other is behaving in a certain way.
Thus Other is sentient.

or

I am human and can run.
Fido can run.
Thus Fido is human.
RE: Pleasure and Joy
(September 5, 2013 at 8:11 pm)ChadWooters Wrote: Genkaus's argument in summary:

I am sentient and engage in certain behaviors.
Other is behaving in a certain way.
Thus Other is sentient.

or

I am human and can run.
Fido can run.
Thus Fido is human.
No no. "Fido is human" is a scientific hypothesis, and "Fido can run" is evidence. Big Grin
RE: Pleasure and Joy
(May 8, 2013 at 2:11 pm)Harris Wrote: There is a wish in every person to be respectful.

I found this amusing coming from someone who vomited a wall of text.


