If I had to encapsulate my preliminary feelings on this, it'd be that Chad and FtR are making assumptions about the nature of truth and knowledge as if they aren't controversial axioms.
Is There a Point To Living a Moral Life?
RE: Is There a Point To Living a Moral Life?
October 23, 2013 at 2:04 am
(This post was last modified: October 23, 2013 at 2:14 am by FallentoReason.)
(October 22, 2013 at 10:50 pm)MindForgedManacle Wrote: (October 22, 2013 at 9:47 pm)FallentoReason Wrote: @MFM Strawman. I wasn't talking about the nature of truth, let alone truth itself. I was simply talking about the event where an agent confesses that they believe something, e.g. the proposition that "Rowan Atkinson is funny". As it stands, naturalism can't account for such a thing being possible purely from a physical p.o.v.

genkaus Wrote: Anyway, to say that assigning and interpreting meaning is a form of data processing is not to say that all forms of data processing amount to assigning or interpreting meaning.

I'd agree.

Quote: When you say "conscious entity" - which level of consciousness are you talking about? Any entity can be conscious without being self-aware or sentient.

It's a tricky thing to define, for sure. I'd say the entity needs to show emotions/feelings/instincts.

Quote: In college, we used to work with two or three different pieces of software where one would automatically pass its output data on to the next for processing, and so on, and all we had to do was look at the final results. As far as the intermediate outputs were concerned, we never became aware of them. The only entities conscious of them were the next programs in line. Those programs - according to predefined categories - handled the job of interpreting results and assigning meanings. Clearly, a conscious entity at the level of the engineer is not necessarily required.

"The only entities conscious of them were the next programs". I love the word choice here. It's clear as day that you're begging the question. Anyway, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithms of all three? Your non-issue would dissolve and we would be at square one.
Bottom line here is that it doesn't matter how many programs there are or how long the algorithms are, you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.

"It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle
(October 23, 2013 at 2:04 am)FallentoReason Wrote: It's a tricky thing to define, for sure. I'd say the entity needs to show emotions/feelings/instincts.

Given that an entity can be a conscious entity without having sentience - why do you consider the existence of emotions/feelings/instincts necessary for assigning meaning?

(October 23, 2013 at 2:04 am)FallentoReason Wrote: "The only entities conscious of them were the next programs".

Am I? Or am I using it in such a manner that it doesn't beg the question? The word "conscious" is normally used to describe either biological or obviously self-aware entities - but the limits of the word are not defined. The simplest explanation of consciousness - without any dualistic baggage - would be "X is conscious of Y when some information from Y is received and processed by X". The behavior of the programs fits this category.

(October 23, 2013 at 2:04 am)FallentoReason Wrote: Anyways, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithms of all three? Your non-issue would dissolve and we would be at square one.

Actually, in that case the issue would be compounded. If we have a mega-program whose sections are exchanging information with each other, then I could make an argument that that program has a degree of self-awareness.

(October 23, 2013 at 2:04 am)FallentoReason Wrote: Bottom line here is that it doesn't matter how many programs there are or how long the algorithms are, you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.

But they do have to give it meaning - that was my point. The user is not aware of the intermediate outputs, but there is a set of abstract categories which can assign meaning to them, and that is done by the programs.
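As an aside, the chained pipeline genkaus describes can be made concrete. This is only an illustrative sketch - the stage names and categories are invented, not from his college setup - but it shows the shape of the argument: each program receives the previous program's output and reclassifies it according to predefined categories, with no human ever inspecting the intermediate results.

```python
# Hypothetical three-stage pipeline: each stage's output feeds the next,
# and only the final result reaches the engineer.

def solver(load: float) -> float:
    """Stage 1: raw computation (say, a bending moment from a load)."""
    return load * 2.5

def classifier(moment: float) -> str:
    """Stage 2: reclassifies the raw number under a predefined category."""
    if moment < 10.0:
        return "safe"
    elif moment < 25.0:
        return "marginal"
    return "unsafe"

def reporter(category: str) -> str:
    """Stage 3: the only output the user ever sees."""
    return f"Structure assessment: {category}"

# Intermediate outputs are "seen" only by the next program in line.
print(reporter(classifier(solver(8.0))))  # -> Structure assessment: marginal
```

Whether that reclassification step counts as "assigning meaning" is, of course, exactly the point in dispute between the two posters.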
(October 23, 2013 at 2:34 am)genkaus Wrote: (October 23, 2013 at 2:04 am)FallentoReason Wrote: It's a tricky thing to define, for sure. I'd say the entity needs to show emotions/feelings/instincts.

There are two questions in one here. Firstly, I'd say that sentience is necessary for the entity to be conscious, e.g. I don't consider a flower to be conscious. Secondly, of course sentience has nothing to do with meaning. I don't know if you're conflating the two on purpose or not, but I'll give you the benefit of the doubt. We were simply delving into what consciousness is here, not meaning etc.

Quote: (October 23, 2013 at 2:04 am)FallentoReason Wrote: "The only entities conscious of them were the next programs".

Is a soccer ball conscious of my foot when the soccer ball moves as a result of causal relations ("data processing", since all data processing can be reduced to causal relations) between my foot and it?

Quote: (October 23, 2013 at 2:04 am)FallentoReason Wrote: Anyways, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithms of all three? Your non-issue would dissolve and we would be at square one.

Then clearly the number of programs is trivial, so let's go back to examining just one program. Why would you assume that causal relations within the algorithm of one program equate to the program being self-aware? What is it about electrons moving through copper that tells you these electrical states that make up the algorithm are "self-aware"? And on another note... I'm curious... would you feel sorry for a computer if you chopped it in half with a chainsaw? Why? Why not?

Quote: (October 23, 2013 at 2:04 am)FallentoReason Wrote: Bottom line here is that it doesn't matter how many programs there are or how long the algorithms are, you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.

Intermediate outputs are just as trivial as the boundaries you assigned within a potential Mega-Program, since said outputs depend on where those trivial boundaries lie. How does the program give *anything* meaning? How does copper wire with electrons running through it make meaning of an electrical state in the motherboard that represents, e.g., the bending moments in a structure? We're still at square one: physical things somehow being *about* something else.

"It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle
Just to touch on the computer intelligence thing a bit: I notice that the conversation is all kinda traditional IT. We're talking about PCs/minis/mainframes as if we were still in the 1980s. That is not where it's at. On the one hand we have neural networks and robot mice that are navigating mazes - and learning - whilst on the other hand we are shoving really totally different devices into our pockets, namely phones and tablets.
The difference between the phone/tablet and the computer is one of the senses we have given them. My phone knows its orientation, knows where it is (GPS), has vision, hearing, a magnetic sense I lack, touch sensitivity (less than mine) and no sense of smell (thankfully - it's in my back jeans pocket, after all). Now the combination of those senses makes these devices capable of some quite remarkable things - things that largely, on the face of it, mimic intelligence. This is really scary when you first see it.

I will give you an example, which is real and happened to me whilst using my Nexus 7. The Nexus comes with Google Now. There was a lot of kinda vague hype about Google Now - the more you use it the more useful it becomes, but without it being exactly clear how. Anyway, I had had the Nexus for about 2 months, using Google Now on and off. At first it gave me the weather conditions and bits of news. Then, somehow, it figured out I support Manchester United and it started giving me related news on them. Now that was slightly disconcerting - as I hadn't told it I support MU.

Then one day, about 7 in the evening, I flicked up Google Now and it said: "You are 11 minutes away from Despina's Ballet lesson, which is in half an hour." It also showed a map, showing my current location (home) and the location of the ballet school with a suggested route. I have to say I absolutely froze. I re-read the message several times. I think I even said aloud, "How the fuck do you know that?" I hadn't even told it I had a daughter, never mind her name, that she did ballet, or when or where the lessons were. This gave every appearance - to me, in that moment - of intelligence.

Now, once I had calmed down, I figured it out: "Despina's Ballet lesson" is a repeated entry in my diary. Once, and just once, whilst waiting for her I logged onto the net (via my phone's wifi hotspot function) on my Nexus. The Nexus obviously took a GPS reading at that point and related the diary entry to the GPS location.
Plain sailing from then on: clever, but not intelligent... yet. What this story tells me is that whether or not devices actually gain understanding, it's going to get very difficult to know. They will certainly get ever better at mimicking it. This, of course, means on the flipside that they may gain real understanding without us realizing it. Now that is a scary thought...
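The inference behind that "clever, but not intelligent" moment is simple once seen. As a speculative sketch (every name here is invented; this is not how Google Now is actually implemented): associate a recurring calendar entry with the one GPS fix taken while that entry was active, then later use the stored location to phrase a travel-time reminder.

```python
# Hypothetical reconstruction of the calendar-plus-GPS trick.
locations = {}  # event title -> (lat, lon), learned from a single GPS fix

def record_fix(event_title, lat, lon):
    """Called once, when the device happens to be online during the event."""
    locations[event_title] = (lat, lon)

def reminder(event_title, starts_in_minutes, travel_minutes):
    """Phrase a reminder if we have ever seen a location for this event."""
    if event_title not in locations:
        return None  # no learned location -> stay silent
    return (f"You are {travel_minutes} minutes away from {event_title}, "
            f"which is in {starts_in_minutes} minutes.")

record_fix("Ballet lesson", 51.5, -0.1)
print(reminder("Ballet lesson", 30, 11))
```

A real system would compute `travel_minutes` from a routing service rather than take it as a parameter; the point is only that one stored association is enough to produce an uncannily personal-sounding message.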
The Christmas tree is on fire, and that's the last time I let my wife put up the Christmas lights!
(October 23, 2013 at 3:00 am)FallentoReason Wrote: There are two questions in one here. Firstly, I'd say that sentience is necessary for the entity to be conscious, e.g. I don't consider a flower to be conscious. Secondly, of course sentience has nothing to do with meaning. I don't know if you're conflating the two on purpose or not, but I'll give you the benefit of the doubt. We were simply delving into what consciousness is here, not meaning etc.

Something seems to have gotten lost here. We are talking about meaning. You made the statement "A conscious entity is required to assign meaning". My question to this was "At what level must its consciousness be for it to assign meaning?" Your reply was "It needs to show emotions/feelings/instincts" - which indicates a sentience level of consciousness. To which I asked "why that specific level?" Now you are saying that sentience is required for the entity to be conscious and that sentience is irrelevant to assigning meaning?

Secondly, why wouldn't you regard plants as conscious? A sunflower seems to be conscious of the sun's position. A touch-me-not seems to be conscious of when someone touches it. A Venus flytrap is conscious of when a fly has entered it. What's the difference between this and what you call conscious behavior?

(October 23, 2013 at 3:00 am)FallentoReason Wrote: Is a soccer ball conscious of my foot when the soccer ball moves as a result of causal relations ("data processing", since all data processing can be reduced to causal relations) between my foot and it?

You are making the same mistake again. Assignment of meaning can be reduced to data processing - that does not mean all data processing results in assignment of meaning. Similarly, data processing can be reduced to causal relations - that does not mean all causal relations result in data processing. Which is why the soccer ball is not conscious of your foot.
(October 23, 2013 at 3:00 am)FallentoReason Wrote: Then clearly the number of programs is trivial, so let's go back to examining just one program.

I don't assume that causal relations within an algorithm equal self-awareness. Self-awareness is a specific form of data processing where the processes within the entity serve as input for yet other functions within it. If such a mechanism exists, then I would regard the entity as self-aware. As for feeling sorry for the computer - no, I wouldn't. But then I don't feel sorry for the chicken that is chopped in half for my dinner - so don't read anything into it.

(October 23, 2013 at 3:00 am)FallentoReason Wrote: Intermediate outputs are just as trivial as the boundaries you assigned within a potential Mega-Program, since said outputs depend on where those trivial boundaries lie.

Except those boundaries are defined by the scope of each piece of software - so, not trivial at all.

(October 23, 2013 at 3:00 am)FallentoReason Wrote: How does the program give *anything* meaning? How does copper wire with electrons running through it make meaning of an electrical state in the motherboard that represents, e.g., the bending moments in a structure? We're still at square one: physical things somehow being *about* something else.

The same way conscious entities assign meaning - which you seem to accept. Abstract categories are stored in our neural pathways; abstract categories are stored as specific electrical states in hardware. Our sensory data (neural impulses) is processed according to these categories within the neural circuitry representing the brain's function; computational data (electrical) is processed according to these categories within the electrical circuitry representing a program. In both cases, data gets reclassified (assigned meaning) as a result. The biggest difference between the two is that while humans are capable of developing their own abstract categories, computers have to be pre-programmed with them - for now.

(October 23, 2013 at 2:04 am)FallentoReason Wrote: Strawman. I wasn't talking about the nature of truth, let alone truth itself. I was simply talking about the event where an agent confesses that they believe something, e.g. the proposition that "Rowan Atkinson is funny". As it stands, naturalism can't account for such a thing being possible purely from a physical p.o.v.

Actually, you were. If you're saying that the brain, on naturalism, is incapable of having beliefs - and you specifically noted in an earlier post on page 26 that this was because "How can *atoms* be about things?" - then you are talking about the nature of truth (i.e. correspondence).
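genkaus's working definition of self-awareness - data processing where the outputs of the entity's own internal processes serve as input for yet other functions within it - can be sketched in a few lines. This is a toy illustration under that definition only (the class and method names are mine, not from the thread): one function does ordinary processing, and a second function takes the first function's outputs as its own input.

```python
# Minimal feedback loop: one internal process consumes the outputs
# of another internal process of the same entity.

class MonitoredPipeline:
    def __init__(self):
        self.history = []  # record of the entity's own internal states

    def process(self, x: float) -> float:
        result = x * 2  # ordinary data processing
        self.history.append(result)
        return result

    def introspect(self) -> float:
        # Takes the entity's own prior outputs as input - the mechanism
        # genkaus would count as a degree of self-awareness.
        return sum(self.history) / len(self.history) if self.history else 0.0

p = MonitoredPipeline()
for v in (1.0, 2.0, 3.0):
    p.process(v)
print(p.introspect())  # -> 4.0
```

Whether such a feedback loop deserves the word "self-awareness" is precisely what FallentoReason disputes; the sketch only makes the definition under discussion concrete.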
genkaus, MFM:
I'm really enjoying our discussion. However, as of today, I've become rather emotionally unstable due to some stuff in the real world. I'm afraid to say that I won't have the time or mental capacity to continue our discussion further. I really hope we can touch base very soon though. I don't know why I'm informing people on the internet ha... I think that's my emotions making me act all weird. But I just wanted to let you guys know why I won't post a reply. "It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle
Then why is a conscious entity needed at all?