
Is There a Point To Living a Moral Life?
RE: Is There a Point To Living a Moral Life?
If I had to encapsulate my preliminary feelings on this, it'd be that Chad and FtR are making assumptions about the nature of truth and knowledge as if they weren't controversial axioms.
Reply
RE: Is There a Point To Living a Moral Life?
(October 22, 2013 at 10:50 pm)MindForgedManacle Wrote:
(October 22, 2013 at 9:47 pm)FallentoReason Wrote: @MFM

Naturalists don't believe anything: how do brain states represent the proposition "naturalism is true"? You would be assuming that said brain states are *about* the proposition "naturalism is true", but that would be assigning meaning to something physical. Thus, a naturalist having *any* belief begs the question.

Firstly, your point essentially brings up Plantinga's Evolutionary Argument Against Naturalism. Hence, I can easily falsify your objection there by adopting the coherence theory of truth. Since your objection rests upon the assumption that truth is 'out there' (i.e. a correspondence between an assertion and a given state of affairs), it becomes inapplicable once I adopt the coherence theory, which does not see truth as such.

As for how I would resolve that *apparent* problem if the correspondence theory were in fact 'true', I don't think that is too hard. Firstly, I don't see how a brain state being about the proposition 'naturalism is true' voids it. Is the correspondence theory of truth's assertion that 'truth is that which corresponds to a given state of affairs of reality' true, or is it circular? That's what your question seems equivalent to, to me, and just as misguided. On naturalism combined with the correspondence theory of truth, all such a proposition would mean is that it is in fact the case that the proposition 'Naturalism is true' contains no contradiction and accurately represents the brain's perceived reality and experience that naturalism is true.

Strawman. I wasn't talking about the nature of truth, let alone truth itself. I was simply talking about the event where an agent confesses that they believe something e.g. the proposition that "Rowan Atkinson is funny". As it stands, naturalism can't account for such a thing to be possible purely from a physical p.o.v.

genkaus Wrote:Anyway, to say that assigning and interpreting meaning is a form of data processing is not to say that all forms of data processing amount to assigning or interpreting meaning.

I'd agree.

Quote:When you say "conscious entity" - which level of consciousness are you talking about? Any entity can be conscious without being self-aware or sentient.

It's a tricky thing to define for sure. I'd say the entity needs to show emotions/feelings/instincts.

Quote:In college, we used to work with two or three different programs where one would automatically pass its output data on to the next for processing and so on, and all we had to do was see the final results. As far as the intermediate outputs were concerned, we never became aware of them. The only entities conscious of them were the next programs in line. Those programs - according to predefined categories - handled the job of interpreting results and assigning meanings. Clearly, the conscious entity at the level of engineer is not necessarily required.

"The only entities conscious of them were the next program".

I love the word choice here. It's clear as day that you're begging the question.

Anyway, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithms of all three? Your non-issue would dissolve and we would be at square one. The bottom line here is that it doesn't matter how many programs there are or how long the algorithms are: you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.
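The Mega Program point can be put in code. A minimal sketch (Python, with invented stage functions standing in for the three college programs): whether the stages run as three separate programs handing outputs along, or as one combined program, the input-to-output causal chain is identical.

```python
# Three hypothetical processing stages standing in for the three programs
# from the example above (the functions themselves are invented).
def stage_a(x):
    return x * 2        # e.g. raw reading -> scaled value

def stage_b(x):
    return x + 1        # e.g. scaled value -> calibrated value

def stage_c(x):
    return str(x)       # e.g. calibrated value -> printable report

# Three separate "programs", each passing its output on to the next...
def pipeline(x):
    return stage_c(stage_b(stage_a(x)))

# ...versus one "Mega Program" containing all three algorithms inline.
def mega_program(x):
    return str((x * 2) + 1)

# Moving the boundary between programs changes nothing in the causal chain:
assert pipeline(10) == mega_program(10) == "21"
```

Where `pipeline` ends and each `stage_*` begins is exactly the kind of arbitrary boundary the post is talking about.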
"It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle
Reply
RE: Is There a Point To Living a Moral Life?
(October 23, 2013 at 2:04 am)FallentoReason Wrote: It's a tricky thing to define for sure. I'd say the entity needs to show emotions/feelings/instincts.

Given that an entity can be a conscious entity without having sentience - why do you consider the existence of emotions/feelings/instincts necessary for assigning meaning?

(October 23, 2013 at 2:04 am)FallentoReason Wrote: "The only entities conscious of them were the next program".

I love the word choice here. It's clear as day that you're begging the question.

Am I? Or am I using it in such a manner that it doesn't beg the question?

The word conscious normally is used to describe either biological or obviously self-aware entities - but the limits of the word are not defined. The simplest explanation of consciousness - without any dualistic baggage - would be "X is conscious of Y when some information from Y is received and processed by X". The behavior of the programs fits this category.
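Read that literally, the definition can be captured by a trivial sketch; the class and names below are invented for illustration and are not meant to settle anything about consciousness itself.

```python
# A deliberately literal reading of the definition above: X is "conscious
# of" Y once X has received data originating from Y and processed it.
class Processor:
    def __init__(self, name):
        self.name = name
        self.sources_seen = []

    def receive(self, source, data):
        self.sources_seen.append(source)
        return self.process(data)

    def process(self, data):
        # Any transformation at all counts as "processing" here.
        return data.upper()

def is_conscious_of(x, source):
    return source in x.sources_seen

program_b = Processor("program B")
program_b.receive("program A", "intermediate output")

assert is_conscious_of(program_b, "program A")      # fits the definition
assert not is_conscious_of(program_b, "the user")   # the user never fed it data
```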



(October 23, 2013 at 2:04 am)FallentoReason Wrote: Anyway, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithms of all three? Your non-issue would dissolve and we would be at square one.

Actually, in that case the issue would be compounded. If we have a mega-program the sections of which are exchanging information with each other, then I could make an argument that that program has a degree of self-awareness.


(October 23, 2013 at 2:04 am)FallentoReason Wrote: Bottom line here is that it doesn't matter how many programs or how long the algorithms are, you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.

But they do have to give it meaning - that was my point. The user is not aware of intermediate outputs but there is a set of abstract categories which can assign meaning to them and that is done by the programs.
Reply
RE: Is There a Point To Living a Moral Life?
(October 23, 2013 at 2:34 am)genkaus Wrote:
(October 23, 2013 at 2:04 am)FallentoReason Wrote: It's a tricky thing to define for sure. I'd say the entity needs to show emotions/feelings/instincts.

Given that an entity can be a conscious entity without having sentience - why do you consider the existence of emotions/feelings/instincts necessary for assigning meaning?

There are two questions in one here. Firstly, I'd say that sentience is necessary for the entity to be conscious, e.g. I don't consider a flower to be conscious. Secondly, of course sentience has nothing to do with meaning. I don't know if you're conflating the two on purpose or not, but I'll give you the benefit of the doubt. We were simply delving into what consciousness is here, not meaning etc.

Quote:
(October 23, 2013 at 2:04 am)FallentoReason Wrote: "The only entities conscious of them were the next program".

I love the word choice here. It's clear as day that you're begging the question.

Am I? Or am I using it in such a manner that it doesn't beg the question?

The word conscious normally is used to describe either biological or obviously self-aware entities - but the limits of the word are not defined. The simplest explanation of consciousness - without any dualistic baggage - would be "X is conscious of Y when some information from Y is received and processed by X". The behavior of the programs fits this category.

Is a soccer ball conscious of my foot when the soccer ball moves as a result of causal relations ("data processing", since all data processing can be reduced to causal relations) between my foot and it?

Quote:
(October 23, 2013 at 2:04 am)FallentoReason Wrote: Anyway, to the above I shrug my shoulders. Where one program ends and the other starts is an arbitrary boundary. What if we had a Mega Program that contained the algorithms of all three? Your non-issue would dissolve and we would be at square one.

Actually, in that case the issue would be compounded. If we have a mega-program the sections of which are exchanging information with each other, then I could make an argument that that program has a degree of self-awareness.

Then clearly the number of programs is trivial, so let's go back to just examining one program.

Why would you assume causal relations within the algorithm of one program equates to the program being self-aware? What is it about electrons moving through copper that tells you these electrical states that make up the algorithm are "self-aware"?

And on another note... I'm curious... would you feel sorry for a computer if you chopped it in half with a chainsaw? Why? Why not?

Quote:
(October 23, 2013 at 2:04 am)FallentoReason Wrote: Bottom line here is that it doesn't matter how many programs or how long the algorithms are, you still have a whole bunch of physical causal relations that take something in and spit something out without ever having to give it meaning.

But they do have to give it meaning - that was my point. The user is not aware of intermediate outputs but there is a set of abstract categories which can assign meaning to them and that is done by the programs.

Intermediate outputs are just as trivial as the boundaries you assigned between a potential Mega-Program, since said outputs are dependent on where these trivial boundaries are.

How does the program give *anything* meaning? How does copper wire with electrons running through it make meaning of an electrical state in the motherboard that represents, e.g., the bending moments in a structure? We're still at square one: physical things somehow being *about* something else.
"It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle
Reply
RE: Is There a Point To Living a Moral Life?
Just to touch on the computer intelligence thing a bit: I notice that the conversation is all kinda traditional IT. We're talking about PCs/minis/mainframes as if we are still in the 1980s. That is not where it's at. On the one hand we have neural networks and robot mice that are navigating mazes and learning, whilst on the other hand we are shoving really quite different devices into our pockets, namely phones and tablets.

The difference between the phone/tablet and the computer is one of senses that we have given them.

My phone knows its orientation, knows where it is (GPS), has vision, hearing, a magnetic sense I lack, touch sensitivity (less than mine) and no sense of smell (thankfully; it's in my back jeans pocket, after all).

Now the combination of those senses makes these devices capable of some quite remarkable things, things that largely, on the face of it, mimic intelligence. This is really scary when you first see it.

I will give you an example, which is real and happened to me whilst using my Nexus 7. The Nexus comes with Google Now. There was a lot of kinda vague hype about Google Now: the more you use it the more useful it becomes, but without it being exactly clear how.

Anyway, I had had the Nexus for about 2 months, using Google Now on and off. At first it gave me the weather conditions and bits of news. Then, somehow, it figured out I support Manchester United and it started giving me related news on them.

Now that was slightly disconcerting - as I hadn't told it I support MU.

Then one day about 7 in the evening I flicked up Google Now and it said:

"You are 11 minutes away from Despina's Ballet lesson, which is in half an hour."

It also showed a map, showing my current location (home) and the location of the Ballet school with a suggested route.

I have to say I absolutely froze. I re-read the message several times. I think I even said aloud "How the fuck do you know that?"

I hadn't even told it I had a daughter never mind her name, that she did ballet, when or where the lessons were.

This gave every appearance - to me in that moment - of intelligence.

Now once I had calmed down I figured it out:

"Despina's Ballet lesson" is a repeated entry in my diary. Once, and just once, whilst waiting for her I logged onto the net (via my phone's wifi hotspot function) on my Nexus.

The Nexus obviously took a GPS reading at that point and related the diary entry to the GPS location.

Plain sailing from then on, clever, but not intelligent.....yet.
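Google's actual implementation isn't public, but the inference described above could be as simple as the following hypothetical sketch: one repeated calendar entry, one GPS fix recorded while that entry was active, and a routing estimate. All names, coordinates and numbers are invented.

```python
from datetime import datetime

# Hypothetical reconstruction of the trick described above.
calendar = [
    {"title": "Despina's Ballet lesson", "start": datetime(2013, 10, 23, 19, 30)},
]
# One-off association learned from the single wifi-hotspot session:
event_locations = {"Despina's Ballet lesson": (51.5074, -0.1278)}

def travel_minutes(here, there):
    return 11  # stand-in for a real routing service

def upcoming_reminder(now, here):
    # Scan for a soon-to-start event whose location was ever observed.
    for event in calendar:
        venue = event_locations.get(event["title"])
        mins_until = (event["start"] - now).total_seconds() / 60
        if venue and 0 < mins_until <= 60:
            return ("You are %d minutes away from %s, which is in %d minutes."
                    % (travel_minutes(here, venue), event["title"], mins_until))
    return None

message = upcoming_reminder(datetime(2013, 10, 23, 19, 0), (51.50, -0.12))
```

Nothing in the sketch understands ballet, daughters or lessons; it is one dictionary lookup and one subtraction, which is the "clever, but not intelligent" point.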


What this story tells me is that whether or not devices actually gain understanding, it's going to get very difficult to know. They will certainly get ever better at mimicking it. This, of course, means on the flip side that they may gain real understanding without us realising it. Now that is a scary thought....
The Christmas tree is on fire, and that's the last time I let my wife put up the Christmas lights!
Reply
RE: Is There a Point To Living a Moral Life?
(October 23, 2013 at 3:00 am)FallentoReason Wrote: There are two questions in one here. Firstly, I'd say that sentience is necessary for the entity to be conscious, e.g. I don't consider a flower to be conscious. Secondly, of course sentience has nothing to do with meaning. I don't know if you're conflating the two on purpose or not, but I'll give you the benefit of the doubt. We were simply delving into what consciousness is here, not meaning etc.

Something seems to have gotten lost here. We are talking about meaning.

You made the statement "A conscious entity is required to assign meaning".

My question to this was "At what level must its consciousness be for it to assign meaning?"

Your reply was "It needs to show emotions/feelings/instincts." - which indicates a sentience level of consciousness.

To which I asked "why that specific level?"

Now, you are saying that sentience is required for the entity to be conscious and that sentience is irrelevant to assigning meaning?


Secondly, why wouldn't you regard plants as conscious? A sunflower seems to be conscious of the sun's position. A touch-me-not seems to be conscious of when someone touches it. A Venus flytrap is conscious of when a fly has entered it. What's the difference between this and what you call conscious behavior?

(October 23, 2013 at 3:00 am)FallentoReason Wrote: Is a soccer ball conscious of my foot when the soccer ball moves as a result of causal relations ("data processing", since all data processing can be reduced to causal relations) between my foot and it?

You are making the same mistake again.

Assignment of meaning can be reduced to data processing - that does not mean all data processing results in assignment of meaning.

Similarly, data processing can be reduced to causal relations - that does not mean all causal relations result in data processing.

Which is why the soccer ball is not conscious of your foot.

(October 23, 2013 at 3:00 am)FallentoReason Wrote: Then clearly the number of programs is trivial, so let's go back to just examining one program.

Why would you assume causal relations within the algorithm of one program equates to the program being self-aware? What is it about electrons moving through copper that tells you these electrical states that make up the algorithm are "self-aware"?

And on another note... I'm curious... would you feel sorry for a computer if you chopped it in half with a chainsaw? Why? Why not?

I don't assume that causal relations within an algorithm equal self-awareness. Self-awareness is a specific form of data processing where the processes within the entity serve as input for yet other functions within it. If such a mechanism exists, then I would regard it as self-aware. As for feeling sorry for the computer: no, I wouldn't. But then I don't feel sorry for the chicken that is chopped in half for my dinner, so don't read anything into it.
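That criterion can be sketched minimally (the structure and names are invented for illustration): one internal process's output becomes the input of another function within the same entity.

```python
# Minimal sketch of "processes within the entity serve as input for yet
# other functions within it".
class System:
    def __init__(self):
        self.internal_state = 0

    def process_input(self, x):
        # Ordinary first-order processing of external data.
        self.internal_state = x * 2
        return self.internal_state

    def monitor(self):
        # Second-order processing: this function's input is the result of
        # the system's *own* internal process, not anything external.
        return "busy" if self.internal_state > 10 else "idle"

s = System()
s.process_input(7)
assert s.monitor() == "busy"   # the system reports on its own processing
```

Whether such a feedback loop deserves the word "self-awareness" is, of course, exactly what the two posters are disputing.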


(October 23, 2013 at 3:00 am)FallentoReason Wrote: Intermediate outputs are just as trivial as the boundaries you assigned between a potential Mega-Program, since said outputs are dependent on where these trivial boundaries are.

Except those boundaries are defined by the scope of each software - so, not trivial at all.

(October 23, 2013 at 3:00 am)FallentoReason Wrote: How does the program give *anything* meaning? How does copper wire with electrons running through it make meaning of an electrical state in the motherboard that represents, e.g., the bending moments in a structure? We're still at square one: physical things somehow being *about* something else.

The same way conscious entities assign meaning - which you seem to accept.

Abstract categories are stored in our neural pathways.
Abstract categories are stored as specific electrical states in hardware.

Our sensory data (neural impulses) is processed according to these categories within neural circuitry representing the brain function.
Computational data (electrical) is processed according to these categories within electrical circuitry representing a program.

Data gets reclassified (assigned meaning) as a result.
Data gets reclassified (assigned meaning) as a result.

The biggest difference between these two is that while humans are capable of developing their own abstract categories, computers have to be pre-programmed with them - for now.
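On this view, "assigning meaning" amounts to reclassifying raw data against pre-stored abstract categories. A toy sketch, with invented categories and labels, and with the categories pre-programmed as the post notes computers currently require:

```python
# "Assigning meaning" as reclassification against stored categories.
def classify_temperature(value):
    return "hot" if value > 30 else "mild"

categories = {
    "temperature": classify_temperature,   # the stored abstract category
}

def assign_meaning(kind, value):
    # Raw data (a bare number) is reclassified under an abstract
    # category, yielding a label the rest of the system can act on.
    classifier = categories.get(kind)
    return classifier(value) if classifier else "unclassified"

assert assign_meaning("temperature", 35) == "hot"
assert assign_meaning("temperature", 20) == "mild"
assert assign_meaning("pressure", 5) == "unclassified"
```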
Reply
RE: Is There a Point To Living a Moral Life?
(October 23, 2013 at 2:04 am)FallentoReason Wrote: Strawman. I wasn't talking about the nature of truth, let alone truth itself. I was simply talking about the event where an agent confesses that they believe something e.g. the proposition that "Rowan Atkinson is funny". As it stands, naturalism can't account for such a thing to be possible purely from a physical p.o.v.

Actually, you were. You said that on naturalism the brain is incapable of having beliefs, and in an earlier post on page 26 you specifically noted that this was because "How can *atoms* be about things?", which is a claim about the nature of truth (i.e. correspondence).
Reply
RE: Is There a Point To Living a Moral Life?
genkaus, MFM:

I'm really enjoying our discussion. However, as of today, I've become rather emotionally unstable due to some stuff in the real world. I'm afraid to say that I won't have the time or mental capacity to continue our discussion further. I really hope we can touch base very soon though.

I don't know why I'm informing people on the internet ha... I think that's my emotions making me act all weird. But I just wanted to let you guys know why I won't post a reply.
"It is the mark of an educated mind to be able to entertain a thought without accepting it" ~ Aristotle
Reply
RE: Is There a Point To Living a Moral Life?
Then why is a conscious entity needed at all?
Reply
RE: Is There a Point To Living a Moral Life?
All this talk of 'morals'...

If this is appropriate... http://atheistforums.org/thread-21598.html
Reply


