RE: Proving What We Already "Know"
June 27, 2022 at 6:21 pm
(This post was last modified: June 27, 2022 at 6:47 pm by bennyboy.)
(June 27, 2022 at 10:18 am)The Grand Nudger Wrote: Well, it's sort of an academic question..... but, as it turns out, the content of the thread really doesn't have much to do with that. It's a framing device. We "know" that we're experiencing so that we can say that other things only look like they're experiencing, and are not really experiencing...and if anything (like...say, science) says "hey, maybe you're not doing what you think you are" or "hey, maybe this thing -is- doing what you think it's not" we reject that as wrong.
Translated for bluntness - the question in thread is more along the lines of "how can we maintain a poorly formed view in contradiction to repeated observation in order to avoid some negatively weighted possible consequence of correcting that view?" The answer to that, is easily. We're extremely talented in this specific regard. We manage to do that even when we do correct our factual views. Fine, you're a person..but..like, a fraction of one. Or maybe you're a biological person but a legal unperson. Or maybe you're a biological and legal person but that doesn't mean -I- have an obligation to treat you as one. On and on and on.
Seems to me you've been projecting. Every example I give about context-in-truth sends you on a spiral of motivation-questioning and sleuthing. You state I'm very clearly worried about this or that, or afraid of inclusion or exclusion of that-or-the-other, which I'm fairly clearly not. You, on the other hand, are very much worried that I'm sneakily trying to undermine your world view, when in fact I'm trying to undermine ALL "knowledge" that is stated out of context, and to consider what one would need to bridge contexts and properly generalize such truths.
The example you just complained about described applying a generalization, "seems like me, so likely feels like me," to a specific context it may not fit: "seems like me, but maybe not actually like me." The danger of the generalization (i.e. the "knowledge") is that if you ignore the change in context and keep extending it to anything that SEEMS humanoid, you may make very serious decisions that impact real people. I don't care that much about AI robots (yet), but it's an example of "knowledge" that will need a stronger foundation, sooner rather than later.
I even threw you, specifically, a bone: an example of an overzealous QM devotee insisting that QM is the only good way to think about anything, on the basis that QM is the closest approximation of Reality™. He "knows" that his position must be true, since no other view of the universe better explains how light travels through slits, or why electronics fail at small scale. But I'd expect you, as an experienced botanist of a pragmatic type, to take issue with that view. I'd expect you to say something like, "That's all very fine and well, but what does QM say about where I should snip this new bud, or what I should plant when we have a dry spell?"
And to the same person, with the same "knowledge," I'd say, "That's very fine and well, but what does that really tell us about why we experience qualia, and how we should live our lives?"