RE: In your opinion what causes Christians to believe in Jesus
May 8, 2025 at 1:34 am
(This post was last modified: May 8, 2025 at 1:37 am by Thumpalumpacus.)
(May 7, 2025 at 10:55 pm)John 6IX Breezy Wrote:
(May 7, 2025 at 10:27 pm)Thumpalumpacus Wrote: [citation needed]
For what? It'll just be noise to you:
https://digitalcommons.usf.edu/cgi/viewc...t=numeracy
https://digitalcommons.usf.edu/cgi/viewc...t=numeracy
From your link:
Quote:Self-assessment measures of competency are blends of an authentic self-assessment signal that researchers seek to measure and random disorder or "noise" that accompanies that signal. In this study, we use random number simulations to explore how random noise affects critical aspects of self-assessment investigations: reliability, correlation, critical sample size, and the graphical representations of self-assessment data. We show that graphical conventions common in the self-assessment literature introduce artifacts that invite misinterpretation.
Is that a deconstruction of DKE, or is that you admitting that your own noise clutters your radar screen?
Or is it you trying to clutter my own radar? Moving on:
Quote:[...]
In practice, measuring self-assessment accuracy is not simple. Obtaining meaningful results that have quantitative significance requires attention to the construction of the measuring instruments. The paired instruments must address a common construct; they must be capable of acquiring reliable data, and the investigators must acquire enough data before they can produce a contribution characterized by reproducible results. Unfortunately, investigators can still graph the data acquired while ignoring these fundamentals, and they can make convincing interpretations of the resulting patterns.
Several graphical conventions unique to the self-assessment literature generate artifact patterns that are easy to mistake as offering meaningful portrayals of self-assessment.
These difficulties contribute to the current situation when “…it remains unclear whether people generally perceive their skills accurately or inaccurately” (Zell and Krizan 2014, p. 111).
Okay, so the measurements in this literature don't have sufficient resolution to actually figure out whether or not self-assessment is accurate.
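And their artifact point is easy to reproduce yourself. A rough sketch of my own (Python, made-up uniform scores -- nothing from the paper):

```python
# Pure-noise sketch of the graphing artifact (my own toy code, not the
# paper's): self-assessment and test score are INDEPENDENT random numbers,
# yet binning by score quartile yields the classic "unskilled and unaware"
# picture, because when the two are unrelated, every score quartile's
# average self-assessment just sits at the overall mean.
import random

random.seed(1)
N = 10_000
scores = [random.uniform(0, 100) for _ in range(N)]       # "actual" competence
self_assess = [random.uniform(0, 100) for _ in range(N)]  # unrelated noise

pairs = sorted(zip(scores, self_assess))                  # sort by score
quartiles = [pairs[i * N // 4:(i + 1) * N // 4] for i in range(4)]

for q, chunk in enumerate(quartiles, start=1):
    mean_score = sum(s for s, _ in chunk) / len(chunk)
    mean_self = sum(a for _, a in chunk) / len(chunk)
    print(f"Quartile {q}: mean score {mean_score:5.1f}, "
          f"mean self-assessment {mean_self:5.1f}")

# Bottom quartile: score ~12, self-assessment ~50 -> looks wildly
# overconfident. Top quartile: score ~88, self-assessment ~50 -> looks
# humble. All from data with zero real self-assessment signal.
```

Pure noise, plotted their way, "shows" the Dunning-Kruger pattern -- which is exactly the artifact problem the authors are flagging.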
Quote: [...] the power of spreadsheets [...]
lol
Quote:Contradicting this position are two positions that consider self-assessed competence as meaningful and measurable. One of these positions holds that people tend toward overconfidence in their abilities, with many being “unskilled and unaware of it.” This view arises from findings that identify the least-proficient performers as those with the most over-inflated self-assessments (Kruger and Dunning 1999; Ehrlinger et al. 2008; Bell and Volckmann 2011).
The other position holds that self-assessment ratings, overall, reflect the competence that people usually can demonstrate. This position arises when researchers consider relationships between measures of self-assessed competence and actual competence as significant (Nuhfer and Knipp 2006; Favazzo et al. 2014).
And here's the nub: self-assessment has an inbuilt bias, because we all want to think we're more competent than we are. The difference between merely overrating oneself and suffering from D-K is that those under the spell of D-K are much less likely to admit error, precisely because you -- errr, they -- consider themselves experts. Those of us who understand our limitations work within them, cognizant that we may well be wrong.
I have not seen one bit of that understanding from you anywhere in this thread, not a hint that you might be wrong. I've admitted error in here. You? Nah.
Quote:Data acquired that are unreliable or obtained from misaligned instruments are likely to be mostly noise. Before we could begin graphing or further studying paired measures, we needed to confirm that both of our instruments collected data that revealed a signal and thus were distinct from pure noise. If such were not the case, our study could not have progressed further. In studies of self-assessment, this is particularly necessary because a position already exists that argues that human self-assessments are mostly random noise.
This part is really crap. Treating self-assessment error as purely random, without exploring the issue of systematic internal bias, defeats the purpose of investigating and critiquing DKE: that assumption gets baked into the result.
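For what it's worth, here's what "distinct from pure noise" usually means operationally -- something like a permutation test. A rough sketch under my own assumptions (Python, toy data; not the paper's actual procedure):

```python
# Toy permutation check: is the observed pairing of self-assessment and
# score distinguishable from pure noise? (My sketch, not the paper's code.)
import random

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(2)
# Hypothetical paired data: score is a weak function of self-assessment
# plus noise, standing in for the paper's KSSLCI/SLCI pairs.
self_assess = [random.uniform(0, 100) for _ in range(300)]
scores = [0.6 * a + random.gauss(20, 25) for a in self_assess]

observed = pearson_r(self_assess, scores)
shuffled = scores[:]
trials, beats = 2000, 0
for _ in range(trials):
    random.shuffle(shuffled)  # destroy any real pairing
    if abs(pearson_r(self_assess, shuffled)) >= abs(observed):
        beats += 1
print(f"observed r = {observed:.2f}, permutation p ~ {beats / trials:.4f}")
```

Note the limit of that check: it can tell you the pairing isn't pure noise, but it says nothing about whether the self-assessments are systematically biased -- which is exactly my complaint.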
Quote:The pattern in Figure 2A reveals that people who self-assessed on the KSSLCI that they would do poorly on the SLCI did tend to score lower, and those who self-assessed that they would do well, as a whole, scored higher. To be sure, this general trend had many exceptions. The r of .60 reveals that the relationship between self-assessed competency and demonstrated competency does not permit prediction of one from the other at the level of individual participants.
Which, by definition, means that a narcissistic or megalomaniacal individual would be an outlier in this study and effectively discounted from the result, given that such personalities are very much in the minority -- like many other mental issues, including DKE.
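And to put a number on "does not permit prediction ... at the level of individual participants": r = .60 sounds respectable, but standard regression arithmetic (my numbers, not the paper's) says otherwise:

```python
# How much does r = 0.60 actually pin down an individual? Standard
# regression arithmetic (my back-of-the-envelope, not the paper's).
r = 0.60
explained = r ** 2                  # share of score variance explained
residual_sd = (1 - r ** 2) ** 0.5  # leftover spread, in SD units

print(f"variance explained: {explained:.0%}")               # 36%
print(f"residual spread: {residual_sd:.2f} SD")             # 0.80 SD
print(f"95% prediction interval: ~±{1.96 * residual_sd:.1f} SD")  # ~±1.6 SD
```

Knowing someone's self-assessment narrows your guess about their actual score by only about 20%. Fine for group trends; useless for diagnosing any one person.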
Quote:Figure 4 employs the convention of Figure 3 to provide a synopsis of the full scatterplot (Fig. 2A). Whereas Figure 3 portrays a case in which overestimation greatly exceeds underestimation of abilities, Figure 4 shows only a modest difference between overestimation and underestimation.
So there's still an overestimation of one's own competence -- not as large as D-K estimated, but one that perhaps maps onto other mental issues where one's ability to assess oneself accurately might skew results. Got it.
In short -- a graphing/mapping approach that's best suited to analyzing the average person is probably not a useful tool for addressing the outliers, especially when it treats people as statistics rather than individuals. I could be wrong, and yeah, some of the jargon went whoosh over me, but "know-it-all" is a common phrase for a reason.
![Avwo0Ay.gif](https://i.imgur.com/Avwo0Ay.gif)