(August 24, 2025 at 4:03 am)Paleophyte Wrote:
(August 23, 2025 at 10:24 pm)GrandizerII Wrote: I don’t understand what you mean by “failing to show any of their work”? The data has been linked to (though you may have to pay to access some of it, like they did), the methodology is described in Sections 4 and 5, the code they used for analysis is linked to, results are reported and analysed, and there are tables and diagrams and an appendix.
I mean, sure, they don’t show the statistical tests in full detail, but that’s standard, and there may be some word limit imposed anyway.
Again, any researcher who is suspicious can test their findings using the same data and details they have provided.
If I have to burrow into their code to figure out how they did their math, then they're doing it wrong. Worse, that code is only a small proportion of their stats. Sure, their data is available online. You have to pay for access, which is unfortunate but common. The difficulty is that there are things like "adjusted average" that their conclusions hinge on, and no indication of how that average is adjusted. When you do something pivotal like that, you need to explain it explicitly in the text, and that just isn't there. Something along the lines of "The average GRE scores were adjusted for pre-enrollment scores using the following common and well-worn methods... because of these reasons..." and if they aren't standard methods (which they get excited about telling you in the footnotes? Really? I have to dig into the footnotes to find that this is pioneering work in the field?!?) then you'll want to include a lot more justification and detail. You'll especially want to do that if you are reporting surprising results, because other authors are going to want to know exactly how your math got to this unexpected outcome. Finding philosophy at the front of the pack in almost every score tested should have triggered a more thorough review of the stats and a much more thorough description of the steps between raw data and inferences. I don't need to see it done out long-hand, that's ridiculous, but I do need to know what tools they're using and why. Simply listing the names of the software is insufficient.
I doubt that there's a word limit issue since this was a pretty brief paper as is. It's possible that they were writing for philosophers, but then they need to include these details in an appendix somewhere. You also have some pretty major biases in the data that are never addressed. Not once do they discuss the fact that all of the data are self-reported, which ought to have been a huge issue. Perhaps I'm missing something?
Paleophyte, I think part of your beef with the study is that you're coming at it from a "hard science" perspective (perhaps because it's a quantitative study). But as you're aware, this is a social science study primarily written for philosophers (with the more technical stuff written for social scientists). So of course, it's not going to read like a physics study or anything like that.
I just had a look at the code, and it looks to me like it contains all the work required for analysis, including what you have been asking about. What are you suggesting is missing there?
The first two tables in the appendix contain some important stats such as intercepts, confidence intervals, and p-values. Now I have no idea why all of this was not included in the main text (because as someone who majored in psychology, I did have to report this stuff in the main text itself, even if in the form of tables and such), but I don't really know what the write-up requirements are for "philosophy/social science" studies like this.
They do, however, point out (at the end of Section 2) how confounds are controlled for using "covariate adjustment", and they later mention in Section 5 which models their analyses used overall.
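For anyone unfamiliar with the term, here is a rough sketch of what covariate adjustment typically looks like in practice. To be clear, this is not the authors' code, and the variable names (gre_verbal, sat_verbal, philosophy_major) are made up for illustration; the point is just that the baseline (pre-enrollment) score is included as a regressor, so the coefficient on the major indicator reflects the adjusted difference rather than the raw gap. It also shows where numbers like the intercepts, confidence intervals, and p-values in those appendix tables typically come from.

Code:
# Illustrative sketch only: hypothetical column names, not the study's actual pipeline.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with a pre-enrollment baseline score,
# a major indicator, and an outcome score.
df = pd.DataFrame({
    "gre_verbal":       [158, 162, 155, 167, 151, 164],
    "sat_verbal":       [640, 700, 610, 720, 590, 690],
    "philosophy_major": [0, 1, 0, 1, 0, 1],
})

# Covariate adjustment: regress the outcome on the treatment indicator plus the
# baseline covariate. The philosophy_major coefficient is then the estimated
# difference after accounting for pre-enrollment scores.
model = smf.ols("gre_verbal ~ philosophy_major + sat_verbal", data=df).fit()

print(model.params)      # intercept and adjusted coefficients
print(model.conf_int())  # confidence intervals
print(model.pvalues)     # p-values

A raw comparison would just take the difference between the two groups' average gre_verbal scores; the "adjusted average" you were asking about is essentially what falls out once the baseline covariate is included in the model.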
Causal inference research is not exactly new. What's uncommon, as the first footnote says, is the application of causal inference techniques to the study of the effects of philosophical education.
As for the results themselves, they're actually not that surprising ... or even that impressive once you remember that baseline differences were adjusted for in this study. It has been known for quite some time that there is a noticeable correlation between studying philosophy and logical/verbal reasoning ability. Researchers just hadn't been able to establish a causal relationship until now.
To be clear, the study does not show that philosophy majors are more intelligent than other majors. Rather, it shows that undergraduate philosophy boosts certain intellectual abilities and virtues more than other undergraduate studies do. Not all intellectual abilities/virtues, just some of them.
The point of this study was to show that what they had suspected for quite a while (thanks to observations and prior studies) is in fact the case.
By the way, it is not true that all of the data provided to them were self-reported. The standardized test scores themselves are not self-reports. The scores on the Habits of Mind and Pluralistic Orientation scales are self-reports, but as stated in one of the paragraphs from the study itself:
Quote:In short, these two different kinds of measures have complementary strengths and weaknesses. Standardized tests are “objective” in the sense that they are immune to reporting biases. They also capture a broad range of important abilities but might be thought to reflect a relatively thin conception of good thinking. On the other hand, self-reports can capture dispositions like curiosity and open-mindedness, that seem to be important aspects of good thinking. But these are less “objective” in the aforementioned sense. Given these relative advantages and disadvantages, we would ideally like to see converging evidence from both kinds of measures. That is, although either result would be interesting in its own right, evidence that studying philosophy improves both test scores and self-reported intellectual dispositions would provide particularly strong evidence that the discipline makes people better thinkers.