(August 23, 2025 at 10:24 pm)GrandizerII Wrote:
(August 23, 2025 at 9:38 pm)Paleophyte Wrote: We'll have to agree to differ. IMO, that's a pretty disreputable junk study. Their biggest sins are explaining their methodology only in passing, if at all, and failing to show any of their work. To me, that's just another baseless opinion.
I don’t understand what you mean by “failing to show any of their work”. The data has been linked to (though you may have to pay to access some of it, like they did), the methodology is described in Sections 4 and 5, the code they used for analysis is linked to, results are reported and analysed, and there are tables and diagrams and an appendix.
I mean, sure, they don’t show the statistical tests in full detail, but that’s standard, and there may be some word limit imposed anyway.
Again, any researcher who is suspicious can test their findings using the same data and details they have provided.
If I have to burrow into their code to figure out how they did their math, then they're doing it wrong. Worse, that code covers only a small fraction of their stats. Sure, their data is available online; you have to pay for access, which is unfortunate but common. The real difficulty is that their conclusions hinge on things like an "adjusted average" with no indication of how that average was adjusted.

When you do something that pivotal, you need to explain it explicitly in the text, and that just isn't there. Something along the lines of "The average GRE scores were adjusted for pre-enrollment scores using the following common and well-worn methods... because of these reasons..." And if the methods aren't standard (which they only get around to telling you in the footnotes? Really? I have to dig into the footnotes to find out that this is pioneering work in the field?!?), then you'll want to include a lot more justification and detail.

You especially want to do that when you're reporting surprising results, because other authors are going to want to know exactly how your math arrived at this unexpected outcome. Finding philosophy at the front of the pack on almost every score tested should have triggered a more thorough review of the stats and a much more thorough description of the steps between raw data and inferences. I don't need to see it done out longhand, that's ridiculous, but I do need to know what tools they're using and why. Simply listing the names of the software is insufficient.
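To be concrete about what a one-sentence explanation could look like: one perfectly standard way to "adjust" a group average for a covariate is regression adjustment, where you regress the outcome on the covariate and compare residual means by group. Here's a toy sketch of that idea. Everything in it is made up for illustration; I have no idea whether this is anything like what they actually did, and none of the names or numbers come from the paper:

```python
# Toy sketch of regression adjustment (hypothetical; not from the paper).
# Regress the outcome on the covariate, then compare residual means by group.
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: pre-enrollment scores and GRE scores for two majors
pre = rng.normal(150, 8, size=200)            # pre-enrollment covariate
gre = 0.9 * pre + rng.normal(0, 5, size=200)  # outcome correlated with it
major = rng.integers(0, 2, size=200)          # 0 = major A, 1 = major B

# Ordinary least squares fit of gre ~ pre (polyfit returns slope, intercept)
slope, intercept = np.polyfit(pre, gre, 1)
residuals = gre - (slope * pre + intercept)

# "Adjusted average" per group = grand mean + mean residual for that group
for g in (0, 1):
    raw = gre[major == g].mean()
    adj = gre.mean() + residuals[major == g].mean()
    print(f"major {g}: raw mean = {raw:.1f}, adjusted mean = {adj:.1f}")
```

If that's roughly their method, one sentence in the text saying so would have settled it. If they did something novel instead, that's exactly the part that needed the extra justification.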
I doubt that there's a word-limit issue, since this was a pretty brief paper as it is. It's possible that they were writing for philosophers, but then those details belong in an appendix somewhere. You also have some pretty major biases in the data that are never addressed: not once do they discuss the fact that all of the data are self-reported, which ought to have been a huge issue. Perhaps I'm missing something?