Unfortunately, I agree with this assessment. I even think it applies to medical science as well. Not so much the effect-size problem, but the point that most of it is rendered pointless by a large collection of methodological problems, statistical problems, and sample-collection problems. I read an article that assessed the quality of 2,500 systematic reviews: only about 6% were of good quality and around 10% were of moderate quality. It used the GRADE assessment tool, which can be misapplied if used carelessly. Nonetheless, even if the quality of a small portion of those studies was over- or underestimated, that does not diminish the severity of the problem.
I get the "small effect sizes can still matter on large populations" excuse from mostly only social scientists, and it's so very naive and annoying, as they rarely exclude the possibility of those effect sizes being due to problems with their methods, with the statistics or even what sample got collected. The claim is extraordinary as they can only point to a small subset of examples where it seems that interventions with small effect sizes seem to matter in large populations, and have no reliable evidence to support this notion on every occasion they point to this, when their small effect sizes should also matter. Sure, small effect sizes can sometimes matter, but only when the best possible body of evidence has excluded the more likely alternative hypotheses and explanations, when the effects replicate reliably and so on. This rarely happens as replications are still very rare in psychology, and exploring many possible alternative explanations to exclude alternatives also rarely happens. Which means that the claim that their small effect sizes of their pet-theories are on a majority basis, either a naive assumption or a self-serving excuse. And I think Furgeson mentions that here as well. And this shows something very annoying in psychological science, and that is a lack of genuine skepticism about findings. Which is both sad as skepticism should be an important building block of science (maybe even a pillar).