But people are so allergic to what they misunderstand as post-hoc power that they fail to consider whether their study might inform anything beyond statistical significance.
Comments
Lots of people vaguely know "post hoc power analysis bad" without knowing what that actually is. We *just* had this happen w/ a reviewer who accused us of doing post hoc power analysis when what we'd done were sensitivity power analyses for secondary data (in the prereg) to report the MDES (minimum detectable effect size); see the sketch after this comment.
It wasn't the only sign of poor statistical education from this reviewer, unfortunately. Had a discussion w/ the postdoc first author about how to do diplomatic reviewer education: "Thank you for the opportunity to further clarify the nature of the power analysis," etc.
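A quick gloss for readers who hit the same confusion: post hoc ("observed") power plugs the observed effect back into the power formula, which is just a transformation of the p-value and adds no information. A sensitivity analysis instead fixes sample size, alpha, and target power, then solves for the smallest effect the design could reliably detect (the MDES). A minimal sketch using statsmodels; the sample size, alpha, and target power below are illustrative assumptions, not numbers from the study discussed above:

```python
from statsmodels.stats.power import TTestIndPower

# Sensitivity power analysis for a two-sample t-test:
# fix sample size, alpha, and target power, then solve for the
# minimum detectable effect size (MDES). This is NOT post hoc power,
# which would plug the *observed* effect back into the power formula.
n_per_group = 125    # illustrative assumption
alpha = 0.05         # illustrative assumption
target_power = 0.80  # illustrative assumption

mdes = TTestIndPower().solve_power(
    effect_size=None,         # the unknown we solve for
    nobs1=n_per_group,
    alpha=alpha,
    power=target_power,
    ratio=1.0,                # equal group sizes
    alternative='two-sided',
)
print(f"Minimum detectable effect size (Cohen's d): {mdes:.3f}")
```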
Most of my work isn't much based on p-values (when I report r=.95 with N=125, I don't report the p-value; I discuss why it was observed).
I've very rarely published anything "important" w/o 3 independent demonstrations of the effect, so p-values are rarely the basis of judgement. (Or "87% of these 250 correlations are p<.01," which is really description, not NHST.)
I think it’s important that guidelines be flexible to meet the real methods scientists use.
Data analysis serves methods.