drewhalbailey.bsky.social
education, developmental psychology, research methods at UC Irvine
74 posts · 1,471 followers · 269 following

Our paper "A fragmented field" has just been accepted at AMPPS. We find it's not just you: psychology really is getting more confusing (construct and measure fragmentation is rising). We've updated the preprint with the (substantial) revision; please check it out. osf.io/preprints/ps...

I have seen lots of higher ed talks and papers in the last 10 years convincingly demonstrating that just making some cutoff (getting into a more selective college or major, not taking remedial classes) helps the marginal student. Great to see an emerging consensus. (1/2)

For every cause, x, there is some group of people (often disproportionately people who study x) who think the effects of x are way bigger than they are. Therefore, I think we are doomed to read (or worse, make) "Yeah, but the effect of x is small" takes forever.

IN MEMORY OF LYNN FUCHS The field of special education lost a visionary and beloved leader with the passing of Lynn Fuchs on May 7, 2025. Her absence leaves a profound void—not only in our scholarly community, but in the hearts of all who had the privilege of knowing her.

Thanks to everybody who chimed in! I arrived at the conclusion that (1) there's a lot of interesting stuff about interactions and (2) the figure I was looking for does not exist. So, I made it myself! Here's a simple illustration of how to control for confounding in interactions:
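A minimal sketch of the idea (my own toy example in Python, not the figure from the post; the variables x, z, and u are illustrative): when a confounder u of the moderator z also interacts with x, a naive regression picks up a spurious x:z interaction, and adjusting for u and x:u removes it.

```python
# Hedged illustration: spurious interactions from a confounded moderator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000

u = rng.normal(size=n)            # confounder of the moderator z
x = rng.normal(size=n)            # focal predictor
z = 0.8 * u + rng.normal(size=n)  # moderator, confounded by u
# True data-generating model: NO x*z interaction, but u interacts with x
y = 0.5 * x + 0.5 * u + 0.5 * x * u + rng.normal(size=n)

df = pd.DataFrame({"x": x, "z": z, "u": u, "y": y})

naive = smf.ols("y ~ x * z", data=df).fit()              # omits u entirely
adjusted = smf.ols("y ~ x * z + x * u", data=df).fit()   # controls u and x:u

print(f"naive x:z coefficient:    {naive.params['x:z']:.3f}")     # spuriously nonzero
print(f"adjusted x:z coefficient: {adjusted.params['x:z']:.3f}")  # approximately zero
```

Note the design point the sketch encodes: controlling for u alone is not enough; because the bias enters through the x*u term, the x:u interaction itself must be in the model.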

Is there a name for the fallacy that, because things are different from each other, one cannot compare them? (If not, I propose the “apples and oranges fallacy”) @stefanschubert.bsky.social

Incredibly excited to have this finally come out! Model evaluation should be about comparisons, so we have a metric that puts comparisons in predictive performance on a common scale. I can’t make a thread about this better than @crahal.com, so I’ll let him take it away.

Surreal read of the day: a paper using USAID-funded and now terminated Demographic & Health Surveys to count the huge number of lives saved by the now frozen US PEPFAR program to fight HIV, co-authored by current US admin’s nominee to lead cuts in health research jamanetwork.com/journals/jam...

After a long wait, the working paper for the Many-Economists Project: The Sources of Researcher Variation in Economics. We had 146 teams perform the same research three times, each time with less freedom. Which sources of freedom lead to different choices and results? papers.ssrn.com/sol3/papers....

A clear and compelling read on IES. I hope policymakers pay attention to this. There is a very strong bipartisan case for continuing to fund the development and evaluation of educational programs, and syntheses of those evaluations.

Check out my amazing colleague, collaborator and leader of the Playful Learning Landscapes work in Orange County: news.uci.edu/2025/02/07/u...

New essay on NIH and indirect costs: goodscience.substack.com/p/indirect-c...

Free million dollar idea: food truck that sells mapo tofu and cornbread.

Cool new review from @drewhalbailey.bsky.social that’s well worth the read! www.annualreviews.org/content/jour...

Do you know a US-based researcher who wants to update their meta-analysis skills? #MATI2025 is accepting applications for our one-week training workshop in Chicago from July 28th – August 1st. Apply by March 2nd: www.meta-analysis-training-institute.com/application-...

As I am now handling papers at AEJ:Policy again, I want to encourage authors of papers that identify partial equilibrium effects to consider (in a rigorous manner) how they relate to policy effects or general equilibrium effects.

"A study of federally funded research projects in the United States estimated that principal investigators spend on average about 45% of their time on administrative activities related to applying for and managing projects rather than conducting active research" www.pnas.org/doi/10.1073/...

Academics, let's make 2025 the year we are more explicit and honest about our causal aims and interpretations. Using, for instance, the terms "risk factor" or "associated with a decrease in" is not a clever way to avoid the issue.

Overpromising on results to get a program running is ubiquitous but backfires in the long run. Figuring out how best to communicate the need for good programs and evaluations, despite unrealistic views of what a worthwhile program is likely to accomplish, is an important area for further work.

A few papers I think are worth reading, mostly open access. Causal inference is hard: www.nature.com/articles/s41...

Please repost! We are hiring an assistant professor (W1) of Educational Psychology. We are looking forward to working with you!! uni-tuebingen.de/en/faculties...

Policymakers adjust their views of policy effectiveness when an experiment shows disappointing results, but they also show reduced demand for experiments. In response, the public supports experiments but loses trust in the implementing institutions. www.nber.org/papers/w33239