rohitpojha.bsky.social
Director & Associate Professor, JPS Health Network Center for Epidemiology & Healthcare Delivery Research | Causal inference • Prediction • Evidence synthesis
26 posts 950 followers 82 following
comment in response to post
Interesting situation. Perhaps the journal or Editorial Board has a policy to help guide?
comment in response to post
Just like the phrase, “…results should be interpreted cautiously.” As if results should ever be interpreted recklessly.
comment in response to post
I’m with you and advocate for further inquiry. I also agree that inference requires multiple sources, but the question is whether some studies are even useful for informing policy. Savitz wrote a nice article about the need for policy-relevant research. academic.oup.com/aje/article/...
comment in response to post
Sometimes the available studies are so flawed, or address the question of interest so poorly, that meta-analysis is unwarranted and no amount of sensitivity analysis or post hoc remedies can redeem them. In such cases, there is greater value in providing guidance about how to improve the quality of future studies.
comment in response to post
A meta-analysis is only as good as the included studies. Most studies in this meta-analysis were riddled with selection bias, exposure and outcome misclassification, and confounding. In addition, the standardized mean difference is problematic for meta-analysis. pubmed.ncbi.nlm.nih.gov/38761102/
comment in response to post
So nice to see ideas statisticians established years ago about prediction models making a resurgence in other contexts. #StatsSky www.jclinepi.com/article/S089...
comment in response to post
Agree. Such analyses are conditional on knowing when the outcome (mortality) occurred and have little practical value for informing practice change. We cannot intervene after the outcome already occurred.
comment in response to post
No problem. Another consideration is that two things that seemingly occur simultaneously may be related to a known or unknown common cause rather than a bidirectional effect. You’re thus searching for a common cause or some time-varying effect. DAGs can encode either and help clarify assumptions.
comment in response to post
Is the relation truly bidirectional, meaning simultaneous causation (which seems unusual)? Or is the relation time-varying, which creates a feedback loop? If the latter, DAGs can encode time-varying relations.
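For illustration, here is a minimal Python sketch of the idea in this thread (the variables A, B, and U are hypothetical, and the cycle check is a generic depth-first search, not any specific DAG library): a seemingly bidirectional A ↔ B relation is unrolled over time, so the feedback loop becomes an acyclic, time-indexed graph that can also include a common cause.

```python
# Hypothetical time-indexed DAG: A and B measured at times 0, 1, 2,
# plus a (possibly unmeasured) common cause U of both.
edges = [
    ("A_0", "B_1"),  # A at time 0 affects B at time 1
    ("B_1", "A_2"),  # B at time 1 feeds back into A at time 2
    ("A_0", "A_2"),  # A also affects its own later value
    ("U", "A_0"),    # common cause of A and B
    ("U", "B_1"),
]

def is_acyclic(edges):
    """Depth-first search for a back edge; True if the graph has no cycle."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {}
    def visit(node):
        color[node] = GRAY
        for nxt in adj.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:  # back edge: a cycle exists
                return False
            if c == WHITE and not visit(nxt):
                return False
        color[node] = BLACK
        return True
    nodes = {n for e in edges for n in e}
    return all(visit(n) for n in nodes if color.get(n, WHITE) == WHITE)

print(is_acyclic(edges))                     # True: the unrolled graph is a DAG
print(is_acyclic(edges + [("A_2", "A_0")]))  # False: collapsing time re-creates the loop
```

The point of the sketch: once each variable is indexed by measurement time, "simultaneous causation" dissolves into ordinary time-ordered arrows, and the graph stays acyclic, which is what lets a DAG encode the assumptions.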
comment in response to post
Any time I see that phrase, I remember Rothman’s article, “Writing for Epidemiology”: “…avoid the bromide that a given finding should be interpreted cautiously. It implies that other interpretations are reckless.” journals.lww.com/epidem/citat...
comment in response to post
I heard about this idea from @katymilkman.bsky.social on her Choiceology podcast. I believe the issue is illusion of explanatory depth. If I recall correctly, this episode discusses: open.spotify.com/episode/4lqz...
comment in response to post
This paper may fit your criteria: Tutorial on Directed Acyclic Graphs pmc.ncbi.nlm.nih.gov/articles/PMC...
comment in response to post
Agree. The word association has been used across description, prediction, and causal inference studies. Most often without much information about the actual intent.
comment in response to post
Likely an overestimated effect. Adherence-adjusted estimation is not straightforward and often incorrect. This article nicely summarizes some issues in adherence-adjusted estimation: pmc.ncbi.nlm.nih.gov/articles/PMC...
comment in response to post
Hi! I’m an epidemiologist with interest in the application of methods for causal inference, prediction, and evidence synthesis. I’m an embedded researcher within a safety-net health system. Most of my work is in the context of medications, healthcare interventions, and health policy.
comment in response to post
Bradford Hill’s list is probably not the best argument for judging causality. He specifically stated that the list comprised viewpoints and that none was either necessary or sufficient for causation. Nice article here that summarizes the missed lessons of Bradford Hill: pmc.ncbi.nlm.nih.gov/articles/PMC...
comment in response to post
Can you clarify what you mean by “inputs as further training data?” My understanding is that the underlying model is pre-trained (the P in GPT), but that outputs can be fine-tuned to user preferences. This fine-tuning is not the same as training because the underlying model is unchanged.