johnsakaluk.bsky.social
He/him/his. Social psychologist at Western University. Work: #rstats, #psychometrics, #dyadic data, #MetaAnalysis, #closerelationships, sexuality. Fun: All things cured, fermented, roasted, seared, smoked, shaken, stirred, and swizzled.
120 posts 1,556 followers 472 following
comment in response to post
Congrats to Bill! (and thanks for posting for those of us who are not attending!)
comment in response to post
Totally understand. But then I think the trouble is with the curricula the APA approves, and the extent to which they do/don't change (and perhaps we agree more there). There's some space for innovation in the current curriculum, but IMO, the transformation you seek isn't feasible without reform from APA.
comment in response to post
Agree project-based exposure is best hope, but even then, how much discretionary research time do clinical students have? TBC: agree on importance, and am entirely unsurprised by the enthusiasm from students. But IMO, if clinical is to add, I think it needs to find places to subtract/condense.
comment in response to post
Respectfully disagree that this isn't difficult. IME, students need a fair amount of time to engage with the idea (readings, discussions) + navigating the new technologies (e.g., OSF, choosing btw the various prereg templates). Proseminars with demos/evangelizing won't cut it.
comment in response to post
As someone who has a toe dipped in the clinical pool, and has taught a number of clinical grads in my stats courses, I'll just say: I'm sympathetic to the challenge these programs face, because accredited coursework is already so packed, and open science additions would be nontrivial.
comment in response to post
8/8. So, if you’re feeling down about these attacks, I understand—I feel that way too. But just remember that they’re not attacking because your work doesn’t matter; they’re attacking *precisely* because it does. So, get some rest, connect with your people, and keep doing it.
comment in response to post
2/8. Let me start with a recent example. The President’s “Border Czar” was recently furious because *checks notes* people have learned too much about their rights, which kept his team from exploiting them. Think about that. Knowing your rights is considered a threat. www.yahoo.com/news/trump-b...
comment in response to post
Wow--hadn't seen this, but it seems very spot-on. Will throw it on my reading list!
comment in response to post
Also remain a big fan of the wisdom in this piece (which has lots to offer re: psychological interventions) by @ijzerman.bsky.social, @neillewisjr.bsky.social, @debruine.bsky.social and others: www.nature.com/articles/s41...
comment in response to post
Appreciate the plug. FWIW, while I still think the credibility angle remains important, I think it has to be embedded alongside synthesized metrics corresponding to other kinds of quality-control features (e.g., inclusivity)--not an easy task. If interested: www.nature.com/articles/s44...
comment in response to post
The Bruhpocalypse
comment in response to post
There's also a public GitHub repo where you can find the source for all slide sets and worksheets. About 25% of the class material has been ported so far. github.com/wilkelab/SDS...
comment in response to post
14. There are some other smaller fixes included, but those are the big additions in dySEM 1.1.1. We hope y’all enjoy them (and don’t mind the deprecation of outputModel()). We’ve got some really exciting plans for dySEM in 2025, and I’m lucky enough to have Omar’s continued help pushing it forward!
comment in response to post
13. outputConstraintTab() streamlines this to a one-liner. Feed it the constrained model w/ the rejected level of invariance, and it will do all of this behind the scenes, and return an (immediately interpretable) tibble of Lagrange multiplier tests, to identify specific sources of noninvariance.
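Concretely, something like this (the fitted-model object is a placeholder, and I'm assuming the simplest possible call; see ?outputConstraintTab for the actual arguments):

```r
library(dySEM)

# fit_con: hypothetical {lavaan} fit of the constrained dyadic model
# at the rejected level of invariance
constraint_tests <- outputConstraintTab(fit_con)

# a tibble of Lagrange multiplier tests, one row per equality constraint
constraint_tests
```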
comment in response to post
12. Normally, addressing this in {lavaan} is a rigmarole. You’d first use lavTestScore() to get the Lagrange multiplier tests of each constraint, then need to use parTable() to decipher cryptic lhs and rhs parameter labels for interpretation of noninvariance. It's... not pretty.
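For reference, the manual {lavaan} route sketched above looks something like this (model syntax and data are hypothetical):

```r
library(lavaan)

# fit_con: hypothetical dyadic CFA with cross-partner equality constraints
fit_con <- cfa(constrained_syntax, data = dyads)

# Step 1: Lagrange multiplier (score) tests of each equality constraint...
lavTestScore(fit_con)

# Step 2: ...which labels constraints cryptically (e.g., ".p2." == ".p16."),
# so you cross-reference the parameter table to decode the lhs/rhs labels
parTable(fit_con)
```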
comment in response to post
11. Then there is outputConstraintTab(): we are stoked on this function. Say you find some level(s) of invariance fail. Great. What now? Which specific variables are the culprits? And for which measurement parameter estimate(s) is there detectable noninvariance?
comment in response to post
10. TBVC: not saying I know better than Bentler or Steiger et al. (I don’t); error-inflation potential may be very real. But it’s unclear to what degree, and how that plays out with dyadic data. And so I think parsimony-first + inclusive modeling strategies are reasonable alternative values.
comment in response to post
9. {lavaan}'s choice reflects wisdom in Bentler (2000) re: possible non-independence/inflation of error rates when the testing sequence starts with the totally invariant model first—a claim based on an investigation by Steiger et al. (1985), but I have been unable to identify the (dis)confirming Monte Carlo details.
comment in response to post
8. First, all else equal, why spend df you don’t have to, estimating more measurement model parameters? But more substantively, in some designs (e.g., data sets of romantic dyads with LGBTQQIA+ members), this sequencing tacitly promotes the exclusion of certain kinds of dyads and dyad members.
comment in response to post
7. By default, {lavaan} + anova() (i.e., lavTestLRT()) sequences invariance model tests such that the least parsimonious model (configural) is the baseline—a default without an override. I increasingly think in some contexts there is good reason to start with the most constrained dyadic invariance model first.
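i.e., the out-of-the-box comparison looks like this (fit objects hypothetical), with the configural model always treated as the baseline:

```r
library(lavaan)

# hypothetical fits at each level of dyadic invariance:
# fit_config:    configural only (least parsimonious)
# fit_loading:   + loading invariance
# fit_intercept: + intercept invariance
# fit_residual:  + residual invariance (most parsimonious)

# anova() dispatches to lavTestLRT(), which sorts the models by df and
# compares each against the less constrained model before it:
anova(fit_config, fit_loading, fit_intercept, fit_residual)
```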
comment in response to post
6. outputInvarCompTab() allows traditional frequentist nested model comparisons in dyadic invariance testing, but sequenced from most parsimonious model (residual + intercept + loading invariant) to least (merely configurally invariant). A quick explanation of “why?”:
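A sketch of what I mean (I'm guessing at the call structure here—check ?outputInvarCompTab for the real arguments—but the ordering is the point):

```r
library(dySEM)

# hypothetical fits, ordered from most parsimonious to least
outputInvarCompTab(
  fit_residual,  # residual + intercept + loading invariant (baseline here)
  fit_intercept,
  fit_loading,
  fit_config     # merely configurally invariant
)
```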
comment in response to post
5. We also have two new outputters that we are excited about: outputInvarCompTab(), and outputConstraintTab(). outputInvarCompTab() facilitates (IMO) an improved dyadic invariance testing sequence. outputConstraintTab() will help you get more analytic detail out of dyadic invariance testing.
comment in response to post
4. All tabling outputters will henceforth return tibbles, by default. A younger me thought it was sensible to return formatted tables (e.g., with {gt}). Current me recognizes people can do that if they want, but they should have autonomy to use {ggplot}, or a different tabling package, etc.
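e.g., you can now pipe the returned tibble into whatever presentation layer you prefer (the outputter call is schematic):

```r
library(dySEM)
library(gt)

# hypothetical: any tabling outputter now returns a plain tibble
param_tab <- outputParamTab(fit_con)

# format it with {gt} if that's your thing...
param_tab |> gt()

# ...or reshape/plot it with {ggplot2}, or hand it to another tabling package
```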
comment in response to post
3. We have therefore deprecated outputModel()—which was asked to do too much—and split its functionality into two more specific functions: outputParamTab() and outputParamFig(). The former is suited for generating tables of output, the latter for path diagrams.
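Schematically, the one old call becomes two purpose-built ones (the fit object is a placeholder):

```r
library(dySEM)

# old (now deprecated): one function asked to do too much
# outputModel(...)

# new: one function per job
outputParamTab(fit_con)  # table (tibble) of parameter estimates
outputParamFig(fit_con)  # path diagram
```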
comment in response to post
2. What’s changed? We felt some functions were tasked with doing too many different things, with increasingly busy argument lists. We also wanted to ensure “outputters” returned model results in a way that lets users use the output however they want (e.g., tables, plots, etc.)
comment in response to post
Just as a total hypothetical, but were you to come to a sexuality-focused conference (like CSRF, SSSS, or SSTAR), my suspicion is you'd be a *very* popular conversation partner re: where/how/why sexual function fits into HITOP
comment in response to post
My suspicion is folks like Lori Brotto and Morag Yule (just to name two who I know, with a clinical psych background and an interest in evidence-based treatment of sexual problems) would have some things to say. And then there are many more experienced clinicians outside of the clinical psych umbrella...
comment in response to post
Obvs. not a clinician but I work around those in sexual functioning work, and this framing also stood out to me. Wondering who else y'all've chatted w/ that works in this space re: this framing? I'm sure everyone feels their area is special, but I do think sexual functioning is a bit particular.
comment in response to post
Legit the voice of my inner monologue when I've reached breaking point with something, and say to myself: "enough" 😂😅
comment in response to post
I tell them (until they are very comfortable navigating reproducible analytic environments) to avoid any write.*-type functions (e.g., write.csv()), lest they overwrite their raw data with something mistake-ridden
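The failure mode I'm guarding against looks something like this (file name hypothetical):

```r
# students read in their raw data...
dat <- read.csv("raw_data.csv")

# ...make some mistake-ridden change...
dat$score <- dat$score * 100  # oops: wrong transformation

# ...and then one write.*() call clobbers the raw file for good:
write.csv(dat, "raw_data.csv", row.names = FALSE)
```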
comment in response to post
I see that it's intended more for Python and JavaScript folks (among others), but any experience/reflections on its performance for R pkg development?
comment in response to post
PS: if folks are worried about this in their own classes and would benefit from knowing what strategies I used to determine AI use, flip me a DM (would rather not post publicly)
comment in response to post
Take-away for me: at-home coding assignments are dead. The lure of ChatGPT assistance (beyond their comprehension) is (IMO) too tempting. I will probably have to return to in-class/timed coding assessments, which students will hate me for, but it seems like the only way to get credible assessments.
comment in response to post
The hard reality is you're not gonna solve racism by timing how fast people press F or J. The field may be slowly recovering from that, but it still lives under a hangover of elevated expectations
comment in response to post
I think team size somehow promotes risk mitigation. 1-2 people can capitalize on their uniquenesses to leap into a riskier output type and/or make sudden, drastic changes to how they implement something, and they only have to answer to themselves/make peace with their own losses.
comment in response to post
I don't think so, but your remark is, I think, quite telling about just how widespread this phenomenon might be
comment in response to post
Take a Big/Important/Complex problem that can be tackled with scientific products, and IME, there is a curvilinear relationship between Team Size and the propensity to conclude that the correct/only possible way to scientifically tackle said problem is to write a paper.
comment in response to post
At the same time, I think "Teams --> Better, Impactful Science" is given too much immediate assumed truth value. I don't believe in the lone-genius myth, but IMO larger teams *in academe* are (perhaps as part of management issues) prone to over-democratization that stymies risk-taking/innovation.
comment in response to post
Yes, yes, yes to all of this. I was among those threatened by Coyne, and will legitimately sleep (and use this space) more peacefully now.
comment in response to post
Of course, YMMV. I think both that A) I am a strange supervisory cat; and B) I have had a "messier" early-career stage that shapes my (post-hoc) views of grad student selection.
comment in response to post
I think there's some heavy growth that occurs when one walks away--even tentatively--and chooses to come back. That clarity of purpose seems to help boost focus, initiative, resilience (and many other qualities) in programs later.
comment in response to post
3) When I look back at all students at all levels, the feature that stands out as unifying the students I've worked best with is psychological maturity. Often (not always) this involves students who have had a taste of the "real world"--worked, gap year, career change, etc.--and come back.
comment in response to post
Now? If it's not heavily psychometric, dyadic, and/or research synthesis in flavour, I'm probably not going there. Not an indictment on the student, their ideas, or interests; just a recognition that I don't feel adequately positioned to support them.
comment in response to post
2) I used to approach supervision as if it were my duty to facilitate students pursuing their unique interests, and I've learned the hard way that (for me) that's not the way. I'm less able to help identify what's important or not, heat-check bad plans, identify efficiencies, etc.