gidon-frischkorn.bsky.social
SNF Ambizione Fellow @ the Cognitive Psychology Lab, University of Zurich. Working on Psychometrics, Cognitive Modeling & Individual Differences. Co-Developer of bmm: R package for Bayesian Measurement Models: https://github.com/venpopov/bmm
43 posts 1,287 followers 1,013 following
comment in response to post
Bottom line: Various SEMs can fit well yet tell different stories — highlighting the model-specification challenge. Best fix from my perspective? Refine indicators using theoretically grounded parameters from formal measurement models: doi.org/10.1016/j.in...
comment in response to post
The original paper claimed that N-Back and Complex Span tasks measure different aspects of WMC and that N-Back tasks show stronger relationships with fluid reasoning. A re-analysis showed that a single-factor model fits the data better and yields a correlation of .97 between WMC & Gf.
comment in response to post
Yes definitely, the relevant code is here: github.com/venpopov/bmm... have a look at line 269 ff. Basically, you have to collect the variables holding the response frequencies of the different response options into a matrix and store this matrix as a single variable in your data frame.
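A minimal sketch of the conversion described above. The column names (`freq_old`, `freq_similar`, `freq_new`) and the data are hypothetical; the point is only the pattern of collapsing frequency columns into one matrix-valued variable:

```r
# Hypothetical data frame with response frequencies in separate columns
df <- data.frame(
  id           = 1:2,
  freq_old     = c(10, 12),
  freq_similar = c(5, 3),
  freq_new     = c(15, 15)
)

# Collect the frequency columns into a matrix and store it
# as a single variable in the data frame
df$freqs <- as.matrix(df[, c("freq_old", "freq_similar", "freq_new")])

# Drop the now-redundant individual columns
df$freq_old <- df$freq_similar <- df$freq_new <- NULL
```

After this, `df$freqs` is a single matrix-valued column that can be passed as one variable in a model formula.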
comment in response to post
Nice, the problem of passing aggregated data for a multinomial response is something we already solved for the Memory Measurement Model that is included in the development version of bmm. So it sounds like that should work for the SDT models, too.
comment in response to post
In particular, we also aimed to implement SDT models with both Gaussian and Gumbel_min noise, for many of the reasons that you have outlined in your paper. So it would be super nice to make these models as accessible as possible to broaden their application.
comment in response to post
Super nice work! I saw that you implemented all models using custom families in brms, so I wondered if you would be interested in including these implementations in the bmm package. We have already thought about some SDT implementations, but have not yet finalized them.
comment in response to post
Anyone working on measurement models for cognitive processes, feel free to reach out to discuss possibilities to include further models into the package! Some information on how to add new models can be found in the Developer Notes: venpopov.github.io/bmm/dev/dev-...
comment in response to post
Look out for updates to the package! It already includes the Signal Discrimination Model by @koberauer.bsky.social, and the developer version, available via GitHub, also contains the Memory Measurement Model by @koberauer.bsky.social and @lewan.bsky.social.
comment in response to post
From my perspective there are two sides to this coin: 1) Does the formal model fit the observed data? If yes, the model is valid. But then you can ask 2) variation in which parameters of the model is causally responsible for variation in the observed data? This is a matter of degree, not binary.
comment in response to post
The first project is available as a preprint: osf.io/sbyqt/ But we are still arguing with reviewers about the need to provide traditional evidence for validity, e.g. correlations with other inhibition measures (aka “convergent validity”) that themselves have an unknown validity status as I see it.
comment in response to post
I am currently working together with @koberauer.bsky.social on two projects exploring how investigating validity would work this way.
comment in response to post
And even if there is a formal model, it will likely contain several parameters. Thus, any indicator will capture variance from multiple processes contributing to variation in the indicator.
comment in response to post
This would require formal models of the processes supposed to cause variation in a measurement, and of how they translate into observed behavior. And I would say this needs to be something more than an IRT model (which, from my perspective, is more of a statistical model).
comment in response to post
In the case of validity, I am very sympathetic to Borsboom's definition. And based on this perspective there is a more complicated issue with respect to quantifying validity. Following the Borsboom definition, validity is the fit of a theory to a measurement of the processes proposed by the theory.
comment in response to post
With respect to reliability, I would say yes, probably more precisely there is one reliability (the ratio of true score variance to total variance) and different sets of assumptions (including generalizability theory) that make certain statistical estimates proper estimators for it.
comment in response to post
If you are working with cognitive measurement models and are interested in getting your model into the bmm package, feel free to reach out.
comment in response to post
It is important to note that the bmm package will not remain limited to measurement models for visual working memory. We are already working on implementing:
- models for memory tasks with categorical responses
- signal detection models
- EZ versions of models for reaction-time data
comment in response to post
The revision contains:
- updated examples reflecting the final (and mostly stable) syntax for model fitting
- an extensive parameter recovery simulation comparing subject-wise ML estimation with Bayesian hierarchical estimation
- tips on how to use parameters from bmm in individual-differences studies
comment in response to post
Thanks for creating this! If there is space, it would be great if you can add me :-)
comment in response to post
Awesome collection, thanks a lot for the effort ☺️ would you mind adding me?