singmann.bsky.social
Associate Professor at UCL Experimental Psychology; math psych & cognitive psychology; statistical and cognitive modelling in R; German migrant worker in UK
158 posts 972 followers 1,007 following
comment in response to post
Thanks for reading and your feedback. Even if you are not convinced by Gumbel-min, maybe the other main takeaway is that the evidence for the Gaussian assumption is actually rather weak. Its main argument is tradition, which isn't such a great argument if you think about it.
comment in response to post
While Bayesian thinking is powerful, it is not the only broad formal account on which one can build a theory of memory. You might want to have a look at our previous paper (Kellen et al., 2021) showing how SDT is a member of the class of random utility models: singmann.org/download/pub...
comment in response to post
We reanalyse several data sets that combine multiple tasks, specifically yes/no with either m-AFC or ranking tasks. See section "Predictive Benchmarking" (pp. 43–47). The relevant papers are: Jang et al. (2009), Smith & Duncan (2004), Kellen et al. (2012), and Kellen et al. (2021)
comment in response to post
I agree there should be some commonality across tasks, and our tests do compare yes/no, forced-choice, and ranking. But they target multi-feature stimuli like words or faces. Single-feature tasks like colour recall may well align with Gaussian models; it's just not our focus (yet).
comment in response to post
Thanks!
comment in response to post
Not sure what such a task would even look like in the context of recognition memory as covered in the paper.
comment in response to post
We do address this issue a bit in section "Why Minima?" on pages 24 to 25. Additionally, I know that Ven Popov has a working computational model that produces Gumbel-min evidence distributions. So for more details you might have to wait for his manuscript or email him.
comment in response to post
Interesting, I should check your solution out then. Maybe it comes with a nicer interface than my solution.
comment in response to post
The code is freely available (github.com/singmann/gum...), so feel free to include it in your package. I think the main question is how to hand over a multinomial response to brms. It is trivial if the data is disaggregated, but less so with aggregated data, which can speed up fitting considerably.
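One possible pattern for the aggregated case (a sketch only; the family aggmulti, the counts y1–y3, and the predictor cond are hypothetical and need not match the repo's approach): treat one category's counts as the formal response and pass the remaining counts via vint(), with the lpmf written over the whole count vector.

```r
library(brms)

# Aggregated 3-category multinomial handed to a custom family (sketch).
stan_funs <- "
  real aggmulti_lpmf(int y1, real mu, real nu, int y2, int y3) {
    vector[3] lp = log_softmax([mu, nu, 0]');  // category 3 = reference
    // multinomial coefficient omitted: constant w.r.t. the parameters
    return y1 * lp[1] + y2 * lp[2] + y3 * lp[3];
  }
"
aggmulti <- custom_family(
  "aggmulti", dpars = c("mu", "nu"), links = c("identity", "identity"),
  type = "int", vars = c("vint1[n]", "vint2[n]")
)
# fit <- brm(bf(y1 | vint(y2, y3) ~ cond, nu ~ cond),
#            family = aggmulti, data = dat,
#            stanvars = stanvar(scode = stan_funs, block = "functions"))
```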
comment in response to post
A main point of our manuscript is that this idea is not actually supported by the data. The Gumbel-min model (i.e., g') accounts for the data about as well as the UV-Gaussian model and performs clearly better in terms of out-of-sample predictions. And Gumbel-min assumes equal variances!
comment in response to post
One issue with the simulations in the paper you linked is that the data generating model is the unequal-variance SDT model. This makes it somewhat unsurprising that a measure based on the unequal-variance SDT model (i.e., d_a) wins against other measures that cannot accommodate asymmetric ROCs.
comment in response to post
The Gaussian model can only predict invariance for both tasks or an increase for both tasks, but not the differential behaviour shown in the data.
comment in response to post
Because the Gumbel-min model implies a unique behavioural principle that is beautifully confirmed by the data: invariance to choice-set-size expansion for a detect-new task, while simultaneously predicting increased accuracy with choice-set size for a detect-old task. bsky.app/profile/sing...
comment in response to post
Yes, and we discuss some shortcomings of d_a. As shown below, d_a does not permit an ordering of participants according to performance (d' and g' do). We also compared Type I error rates for g', d', and d_a for real H/FA pairs where only response bias differs; only g' maintains 5% Type I errors (p. 51)
comment in response to post
Thanks. One thing I learned in this project is how much better some models run in brms if you formulate the whole model in log-probability space and use the ..._logit_lpmf functions (see Appendix C). It's like a brms cheat code against divergent transitions.
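To illustrate the general pattern (a minimal sketch using brms's custom-family machinery; the family name binom_logit2 and the data columns hits, n_old, condition, id are made up for illustration):

```r
library(brms)

# Likelihood evaluated directly on the logit scale: Stan's
# binomial_logit_lpmf takes the linear predictor itself, so the
# probability is never formed explicitly (better numerical stability).
stan_funs <- "
  real binom_logit2_lpmf(int y, real mu, int T) {
    return binomial_logit_lpmf(y | T, mu);  // mu = linear predictor
  }
"
binom_logit2 <- custom_family(
  "binom_logit2", dpars = "mu", links = "identity",
  type = "int", vars = "vint1[n]"
)
# fit <- brm(hits | vint(n_old) ~ condition + (1 | id),
#            family = binom_logit2, data = dat,
#            stanvars = stanvar(scode = stan_funs, block = "functions"))
```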
comment in response to post
In short: SDT doesn't have to be Gaussian. Gumbel-min SDT provides a principled, powerful, and empirically supported alternative—a “parametric road not taken” that's well worth exploring. Its performance index, g', clearly outperforms d' in recognition memory and should be preferred going forward.
comment in response to post
A particularly noteworthy example of a Gumbel-min prediction is shown here. The ROC predicted from g' (calculated from a single yes/no point) closely matches the ROC reconstruction derived independently from forced-choice judgments. The Gaussian model cannot even make a prediction in this case.
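Under the convention assumed in the g′ sketch further down (new items Gumbel-min(0, 1), "old" responses above a criterion), the Gumbel-min ROC even has a closed power form, H = FA^exp(-g′), so the full curve follows from one yes/no point. A hedged R illustration:

```r
H <- 0.80; FA <- 0.20                     # the single observed yes/no point
g_prime <- log(-log(FA)) - log(-log(H))
fa_grid  <- seq(0.001, 0.999, length.out = 200)
roc_pred <- fa_grid^exp(-g_prime)         # power ROC implied by Gumbel-min
plot(fa_grid, roc_pred, type = "l",
     xlab = "False-alarm rate", ylab = "Hit rate")
points(FA, H); abline(0, 1, lty = 2)      # observed point and chance line
```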
comment in response to post
We also evaluated the predictive performance of both models: identify a relevant portion of the data that can be omitted, leave it out, and test each model's predictions against it. Across all predictive tests the pattern was clear: the Gumbel-min model outperformed the Gaussian.
comment in response to post
We compared the descriptive performance of both models across 35 datasets from four different recognition memory paradigms. The Gumbel-min model fits the data nearly as well as the Gaussian model. Once model complexity was penalized via AIC, the Gumbel-min model matched or outperformed the Gaussian.
comment in response to post
The Gumbel-min model implies a behavioural principle: the probability of choosing a new item remains constant as choice sets grow. An experiment confirms this principle with constant accuracy for new-item detection (2M-min). For old-item detection (2M-max), accuracy increases with choice-set size.
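This differential prediction is easy to check by simulation. A minimal R sketch (illustrative parameter values): 2M-min accuracy should stay flat as M grows, while 2M-max accuracy climbs.

```r
set.seed(1)
# Gumbel-min draws via the inverse CDF: F(x) = 1 - exp(-exp(x - mu))
rgumbel_min <- function(n, mu = 0) mu + log(-log(runif(n)))

sim_2M <- function(M, g = 1, n = 1e5) {
  old <- matrix(rgumbel_min(n * M, mu = g), n, M)  # M studied items per trial
  new <- matrix(rgumbel_min(n * M, mu = 0), n, M)  # M new items per trial
  c(M = M,
    min_task = mean(apply(new, 1, min) < apply(old, 1, min)),  # 2M-min: least familiar is new
    max_task = mean(apply(old, 1, max) > apply(new, 1, max)))  # 2M-max: most familiar is old
}
t(sapply(c(1, 2, 4, 8), sim_2M))
# min_task hovers around 1 / (1 + exp(-g)) for every M; max_task increases with M
```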
comment in response to post
We consider an SDT model assuming Gumbel-min (i.e., minimum extreme-value) distributions. The Gumbel-min model avoids the problems of the Gaussian model, predicts asymmetric ROCs assuming equal variances, and allows calculating measures of discriminability and response bias, g′ and kappa.
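As a sketch of the analogous Gumbel-min computation (assuming new-item evidence follows a standard Gumbel-min with location 0 and scale 1, and an "old" response whenever evidence exceeds the criterion; the paper's exact parameterization may differ), the probit transform is replaced by a log-log transform:

```r
H  <- 0.80   # hit rate (illustrative values)
FA <- 0.20   # false-alarm rate
g_prime <- log(-log(FA)) - log(-log(H))  # discriminability under Gumbel-min
kappa   <- log(-log(FA))                 # criterion location under this convention
g_prime  # ~1.98
kappa    # ~0.48
```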
comment in response to post
In recognition memory, ROCs are typically asymmetric, which requires Gaussian distributions with unequal variances. One problem with the unequal-variance model is that it predicts below-chance performance for items with very low familiarity (i.e., studying makes some items less familiar).
comment in response to post
SDT is a cornerstone of recognition memory research, primarily assuming Gaussian distributions – a choice based more on tradition than necessity. The standard model assumes two equal-variance distributions, allows calculating d′ from a single pair of hit and false-alarm rates, and predicts symmetric ROCs.
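For concreteness, the textbook equal-variance computation from a single hit/false-alarm pair looks like this in R (example values, standard formulas):

```r
H  <- 0.80   # hit rate (illustrative values)
FA <- 0.20   # false-alarm rate
d_prime <- qnorm(H) - qnorm(FA)         # discriminability: z(H) - z(FA)
c_bias  <- -(qnorm(H) + qnorm(FA)) / 2  # response criterion (bias)
d_prime  # 1.68
c_bias   # 0
```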
comment in response to post
That's true, mixed models do provide benefits in a wide range of settings. However, the point of my talk is somewhat the opposite: oftentimes we are not in any of these settings, and then mixed models provide more pain than gain compared to ANOVA (e.g., which random-effects structure to use, convergence problems).