kmahowald.bsky.social
UT Austin linguist http://mahowak.github.io/. computational linguistics, cognition, psycholinguistics, NLP, crosswords. occasionally hockey?
61 posts 2,851 followers 506 following

Delighted to have Elias joining the UT NLP community!

What do "Maui, Sicily, Thailand" have in common? Ok, "places". But I say "White Lotus locales": it would be quite a coincidence if I hit on all 3 by chance! We ask how LMs do at this kind of inference. Also fun to do a study on "the number game", the first Bayesian cogsci I learned in grad school!

I might be able to hire a postdoc for this fall in computational linguistics at UT Austin. Topics in the general LLM + cognitive space (particularly reasoning, chain of thought, LLMs + code) and LLM + linguistic space. If this could be of interest, feel free to get in touch!

Writing my first post here to announce that I've accepted an assistant professor job at TTIC! I'll be starting in Fall 2026, and recruiting students this upcoming cycle. Until then, I'll be wrapping up the PhD at Berkeley, and this summer I'll join NYU as a CDS Faculty Fellow 🏙️

PINEAPPLE, LIGHT, HAPPY, AVALANCHE, BURDEN. Some of these words are consistently remembered better than others. Why is that? In our paper, just published in J. Exp. Psychol., we provide a simple Bayesian account and show that it explains >80% of variance in word memorability: tinyurl.com/yf3md5aj

@kmahowald.bsky.social with a beautiful high-tech illustration 🎨 while describing @qyao.bsky.social's latest paper at the HSP online seminar series! Paper: arxiv.org/abs/2503.20850

Will be talking about this work (and more) at 2 ET/11 PT in the HSP talk series on Computational Language Models and Psycholinguistics! www.hspsociety.org

If you give a mouse a cookie....does an LM learn something different than if you "give a cookie to a mouse"? Or if you don't give anyone anything? Or if you do other weird stuff to the input? New paper on manipulating linguistic input and training small LMs to study direct vs. indirect evidence.

LMs learn argument-based preferences for dative constructions (preferring recipient first when it’s shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social , @weissweiler.bsky.social , @kmahowald.bsky.social arxiv.org/abs/2503.20850

Check out our new work on introspection in LLMs! 🔍 TL;DR we find no evidence that LLMs have privileged access to their own knowledge. Beyond the study of LLM introspection, our findings inform an ongoing debate in linguistics research: prompting (eg grammaticality judgments) =/= prob measurement!

If I ask model A “is this sentence grammatical” and it says yes, does that mean model A is more likely to produce that sentence than model B? Check out our new paper on whether models introspect about knowledge of language.
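(For readers outside NLP, here is a toy contrast between the two measurements in question: a prompted "metalinguistic" judgment versus direct probability measurement of the same sentence. This is not the paper's actual protocol; the model name, prompt wording, and helper functions are my own illustrative choices.)

```python
# Toy contrast (not the paper's method) between a prompted grammaticality
# judgment and direct probability measurement of a sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def sentence_logprob(sentence: str) -> float:
    """Direct measurement: total log-probability the model assigns to the sentence."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token
    return -loss.item() * (ids.shape[1] - 1)

def prompted_judgment(sentence: str) -> str:
    """Metalinguistic measurement: ask the model whether the sentence is grammatical."""
    prompt = (f'Is the following sentence grammatical? Answer Yes or No.\n'
              f'Sentence: "{sentence}"\nAnswer:')
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]
    yes_id = tok(" Yes").input_ids[0]
    no_id = tok(" No").input_ids[0]
    return "Yes" if next_token_logits[yes_id] > next_token_logits[no_id] else "No"

s = "The keys to the cabinet are on the table."
print(sentence_logprob(s), prompted_judgment(s))
# The question at stake: do these two measurements agree across models and sentences?
```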

I'm excited to announce two papers of ours which will be presented this summer at @naaclmeeting.bsky.social and @iclr-conf.bsky.social! 🧵

Check it out for cool plots like this one about affinities between words in sentences, and how they can show that Green Day isn't like green paint or green tea. And congrats to @coryshain.bsky.social and the CLiMB lab! climblab.org

I just had a chance to watch this fantastic talk. I really recommend it for anyone interested in how LLMs can help us understand language: www.youtube.com/watch?v=DBor...

Looking forward to speaking tomorrow (Tues am) in this Simons workshop in Berkeley simons.berkeley.edu/workshops/ll.... Will talk about some empirical work and also share some takes from this recent preprint from me and @futrell.bsky.social arxiv.org/abs/2501.17047

LMs need linguistics! New paper, with @futrell.bsky.social, on LMs and linguistics that conveys our excitement about what the present moment means for linguistics and what linguistics can do for LMs. Paper: arxiv.org/abs/2501.17047. 🧵below.

This is a beautiful paper! The first third helpfully labels a stream of recent work in philosophy of AI as "propositional interpretability". The idea is to use propositional attitudes like belief, desire, and intention to help explain AI in a way that we can understand. 1/n

Quanta write-up of our Mission: Impossible Language Models work, led by @juliekallini.bsky.social. As the photos suggest, Richard, @isabelpapad.bsky.social, and I do all our work sitting together around a single laptop and pointing at the screen.

If this is anything like the live version at the LSA (and it seems to be!), it's worth watching for an inspiring vision for how linguistics and LLMs can fit together...and, as this slide near the end shows, how linguistic phenomena can be described neurally, artificial-neurally, or symbolically.

I like the NLP reference in "Walking in a Winter Wonderland", where they say "In the meadow, we can build a snowman / Then pretend that he is parsin' [the] Brown [corpus]"

LSA president Tony Woodbury on Sapir's idea that each language has its own "genius", and that each language should be described with its own framework, rather than through a general ("theoretical") framework. muse.jhu.edu/article/948426

I defended my PhD at MIT Brain&Cog last week--so much gratitude to my advisor @evfedorenko.bsky.social, as well as my committee @nancykanwisher.bsky.social, @joshhmcdermott.bsky.social and Yoon Kim. Thank you to all my brilliant collaborators and the MIT community. I have loved this journey so much.

In Vancouver for #NeurIPS2024 workshops! At Math-AI tomorrow @sashaboguraev.bsky.social is presenting our experiment-infused position piece on the communicative nature of math and why that matters for AI arxiv.org/pdf/2409.17005. Say hi! Will be better than the Panthers' 4-0 loss to the Canucks.