I’m on the academic job market this year! I’m completing my @uwcse.bsky.social @uwnlp.bsky.social Ph.D. (2025), focusing on overcoming LLM limitations, such as hallucinations, by building new LMs.
My Ph.D. work focuses on Retrieval-Augmented LMs to create more reliable AI systems 🧵
My work showed that scaling LLMs alone doesn’t solve issues like hallucinations or obsolete knowledge, and is compute-suboptimal; Retrieval-Augmented LMs address these challenges. See our ACL 2023 Best Video Award paper:
https://aclanthology.org/2023.acl-long.546/
Retrieval-augmented LMs need more than off-the-shelf models. I developed advanced training/inference algorithms & architectures, including Self-RAG (ICLR 2024 Oral; NeurIPS Workshop Hon. Mention) for adaptive retrieval & self-critique.
Learn more:
https://selfrag.github.io/
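The adaptive-retrieval-plus-self-critique loop described above can be sketched in a few lines. This is a toy illustration only, not the Self-RAG implementation: the keyword-based retrieval trigger, the lexical retriever, and the overlap-based critique score are all hypothetical stand-ins for the learned reflection tokens and models in the paper.

```python
# Toy sketch of a Self-RAG-style pipeline: decide whether to retrieve,
# retrieve if needed, then self-critique the candidate answer.
# Every component here is a hypothetical stand-in for a learned model.

def should_retrieve(query: str) -> bool:
    # Stand-in for the learned "retrieve?" reflection step:
    # only retrieve for knowledge-seeking questions.
    return any(w in query.lower() for w in ("who", "what", "when", "where"))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Toy lexical retriever standing in for a dense retriever.
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(terms & set(d.lower().split())))
    return ranked[:k]

def critique(query: str, answer: str) -> float:
    # Stand-in for the self-critique score: fraction of query terms
    # that the candidate answer actually covers.
    terms = query.lower().split()
    answer_words = set(answer.lower().split())
    return sum(t in answer_words for t in terms) / max(len(terms), 1)

def generate(query: str, corpus: list[str]) -> str:
    if not should_retrieve(query):
        return "No retrieval needed."
    answer = retrieve(query, corpus)[0]  # toy generator: quote the top passage
    # Keep the answer only if the critique step judges it well-supported.
    return answer if critique(query, answer) > 0.3 else "I cannot answer confidently."
```

The point of the sketch is the control flow: retrieval is conditional on the query, and generation is filtered by a critique score rather than emitted unconditionally.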
Retrieval-Augmented LMs tackle critical challenges like:
1️⃣ Unreliable LMs in expert domains
2️⃣ Information access inequity across languages
I launched OpenScholar for scientific synthesis—20k+ demo requests in week 1! Details: https://allenai.org/blog/openscholar