akumar03.bsky.social
<Causality | Ph.D. Candidate @mit | Physics> I narrate (probably approximately correct) causal stories. Past: Research Fellow @MSFTResearch Website: abhinavkumar.info
2 posts 66 followers 95 following

Love this quote from Sir Michael Atiyah: "Algebra is the offer made by the devil to a mathematician. The devil says, 'I will give you this powerful machine...all you need to do is give up your soul: give up geometry...' When you do algebra...you stop thinking about meaning, about geometry."

I am happy to announce that the Kakeya set conjecture, one of the most sought-after open problems in geometric measure theory, has now been proven (in three dimensions) by Hong Wang and Joshua Zahl! arxiv.org/abs/2502.17655 I discuss some ideas of the proof at terrytao.wordpress.com/2025/02/25/t...

Sometimes reading Hamming makes me sad, because I recognize myself in this quote.

My Simons talk on all the next-token & multi-token prediction stuff I've been yapping about is up, if anyone wants to watch! There are better versions of the talk that Gregor co-presented with me elsewhere, but those sadly weren't recorded. He is an amazing presenter! www.youtube.com/watch?v=9V0b...

New paper: Simulating Time With Square-Root Space people.csail.mit.edu/rrw/time-vs-... It's still hard for me to believe it myself, but I seem to have shown that TIME[t] is contained in SPACE[√(t log t)]. To appear in STOC. Comments are very welcome!
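In symbols (just a LaTeX transcription of the statement above, nothing beyond the post):
\[
  \mathrm{TIME}[t(n)] \subseteq \mathrm{SPACE}\!\left[\sqrt{t(n)\log t(n)}\right]
\]
i.e., any computation running in time t can be simulated using only on the order of √(t log t) space.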

In a small trial, nearly half of pancreatic cancer patients who received an mRNA vaccine had no signs of relapse after three years. Dr. Vinod Balachandran from @mskcancercenter.bsky.social joins us to discuss the results and what they could mean for cancer treatment.

"Learning space is frustrating." This needs to be said. www.youtube.com/watch?v=zpR6...

[Reposted to include time:] I'm very pumped for my talk next Tuesday to the Causal Inference Working Group at Johns Hopkins: "Homeostasis, Feedback Control, and Dynamic Causal Models" Feb 25 12:15-1:15 EST Talk is in person, with Zoom attendance possible: jhubluejays.zoom.us/j/9811546085...

Do you want to estimate causal effects for a small set of target variables, but don't know the causal graph and discovering the full graph takes too long? Now you can get adjustment sets in a SNAP🫰accepted at #aistats2025! 📜 arxiv.org/abs/2502.07857 🧩 matyasch.github.io/snap/ 🧵 1/10

Johnny Xi, Hugh Dance, Peter Orbanz, Benjamin Bloem-Reddy, "Distinguishing Cause from Effect with Causal Velocity Models" https://arxiv.org/abs/2502.05122

New video! Terence Tao on how we measure the cosmos: youtu.be/YdOXS_9_P4U

Better diffusions with scoring rules! Fewer, larger denoising steps using distributional losses; learn the posterior distribution of clean samples given the noisy versions. arxiv.org/pdf/2502.02483 @vdebortoli.bsky.social Galashov Guntupalli Zhou @sirbayes.bsky.social @arnauddoucet.bsky.social
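One generic way to read that recipe (my paraphrase under standard scoring-rule assumptions, not necessarily the paper's exact objective): fit a conditional model q_θ of the clean sample given the noisy one by minimizing a negatively oriented proper scoring rule S,
\[
  \min_{\theta}\;\mathbb{E}_{x_0,\,x_t}\Big[S\big(q_\theta(\cdot \mid x_t),\,x_0\big)\Big],
\]
whose population minimizer is the true posterior p(x_0 | x_t); sampling from that learned posterior is what makes fewer, larger denoising steps viable.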

Hot off the press!💥 arxiv.org/abs/2502.00407

Great book by former center fellow Chris Weaver #philsci

Gil Cohen gave a fabulous mini-course on analytic approaches to spectral graph theory and Ramanujan graphs. Highly recommended! The lectures are available in the link below. www.youtube.com/playlist?lis...

I'm fascinated by areas of math and science where there's a method that sort of works, but it's unclear why it does. Disciplines like phil sci and comp sci that focus on axioms/1st principles have a blind spot for the fruitfulness of trying to explain such success. www.youtube.com/watch?v=KZsk...

Multiple threads from @mytholder.bsky.social about how Tolkien wrote LOTR, all of them great. Watch how tiny tweaks to fulfill narrative goals end up completely upending what kind of story is being told.

Simson Garfinkel's (high-level) intro to Differential Privacy (DP) book will be published in March at MIT Press. A well-written, insightful, and engaging non-technical intro to DP, with discussions, motivations, and historical context. Highly recommended! mitpress.mit.edu/978026255165...

For 2025, I am going to do something a bit different. Every Monday is now #MEstimatorMonday Each Monday, I'll talk about different M-estimators or some of their properties. This is 1/52, which will just be some table-setting.
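For anyone joining cold, the usual textbook setup (standard material, ahead of the series): an M-estimator is any \hat{\theta} with
\[
  \hat{\theta} = \arg\min_{\theta}\sum_{i=1}^{n}\rho(X_i,\theta),
  \qquad\text{or, with } \psi = \partial\rho/\partial\theta,\qquad
  \sum_{i=1}^{n}\psi(X_i,\hat{\theta}) = 0.
\]
Familiar special cases: ρ(x, θ) = (x − θ)² gives the sample mean, ρ(x, θ) = |x − θ| gives the median, and ρ = negative log-likelihood gives the MLE.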

I've realised a couple of times over the break that when you're able to set aside a block of time for free research (even just a full afternoon, say), the benefits of having that unconstrained time are super nice in some quite specific ways.

Looks to be a really nice read; clear that the man enjoys writing! arxiv.org/abs/2501.00925 'Fisher Information in Kinetic Theory' - Cédric Villani

When the quantum revolution happened, top European universities were at the frontiers. But a historic breakthrough came from a physicist far away from the action. Indian physicist Satyendra Nath Bose, discoverer of Bose statistics, was born OTD in 1894. My article in The Hindu (from last year) ⚛️

Our new paper! "Analytic theory of creativity in convolutional diffusion models" led expertly by @masonkamb.bsky.social arxiv.org/abs/2412.20292 Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs, with a high median r² ≈ 0.9

Finally compiled and refined this a bit; do take a look if you're curious! A more TeX'd version is attached in the screenshots. hackmd.io/@sp-monte-ca... 'Some Intuition Boosters'

I've been thinking about in-context learning for nearly 3 years. While there is still plenty I don't fully understand, five papers have--to a very large extent--shaped my perspective on it, and I believe everyone should read them.

For me, 2024 was the year of Synthesis Estimators. Synthesis estimators grew out of my interest, throughout my PhD, in merging ideas from 'causal inference' and 'mathematical modeling'. Here are some highlights to close out the year

Very interesting paper by Ananda Theertha Suresh et al. For categorical/Gaussian distributions, they show that a sample is forgotten at rate 1/k after k rounds of recursive training (hence model collapse happens more slowly than intuitively expected)
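In symbols, the contrast being drawn (as I read the post):
\[
  \text{forgetting after } k \text{ rounds} \;\asymp\; \frac{1}{k}
  \qquad\text{rather than the naive}\qquad e^{-ck},
\]
i.e., polynomial rather than exponential decay.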

Some of you might enjoy my lectures on "asymptotics and perturbation methods". youtube.com/playlist?lis... These are ingenious methods for approximating the solutions to integrals and differential equations by exploiting the presence of a small or large parameter in the problem.
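A classic warm-up in exactly this flavor (not from the playlist, just the standard example): find the roots of a quadratic with a small parameter ε by positing a power series,
\[
  x^2 + \varepsilon x - 1 = 0,\qquad x(\varepsilon) = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots
\]
Matching powers of ε gives x_0^2 = 1, so x_0 = ±1, and 2x_0 x_1 + x_0 = 0, so x_1 = −1/2; hence x ≈ ±1 − ε/2, which agrees with the exact roots (−ε ± √(ε² + 4))/2 up to O(ε²).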

Found slides by Ankur Moitra (presented at a TCS For All event) on "How to do theoretical research." Full of great advice! My favourite: "Find the easiest problem you can't solve. The more embarrassing, the better!" Slides: drive.google.com/file/d/15VaT... TCS For All: sigact.org/tcsforall/

I'll be at #NeurIPS2024 (Dec. 10–15) to present our papers on causal representation learning, causal discovery, and causal bandits (see thread). Looking to meet old and new friends and chat about causality and representation learning.

Dimitri Meunier, Zhu Li, Tim Christensen, Arthur Gretton, "Nonparametric Instrumental Regression via Kernel Methods is Minimax Optimal" https://arxiv.org/abs/2411.19653

I'm surprised this result (which is essentially a nonuniform version of realizable online classification) was not known. The result says you can make a finite number of mistakes if and only if the class is a countable union of classes with finite Littlestone dimension. arxiv.org/abs/2312.00170
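Restating the characterization symbolically (my notation, with Ldim the Littlestone dimension): a class H admits a learner making finitely many mistakes on every realizable sequence if and only if
\[
  \mathcal{H} = \bigcup_{n=1}^{\infty}\mathcal{H}_n
  \quad\text{with}\quad \mathrm{Ldim}(\mathcal{H}_n) < \infty \text{ for all } n.
\]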

Created this starter pack for the Causal Community here. Please feel free to add. go.bsky.app/5Qom6AB