urish.bsky.social
Machine learning researcher, working on causal inference and healthcare applications
58 posts
3,998 followers
493 following
Getting Started
Active Commenter
comment in response to
post
They are graded on giving full proofs. LLMs are quite bad at that.
Though I’m sure there’s also quite a bit of test-data leakage going around
comment in response to
post
indeed!
comment in response to
post
Not a big SSC fan but I liked his idea of epistemic learned helplessness
slatestarcodex.com/2019/06/03/r...
comment in response to
post
Not going for exhaustive!
comment in response to
post
Tangentially related: interesting to think of the biblical law of jubilee in this context. It says that every 50 years debts are dropped, indentured servants released and land is returned to “original” owners
comment in response to
post
This piece from August seems like a good writeup:
mathscholar.org/2024/08/new-...
It mentions a new book from 2024 by Jack Szostak and Mario Livio called “Is Earth Exceptional? The Quest for Cosmic Life”.
Szostak is a leading scientist in the field (I haven’t read the book yet)
comment in response to
post
This is a good, longish essay on the subject
by @randomwalker.bsky.social and @sayash.bsky.social
www.aisnakeoil.com/p/ai-existen...
comment in response to
post
I love this sketch of a Purkinje cell (a type of neuron found in the cerebellum) by Santiago Ramón y Cajal
comment in response to
post
to be fair, ICLR is a far cry from what Yann is suggesting in that piece. I'm not questioning the need for reform. My question is why didn't the push for reform succeed back then, and what can we learn from that? (in the spirit of "Everyone will not just")
comment in response to
post
I recall that Yann LeCun had some interesting suggestions back in 2013-2014. Despite his clout the community didn't move much* yann.lecun.com/ex/pamphlets...
*ICLR public reviews and TMLR are small steps which I think followed from the discussions going around back then
comment in response to
post
This is such a good point, and I love the connection you're making in the paper to resilience to hidden confounding. In many cases the treatments that would shift are exactly those that have more "exogenous randomness" in them, and for these units the effect might be more easily identified from data
comment in response to
post
super interesting!
comment in response to
post
Breaking the Maya Code, by Michael Coe, about the deciphering of Mayan script
One of the people involved in the story is Yuri Knorozov, pictured below
comment in response to
post
Instead some scientists just said “close schools!”, conflating their own priorities with science and hurting the credibility of scientists overall.
Their intentions were good, but I think the overall outcome was not
2/2
comment in response to
post
an example where I think some scientists stumbled: during COVID after the first few months, imo a responsible scientist would say “closing schools has these benefits and these harms (w/uncertainty), the politicians and public should weigh them and decide” 1/2
comment in response to
post
Related to this, been enjoying this paper by Icard, @jfkominsky.bsky.social & Knobe looking at how "normality" affects the way humans judge causes.
e.g. when you need two factors to cause an event (say oxygen + a match to cause a fire), humans will judge the less "normal" element to be more causal
comment in response to
post
I'm now getting a much better signal-to-noise ratio for ML discussions here than on Xitter, plus funnier/more profound shitposting, and much, much less rage-inducing screaming and general junk
comment in response to
post
Feeling much nicer here
comment in response to
post
I don’t know about other domains, but in healthcare I’ve seen the term used to basically mean “a model of how a patient would respond to a treatment other than the one they’ve actually received”. When used in that sense it’s just corpo ai brainwash as @natolambert.bsky.social said
comment in response to
post
While I think this is a great paper, I also think that the focus on causal features (which is only part of what the paper is about) is a bit of a red herring
bsky.app/profile/uris...
comment in response to
post
OTOH consider a severe headache. While the pain itself is probably not immediately causal, it’s a strong and stable symptom of underlying conditions and thus it’s a stable feature. Indeed almost any classic diagnosis of disease by symptoms is anti-causal yet stable
(3/3)
comment in response to
post
E.g. consider the time of day someone goes into an ER. That might influence who sees them and how quickly which will influence many downstream outcomes causally. But the specifics of this effect will vary wildly between different ERs making this an unstable feature
(2/3)
comment in response to
post
To be fair, I think there’s no strong reason to think that causal features are a priori more stable than others
(1/3)
comment in response to
post
I’d like to hear your spicy ML takes
comment in response to
post
Merci 🙏🙏🙏
comment in response to
post
I loved this book
comment in response to
post
Great idea! I'd love to be added
comment in response to
post
And despite all these papers it seems that in many realistic scenarios it’s still hard to consistently beat good old ERM
an example in EHR data:
pmc.ncbi.nlm.nih.gov/articles/PMC...
comment in response to
post
There’s a long line of literature starting from the ICP paper arxiv.org/abs/1501.01332, with IRM being an important (if imperfect) checkpoint arxiv.org/abs/1907.02893
I also really like the anchor regression paper
arxiv.org/abs/1801.06229
and this is our work (calibrate!)
arxiv.org/abs/2102.10395
comment in response to
post
Let me know if you’re around!
comment in response to
post
The milk in Hershey’s chocolate goes through lipolysis, which breaks down some of its fatty acids in order to extend shelf life. The problem is that lipolysis yields butyric acid, an acid literally found in vomit, which gives Hershey’s that very specific revolting taste
comment in response to
post
I think the recent spate was prompted by this piece
www.ft.com/content/6596...
comment in response to
post
I have in mind that a month or two ago there were more interesting options there for showing only some of the replies according to … various criteria I don’t remember exactly, but which I found useful
Hoping they bring that back
comment in response to
post
In Settings there’s the Following feed preferences
and I think you can control it there
comment in response to
post
My knee-jerk reading is “an assumption which might hold only in special situations / do not accept easily that it’s true”.
Fascinating to see in the replies how people read it completely differently
comment in response to
post
Seems like bluesky naturally lends itself to dunking on SUTVA (n=2, p<∞)
bsky.app/profile/uris...
comment in response to
post
!! Arabic Afrikaans is one hell of a mashup
comment in response to
post
Cochin Jews are also an interesting medieval (possibly earlier?) community, in south India. I love that they had their own dialect, Judeo-Malayalam
en.wikipedia.org/wiki/Cochin_...
comment in response to
post
This is surprising to me! I would love to better understand why you think that’s the case?
comment in response to
post
If you read some piece of information that seems to exactly confirm everything bad you ever thought about <something you hate>, there’s a good chance it’s misleading or wrong
comment in response to
post
Is there something analogous to Riemann’s series theorem, but for rearranging parentheses while keeping the order of the summands fixed?
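(A quick worked note, not from the original post: for a convergent series the answer is no — the partial sums of a regrouped series are a subsequence of the original partial sums, so inserting parentheses cannot change the limit. But for divergent series, regrouping can change the value, the classic example being Grandi's series:)

```latex
% Grouped one way, Grandi's series sums to 0:
(1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0 + 0 + 0 + \cdots = 0
% Grouped another way (same summands, same order), it sums to 1:
1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots = 1
```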
comment in response to
post
I think this is only a threat if:
1. Revs. who would have written quality reviews now write bad reviews because they can use ChatGPT,
and/or
2. ACs who would previously recognize bad reviews would miss some because they're masked in ChatGPTese
I might be wrong, but IMO neither of these is a big threat?
comment in response to
post
Thanks! I wasn’t aware of the noise aspect. Very interesting