complemental.bsky.social
Computer science PhD — causal discovery for Earth science. Also interested in trustworthy ML.
Born and raised in New Mexico.
🎶Trying to make a dollar out of what makes cents.
[Machine] Learn to Save Earth 🌎🌏🌍
195 posts
354 followers
1,283 following
Getting Started
Active Commenter
comment in response to
post
What explains that?
comment in response to
post
From March www.msnbc.com/morning-joe/...
comment in response to
post
Criticizing words like “gap” and “novel” doesn’t make much sense because those are things the journal explicitly wants. It’s just good writing to say “here’s the part you’re looking for.”
comment in response to
post
Hard to imagine two worse examples for any apparent shift.
comment in response to
post
I agree, they do not suck and are useful. However, they unintentionally can be extremely misleading. No fact finding should be done with them unless the user is qualified to independently verify the claims. They are absolutely *not* trustworthy tools by the NIST guidelines.
comment in response to
post
Just imagine the times of night spent writing them
comment in response to
post
Looks like a google scholar results page
comment in response to
post
(Or as likely to)
comment in response to
post
I’m no linguist, but this seems like a great analogy for all machine learning. I don’t think it’s impossible for an ML model to capture the true underlying structure (as a universal function approximator), but it is generally unlikely. It’s harder to say if LLMs are capable of modeling langue.
comment in response to
post
By that logic, if money markets or CDs were the hip place to put savings then Harris would have won. We can’t excuse young men’s votes because they don’t understand how to invest (or get a free bank account).
comment in response to
post
It’s really unfortunate that this is the generous interpretation and also the most likely one
comment in response to
post
That the real challenges are rarely in the work itself but in navigating the administrative, political, and funding mazes. I’ve found most people who drop out did not do so for academic reasons, but because of either life changes or some intolerance for the BS in the system, and they’re rarely wrong.
comment in response to
post
And I think “investors” rarely expect them to grow value but often find it funny when they do. Doge is an example.
comment in response to
post
As an AI researcher like the ones you called out, that was truly unexpected and appreciated. There is strong interest in certain domains (generally only publicly funded) to make AI that supports humanity in transparent, trustworthy ways that don’t erode our social structures.
comment in response to
post
Where’s a good starting point for reading statisticians directly addressing causality?
comment in response to
post
The destruction to NNSA says it all: it’s destruction for destruction’s sake. No institution is safe for any reason.
comment in response to
post
Modern conservatism in a nutshell
comment in response to
post
Ah so he unironically just likes the evil dudes
comment in response to
post
I’ve wondered about this. I guess the answer might be illiteracy. It’s more likely willful ignorance though. He likes fantasy in a childlike way and chooses to ignore the themes that disagree with his ideology.
bsky.app/profile/comp...
comment in response to
post
Hopefully they can stop for those in New Mexico
comment in response to
post
Usually it’s labor regulations or waiting for materials or some other process. A lot of construction is delayed waiting for some machine that’s being used on another job.
comment in response to
post
Perhaps private funds could help in the meantime. There are plenty of climate-conscious philanthropists and international climate orgs.
Not only should data be downloaded, but make sure to get raw files with as much metadata as is available.
comment in response to
post
These are the right questions, but I’ve been thinking of a distributed system. A lot of scientists have tons of data already stored locally for analysis. Coverage might be spotty, though, and we’ll need an effort to validate and combine datasets when we get past this.
comment in response to
post
Yeah, it’s not easy, and there are forces working to muddle the narrative, from politicians to fossil fuel corporations to “think tanks.”
comment in response to
post
We did. The scientists don’t control the narrative though.
comment in response to
post
I’m more cynical. The environment appears to be a bubble right now because they are still working on productizing and engineering existing AI capabilities. Its intelligence won’t change significantly in the near-term, imo, but its integrations will improve. It will become far more insidious.
comment in response to
post
“Better is good”
comment in response to
post
That makes sense. Yeah, I haven’t heard of MATLAB being used in production models, but it’s a good environment for learning to program PDEs. I’d be curious to find out how they actually begin learning to model weather/climate dynamics.
comment in response to
post
I would vaguely guess people working on those want to know C/Fortran/MATLAB, but I’m really not sure what their day to day looks like and what they’d learn in school
comment in response to
post
Well I guess I’m not sure. I’ve worked with scientists who were part of the E3SM project. I remember different parts, including model development in C/Fortran I think, model tuning, and analysis of output. I know development includes work from microphysics modeling to the dynamical core, to coupling
comment in response to
post
Do climate science students end up in more analysis jobs or modeling jobs or both? Is that even the right framing?
comment in response to
post
“To replace 40% of fossil fuels wouldn’t be practically possible (even with near limitless budgets) for at least 50 years.” Says who?
comment in response to
post
No worries, I appreciate the article. I think the language merits more discussion and I like where the authors wound up, I just don’t personally like the path they took. Thanks for sharing.
comment in response to
post
Some ways ML models make mistakes via “perception” include Simpson’s paradox, the Clever Hans effect, and confounding. These absolutely are errors of reasoning, not merely a matter of papering over missing information. These models don’t know how to effectively reason about the information they see.
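A minimal sketch of the Simpson’s paradox mentioned above, using made-up numbers: within each group the trend between x and y is negative, but pooling the groups flips the sign. A model (or analyst) that only sees the pooled data will reason to the wrong conclusion, which is a failure of reasoning about structure, not a data-coverage gap.

```python
# Hypothetical illustration of Simpson's paradox: the within-group
# trend is negative, but the pooled trend is positive because one
# group sits higher on both axes.
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Two groups with identical negative trends; group B is shifted up.
group_a = ([1, 2, 3], [3, 2, 1])
group_b = ([6, 7, 8], [8, 7, 6])

print(slope(*group_a))  # negative within group A
print(slope(*group_b))  # negative within group B

pooled_x = group_a[0] + group_b[0]
pooled_y = group_a[1] + group_b[1]
print(slope(pooled_x, pooled_y))  # positive when groups are pooled
```

The sign flip is exactly why the group variable (a confounder here) has to be modeled, not averaged away.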
comment in response to
post
“It’s not that the model is suffering errors of perception; it’s attempting to paper over the gaps in a corpus of training data that can’t possibly span every scenario it might encounter.”
It actually is suffering errors of perception AND papering over gaps.
comment in response to
post
Hmm I actually like the term bullshitting over hallucination because people will understand it. That article has a lot of misunderstandings though. It’s written by two evolutionary biologists who I think miss what is going on inside a machine learning model.
comment in response to
post
Are you saying scientists should stay in their lane?
comment in response to
post
They're all in this starter pack!
go.bsky.app/5kuYAhM
comment in response to
post
Climate change?
comment in response to
post
And when a scientific field is politicized: bsky.app/profile/colo...
comment in response to
post
Something has to explain the different causal structures for different individuals right?