The irony of MIT having to withdraw an (almost certainly) AI-generated bullshit paper that faked data to prove how great AI is for science (and only after it had already received glowing WSJ science coverage).
Comments
I find some solace in the fact that they had the courage to walk it back. They could’ve said nothing. It almost feels like there’s no consequences for lying in our current propaganda-laced landscape, so taking responsibility and admitting faulty results gives me a shred of hope. But just a shred.
None of the mechanisms we use to scale trust in science are even the least bit ready for any of this. At this point, if you don't like "cooked," maybe someone will offer you toast?
Although I do wonder whether evidence of bullshit - especially when everyone in the know is someone it's meant to influence - is also evidence of a future absence of bullshit.
I think the problem is that it looks like there was no actual data and no way to do the work, so he had no point to make beyond the apparently fraudulent paper itself. In the end, getting caught made the much larger point ¯\_(ツ)_/¯
Ah, I’ve seen that before, but for a master’s thesis. It started out with a premise. A year later, all the data said the premise was completely wrong - but usually you can use the data to draw a new conclusion and salvage your work.
In this case, it’s looking like he manufactured the data from the start. At least when your data doesn’t align with your premise, you now have evidence that your premise is invalid, which is a useful result! His whole process is having trouble withstanding scrutiny.
A symptom of metrics-based evaluation. It's hard to define good scholarship. It's easy to define number of publications and citations. Yes, there are problems with subjective evaluations; there are also problems with supposedly objective metrics.
Did they expel the graduate student? Or just tell them not to do it again?
Hard to believe the grad student didn't know this was fabrication. But perhaps they thought the AI engine wouldn't produce it if its conclusions weren't true?
All I’ve heard about that so far is that the student “is no longer at MIT.” That could mean a lot of things. But at least they’re not doing the absolute worst: keeping the student and trying to quash the story.
(If MIT did expel him, motivated by the egg on the Institute’s face, I’m cool with that.)
is a bullshit bubble
Does anyone really think there aren’t hundreds more like this out there?
And, having brought it into being, they got offered a job? Which they took?
(Just speculating. I have no facts. Don't even know names.)