tedunderwood.me
Uses machine learning to study literary imagination, and vice-versa. Likely to share news about AI & computational social science / Sozialwissenschaft / 社会科学 Information Sciences and English, UIUC. Author of Distant Horizons (Chicago, 2019). Pluviophile.
14,905 posts 18,651 followers 5,181 following
comment in response to post
Yes, absolutely. Against Trump, against autocratic military parades, and in favor of democratic traditions.
comment in response to post
we met a chicken named "Taco"
comment in response to post
I think you covered it. The whole range from immediate tactics to "big think" matters. I suspect the most important effects will be hard to anticipate or control. But that doesn't mean we shouldn't *try* to shape them. If open source remains competitive, we have a chance.
comment in response to post
I imagine AI will eventually allow us to produce whole new ~forms~ of art. Likely interactive forms. But that probably won’t look like one person sitting at a computer writing one prompt that produces one image.
comment in response to post
It may vary depending on what kind of document you need reviewed. Lately, I have needed review of a practical proposal in a domain where bots may be unusually well-trained. Not all the critiques here were on target, but I think the batting average was comparable to the average human review I've received.
comment in response to post
I guess it’s related to this part of Liam’s article. This is where AI would likely fail
comment in response to post
There’s abundant empirical evidence about the diversity problem. And while I can imagine technical fixes for it (eg using persona models), I’m not sure they get at the really fundamental reason we value diversity of perspective, which has to do … with a need for independence and resistance?
comment in response to post
I’ve gotten high-quality conceptual reviews from current-gen reasoning models. The critical problem with AI, I think, is that it is very unlikely to model anything like the actual *diversity* of opinion in a field. A problem very relevant to this paper about the advantage of multiple reviewers. +
comment in response to post
Got it. The working hypothesis is that there’s a difference between new individual facts (for which, RAG should work fine) and topics that require a whole web of loosely related assumptions (like what is “social media”). At any rate, that’s one of the hypotheses one might want to test.
comment in response to post
You know, it sounds to me like all you need to explore a question like that is two comparably good (instruction tuned) models with different cutoffs. 2023/2026 might work as well as 2010/2025. At least it’s not clear to me why the size of temporal gap would be critical.
comment in response to post
I’ve pre-trained a small model on text before 1915, but it’s not *good* yet — just 774M. I will be scaling it up and instruction tuning but that could take a year. Another option, somewhere between pre-training from scratch and fine-tuning, is continued pre-training. That’s what this paper uses:
comment in response to post
I don’t mind the vocabulary of “reasoning”/“thinking” very much; I’m not sure we have better (short!) words for what’s happening. I’m substantively bullish about LLMs. I just suspect the idea that there’s a magic threshold—which happens to == human agency—is not a good way to reason about impact.
comment in response to post
Yes, I agree. It’s puzzling, and I’m open to the possibility that my skepticism is off base. But I would be *more* open to that possibility if I saw that OAI, Anthropic, etc had given serious thought to alternate trajectories where increased capability doesn’t == ever more autonomous agency.
comment in response to post
what are the most characterless examples of fiction, I wonder
comment in response to post
So SF can play around with entities like Wintermute — characters that make a big fuss of “not having a personality as you humans would understand personality.” But situations where there really is never a clear boundary between characters and non-characters … are hard to narrate.
comment in response to post
The other thing about fiction is that it necessarily cares a lot about the boundary between characters and things that are merely tools/documents/problems to be solved. Stories must have characters, and they need it to be clear (at least, eventually clear) which elements count as characters. +
comment in response to post
I love the Claude letter! I am, sir, most willing to accede to the stipulated terms, and do hereby solemnly covenant and affirm that I shall refrain, now and henceforth, from likening said engines of artificial discourse to parrots of the stochastic persuasion.
comment in response to post
send 'em!
comment in response to post
The real problem with Bluesky is now you can doomscroll sideways, as well as up and down.
comment in response to post
For which reason, I’m glad you and others are contesting those claims on substantive grounds. What I’m saying right now—and it’s okay for us simply to disagree about it—is that an autocrat’s EO claiming the right to dictate truth is not best understood as an opportunity to continue that debate.
comment in response to post
I’m not sure that I communicated effectively, because all of this is at a tangent to my point. My point is that I see it as a political mistake in the first place to treat the EO as a substantive intervention in metascientific debate.
comment in response to post
I agree with you about that. What I’m saying rn is that the response to the EO I would have loved to see is “We may disagree about metascience. But all of us agree that this call for truth to be dictated by political authority is nonsense. Full stop.” I consider that a more effective response.
comment in response to post
People who really don't want science reform to be weaponized can best resist that through *solidarity*, and by laughing off attempts to weaponize it.
comment in response to post
There will always be a rationale he can point to, in some stumbling incoherent way. No one who cares much about science will be persuaded, and we shouldn't act as if transparent lies are a brilliant rhetorical strategy.
comment in response to post
I think this sort of strategic second-guessing would be bad for science communication. But also — what I haven't heard mentioned as much — I think it's bad *politics* to pretend that it matters very much which transparently hollow rationales a notoriously dishonest autocrat chooses to invoke. +
comment in response to post
Strongly agree. This is not an approach that would generalize well across different fields and different nations.
comment in response to post
When William Gibson wrote the line, medium gray snow was your only option — and that’s what we’ve got. You’re right about the rain. Smells like an extinguished campfire.
comment in response to post
ooh, yeah, exactly
comment in response to post
What flavor boozy slushies do 6th graders prefer?
comment in response to post
oh yeah, you can do that
comment in response to post
why not both?