jongreen.bsky.social
Assistant professor, Department of Political Science, Duke University
jgreen4919.github.io
2,239 posts
4,608 followers
535 following
comment in response to
post
in any case, paper here: www.nature.com/articles/s41...
and ungated here: arxiv.org/abs/2308.06459
comment in response to
post
this suggests two things: a) using source-level categories understates the scale of misinformation on social media, but b) it's really not obvious that this is a problem you can "solve" with stuff like fact-checking claims or down-ranking bad domains. sometimes when people post, they get stuff wrong!
comment in response to
post
in the paper, we see that this happens a lot: people who share information from unreliable sources *also* share a lot of information from reliable sources, but they share *different* information from those sources and are often *repurposing* that information to advance false claims
comment in response to
post
and people who want to promote misleading claims would much rather use reliable sources to do so, precisely because they know other people see those sources as more credible
comment in response to
post
information's value *depends on how it is used.* misleading claims are all the more persuasive when they're based on true information.
comment in response to
post
but this is limited for a few reasons:
- unreliable sources get *much* too little share volume/traffic to account for the amount of misinformation that seems to be circulating
- unreliable sources publish lots of true claims, reliable sources can publish false claims
- and, most importantly...
comment in response to
post
if you've ever tried to study misinformation, you know that measuring it is actually really hard to do at scale. standard practice tends to categorize either claims (true/false) or sources (reliable/unreliable) and then apply those categories to the data
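for what it's worth, the source-level version of that standard practice is easy to mock up — a toy sketch (the domain lists here are made up for illustration, not from the paper) of labeling shared links by the reliability of their domain:

```python
# Toy sketch of source-level misinformation labeling: classify a shared
# link by whether its domain is on a (hypothetical) reliable/unreliable
# list. Note what this misses: false claims built on top of links to
# reliable outlets all get labeled "reliable".
from urllib.parse import urlparse

RELIABLE = {"nature.com", "apnews.com"}    # hypothetical list
UNRELIABLE = {"fakenewsdaily.example"}     # hypothetical list

def source_label(url: str) -> str:
    domain = urlparse(url).netloc.removeprefix("www.")
    if domain in RELIABLE:
        return "reliable"
    if domain in UNRELIABLE:
        return "unreliable"
    return "unknown"
```

a claim-level approach would instead label individual statements true/false, which is far more accurate but much harder to scale — hence the appeal (and the blind spots) of the domain shortcut.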
comment in response to
post
also sets really bad incentives for anyone who opposes the protestors' goals!
comment in response to
post
I dunno I feel like the bigger issue here is trichotomizing red/purple/blue vs. looking at the actual policy differences across states (which I'm sure others are doing). wouldn't surprise me if those differences got stars but this is telling a very bundled story about what would be driving them
comment in response to
post
Like, the problem here clearly isn't that we passed a law trying to specify the circumstances under which the federal government can use the army to intervene in domestic affairs and weren't imaginative enough about how the word "insurrection" might be abused. bsky.app/profile/casc...
comment in response to
post
tbf this is more common in more professionalized general science journals, but probably not in the way the former member is imagining
comment in response to
post
Given a sufficiently tendentious opponent, you can't safely codify anything.
comment in response to
post
that's kind of what I'm saying
comment in response to
post
Blake isn't the first NYC-based candidate with a newsworthy bagel preference! www.bonappetit.com/story/cynthi...
comment in response to
post
by not knowing to shuck the tamale, Ford demonstrated that he was unfamiliar with Hispanic culture. by knowing that his bagel combo is gross and defending it, Blake is demonstrating fluency (and also guaranteeing the interview is shared widely)
comment in response to
post
fun implication that, like Musk, a non-trivial share want their current party to be replaced with a different/better version of that party
comment in response to
post
mark me down for is/neutral, fwiw
comment in response to
post
the beef is about jockeying for power within the Democratic coalition (as the post you're QTing implies, you don't really need to read the book to participate in the beef!)
comment in response to
post
it's amazing to me how people who use these models all the time think they can just prompt them with a plain-English description of what they want and it'll spit that out
comment in response to
post
Even if you accept the premise that we should be using large language models to evaluate government contracts, *this is a really lazy and error-prone way to do it*
comment in response to
post
the workflow here seems to have been "prompt an off-the-shelf OpenAI model for zero shot classification with no validation set, make the prompts increasingly ad-hoc and elaborate to correct for specific mistakes. only use the first 10k characters to stay within the context window"
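to make the problem concrete, here's a minimal sketch of that workflow — the names are placeholders, not the actual script, and `call_model` stands in for an off-the-shelf chat-model API call:

```python
# Sketch of the described workflow: zero-shot classification with an
# off-the-shelf model, no validation set, and inputs silently truncated
# to fit the context window.
CONTEXT_LIMIT = 10_000  # characters, per the described workflow

def truncate_for_context(text: str, limit: int = CONTEXT_LIMIT) -> str:
    # Drops everything past the limit -- long contracts get classified
    # from their opening pages only.
    return text[:limit]

def call_model(prompt: str, document: str) -> str:
    # Placeholder for the off-the-shelf API call; in the described
    # workflow the prompt grows increasingly ad hoc to patch specific
    # mistakes, with no held-out validation set to measure accuracy.
    raise NotImplementedError

def classify_contract(text: str, prompt: str) -> str:
    return call_model(prompt, truncate_for_context(text))
```

the truncation alone means any contract whose substance lives past the first ~10k characters is classified on its boilerplate.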
comment in response to
post
I get the impulse to assume that, because it's funny and stupid, it must be a planned distraction from substantively bad stuff about the budget bill, corruption, etc. But their fight is also *about* those substantive things. It can be funny and stupid and also important.
comment in response to
post
*Murphy's own post* highlights the winning message here: "two billionaires arguing about who gets the bigger share of the corruption spoils"!
comment in response to
post
Murphy's *own post* shows how easy it is to tie it all together ("two billionaires arguing about who gets the bigger share of the corruption spoils")!
comment in response to
post
I'm really not sure what Bluesky is supposed to do differently as amateur Democratic postsoldiers. This episode is both funny and easy to tie in to politically advantageous narratives. I don't see how it's ~savvy~ to scold internet randos over their lack of message discipline here.
comment in response to
post
Musk's position is that he bought the 2024 election fair and square, and is therefore entitled to draconian entitlement cuts. Trump's position is that Elon Musk is a creep who's only gotten this far on government cheese.
These all seem like perfectly fine arguments for Dems to keep in the news.