andrewbuzzell.bsky.social
@uwo postdoc. social epistemology/technology/ethics
23 posts 36 followers 128 following
Getting Started
Active Commenter
comment in response to post
yes!
comment in response to post
it would be interesting if running could support a high-quality, independent, audience-funded outlet like cyclists have in @escapecollective.bsky.social
comment in response to post
cite is from a natpost "review". found this masterclass in misleading citation in the speccie:
comment in response to post
so we are somewhere in between embrace and extend but a bit before extinguish en.m.wikipedia.org/wiki/Embrace...
comment in response to post
that's super interesting! is it possible to share a bit more about this? i'd love to use it as an example in my class
comment in response to post
and the new "i'm feeling lucky" standard
comment in response to post
Apparently, Russian propaganda directly worked to secure the election of a specific candidate in the U.S. elections in 2024.
comment in response to post
it’s like a hack-and-leak but from the inside out.
comment in response to post
wow nice photo too!
comment in response to post
100%, just like the old google. (leaving to one side that llms aren't info retrieval systems). and the ux will devolve in the same way for the same reasons.
comment in response to post
one thing i puzzle over - where will the new sources come from? "OpenAI says that Deep Research is trained to select solid, reputable sources"
comment in response to post
reminiscent of the effort aimed at the syria civil defence, which used youtube/twitter very effectively. much of what we know about how that worked was from the streaming api, which no longer exists. that this is still "dark magic" all these years later is such a failing.
comment in response to post
a betrayal of the distinguished tradition of subprime mortgage-backed security safety and radium health & beauty safety!
comment in response to post
i think you are right. somewhere in the mix of "what is special about the potential social risk of ai" is the affordances we are giving it as a regulative technology. l'état, c'est moi, translated to machine.
comment in response to post
4/ The biggest surprise? Media indoctrination and civil liberties repression are the most predictive of autocratic survival. These findings have big implications for “information autocracies” in the digital age.
comment in response to post
reminded me of the “human encyclopedia” from frasier
comment in response to post
this sounds amazing! do you have a syllabus you could share?
comment in response to post
super interesting! i wonder if the persuasive effect is parasitic on something like an automation bias - that there is some trust being bootstrapped by the setting's interface?
comment in response to post
100%, epistemic paternalism sounds great when you like the paternalist
comment in response to post
i think you are correct wrt information. but i think mis/dis info talk often masks worries about propaganda and various concerns about deceptive influence. these are much harder to articulate in a neutral way, whereas mis/dis info talk perhaps appeals as "debugging" the infosphere.
comment in response to post
ha amusing query. i have a transcript from a cbc radio program about mysticism in canadian art that i've been schlepping from machine to machine since 2002
comment in response to post
you know this one already but i teach this in an interdisciplinary ai ethics seminar and it gets great engagement: muse.jhu.edu/pub/1/articl...