teddyroland.bsky.social
PhD Candidate, English, UC Santa Barbara. American Literature, Media Theory, Data Science. I publish under "Edwin Roland" but don't tell anyone.
252 posts 273 followers 139 following

writing about tacit knowledge, “know-how” got autocorrected to “knowledge-hoe” and the sentence attained its truest form

Amazing set of teaching resources!

Every day tens of millions of people on earth have a birthday and NO ONE invited me

The @uofcalifornia.bsky.social and @oxfordunipress.bsky.social have signed a four-year transformative open access agreement. The agreement enables UC corresponding authors to publish open access in nearly 500 OUP journals at reduced or no cost. Full announcement: ucla.in/3QdW3Hz

“The US of AI,” public draft of a talk given yesterday at Princeton. drive.google.com/file/d/1O2qk...

Getting genuinely excited to revise diss chapters

A student comment in class the other day got me wondering: has anyone written about how ergodic literature—e.g. *House of Leaves*, *XX*, &c.—intersects—or doesn’t—with accessibility? While material texts folks (like me) tend to love those works, they lean so heavily on the visual

The punchline to this strip is one of my favorite jokes ever. I legit work it into conversation whenever I can.

The great thing about misinformation is that it articulates the space of institutional failure

Chapter XXV: The Dynamo and the Virgin

- Needed to learn something new but didn't have the right language to ask it
- Chatted with GPT until it named my missing piece
- Keyword search (on Bing lol) for the missing piece
- Browsed to 5-year-old StackOverflow page with a link to a 14-year-old blog post
- Blog post confirms what GPT wrote!

Five (5) guaranteed panels on AI & pedagogy at next year's MLA. It's as if there is some kind of shared, urgent concern or something.

Posting my new headshot

Seeing a lot of emphasis on the "numerical representation" of text in LLMs (and other media in multimodal models). That can be a useful way to think about AI's difference from our concept of language. I'll throw another in: the plugboard. (1/3)
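To make the "numerical representation" point concrete, here is a toy sketch (my own illustration, not any real model's tokenizer): before an LLM ever "reads" text, each word is replaced by an integer ID, which in turn indexes a vector. The `build_vocab` and `encode` helpers below are hypothetical names for this two-step mapping.

```python
def build_vocab(corpus: str) -> dict[str, int]:
    """Assign each unique word an integer ID, in order of first appearance."""
    vocab: dict[str, int] = {}
    for word in corpus.split():
        vocab.setdefault(word, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Replace each word with its numeric ID (unknown words become -1)."""
    return [vocab.get(word, -1) for word in text.split()]

corpus = "the dynamo and the virgin"
vocab = build_vocab(corpus)       # {'the': 0, 'dynamo': 1, 'and': 2, 'virgin': 3}
ids = encode("the virgin and the dynamo", vocab)  # [0, 3, 2, 0, 1]
```

Real tokenizers work on subwords rather than whole words, and the IDs feed into learned embedding vectors, but the basic move is the same: language in, numbers out.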

The Bancroft Library at UC Berkeley to house the Amy Tan archive (where it will join those of some other luminaries including Joan Didion, Lawrence Ferlinghetti, and Mark Twain).

One of the exciting DH developments in the last few years has been the creation of data collectives and dataset reviews. Let's do them for models/apps now! Journal of Open Humanities AI! Out-of-Copyright LLM Collective! Make everything! And then cogitate on it!

I wonder if the wave of programmer firings because of AI might proletarianize that profession in a way that could distribute power differently?

A metaphor that immediately jumps to mind is unbundling cable channels. A UaaS approach progressively disentangles uni "services" at lower and lower levels: degree > major > course > assignment. But cable bundles are also sticky. Netflix promised to slay them, then accidentally remade them! (1/3)

In spite of it all the research group I'm leading at Cornell is hiring a postdoc! We're looking for someone w a lit/cultural studies + DH background, esp those who employ data-driven or computational methods. 2-yr position, 1/1 teaching load. Apps due 3/21 academicjobsonline.org/ajo/jobs/29746

The New York Times adopts AI tools in the newsroom

Epistemic humility is applying "a broken clock is right twice a day" to every instance of conventional wisdom and critical theory

Dear friends, there are three weeks until the annual travesty of daylight saving time. Please incrementally shift your nightly sleep schedule accordingly. No one wants to be a zombie.

When my friend and #HistSTM colleague Ann Johnson passed away in 2016, she had this book in progress. I'm glad to see that with some co-authoring help, it's out in the world via @mitpress.bsky.social mitpress.mit.edu/978026254823...

Piracy and memory in AI: a number of LLMs are trained on the Books3 archive, a pirated set of tens of thousands of books. It turns out that LLMs trained on Books3 are indeed better able to recall details of those books, and that this effect is much larger for less popular books.

I reviewed "Neural Networks" by @ranjodhdhaliwal.com, Théo LePage-Richer, & Lucy Suchman for @criticalai-journal.bsky.social Big thanks to @dan-sinnamon.bsky.social who commissioned it and @pamelakgilbert.bsky.social and Lauren Goodlad, who saw it through criticalai.org/2025/02/11/s...

I had the pleasure of teaching a summer course with this program last year. You should apply! Ping me if you want to chat about it

It's sort of overkill but... what if you use CoT to predict the next word in a sequence? If you recurse on it, the model would generate a fractal sort of writing. The reader could slide along a surface text or fall as far as they like into self-similar structures beneath.
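The recursion described above can be sketched in a few lines. This is a hypothetical illustration of the structure only: `expand` stands in for a model call (e.g. a CoT prompt that elaborates one word into several), and `fractal` builds the nested, self-similar text the post imagines.

```python
def expand(word: str) -> list[str]:
    """Placeholder for a model call that elaborates one word into several."""
    return [f"{word}-a", f"{word}-b"]

def fractal(words: list[str], depth: int) -> list:
    """Pair each word with a recursively expanded sub-text, until depth runs out.
    The result is a tree: surface words on top, elaborations beneath."""
    if depth == 0:
        return words
    return [(word, fractal(expand(word), depth - 1)) for word in words]

tree = fractal(["surface"], depth=2)
```

A reader "sliding along the surface" reads only the top-level words; "falling in" means descending the tuples, each level generated by the same procedure as the last.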

The media forensics of my parents’ basement

Endorsing this “chef’s selection, designed as a balanced meal.” Esp The Windflower (1984), Indigo (1996), and Your Scandalous Ways (2008)

We're rolling out more and more romance coverage here at The Times — so we're collecting it all on a fabulous new page. #romancelandia

I built a bluesky bot that monitors when Federal US .gov domains are added or deleted. bsky.app/profile/fed-... It will update 4x a day and help you notice things like waste.gov or dei.gov (both just registered) or that PSLF.gov was removed. Share widely and feedback appreciated!
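One plausible mechanism for a bot like this (my guess, not the author's actual code): periodically snapshot a published list of federal .gov domains and diff it against the previous snapshot with set arithmetic. The `diff_domains` helper and the sample snapshots below are illustrative; the example domains come from the post itself.

```python
def diff_domains(previous: set[str], current: set[str]) -> tuple[set[str], set[str]]:
    """Return (added, removed) domains between two snapshots."""
    return current - previous, previous - current

# Illustrative snapshots, using the domains mentioned in the post.
yesterday = {"nasa.gov", "pslf.gov", "usda.gov"}
today = {"nasa.gov", "usda.gov", "waste.gov", "dei.gov"}

added, removed = diff_domains(yesterday, today)  # ({'waste.gov', 'dei.gov'}, {'pslf.gov'})
```

Running this on a schedule (the post says 4x a day) and posting any non-empty diff would reproduce the bot's behavior.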

Really powerful models are getting cheaper and cheaper to build - genuinely, really cheap. Universities, do not tie yourselves to OpenAI. We can and should use and develop open source only. (Desperately hoping that my own uni is listening)

Unless you work directly with LLMs, it's hard to appreciate how quickly things change. This week I'm revisiting some code from Fall '23. I updated a few libraries, only to find that they are 100x more efficient than before. Not an exaggeration. The *same program* requires 1% of the GPU memory/energy!!