mblack.us
Associate Professor @ UMass Lowell. I teach writing and study the history of computing. Links to pubs & book: https://mblack.us
60 posts 57 followers 44 following

Was at a conference dinner a while back and one guy said "The future is AI. It doesn't replace senior developers, but no more need for juniors." To which I replied: "Where do you think senior developers come from? Straight from the womb?" He went quiet, then wandered off to a different huddle.

I think we should have zero sympathy for the people who foisted this technology onto us, but we should give some grace to people who are relying on what they have been told is a miracle technology. I think we'll hear more stories like this as people eventually realize it's not good for them.

Heartwarming: The college freshman featured in the NYMag ChatGPT article who said she watched TikTok until her eyes hurt has since posted updates from her Reddit account (her username was included in the article). She said she is no longer using ChatGPT and finished her latest semester without it!

When I talk about AI and education, I always argue that we need to make a distinction between how AI *could* be used and how it's likely to be used and this article is a great illustration of why these are two different topics. nymag.com/intelligence...

i think the biggest public relations coup AI boosters scored was calling it “AI.” people genuinely think it is an intelligence, and that when they query it, it is providing reasoned answers

An effective institutional strategy to combat AI-based cheating (in a world where there was institutional will for such a thing) would require drastically shrinking class sizes, so teachers could get to know and work with each student individually.

Thread with examples of how students can get inundated with “use AI to cheat” content on their social feeds

stop. calling. them. hallucinations. hallucinations are perceptual events resulting from an improperly working system. output produced as a result of how the model is explicitly developed to produce output cannot be a hallucination. this is simultaneously anthropomorphic AND dehumanizing language

Amazingly, reaction times using screens while driving are worse than being drunk or high—no wonder 90 percent of drivers hate using touchscreens in cars. Finally the auto industry is coming to its senses. Real buttons are sooooooo back baby!

It's like every website, app and piece of software has developed its own Clippy. Clippy is following me around every day from Google to Zoom to Adobe Acrobat, telling me it looks like I'm trying to exist and would I like help with that

Stepping in here to defend Ann’s statement, which I’ve now seen framed this way by several technologists. Ann is an author. Her context is about usability, not technical capacity. On the merits of her context, she is correct. Technical specificity does not change her conclusions.

I've seen this post come up several times in my feed and each time, grown frustrated by all of the "Um, actually..." replies. Whether or not these models perform a web search before generating text does not change the fundamental problem Leckie is getting at here.

lmao

just gonna re-up this today for no reason www.vice.com/en/article/h...

Shoulda done legs first. You know, build from the ground up.

I was trying to vet a new textbook this morning by flipping through an e-book copy. Almost immediately, I get a pop-up about genAI "practice questions" and a little box keeps manifesting in the corner alerting me to new questions as I scroll through it. Are all e-textbook platforms like this now?

When I try to tell administrators or AI fetishists in ed tech this, they assume I'm some kind of luddite who only uses a quill and ink to write by candlelight. I LIKE technology, I use digital methods all the time. I just don't get what gAI offers historians, except in some select cases.

"We Now Know How AI 'Thinks'—and It's Barely Thinking at All." Maybe you've heard that AIs are "black boxes," but a growing body of research keeps arriving at the same conclusion: today's AIs all work in surprisingly similar -- and simplistic -- ways. 1/2 www.wsj.com/tech/ai/how-...

I'm just getting started on this piece, but I see the AI maniacs have yet another argument to throw at the wall, which is essentially, *Skeptical educators need to learn a whole lot of AI tools or they won't be able to talk to their students about why NOT to use them.*

I'm gonna take this seriously here for a moment because I understand that I often don't make sense. You see, pointing out that someone is bragging about using an ecological disaster to write an email is not the same as defending the sanctity of email. It is in fact saying an email is not that important.

LLMs are nothing more than models of the distribution of the word forms in their training data, with weights modified by post-training to produce somewhat different distributions. Unless your use case requires a model of a distribution of word forms in text, indeed, they suck and aren't useful.

You know how we often complain that the left or the dems (which aren’t the same I know I know) don’t stand FOR anything, only against? This is low hanging fruit. Be for whatever AI is against. Just as practice!

In which, as a words guy, I read way too much into Google AI pretending invented idioms are things people actually say

I've tried to stay out of this because it is particularly popular among groups of people that I love and respect. But the courses and coaching are just pure grift.