hpstorian.bsky.social
cognitive history - methylphenidate prose - news from nowhere
573 posts
552 followers
310 following
Regular Contributor
Active Commenter
comment in response to
post
It's not particularly polished. Still "why should I share the napalm guilt of Uncle Sam?" holds up as a question.
And there's a good anti-cop song on the first record.
comment in response to
post
This one is going to be interesting...
comment in response to
post
I think there was recently released research indicating the social cost of disclosure. This sits alongside the data on just how much it is used as a social mediator. I've written elsewhere that I think this will drive companies to provide ways to make use increasingly invisible (smart glasses, etc.).
comment in response to
post
I think one part of it is that many of us realise that even when it "works" it is in our interests not to make that usefulness apparent.
I used it to make something at my work easier (saving me and my colleagues time) but we weren't expected to work less. Use+conceal is the rational choice.
comment in response to
post
Today I opened my workshop on reference management by playing Jay-Z's "What More Can I Say".
I asked what referencing is for and the response was confusion.
That's a problem that preceded them.
comment in response to
post
This frame brings us in on one side of fights between Disney & OpenAI, NYT & Microsoft.
That game is rigged. Not in our favour.
Maybe we should rethink playing?
comment in response to
post
I'm convinced that so much of the academy's anxiety about genAI comes down to the way that authorship is configured as something individual, something for the accumulation and guarding of credit rather than the representation of intellectual networks.
comment in response to
post
Thanks again for a great workshop. Definitely a symposium highlight.
comment in response to
post
We read extracts from these (amongst others) today.
The cohort of students signing up to a course ostensibly about AI have adapted surprisingly well.
comment in response to
post
Wonder if anywhere near me offers banana and custard delivery.
comment in response to
post
Then, as I was walking over to my tutorial, a student approached me and asked me to explain something from the optional reading (all readings in the course not done in person are opt-in), then asked if it was okay to share it with other people. Yet again I gained back a little bit of faith.
comment in response to
post
Argh you just unearthed an unholy craving for banana custard in me.
comment in response to
post
"I seek essays!" the academic cries but no one takes him seriously. "Maybe they are lost to the LMS? Uploaded into the void? Maybe he's afraid of us (markers) and is hiding?"
Admin laughs.
Frustrated, the academic raises his voice "the essay is dead, and we have killed it, you and I!".
comment in response to
post
The essay is dead. The essay remains dead. And we have killed it. How shall we comfort ourselves, the murderers of all prose?
comment in response to
post
As to the death of the student essay: if you've got a spare hour, I think most of what James and I said in "The Essay is Dead, Long Live the Essay" holds up.
youtu.be/BgAhfVnXwD8?...
comment in response to
post
The EEG data does show interesting differentiation in ways that seem to be experimentally sound but so many of the conclusions that are drawn from it are absurd leaps.
comment in response to
post
That’s the real cognitive debt here. Not what happens when students lean on AI, but what happens when we collectively lean on numbers.
When we mistake what’s visible for what’s meaningful.
The University's promise, corrupted by maths.
comment in response to
post
When we listen to that story, we see students experimenting, resisting, feeling guilt, setting boundaries for the tool, expressing discomfort and doubt.
But that messy, evaluative, moral terrain? It doesn't graph easily.
comment in response to
post
I don't think this article tells us much of value about the cognitive effects of AI use, but it makes it clear that the problem is more than AI: it is the reason that students use it.
It also shows hope: even under pressure, students (using LLMs for the first time) still reached for criticality.
comment in response to
post
But it's in the article, sort of:
“Time pressure ... drove continued use, ‘I went back to using ChatGPT because I didn't have enough time, but I feel guilty about it’, ethical discomfort persisted: P1 admitted it ‘feels like cheating’.”
The qualitative responses show student criticality.
comment in response to
post
Looking at the conversations we see a great representation of the driving factors behind today's essay writing, and the accuracy of the simulation.
While everyone on both sides waves around brain maps - phrenology back like it never left (because it didn't) - the students themselves are ignored.
comment in response to
post
The students in the study were ushered into a room, handed a list of wordy questions outside their discipline, and given 20 minutes to answer them while wearing an EEG headset.
They were reminded at 10, 5 and 2 mins that time was almost up.
First glance: not an accurate simulation of essay writing.
comment in response to
post
What's debt? It's a promise perverted by maths (to paraphrase Graeber).
What is cognitive debt? Well it's a concept borrowed from business to point to the long term costs of short term metric driven efficiencies.
Quantification. Learning derailed by maths.
comment in response to
post
Likewise. I wish I could think about this stuff less, but I think its impact is ultimately going to be less a question of direct replacement (humans replaced by AI) than of augmentation (one human who uses it replacing others).
comment in response to
post
While I see what you're saying I think that appealing to people as workers is more likely to galvanise support than appealing to them as consumers.
Self-serve checkouts seem to say as much.
comment in response to
post
There's definitely a lot of snake oil getting sold. Yet I think that by and large that's only the tip of the problem.
The change isn't happening because LLMs are as good as humans. It's happening because they're good enough in many situations.
comment in response to
post
If an LLM outperforms humans in low-empathy, high-script tasks like refund policies, appointment scheduling, etc., what then?
Because that is already arguably the case.
I agree that LLMs are bad at complex conversations/emotional reading, but is that the core problem?
comment in response to
post
We know that one thing they can do is talk, which is what they're designed for.
The LLM taking over at a drive-thru window might not be able to play chess.
But it doesn't need to.
Pointing to it and saying "it can't even play chess" is saving no one's job.
comment in response to
post
The more precise point here is that while AGI is not just around the corner, LLMs reason differently, incompletely, and opaquely. That's a more unsettling reality. For a range of reasons.
E.g. this thread:
bsky.app/profile/hpst...
comment in response to
post
In a lot of cases, it -can- and will do work for free. That's not a hypothetical, it's already happening.
I think that being clear-eyed about that reality is important.
Understatement as a response to hype doesn't address that problem; if anything, it stifles firm responses.
comment in response to
post
I try to keep what's left of my faith good, though my response to the OP could have been more gracious. Appreciate your reappraisal. It's refreshing; I'm not used to encountering people on the internet reorienting their stance in response to new angles.
comment in response to
post
Could you break the point down for me?
Was Deep Blue reasoning in 1997?
comment in response to
post
And as to why this is so wrong-headed here's one of the triumphal replies to the OP.
"Chess requires reasoning".
Sounds plausible if you only think about it for two seconds. But which is it? Was Deep Blue reasoning in 1997?
comment in response to
post
The issue isn’t that ChatGPT is coming for your bishop.
It’s the threat that multi-modal AI poses to your job, your arguments, your inbox, and your kids' homework.
The species of argument made by the OP is a pacifier.
comment in response to
post
Will an LLM ever become world-class at chess?
Probably not.
But could you build something that plays at grandmaster level and explains its moves while roasting your gameplay like an Xbox Live lobby in 2005?
Yeah. Easily.