It’s hard to write about what you don’t know. I’m reminded of my early years teaching, when Ss had to write about Odysseus. Page 998 in the textbook had a great example of why Odysseus displayed poor leadership: I could help a S find evidence, but they didn’t do the thinking. Same fear with AI now.
I’ve found that evaluating a student’s AI prompt, rather than the AI output, is a good way to gauge their understanding of concepts, even if their writing skillset needs work.
AI can also be a good feedback tool for their actual writing. I’ve trained my AI model to offer students a “polished version” of their writing to help them develop skills.
For students, absolutely, but this makes me think of my own use of AI for brainstorming.
The key difference, I think, is that I am doing so as an expert on my own thinking, refining my thoughts through a faux conversation exercise that the machine facilitates.
Yup. I tested an assignment today and AI fabricated personal anecdotes about a visit to the National Museum of African American History and Culture. It fabricated 7 pages of a "personal narrative" and "reflection" for a high school senior.
It can sometimes…and AI should never be the starting point.
But students can also learn to use AI to help refine, expand, or focus their own ideas during this phase, or to point out avenues they had not considered, which in turn gets new ideas going in their brains.
Theoretically, sure. But I think classrooms should facilitate conversations that do this work before machines do. I also think students should know how to refine, expand, and focus as skillsets before they consider AI an option.
Definitely want them doing that. But honestly it’s been cool to experience as a writer, and to talk to kids who have felt the kind of unfurling effect AI can have as they are developing ideas. Machines poking and pushing one’s own brain to go deeper. Not all use of AI is equal.
Genuinely curious (there's no way to ask this without sounding judgy): how do you explain to students the ethical and environmental implications of their AI explorations? How do YOU square those implications for yourself?
Hey, that’s ok. I like curiosity! We’ve all got a footprint. I compost trash in the backyard for the garden, rescue baby trees and find them new homes, almost never fly, eat beef a few times a year, thrift most of my clothes and avoid fast fashion, don’t run my car in the pickup line…
But I also use AI a few times a week for meaningful pursuits. My decision has been to model my thinking for students (I talk about composting too sometimes!) and hope that my small part of their moral development has some impact. I also hope systems commit to building in reading about AI and…
I worry deeply about the demographics of students who are being encouraged to use/rely on AI, and about which students are learning concrete skills in the age of AI. I have my suspicions. I also don’t think we’re doing enough talking about the environmental impact. It’s hazardous at best.
I call all of these “tentacles” of the issue. They matter! Like so many issues, this one is many-tentacled.
As far as the writing instruction tentacle goes, though, I’m finding that coexistence with this technology is not all bad. “Use AI” is such a broad term, covering both helpful and harmful acts.
Agreed, though I’m not sure that discernment is definitionally absent in all LLM usage.
I think there’s value in brainstorming with other people for the social-discursive aspect regardless of any other “value”. Increasingly, I think the post-LLM value in school will be found in the social pieces.
Comments
The key difference, I think, is that I am doing so as an expert on my own thinking, refining my thoughts through a faux conversation exercise that the machine facilitates.
Students aren't there yet.
Brainstorming is thinking work. Outsourcing thinking work to AI for kids who should be developing thinking skills is so bananas.
I can't believe this is so widely accepted rn.