aewolfai.bsky.social
I'll be posting my thoughts on various AI platforms for teachers and promoting good uses of LLMs. Read my thesis https://repository.usfca.edu/thes/1547/
56 posts 26 followers 16 following

I hate wholesale rejections because I think they ignore creative avenues for learning. If the real question is the ethics, would a more ethically produced AI like Bloom be a better alternative? If it's the pedagogy and "meeting students where they are at," what happens if that's -in part- some AI?

I disagree with the conclusions while agreeing with almost all of the premises. Let's talk about it if you have the time :) www.linkedin.com/posts/anthon...

On an entirely related note, I'm tickled by the fact that NNs were cracked by modeling minds and DeepSeek cracked the next step by referencing mentorship in its modeling. #educationAI

That the work is consistency checks means it's essentially the same problem every expert and company deals with when crafting the best response to a company's day-to-day work. We might have a better hint at how to better encode the output if we recognize, model, & automate the process of verifying

I'm a little hesitant to opt into DeepSeek experiments for the time being. The research looks good, but these policies are worrisome in the same way RedNote is worrisome.

For the layperson trying to get into developing AI or understand its applications, this is an incredibly insightful video on building with AI. I also recommend the Anthropic blog post it references in the details. As for me, I'm definitely workflow AI > agent. www.youtube.com/watch?v=tx5O...

I just heard about "dirty sodas" and this just sounds like 7-11 swamp water and mocktail soda lounges I dreamed of as a kid.

“LLMs amplify existing security risks and introduce new ones” Even Microsoft sees it. New paper at arXiv, discussed briefly below. open.substack.com/pub/garymarc...

People too often forget that the legal protections are made by people and are as easily unmade at the first inconvenience www.axios.com/2025/01/10/m...

The conflict of interest is the worst part about this thread, but I highlight the public school issue because no AI system will stop a fight between students. No AI system will sense the issues with bullying and harassment and intervene. No AI system will set up the environment for learning.

arstechnica.com/information-... The bias against non-native speakers in ChatGPT mirrors the way that ESL learners were systemically targeted when using tutors to help improve their writing. Elmesky's study on alleged "Cultural Mismatch" informed my thesis: doi.org/10.3102/0002...

Top comments for the post are 1) send articles that say AI detection doesn't work, 2) escalate to the dean, and 3) use AI detectors on unrelated original documents like the teacher's email. This is why the approach is so key: even though the teacher communicated the use of the tool, the tool itself is flawed

Before I forget, this is Alex Kotran's project: Aiedu.org. These guys have been on this for years, starting with machine learning bias, and their lesson material is great.

Still, when I'm about to train teachers, these questions should be addressed: How do you draw the line? What exactly is "ethical use"? What skills should we teach? And my favorite, inspired by Alex Kotran: what jobs should we be training the next generation to do?

When it comes to teaching with AI, recognize that it's not AGI and it's certainly not infallible no matter the safeguards. It's better to proactively guide students if they're going to use it anyway. We are long past the age when hushing ourselves will make the problem go away.

I would have written this study for my MA if I had unlimited resources. It's worth a read if you're doing AI policy for schools

While appealing, the positive impact carries a lot of weight to explore: teachers see schools as a way to better students, even liberate them with critical knowledge. Yet students and businesses compare grades to measure *talent* (not even understanding), because who would be excused for blaming their teacher?

So Bluesky apparently uses your device's system time to attach the time to posts. And as OP has demonstrated, this can be exploited.
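A minimal sketch of why this works: in atproto, the `createdAt` field of an `app.bsky.feed.post` record is supplied by the client, so a device with a skewed clock (or a deliberately chosen value) can backdate or future-date a post. The helper name below is hypothetical, for illustration only.

```python
from datetime import datetime, timezone, timedelta

def make_post_record(text, created_at=None):
    """Sketch of an app.bsky.feed.post record.

    The server does not stamp the time; the client fills in createdAt,
    which is what OP exploited.
    """
    when = created_at or datetime.now(timezone.utc)
    return {
        "$type": "app.bsky.feed.post",
        "text": text,
        "createdAt": when.isoformat().replace("+00:00", "Z"),
    }

# A post that appears to be "from" a year ago:
backdated = make_post_record(
    "hello from the past",
    datetime.now(timezone.utc) - timedelta(days=365),
)
```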

AI is a threat to human teachers only if we believe the act of teaching is limited to delivering information and watching kids practice skills. If we value the art of teaching, we can find ways for AI to support and enrich it. #edusky #AIineducation brokenhand.substack.com/p/what-we-le...

In a world where we expect the most motivating educators to inspire learning in kids regardless of circumstance, what exactly in "agent AI" can cover for the emotional support demanded of teachers and mentors? Spoiler: the answer is that it cannot

LLM summarizing for outlines can be great for students and chunking tools are similarly helpful. Pre-prompted API feels more and more needed the more I look at the problem

For being a Comp Sci MA, this shows the framework everyone should follow:
✅ RAG leveraging class material
✅ Explicit guardrails coded into the prompt
Now we just need to add streamlined use. Coders really want LLMs to be a car, when in reality it's still a train. Build rails.
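The two checkmarks above can be sketched in a few lines. This is a toy illustration, not the project's actual code: the keyword-overlap retriever and the guardrail wording are my own hypothetical stand-ins for a real RAG pipeline and a real course policy.

```python
def retrieve(query, class_material, k=2):
    """Naive keyword-overlap retrieval over course documents (RAG stand-in)."""
    q = set(query.lower().split())
    scored = sorted(
        class_material,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

# The guardrails live in the prompt itself: scope, refusal, and tutoring stance.
GUARDRAILS = (
    "You are a course tutor. Answer ONLY from the provided class material. "
    "If the material does not cover the question, say so instead of guessing. "
    "Guide the student toward the answer; do not complete graded work for them."
)

def build_prompt(query, class_material):
    """Assemble the pre-prompted request: guardrails + retrieved context + question."""
    context = "\n---\n".join(retrieve(query, class_material))
    return f"{GUARDRAILS}\n\nClass material:\n{context}\n\nStudent question: {query}"
```

The "rails" here are literal: every request travels through the same guarded template, so the LLM never sees a bare student question.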

Gemini aggressively passing the buck for not being able to read my messages will never not be funny.

The parallels to the OCPL fiasco continue to intensify 🥲 docs.edtechhub.org/lib/9BRCRSSN

The examples are the most telling and useful:
No AI: skills cannot be evaluated while using GenAI (11)
AI Planning: the decision skill being evaluated is separate from the AI inspiration (11-12)
AI-Assisted Tasks: using AI as a lab writeup tool b/c the lab is graded > independent writing 1/2