In group supervision one EP brought a fascinating question: "What are the ethical considerations of using ChatGPT/AI in our work?" A stand-out comment for me was from one of the co-supervisors: "I do the brain work, it does the leg work." #edpsychs I'd love your reflections. @digitalep.app @edpsy.bsky.social
Comments
We don't allow psychs to quote computer interpretations of testing, even when attributed. We don't allow computer-generated AI/LLM therapy. Could you use it for notes? Dunno; is the LLM using your info as part of its learning data?
Although not the ethical implications, these were the pros and cons from my presentation over 15 months ago... still very relevant.
The biggest one for me as a profession is the misuse of AI. Many will misuse it, and I see it as an extension of what we do... not what we do...
Calm Connor will use AI fine, but Stressed Sally needs a person to co-regulate with before learning.
I think TAs do more than AI ever will be able to.
My analogy is it’s like having a bread-maker machine. You decide what to make, choose the recipe, get the exact ingredients, creatively tweak it. The machine streamlines the process to save you time 🍞
But first, a disclaimer.
Perhaps my scepticism is as described by Douglas Adams:
1/
And AI was created after I was 35…😆
But…I just don’t buy the alleged distinction between brain work and leg work for genuine EP work.
Ok, so there are some things EPs do which don’t require brains: 3/
But the EP job is brain work. It's about thinking and interacting, and AI cuts across both. The brain work/leg work split confuses outcome with process; for EPs, process is inseparably tied up with outcome. 4/
5/
#edpsychs, particularly independents, may find this podcast interesting. It's given me reassurance, loads to think about, and as a bonus has introduced me to the word 'enshittification', which I love!!