I realized something recently: we're building AI tools completely backwards.
- We use them to short-circuit the human cognitive reasoning (and learning) process
- We use them to silo people rather than bring them together
- We then feed broken data back into the AI and exacerbate the cycle
We're building anti-human tools.
While I was aware that there's some de-skilling happening already, I finally clicked a bunch of pieces together and decided to write this to talk about how we can do *better*.
To no surprise if you've been reading my stuff lately, it involves *gasp* embracing collective learning and augmenting human collaboration patterns rather than breaking them.
https://hazelweakly.me/blog/stop-building-ai-tools-backwards/
(Also... ~4k words and no swearing, somehow?!?)
Comments
I’ve been trying to make similar points from the cognitive systems engineering stance (https://ferd.ca/ai-where-in-the-loop-should-humans-go.html).
I find the misguided myth of “it will just keep improving; fully automated replacement will happen soon” to be incredibly charismatic and hard to dislodge in people’s minds.
Too much focus on the objects, not enough on interactions.
I think we can use it better if we try.
I use AI for coding, but mostly to do the tedious stuff.
I use AI to build trivia quizzes because my wife needs them (every question gets reviewed by a human).
I used AI to determine room dimensions from pictures!
You should start a startup and build, e.g., an IDE that works like this.
Also liked the specific worked examples of good AI interactions that support humans.
Which is really kinda sad, but that's capitalism.