I think the decision to use LLMs is a decision to build a worse world, one where people seem less valuable and where the valuing of products over processes is exacerbated. Kevin is a good dude; he's not wrong about these uses. But I wish we were trying to build more caring, even if less efficient, worlds.
Reposted from Kevin Zollman
There's so much polarization around LLMs. They are way overhyped, I agree. But I also use them semi-regularly now.
Here's a thread of genuine use cases where I find them helpful. Please add your own!
Comments
https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment
And yes, it's all based on a lot of guesswork, because LLM companies aren't disclosing details. But I haven't seen any estimates that are orders of magnitude higher. So at best the climate argument is inconclusive.
A more caring world? That’s subjective and ends in semantic purgatory.
I want this more caring world too, and Kindness.
My caring Kindness is hot tea, but yours might be No Genocide. 🫱🏼🫲🏽
(mine is no genocide too)
Some ways of using them undoubtedly make the world worse, of course.