There is a strange kind of online crowd that defends it as "resource sharing" despite the environmental cost and the abusive labour supporting it, one I almost got wrapped up in myself. It's mind-boggling
they can't decide for themselves
ultimately it comes down to their training data and the rules and safeguards implemented into them
and we're well aware of the main type of people working on generative AI right now
Aren't they deciding for themselves what to say or create when they are asked a question or asked to create something? Of course it is based on what they learned, but isn't the whole point of AI that it's different from just programming a result?
not really
at least not yet
LLMs just try to predict what should be said based on the data they have access to and the data they've been trained on
they don't "decide", they just try to say what makes sense based on what they've learned
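that "predict the next thing" behaviour can be sketched with a toy word-frequency model. this is just an illustration, not how a real LLM is built: real models use neural networks over tokens at enormous scale, but the core move of "output the statistically likely continuation" is the same idea.

```python
from collections import Counter, defaultdict

# Toy illustration (NOT a real LLM): count which word follows which
# in a tiny "training corpus", then predict by picking the most
# common continuation. No deciding happens anywhere, only counting.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically likeliest continuation of `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in the corpus
```

swap the counting for a neural network, the words for tokens, and the toy corpus for a scrape of the internet, and you get the gist of what "generating" actually means here.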
I always get confused by this. If you are right, and I'm not saying you aren't, isn't it misleading to call them AI? Isn't the whole point of something being AI that it can decide things for itself?
Yes, it is incredibly misleading, and Silicon Valley loves to lie about what it is they’re actually selling so that they can raise money—to the point that many of them have bought into their own bullshit.
I don't even like Ghibli but... Yep! Art without artists is a fascist's wet dream. You can see it in how art is ALWAYS one of the first things they defund. Why deal with pesky leftoid artsy fartsy types when you can feed their life's work into a machine and have it create your propaganda for you? 🙃
https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/
and related Acid Horizon podcast:
https://bsky.app/profile/acidhorizon.bsky.social/post/3lkvyadnqcc2o
Buying into your own bullshit appears to be the entire point of Silicon Valley