One reason I don’t like genAI is that I think people have gotten way too comfortable passively consuming huge quantities of media, and I think genAI is another big stride toward anti-intellectualism and abdicating thinking
Comments
One of the reasons I do like genAI is that I really enjoy active thinking, and it gives me a conversation partner that never tires of my obsessive hyperfixations and weird curiosities.
Your concerns are valid and correct, but I don't think anti-intellectualism is going to stop regardless.
Anyone who cares about the environment should dislike it as well. The amount of energy it consumes is enormous, and most of that energy is coming from fossil fuels.
And instead of just slapping together a field of solar panels to power it, a lot of AI companies are talking about nuclear, which is far more expensive and takes a lot more time to implement. So in the meantime -- burn, baby, burn.
I think a lot of tech billionaires had to choose between fighting climate change and playing with AI, and they chose playing with AI and essentially gave up on the climate with the excuse that "Maybe AI can think of something..."
Unfortunately, we already know how to fight climate change -- burn fewer fossil fuels. AI is the opposite of that, but there's money to be made, so for tech billionaires that's always going to be the bottom line.
A lot of times when people talk about what they use genAI for it’s like “oh I don’t have to think” or “I can do tasks much faster” and I don’t love that
I can't help but think back to when I was in elementary/middle school and they would let us use calculators in math class, but "only to check work, not to be dependent," and this seems like the antithesis of that
People at work, including my boss, complain and make fun of people's short attention spans but then want to implement AI in everything, and I'm like, um, you do see how wanting a machine to write and summarize all your emails and meeting materials is causing and perpetuating that issue, yes?
That is quite literally what we hear from young people. Basically, it's outsourcing thinking. They don't have to do the difficult work of drawing their own conclusions or overcoming the blank page.
My wife's work is trying to experiment with using AI for search and summarization of their massive document library. The hallucinations are a massive problem. People are failing to double check the AI, and the AI is consistently wrong.
The one woman from The Five (some talk show, idk) got called out for bad info, and she abdicated all responsibility by blaming ChatGPT and not the words leaving her mouth.
I'm old and I remember when computers first came into general business use in the 1970s and people immediately started blaming the computers for everything and evading responsibility.
A client at work handed me something they had written. I struggled to find the diplomatic words, and so suggested instead that they were proficient in a language I did not understand and perhaps I was not the ideal person to deal with this. They then admitted using ChatGPT to create the gibberish.
In theory, yes, but when I talk to people about how they use AI they often cite tasks that are creative versus mundane—I also think there is value to performing mundane tasks, including being more thoughtful about the entirety of a project.
Gotcha. I see the potential for loss of individual creativity but I feel like it also has the potential for fueling unbridled creativity and development. Whether that actually happens, who knows. Especially if it remains in the hands of the elite.
Of course there's the possibility of technology improving standards of living, but the history of industrialization is also the history of political control over the form and function of tech. The problem with AI as it stands is that the timescale of capital is much longer than that of people.
Corporations and the state are a sort of AI, a primitive form to be sure, composed of neuronal networks of people and with dynamics that evolve in relation to their environment. We observe changes in people's behavior in line with the imperatives set by these agents over long timescales.
We've observed the consequences of propaganda, or of the effect that machines have had on the labor process. We observe what consumerism has done to promote a certain sort of culture.
Maybe I’m still ignorant about AI, but all I’ve seen so far is low-quality plagiarism from internet sources. They are creating a huge empty bubble once again.
I saw an argument that regulation would itself just generate more issues because Big Tech is basically government now. Regulation would only be designed to keep smaller innovators from coming in to solve the issues Big Tech AI is creating.
Yep - slowly folks yield their most intimate conscious power and happily watch the screen:
“The earth hath then become small, and on it there hoppeth the last man who maketh everything small. His species is ineradicable like that of the ground-flea; the last man liveth longest.” FN
I've had the conversation many times. If used as a resource aggregator or in support of original thoughts/plans/projects, it can be really helpful. But so many just use it in place of original thinking/creating. It'll compound until it does everything and people do nothing.
My biggest concern is that when Gen AI can produce new content about as rapidly as it can be consumed, it will combine with short-form content and analytics to make a never-ending entertainment machine for the terminally brain-rotted. It will be an ever-refining, perfectly tailored imagination machine that perfectly responds to user preference even as it influences and corrupts them. Combine that with the bad actors that actively attempt to radicalize people and you’ve got a really, really big problem on your hands. The BEST case scenario is permanent brain rot. Best case.
I feel like this is bad on its own, but the fact that it generates nonsense so often means it's not worth the many billions of dollars, the CPUs we could use for other shit, the excessive power consumption, and everything else.
They're not. People should measure the emissions cost of whatever they're doing with the tool vs. doing it without. It's an easy comparison, and it fails 99% of the time.
The fact that some of the biggest pharma companies are scaling down their workforces in favor of AI should scare the fuck out of everyone, but sadly it is only me yelling about this.
I was a dyslexic child and learned to write cursive first because you don’t see cursive in reverse. It took me until I was in college before I felt comfortable with print letters and words.
Regardless, I don't think anything we do is going to slow it down, so I'm trying to utilize it for whatever benefits it may offer.
https://www.amazon.com/Anti-Intellectualism-American-Life-Richard-Hofstadter/dp/0394703170
https://www.snopes.com/fact-check/us-presidents-pardoned-family-members/
Just shows AI can't be blindly trusted, same as any reporter.
So, yeah. You're *already* correct.
I’m stealing “the styrofoam of…” for future