It's less the concept of AI/ML in general—more the unethical & unsustainable ways that certain generative AI models have been trained (and shoehorned into everything, regardless of use case or deleterious impact).
(Acknowledging that I'm not privy to the specific inciting conversation/thread)
I do not think academics should be calling each other "termites to be driven off campus" just because someone mentions AI, even when they are upset about some other dimension of AI use.
That’s not what Kruse was pissed about.
Students have been faking the work since forever, and the implication that teachers can’t tell the difference between honest effort and an LLM (with a high degree of reliability, if not perfectly) is not merely insulting.
It’s enshittification of education.
i'm so tired of people acting like LLMs are analogous to calculators. they don't automate in a consistent way; they semirandomly produce arbitrary output. you'd be halfway to answering your own question if you could at least state it honestly
OMG I've lost track of how many times I made this point. "Yes, these days we do allow students to use calculators in math class. Because calculators are reliably accurate."
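To make that distinction concrete, here is a toy sketch (pure illustration, not a real model; all names and numbers are made up): a calculator-style function is a fixed deterministic mapping, while an LLM-style generator samples from a distribution over plausible outputs, so repeated runs can disagree with each other and with the truth.

```python
import math
import random

def calculator(x: float) -> float:
    """A calculator is a fixed mapping: same input, same output, every time."""
    return math.sqrt(x)

def llm_like(prompt: str, temperature: float = 0.8) -> str:
    """A toy stand-in for an LLM: it samples one of several plausible
    continuations, so repeated calls can disagree (and can be wrong)."""
    continuations = ["4", "4.0", "about 4", "2+2", "5"]  # the last is wrong
    scores = [3.0, 2.5, 2.0, 1.5, 1.0]  # made-up "plausibility" scores
    weights = [math.exp(s / temperature) for s in scores]
    return random.choices(continuations, weights=weights, k=1)[0]

print(calculator(16))                               # always 4.0
print([llm_like("sqrt of 16?") for _ in range(5)])  # varies run to run
```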
Informatics? Ah yes, the ravenous maw of STEM shit that sounds good on resumes without generating anything of productive value, the maw they fed my academic program at IU to. Of course you are all about generative AI.
If you use statistics programs without knowing statistics, I can guarantee you it will not work as intended; it just makes the calculations, it doesn't do the thinking. AI isn't advertised this way, not by a light year.
Replacing people with AI is like replacing an orange with a picture of an orange and claiming it has the same nutritional value. That's why people are upset.
It’s funny, many people hate AI because they think it’s useless and produces bad results, while others hate it because it could replace their jobs. Those are both reasonable concerns, but they can’t be true at the same time.
Tech has this habit of overpromising and then moving the goalposts when reality doesn't match the delivery.
We can definitely end up with worse solutions that we pay more for and can't easily get rid of.
Spending any time with the history of technology shows that new automations create more (and better paying) jobs than they replace anyway, but that requires long-term and macro-level thinking.
I’ll leave it to the reader to speculate how common that is.
https://www.forbes.com/sites/salesforce/2014/09/13/sorry-spreadsheet-errors/
Not true. Ask translators. We're being replaced by LLMs that can't actually produce good enough quality, but which are being propped up by post-editors (some of whom used to be translators) desperate enough to take terrible money to make the AI look like it works.
Maybe for the highest-end translation humans still win out, but realistically speaking, for everything else AI is good enough, much faster, cheaper, and always immediately available. Hard to compete with that.
No, it's really not "good enough" for most things, and it's only cheaper because the total cost (power, water, environmental damage, social damage, people dying as a result of incorrect translations etc.) isn't taken into account.
It is good enough for essentially all use cases. I have never used a human translator in my life. Neither has anyone I know. In terms of equity of access to translation services, AI has been a tremendous equalizer. Its energy needs are also vastly overstated: https://bsky.app/profile/stanislavfort.bsky.social/post/3l5jsc2gqf32k
So you, who have never used a human translator, are fully aware of all use cases for translation and therefore can state that human translators are no longer required? I think we need to just agree to disagree on that.
As for energy needs, if AI companies published them, we wouldn't have to guess.
Yep. People who frankly don't understand how AI works are making decisions to trust AI salespeople more than the experts who do the work AI is sold to replace. Artists, designers, software devs, interpreters, etc. all do better work than machines can... And yet AI is touted as a revolution...
I think it's understandable that they don't understand how AI* works. I don't really understand how it works (except that it's a probability thing). But when people who do the job say "it doesn't do the job, and here's why", I wish they'd believe us.
(*AI here being the generative shite.)
This is exactly what’s happening in multiple fields now!
Less pay for more work to create crappier products/content. And as long as it’s just barely “good enough” and looks like a profit to shareholders, executives don’t care.
- Execs are sold on the hype that “AI” can replace employees
- So they use it as an excuse to lay off employees
- Remaining employees get paid less to do more work “fixing” AI junk
- And it’s spun as profit in quarterly reports
Many companies are already sacrificing long-term quality and actual profits by adopting AI based on a "promise."
That only holds true if we assume quality of industrial output remains constant. There is, I think, abundant evidence that capitalism is entirely willing to sacrifice quality for profit when the market being sold to is captive.
I think this excludes the possibility that bosses know they can use a tool that produces useless and bad results, and still make a short term profit, leaving both workers and consumers in the lurch.
It won't *actually* replace jobs, but it gives bosses an excuse to fire real humans, and if you are a real person with the need to feed yourself and your family, that matters a lot.
Did you see the post from Wednesday when someone asked Google if it was Christmas?
I’m not mad at LLMs. I’m mad at the destructive idiots shoving AI into places it doesn’t belong, because it isn’t 1) ready 2) useful 3) reliable 4) smart to put it there. Like at the top of Google search results.
As someone who runs a software company, I want to hire people who were taught how to write code themselves, not someone who can get slop out of an AI. I want graphic designers who can use Photoshop and Illustrator. Schools pivoting to AI aren’t teaching the job skills employers actually need.
So yes, termite is an apt term here; they are destroying the existing institutions for their own nourishment, providing nothing of actual value in return, and if they are left unchecked we will all be left out in the cold.
Because it is entirely based on stealing other people's work without their direct permission, without paying them for any profit made from the derivative work, and it is likely going to end the ability to pursue careers in the creative arts (writing, music, painting, etc.), which are fundamental joys.
And at a time we should all be desperately doing everything we can to reduce our carbon emissions and slow the devastation of the manmade climate change already going on around us, it has a hugely negative environmental impact.
Basically it's a form of stealing, and killing the planet to do it.
If you don’t understand why so many are so firmly against AI, you don’t understand GenAI nor its broader implications for society. Lots of reading ahead of you. ‘Statistical software’ did not come with a *boatload* of newly created ethical problems, nor negative impact on environment, for starters.
The people on this site care deeply about words, nuance, and meaning, and how they reflect, distort, or conceal authorial intent. The self-deprecatory admission “Metaphors by Mixmaster” is emblematic of the spirit.
Chatbots are literally expression Cuisinarts. Yeah, we’ll rage against that machine.
There's no comparison between statistical software and generative AI, aka the plagiarism machine.
Statistical software is trustworthy and universally useful. AI isn't trustworthy or useful except in specific circumstances (accessibility, summarizing text).
But idiot executives will try to make it work and ruin millions of lives.
Words aren't numbers, so plagiarism-ecocide-theft devices aren't the verbal equivalent of statistical software. Random search, copy, and paste by a brainless piece of software isn't a shorthand for thinking.
Ecocide has a definition that goes far beyond "uses energy" afaik. I can see the plagiarism argument but Google previews have been stealing content for decades and no one cared to the same extent. It doesn't search at random. No one claimed it is a shorthand for thinking, that's just a strawman.
I was teaching math (not a prof) from the early 90's when almost nobody had a graphics calculator through the early 00's when the best graphics calculator could do symbolic integration, infinite series summation, etc. Yes, the use of the latest tech was pretty divisive all the way through.
Statistical software was an incredibly obviously useful advancement in statistical methods that allowed massively complex computations to be done in days rather than months.
It's not comparable to generative AI, which so far seems to have net negative practical application.
Me: 🌋💥 Kill it with fire!! 🔥
You are now seeing the anti-AI fury in your replies and some very bad behavior by some of the most vehement, but don’t take it to heart. I see a lot of misunderstanding what AI is and can do that you’re being a lightning rod for.
A lot of the criticism focuses on AI being useless, and therefore a waste of resources and hype, so I wonder whether, if it were proven useful, people would change their opinion. It will get there; it's just a matter of time. It probably won't be called AI by then, though, just software
There is also always a reaction about AI replacing jobs, but it's only replacing tasks. If a job is purely tasks then, sure, it's under threat, but most jobs are a lot more than that.
I think of it in three ways: AI replaces human thought; AI teaches humans how to think; and AI and humans enhance each other's tasks. The first is negative, the second is akin to what happened with chess, and the third is my experience. I'd like to show AI detractors that experience and see if it changes opinions.
Pretty sure a lot of people were saying that the invention of the pocket calculator would mean there was no need for anyone to teach maths in schools any more.
And a lot of maths teachers were trying to explain why it just isn't that simple.
I don't think reducing the level of human involvement in the process of learning is a positive or helpful goal.
Associations for math ed supported & recommended inclusion; the problem was access & cost. My associations have supported GIS in schools. The issue is access to tech & teacher knowledge. We knew the value & weren't silly about it replacing the teacher or subject matter. Calculators don't "learn". That's a goal of AI.
We'll disagree about the first point. On the second, agree.
My students never said "I'm going to ask the software to do something I don't get." Now I hear daily on campus: "going to ask my AI to write the intro, got zero time to read about it." They don't come to me. The reduction has begun. I feel guilty that I'm retiring and dropping them.
And why we’re pissed off:
1. Wastewater from AI servers cannot be re-used. By shoving AI into *everything* we’re tripling water waste…at a time when droughts are becoming severe and clean water isn’t guaranteed even in American cities.
2. Google is useless. Images: half are AI. Search “Monet” images; unless you know his work, good luck figuring out if it’s just a…really bad painting? Or AI-generated. Kiss the efficiency of scholarship goodbye; you gotta check Everything much closer.
3. Plagiarism. But who cares; artists don’t deserve compensation.
4. Actual Misinformation Warfare. I dunno. Trial evidence? Political media stunts w/ deepfakes? Who out there has a functioning cerebellum & isn’t pissed off abt things we never were at risk from, which trash the environment right along with doing measurable harm?
5. It’s an even easier generator of revenge p*rn.
And you can sit there wondering why ppl are angry…how? Is it simply because…you have fresh water? Accessing factual data isn’t necessary? Not an artist? Think ppl are too smart for deepfakes? Aren’t a girl?
I’d like to know why these issues perplex you as a source of anger. I’m no mathematician/scientist; maybe my logic isn’t as sharp as it could be. In good faith, can you expand on why you aren’t as angry?
There are two aspects to it for me: 1) AI exposes that a lot of writing is meaningless beyond serving a narrow focus (a class essay, a legal letter, etc.), and that is disturbing.
2) It once again highlights the lack of real copyright protections for normies while corporate IP ~doesn't get added.
You have to go beyond the “happy path” cases and into the edge cases - a lot of AI is being deployed in ways that actively waste user time, provide incorrect info, or just aren’t ready for prime time.
I see valid use cases for it, but they’re not the money-spinning scenarios tech is using it for.
I’ll also note that this whole trend of launching half-baked betas into the world for user testing has the same impact as putting “suggested tip” screens on every merchant iPad: it’s flooding the zone with shit experiences.
A merchant thinks “can’t hurt”, but ALL merchants are doing it.
we’re anti gen AI because it literally wouldn’t exist without massive theft of millions of people’s works, exploiting workers to train it, and bringing back fossil fuel energy plants to cope with demand.
did that happen when statistics software came out?
does statistics software publish the most likely plausible sounding solution, regardless of whether it's true or not, or does it actually solve a problem?
I mean, yes, some new plants were definitely created to cope with the power demand created by the early (and inefficient) computer boom. But as data was dealt with slowly and purposefully by human users, it wasn't the massive useless "beast" it is now.
None of this is relevant to the topic at hand, which is a discussion of whether gen AI can solve textbook problem sets.
I'm also broadly against many things other people engage with but I don't spend all day flipping out if someone even mentions them in an extremely anodyne way.
I'm not even slightly interested in whether anyone or anything besides my students can solve problem sets. The more they outsource their thinking the less they learn.
This is endemic everywhere in education now. People in education have an obligation to work against it.
It’s using as much energy as Japan. It’s only good for making pregnant Garfield, and human artists do that better too. It does nothing positive unless you are into CSAM and revenge porn, aka the worst people.
sorry but there are a lot of people who are sick of mediocre gen AI being pushed into everything when humans could use their own critical thinking skills. besides that i’m an artist married to a writer. this is a matter of our livelihoods and survival, of course we’re going to get upset.
I understand being concerned about crappy output and your real livelihood. To give an example, I have a great fellow (pediatrician) interested in antibiotic stewardship for children. They wanted to study what "wait and see" antibiotic prescribing (WASP) does, and how doctors use it. (1/x)
Studying WASPs pre-LLMs was very expensive. You have to do manual chart reviews of clinical note text to see if the doctor wrote a WASP or an immediate prescription. Pre-LLM, this would've required paying doctors or medical students to do this manual work. Not in the budget for a junior fellow.(2/x)
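A rough sketch of what that kind of LLM-assisted chart review might look like, assuming an OpenAI-style chat API; the model name, prompt wording, labels, and the `notes` list are all illustrative, not taken from the original post, and any real study would validate the labels against human review before trusting downstream analysis:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are reviewing a pediatric clinical note. Answer with exactly one "
    "word: WASP if the clinician documented a wait-and-see/delayed "
    "antibiotic prescription, IMMEDIATE if antibiotics were prescribed "
    "for immediate use, or UNCLEAR otherwise.\n\nNote:\n{note}"
)

def classify_note(note: str) -> str:
    """Label one de-identified note; temperature 0 keeps output as stable as possible."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[{"role": "user", "content": PROMPT.format(note=note)}],
    )
    return resp.choices[0].message.content.strip().upper()

# `notes` is a hypothetical list of de-identified clinical note strings.
labels = [classify_note(n) for n in notes]
```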
so...in response to someone expressing concern about AI used to steal creative work and replace artists, you offer this long, unrelated, unverifiable single anecdote about one coding task. You aren't even trying to engage in good faith.
Hey, so, I work in tech. There's absolutely a use case of AI to analyze small datasets because it can do so faster than you ever will. But gen ai like ChatGPT? It's slop sloshing around poisoned by disinfo, and if you're using it it makes you stupid
Quick question: Why would you want it to, and what use could this possibly have?
Honestly, people "flip out" bc AI evangelists don't listen to the extremely valid concerns and issues people have with this technology, which is incredibly resource intensive and usually runs roughshod over consent.
Generative AI produces answers that you don't know are true or not, it just makes stuff up. I can tell when my students use AI to answer short answer questions because of how they're wrong. So instead of learning the material and learning *how* to think critically, they just fail and learn nothing
I think AI tutors could be very complementary to human teaching! I teach clinical fellows (MDs learning research) and have found they use LLMs pretty effectively to learn about and apply stats/econometrics concepts that I give them in their own projects much faster than pre-LLM students.
I can nominate at least one person who could be made redundant by incorporating unthinking machines with no discernible concept of morality and no real understanding of the problems they attempt to provide answers for.
And beyond its numerous negatives it's just something that doesn't work with human reality. A doctor must get a medical degree to prove their expertise and when they mess up it's them who'll face the threat of malpractice. Who will be responsible when an LLM hurts someone? The owner, the programmer?
We just had someone murder a CEO in the street because their company was letting algorithms decide who lives and dies to maximize profits. That's what happens if you put an unaccountable toaster programmed to maximize profit in charge of human lives.
If you're actually responsible in any way for educating students would you mind doing me a favor and researching how AI and statistical software differ before doing any more of said educating? Thanks
Do you have a carbon monoxide leak in your house? Can't imagine why else "people are annoyed by the machine that costs hundreds of billions of dollars and does nothing useful, only floods public spaces with meaningless garbage" would be difficult to comprehend.
What is the use case for AI? What problem does it solve? It's very good at producing derivative works of writing and art, but that's not exactly something we need.
Alright, since you also can't grasp why the humanities is important, or why exploitation of people to create usable LLMs of the scale of generative AI is bad, let's talk business.
To integrate one of those big AIs into your department would likely cost a pretty penny, as it sounds like you're looking for something more like one of those deals Microsoft has. That costs millions. And it's still inaccurate. You could pay your grad students more and get better results.
And let's pull back from the world of academia to the world of business. Tech companies like Microsoft, Amazon, and Google, are shoving this technology everywhere. Even in areas it's REALLY not needed.
You do realize that the business people you suck up to plan to replace you with AI? And, in the interim, with an exploited H-1B replacement. Why are you obeying in advance? I'd advise you to expand your horizons and augment your math-heavy literary diet with T. Snyder, M. Gessen, R. Ben-Ghiat, G. Lakoff, etc.
My mouth literally fell open when I read your question. You either have absolutely no idea what AI is, or what statistical software is. Not sure which one would be scarier to me.
I admit I abbreviated “chat gpt like generative AI” to just “AI”, but other than that I don’t think we’ll find common ground on this one. My hostility for it is well-earned.
No one on this thread referred to generative AI/LLMs specifically, and I think as scientists and engineers we have a duty to be specific in what we are criticising
Actually, I was an undergrad when calculators started displacing slide rules, and yes, professors did freak out when programmable calculators allowed students to solve problems without memorizing all the formulas.
As a stats prof who knows a lot about the history of statistical software: no. And as someone who is really anti-generative-AI, it's clear why: statistical software doesn't steal ideas and reify bias in society. I make a clear distinction between machine learning and "artificial intelligence," tho.
1. Use more energy than a small country (to help kids cheat on homework)
2. Produce lies/ misinfo on a massive level
3. Caused mass waves of firings
4. Destroyed the usability of the internet by clogging it with slop
In late 80s undergrad stats classes I pointed out that my graphing calculator could do the problem that took me 4 or 5 pages to do by hand. My prof said that my job was to learn and practice the concepts so I could recognize it when the calculator made an error. Sound advice.
>ai getting shoved into every product both imaginable and unimaginable
>ai making every single one of those products worse than ever before
>”i can’t believe people don’t like ai”
The generic use of the term AI is a problem. What is going on is like a bunch of people outraged over mountaintop coal mining, and then someone saying, "I can't believe people are upset about energy development, I put solar panels on my roof and they are good!"
The coal company wants you to equate these things, just like open ai wants you to equate the use of predictive ai with generative ai. You don't have to have mountain top removal to get electricity. You don't have to have chatgpt to have the benefit of targeted use of predictive ai.
You better get out of this thread with that logic. The doomers are on a rampage. The OP made a hamfisted analogy and got absolutely crusaded. I'm fascinated by the binary thinking on display, tbh.
These people aren't doomers. They have correctly identified that large-scale generative AI projects are consuming so much power and water (while failing to do anything useful and doing a bunch of copyright thefts) that they are now a threat to us all. People need to speak up about this.
There is fair debate regarding whether consumption of public data is copyright infringement. What many of the "ecocide" arguments I saw fail to acknowledge is that data centers worldwide represent only 2-3% of energy consumption. Projections are difficult to make because of scaling inconsistency, but even doubling that will not meaningfully change the overall energy demand. Land use for population growth is a much bigger issue imho. I'm not here to change minds. It's cool to have whatever opinion about it. ✌️
Yes, it seems we have different opinions on this. For me, energy development for people is different than energy development for a tech that has yet to be proven reliable and helpful. I've also seen stories that indicate they are not limiting training to fair-use copyrights.
You can't grasp it because you're an idiot, pal. It's really not hard, but you've put a lot of effort into being distracted by the shiny jangling keys and you'll be damned if you're going to let that sweet feeling get ruined by exposure to the grim facts of reality.
You're getting a lot of vicious responses, but many of them boil down to people hating the way bad actors are utilizing AI to make a lot of things objectively worse - ignoring the fact that "AI" isn't actually a monolith, and there are good uses for it that just aren't what they're encountering.
And there ARE a lot of bad uses of AI infecting everything we do online.
Google should never have incorporated AI search summaries that are objectively wrong so often. Facebook's AI penalizes folks who did nothing wrong while letting bigots keep bigotting. AI argue bots ruin social media. Etc.
"I don't understand why everyone hates the plagiarism machine that steals art, puts people out of work, makes up information, and burns down a forest every time someone asks it a question"
To answer your other question: genAI is one kind of AI (like my house cats are one kind of feline), and the line between AI and ML is blurred by techbros and marketers, but there is a difference, if one can pin down the algorithms and data used.
Stat software wasn't shoved into literally every facet of life, making the experience worse, while being based on intellectual property theft.
On Facebook, I can't tell you how many times I've accidentally hit the buttons for AI stuff because more space is dedicated to that than the like button.
If AI were a calculator and half the answers were always wrong or totally unintelligible, then sure, you'd have seen this. When Microsoft issued a version of Excel with major mistakes in the formulas? Huge anger. It got dumped.
But worse than that, you don’t understand that AI was created using theft.
The public wanted robots to do the dishes, clean the floor, pull weeds.
Not write our books and rules (incorrectly) and our morals and principles (things no generative AI is capable of, nor will it ever be; the founders themselves say it can’t).
One of the earliest major waves of Bluesky adopters was the art community, who found Bluesky’s commitment not to use users’ content for gen-AI an attractive contract. Another major wave of adopters came after Twitter updated its privacy policy.
It is garbage whose primary use case is summarizing emails, and it’s being shoved down our throats in ways that make every digital UX orders of magnitude worse, because idiot executives believed the bullshit of Silicon Valley conmen who are also high on their own supply and have sunk so much into it.
If statistical software was heavily accelerating global warming and spitting out inaccurate analysis, I'd be pretty furious about that, too. Funny that you don't seem to answer any of the replies bringing those concerns up and it's leading me to believe that they're not a problem for you.
1. Prism and software like it have almost definitely led to poor research practices and lower quality research.
2. Learning assessment requires active work on behalf of instructors, not half-assing what is most likely 40% of your job description
There is no way #1 offsets research productivity gains from automating calculations. Do you think that science would be better off making everyone do slide rule calculations?
I don't understand your point on #2. Nothing about Arpit's post suggests he wants to rely solely on textbook problem sets!
I'd say the effects on reproducibility & meaningless papers very well could. It's a kind of automation that undermines understanding & competence, rather than enhancing it. Just like genAI. Textbook problem sets are a part of assessment & learning. What he's describing is laziness & a lack of care.
Weird 2nd order negative! Should we apply this 'friction as quality gatekeeper' to other things: writing was better before the typewriter, music better before composing software, photography better when it required a darkroom? Team collaboration was more meaningful when you had to mail a letter?
Specifically it asked if there were any disciplines in which this isn't true.
There are a lot of disciplines in which not only is that not true, it's not even a meaningful question to ask--and asking it implies a fundamental lack of understanding of what those disciplines are and do.
Statistical software follows the fucking rules of statistics. It doesn't decide that if it doesn't know the answer, to just make up the most right-sounding answer. It does the fucking opposite of that. It follows rules that either yield answers (as probabilities) or don't.
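For what it's worth, a minimal illustration of that determinism, assuming SciPy and using made-up numbers: the same data returns the identical statistic and p-value on every run, and the answer comes back as an explicit probability rather than plausible-sounding text.

```python
from scipy import stats

# Made-up measurements for two groups; illustrative only.
group_a = [4.1, 3.9, 4.3, 4.0, 4.2]
group_b = [3.2, 3.5, 3.1, 3.6, 3.3]

# Welch's t-test: a rule-following computation. Re-run it a thousand
# times and you get exactly the same statistic and p-value.
result = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.5f}")
```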
You can't grasp. Yes.
Theory developed in an ideal hypothetical universe tends to fail miserably when applied to the one we actually live in.
You said you couldn't grasp the fury. Someone explained the fury. You said that was off topic.
The fury is more so at the mode.
But at this point it's basically a pollutant, with extremely niche practical benefits, so there's ire there too.
In my experience it's often wrong, expensive, has massive externalities, pablum like no other pablum.
And it outputs inaccurate info to this day. Sounds like a big waste of money if an underpaid grad student can find more accurate info.
Why? How is this worth it?
You're aware of the sunk cost fallacy, correct? Yeah, CEOs of massive tech monopolies can still succumb to that, btw.
Because so many companies have shoved "AI" into things like washing machines and hair dryers, and people have noticed it doesn't work.
https://mstdn.social/@JohnMashey/109440130844417136
This has been a running conversation; if this question is sincere, here's a longer response I gave regarding LLMs and the practice of history.
https://bsky.app/profile/sschwinghamer.bsky.social/post/3le5jw5ope227
Statistical software did not have the same taxing demands on the energy supply only to, again, not work.
i don't remember statistics software having to steal all the content it could to produce half-assed results.
You’re complaining about a particular implementation.
This is partly why we can’t have sensible conversations on the topic.
Is “AI” the same as generative AI or is it the same as ML?
GAI proponents try to have it both ways, deliberately confusing the question.
I advocate targeted ML but not GAI.
That’s worlds away from “genAI is bad and shouldn’t exist”.
And it’s already deployed in ways that subtract value from many important software services: Facebook, Google, MS Office, etc.
So, yeah, in short, bad.
Is this your opinion, or a generally accepted distinction among the relevant research communities? If the latter, do you have a reference? tia
So this is a sizable group here.
I can grasp it easily.
We dislike the art theft garbage-regurgitation bots, bc they're shitty.
We dislike the misinformation bots proliferating & making life more difficult every damn day.
It's not complicated. ai sucks & is shitty
Meanwhile, advocacy for GenAI in most disciplines betrays a fundamental misunderstanding of what those disciplines do, or of what the point of writing is.
statistics software isn't flooding online marketplaces with fake "copies" of the books people worked hard to write.