Calculators perform mathematical calculations as ordered. LLMs produce semantically empty garbage that resembles speech. LLMs don't write, they produce outputs that superficially resemble writing. If the writing is so low priority that an LLM output is acceptable, it didn't need to be done at all.
What about the use case scenario of asking it to do the most weird political smut between Putin and Trump so that I can laugh at it? Or is that still a nono?
I use ChatGPT daily at my job. Proud of it. Built a pretty advanced data analysis web application from scratch. ChatGPT guided me through it step by step. Now I can build web apps. All for free. No online courses. No weekend boot camps. No money spent. New skill. More $$ in my bank 🤷♂️ Thanks ChatGPT
imagine being a social scientist and having the take that "the damage it does to others doesn't matter because it lets me steal work and profit from it"
You could have done this for free without chatgpt. Coding and software have so many freely available resources and so much documentation, especially compared to sciences, medicine etc.
I think in theory there could be use cases for chatgpt...if it wasn't the stupid plagiarism machine that takes up a nightmare amount of resources. You really can't use it for anything in good conscience, because it literally lies and steals out of sheer incompetence.
People have always used stuff like idea and name generators. Sometimes you do benefit from some machine spitting out random bullshit to kickstart brainstorming better. AI still sucks ass on ethics and design, so it's not usable, even in that more reasonable case.
While it is true that AI is a tool that should theoretically be usable for both good and bad based on the user, the fact that it does so much environmental damage effectively means that any "good" or "ethical" uses are unfortunately invalidated. It's better to just not use it.
That's one of the biggest issues for me. Though frankly the straight up theft of people's work is what gets to me. I get the appeal conceptually, but looking at the actual technology...nah.
I'm an artist & it's the same issue as AI generated art. Is not being capable of mastering a skill ableist? Before AI art, no one was saying it was ableist that every person couldn't draw well. Not everyone is good at everything. (I'm disabled, btw) You aren't entitled to everything.
how else will i read books? i need ai to change "In my younger and more vulnerable years my father gave me some advice that I've been turning over in my mind ever since." to "when i was young my dad told me something"
There's plenty of good use cases, just don't use it to do any creative work for you. Replacing art in any form is not okay.
There's so much software we use daily that does work for us, like calculators, autocorrect, translation programs like Google Translate etc.
This topic and the lack of nuance surrounding it is so annoying. Like, do you not use Google Maps to get to unfamiliar places? Or do you use a physical map instead?
I don't wanna "use my brain" to figure out what the best route is, I just type in the location, select a suggestion I like, and go.
Most of those examples are categorically different. Google maps is a map. It's not a fake map pretending to be a real map, it's a map. Translation is a better comparison because using Google translate for anything important is a horrible idea; the machine can't evaluate context, culture, idioms,
wordplay, mood, anything that you need an actual person to do. Asking it to translate "where is the bathroom" is fine. But as the complexity of the query increases, the stakes increase, and getting a bad translation becomes a serious problem.
Regarding the map example, I replied here to clarify, if you don't mind taking a look: https://bsky.app/profile/miphera.bsky.social/post/3lgvlgvmzp22r
On your point about Google Translate, I agree; as you said yourself, there are use cases for it, but for important or complex stuff it can be inadequate or even risky.
a map is a resource made by experts, assembled with intent. Do you not remember when Apple Maps was literally unusable because it kept using what we'd now call 'AI' to fill in the gaps and it just invented streets and shit?
To be clear, I entirely agree, and think we're talking past each other a bit here.
I don't want AI shoved into all kinds of software, I hate 100% of art-related uses of AI, and about 98% of any other uses of it.
My point with the map example was specifically in response to the sentiment that
using GPT is inherently bad, because it does for us what imo plenty of non-AI software, like Google Maps, is doing: both are software that *can* be used to simplify or solve tasks that would otherwise require a lot more work and/or using our brain ("do your own work and use your own brain").
Not that long ago, everyone arguing against AI "art" (including myself) championed the statement "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
Now AI "doing the dishes" is suddenly also an issue?
Well, I woke up, looked at posts on Bluesky and replied to this one, then got out of bed and did morning routine, had some more thoughts regarding this post, and added them once I got to my PC.
What's amazing is that every single argument against AI could have been made against the internal combustion engine a hundred years ago
AI is in its infancy and will be as impactful as the wheel. The solution to climate change is a clean energy transition, not going back to smashing rocks together
No it couldn't. They're categorically different kinds of machines. And many of the arguments against the ICE were correct: ICEs have significantly contributed to the sixth great extinction and the devastation of earth's biosphere, to say nothing of how we all live in car hell now.
Them being different machines doesn't mean the criticisms have to be unique. And not a single one I've heard against AI is: the impacts on users, jobs, the environment, IP theft, not one.
And *yes* many criticisms of ICEs were correct. That's my whole point.
The solution wasn't to ban engines 😂
Yeah, what I would say though is that the solution is to regulate them.
I also think that the entire approach to AI (and even in the past, with ICE) in the workplace has a lot of drawbacks in that, it doesn't benefit workers, just shareholders.
Sure, a lot of stuff got automated, and tons more
Today’s AI is replacing things that don’t need to be and SHOULDN’T be replaced because they’re important for humans to do. AI has its place in the world, as you’ve said, in gps and such. But outside of that it’ll just lower the average IQ
They've convinced their investors that it will magically print infinite wealth and if they don't deliver to their investors they're fucked. Tale as old as time. It's radium water.
Personally I think it’s part of the “I love the uneducated” brain drain shtick that’s decreasing our attention span and critical thinking. They’re encouraging as many people as possible to offload our consciousness and creativity. It’s really scary
Oh, many want AI… to do work people don’t want to do. Give us an AI that *accurately* manages paperwork and accounting and every adoptive parent and self-employed person will thank you.
Capitalism, or at least modern capitalism, is not about selling us what we want to buy, but about getting us to buy (or forcing upon us) what they want to sell.
It explains Ed Zitron's "rot economy", Cory Doctorow's "enshittification", and so much more.
I like to phrase it as: there are no car companies or food companies or LLM companies. Every company is a money-making company, and it's only ever a money-making company. It has no other purpose.
They got nothing else. The industry monopolized themselves into a corner and they haven’t created anything meaningful since the iPhone but they still believe they’re geniuses who deserve to be rich
I read the Teachers subreddit for "fun" sometimes, and it gives me the impression that the use of generative AI is running rampant in American classrooms. Which it probably is.
Since we’ve hollowed out and devalued education and replaced it with standardized testing as the only measurement of success, this comes as no surprise.
It is, even in college. All the syllabuses say no AI allowed unless given permission. I sit in the back of my classes and I spot at LEAST 2 ChatGPT users each class. I have no respect for students like that
I’ve never used AI unless told to. Even using it I feel disgusted and it’s very dystopian. I hate even the thought of using it. I don’t understand how other students can use it without a second thought
story time: In New Testament class, and we’re taking an open book quiz about the Jewish Sects. EMPHASIS ON OPEN BOOK QUIZ. Guy sitting right in front of me opens ChatGPT, so does another girl off to the left. He gets full score, while the girl gets 3 things wrong.
I am CONFIDENT that ChatGPT didn’t actually help either of them. But jfc they had the book RIGHT IN FRONT OF THEM! Yet they turn to an unreliable data pool???? They never looked at the book at all, the book with **all the answers.** Fucking idiots
As much as I'm against chatgpt, it can be useful for creating random backstories for NPCs in home TTRPG games. Every DM has experienced players "chasing the blue hat" and averting from the plot just to chase down a random NPC they "find interesting." But that's like the only specific circumstance.
like i feel like the attorney should understand the concept of "using the unreliable lying machine to compile a list poisons the well, no matter how much you claim to have independently verified it"
Yup. Also like, reading and revising/pushing back against TOS is tedious but it can be important AND you can rack up some easy billable hours doing it yourself while knowing the Lying Machine isn't leading you astray
There's also the extremely real ethical concern about allowing any of these LLMs to access your client data if you don't know where that info might end up.
Not even to write that detailed operational step-by-step guide to my job that the big boss asked for? The one we all know means I'm getting outsourced or replaced by someone cheaper (or by AI) and therefore I don't care about, and we all also know the big boss will never actually read?
Too many scifi brained fools still holding out hope that this will turn into something more so they want to “be in on the ground floor”
All the ridiculous comparisons to shit like the internet, photography, and fuckin typewriters give it away. LLMs are not like any of those things at all
Right, the one potentially valid "use case" (not actually valid) for my profession is creating templates for motions, legal forms, letters to clients, etc. But the thing is, literally every fucking attorney and firm/legal organization already has that stuff! It's a solution without a problem
So what if you aren’t working for one of those firms and can’t afford to hire an attorney and don’t have a law background. Wouldn’t you see benefits in normal people without the $$ and knowledge being able to generate those legal documents?
No, because they aren't equipped to spot the inevitable inaccuracies, and even small mistakes can fuck you. LLMs are not the answer for increased access to legal aid
It could MAYBE do basic forms right but like courts already have those available for litigants, there's no need for it. I blocked another reply from a guy who said he used it to spot an inconsistency in opposing counsel's argument. BITCH THAT'S YOUR LITERAL JOB.
What about for people like me, who are constitutionally lazy and feckless? The ADA protects everyone who claims to have a disability, I think, I haven't read it
I've used it to brainstorm ideas, and I think that's a reasonable use-case? Like if I have a general idea in mind to write about, but I need help coming up with details, I'll bounce ideas off of gpt, tweak its suggestions, and go from there. I don't always have people available to brainstorm with
Just remember that the energy-cost of AI is enormous. Frivolous usage is dangerous mostly when people don't know that and just go crazy with it asking Chat GPT stuff they could have simply thought of if they spent a couple minutes on it.
Generative AI can be helpful, but it's just not worth it
TBH that sounds worse somehow? You could skeet about your ideas on here and get feedback. You could look up similar works other people have made. Brainstorming isn't a new thing, you don't need a new tool. ChatGPT is incredibly wasteful for something you could do with a friend and a notebook.
Please explain how posting my ideas into the aether and hoping I get a response is better than getting immediate feedback? GPT is literally an aggregate of similar works. I agree that brainstorming isn't a new thing, but there's nothing wrong with using new tools to do the same thing.
By posting about your ideas you can connect with people. Isn't that something worth trying for? People who know and understand in ways AI can't.
GPT mangles the meaning out of things. Even if its repeating something verbatim someone else wrote, it strips the context of who wrote it when and where.
Get a notepad/word document to list down cool ideas you have as they come up, or put some ideas onto a random selection program and get it to mix and match
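A "random selection program" like the one described takes only a few lines. This is a minimal Python sketch; the idea lists are made-up placeholders you'd swap for your own notes:

```python
import random

# Placeholder idea fragments; replace with your own lists.
characters = ["a retired smuggler", "a grieving cartographer", "a runaway noble"]
settings = ["a drowned city", "a border fort", "a travelling carnival"]
conflicts = [
    "owes a debt to the wrong person",
    "is hiding a stolen relic",
    "can't remember the last ten years",
]

def random_prompt(rng=random):
    """Mix and match one fragment from each list into a story seed."""
    return f"{rng.choice(characters)} in {rng.choice(settings)} who {rng.choice(conflicts)}"

print(random_prompt())
```

Run it a few times and keep whichever combinations spark something; the mixing is random, the judgment is yours.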
Folks have done this for centuries without relying on Plagiarism software
Please don't treat me like an idiot. I've done that kind of brainstorming, and for me, it's less effective. Immediate feedback helps me direct my thoughts instead of them being a chaotic mess that accomplishes nothing. I'm not stealing ideas any more than modern fantasy is derivative of Tolkien.
Oh? So every story that follows, say, the hero's journey, is plagiarized? The broad strokes exist, it's up to US to put the details into place, which is what I DO. Please start understanding that shades of grey exist.
brainstorming is an important skill in itself. the key word is BRAIN -- your brain. the ability to generate ideas & then discern which ones are worth pursuing -- perhaps with guidance from other PEOPLE -- is a skill you can only develop with practice. automating it robs you of the ability to improve
Did you even read what I posted? I tweak everything it gives me to my own design. At most, I take generalized plot points from GPT and adapt them to my own purposes. There's no plagiarism any more than being inspired by other works of fiction.
if you're inspired by other works of fiction, I assume you get most of them through legitimate means (library, buying books, consensually shared content). the content that was used to train models was not shared freely -- a substantial portion of it was straight up torrented/pirated!
but also, aside from that, I think you might be selling your own ideas & thinking process a bit short. YOU are the only one with your unique thoughts, life experience, or lens on the world. your ideas are infinitely more beautiful and interesting than anything a robot could chew up & spit out.
Perhaps you feel, subjectively, that your brainstorming process is a bit flawed/inadequate compared to what you can do with GPT, but those flaws and challenges make you, you! the world needs and deserves your art from your brain.
Okay, that's actually the correct take. Props on that 👍🏾
However your admonishment does come across as flippant.
"There's no ethical consumption under capitalism" isn't a carte blanche, but it is still useful to remember.
We've long known that chiding consumers isn't the solution to climate change.
When taking part in labor actions, successful organizers didn’t just target the producers but also the consumers of the products that harm the working class. We should do more but it’s a start.
That quote comes from tumblr, not theory or tested practice.
I tried to use chatgpt to write an email just to see what the fuss was about. The drivel that it shat out read like it was written by someone trying to reach 500 words on a book report. I would instantly hate anyone that sent out emails like that.
I cringe every time I hear my coworker go "I asked chatgpt to [insert prompt here] wanna see" and I always say no. Last week it was "cats in the future" with about five different asks for "even farther into the future".
I also worry that there are way more people whose brains were mulched by long covid than we know about, and a lot of them are using LLM bots to approximate their former abilities
on a background gig a few weeks ago I talked with a dude with a business degree who said he used to write all of his papers. I told him that they don't assign papers because they want papers, it's to teach you to think through the process, and I think I broke him. may have also called him a dumbass
It's funny how a couple of my relatives are corpo and they think it's great: I keep reminding myself these guys are THE target demographic - loads of expense $, sort of time poor, not real bright
Any defensible use cases, in my mind, are current systemic issues. For example, people with intellectual or learning disabilities that use AI to help them sound professional or articulate their thoughts more clearly. However, the same thing could be accomplished with human support. Which costs $$$.
Both scenarios come with inherent bias, risking the individuals words being misinterpreted/misrepresented… using AI offers a significantly higher degree of autonomy.
Idk man. I work in human services. I’ve seen how it can help and hurt.
There's nothing to learn. It's garbage in garbage out. Enter a natural language command and the machine will output some garbage that may or may not resemble real information. It's a mad libs generator with some weighted semi-random numbers thrown in. They throw around fancy terminology but
Yuuup. I have ADHD and cannot stand tasks like doing my taxes or sending certain kinds of emails. Breaking a project into listed parts is hard. But guess what? I use friends, goblin tools, a paper planner, and several other workarounds to do it anyway. And the end result is much better than cgpt.
ppl claim c.ai (which is the same basic system) is great bc now lonely ppl w no friends can rp. this excuse legitimately disgusts me, as someone who had *zero* friends in real life throughout high school.
i use it when i have questions that can't be answered by a cursory google search and i don't have time to read through tens of scientific journals. but yeah no using it to write stuff for you is just indefensible
A co-worker of mine uses it to write performance reviews of their employees and all I could think was “if I got a performance review written by ChatGPT, I’d fucking quit.”
Was chatting with my partner the other day, and he mentioned employees using chatgpt in workplace surveys as a way to 'anonymise' their feedback. I think that's an appropriate way to use it and especially useful for employees in small teams where your writing style could easily identify your input.
I've never used it for performance reviews, though, and I never would.
Performance reviews are far more personal and generally in-person. What's the point of chatgpt for interpersonal conversations?
Anything I write in a system on performance is just a written record of a conversation.
on the one hand, I always change my writing style for that reason without chatgpt, but on the other hand, i can generally figure out who 80% of the respondents were when we get to read the comments later
Yeh, so I think folks are trying to change the wording on their "free text" responses to remove their writing style. Still may not deidentify them if the feedback is overly specific.
In short - I think using it for any interpersonal interaction is not only lazy but worse extremely disrespectful to whomever you’re interacting with and also to whatever it is you have to say. “Either you or the thing I am saying (or both) are literally not worth my time.”
Their prompt (they told me) was “write a performance review for a sales leader whose team met their assigned metrics, but struggles in interpersonal relationships and developing the sales managers underneath them.”
I wonder if people using ChatGPT as a search engine (which is a terrible idea, and I have had to stress this to several family members) is in part because Google and some other search engines have become such horseshit over the last 4 to 5 years.
I certainly do. Google is awful and automatically uses their own shitty AI in search anyway, so if my choices are to use a reasonably reliable AI versus a shitty search that also forces me to use a shitty AI, I'll take the first
Wow, I had no idea I could stop reading! How novel!
A lot of people cite their central beef with AI as being the energy usage. Scrolling past does not magically undo the energy expenditure from the search being conducted
I've started using DuckDuckGo, which seems to get better search results (Google's search results have gotten worse well before they added the AI slop to the top) and doesn't have any AI crap at the top of it. I imagine there are some others that also still care about their search quality.
I’ve used DuckDuckGo for years now, and I can highly recommend it. Their use of Gen AI is much more in line with the limited way these tools should be used right now — and they’re easily turned off if you’d rather not deal with results you’re not sure you can trust:
this reminded me to try this out and I'll pass it along in case it's useful — people I trust swear by it. I'm excited to try it, you get 100 free searches and after that it's $5/mo for the starter plan (I know, I know, but it seems worth it) https://kagi.com/
My boss is now asking for daily examples of how we use AI to improve our work. He insists that we need to use it at least once a day, that his kids are using it all day. THE GUY IS QUESTIONING US (I work with Product Management)
Not a problem. I teach a class on how to integrate AI into your product's business model & incorporate AI into your day-to-day product management work. Glad to talk to your boss about backing up his desire for change by funding employee training.
I wrote "yeah ill give it a look", asked it a basic question it hadn't already been trained on, and in response it wrote the worst function I ever read (endless layers of pointless linear recursion), and I laughed and closed the tab.
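For illustration only, here's a contrived Python sketch of the kind of layered recursion being complained about (not the actual output described): summing a list through several recursive wrappers that each do almost nothing, where a one-liner would do.

```python
# Contrived example of needless layered recursion: each function exists
# mostly to hand control to the next one.
def total(xs):
    return _total_inner(xs, 0)

def _total_inner(xs, acc):
    if not xs:
        return acc
    return _total_rest(xs[0], xs[1:], acc)

def _total_rest(head, tail, acc):
    # Another layer that exists only to call back into _total_inner.
    return _total_inner(tail, acc + head)

# The straightforward version a competent author would write instead:
def total_simple(xs):
    return sum(xs)

print(total([1, 2, 3, 4]))  # 10, same as total_simple([1, 2, 3, 4])
```

Both versions return the same result; the recursive one just burns stack frames and readability for nothing, which is the complaint.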
It's pretty easy to find real world problems where it spits out obvious gibberish. I throw solved non-trivial problems into the internal LLMs every now and again to confirm they are still slop machines. Maybe I shouldn't bother, but it's something to do before the Butlerian jihad.
horrible management strategy. you should be cultivating your employees to improve their skills and empower them to be the best they can be, not have them race to the bottom
The concept is great... The plagiarism is shit. If we had the models pull from fair use literature and materials that paid consenting authors/artists respectable royalties I'd be on board... What we get is stolen garbage. Do we have a "Nightshade" program for text?
The only good one I've heard is using it to clog up the trans /abortion/whatever snitch forms in conservative states. Just spew garbage straight into it. Lot harder to filter out.
Floored by the amount of people who think ‘but I only use it to write emails’ is a case. You need an AI to write an email for you? Your brain is a blancmange.
I fucking hate writing cover letters for that exact reason - noise out of signal, grinding whole paragraphs out of “I want this job and I can do it”. AI being ‘good’ at them is damning!
I have coworkers where one will write an email with AI and the other will ask AI to summarize that email and it’s just ridiculous… it’s okay that your email isn’t a formal essay, just tell me what you told the robot! I just want to do my job and move on
This is just a power fantasy of two losers who want a snappy witty secretary à la Donna from Suits, so they can pretend to drunkenly tell her "Tell that idiot to go fuck himself". Then they get an answer - "Donna what does he want - he basically tells you to fuck yourself too, mr. Specter"
My job banned AI because people were doing that and it, uh, resulted in a multi-million dollar error because it was removing important details and inserting fake ones and someone straight up nearly died as a result.
Why would you waste valuable time on menial tasks when there’s a tool for that? I don’t “need” AI for anything. I could also respond to emails with written letters and snail mail if that’ll make you happy.
Ethical issues of AI aside, I genuinely view the use of generative AI for basic tasks as an admission of failure. Like, you seriously can’t do this shit on your own? Pathetic!
this extremism is as ignorant as the full-tilt desire to replace all humanity with AI. it has many excellent uses. we must address its problems and pry it out of the hands of fascist oligarchs but the extreme denialism is just not engaging with reality.
that's true, but "robots writing emails that other robots will later summarize, and the person you wanted to communicate with might read the summary" will not make any of that better
That's also equally true and I didn't mean to make it sound like I'm for the robots.
I have great disdain for the hour a week I have to spend writing up what I did for my boss just to shove it into a robot to feed to his boss and his boss's boss to not actually look at but say they did.
and you're absolutely right about menial tasks, i was too trigger happy
it just absolutely drives me up a wall that a lot of people correctly clock the existence of robotic and menial tasks, and then cleverly use robots for Not Those
there are already so many templates/samples available online for any type of professional email you might need to write, just customize it a bit to fit your specific circumstances and you're good to go, no ai required
100%. This takes on a whole other dimension in software development. There's an old saying, "it's easier to write software than to read it." That's important because it's not enough to simply go "code is written, code works, ship code."
This becomes a problem down the road, when something breaks and some unlucky fool (usually, me) has to figure out what that code does, what broke, and how to fix it. This is hard enough with code people write, but you can just about guess how illegible genAI code is. It's a disaster.
Which is honestly terrible for developers like me, who have been writing disasters of code on our own, and now you’re telling me whenever anyone sees a string of 7 array methods, they’re just gonna think I used AI 😔😔😔 (this is a joke, my code is okay)
For anyone reading this and thinking that GenAI would make my code better, it actually just scours Stack Overflow for answers which should be your #1 best skill as a dev. If you can’t research the problem, you can’t solve it
Like I hate this fucking future as much as anyone but when troubleshooting a complicated system with a bunch of gatekept documentation… shit has been game changing. Takes 6 hours of scrolling forums and making dents in the wall to “oh shit. Yeah that worked”
I was trying to trouble shoot a problem with a copier a few weeks ago. One of the dudes in the room helpfully looked up the problem and model on ChatGPT and it produced a solution.
The solution did not work, as it required a button the copier did not have, as ChatGPT is actually dogshit at this.
I spent half a day troubleshooting an Avaya IP Office to find out the provided SOP from the manufacturer had issues in it, and chatGPT was able to find an amended SOP made on a forum 10+ years ago that never came up on google once….
Sorry your xerox technician is a dumbass? Dude should’ve started at intuition and worked down, not used it as a crutch and been like “welp… guess I’m fucked”
OK, I used it a few times, but, no. Like, I'm a tech guy, so I almost always check things out, just to see. But, I haven't really used it. The brunt of my AI shit is done through Edge with Copilot if I need something lol...which isn't often
Nothing will ever be more horrific to me than the one time last year I overheard a nurse suggesting to a student nurse that she use that crap to do her homework
Like wtf? Do you want her to accidentally kill people?
Left an ND support group because all of the advice people gave anymore was chatgpt. People even said they used it to write friendship breakup letters, and that disgusted me.
It also cooks the brain of anyone using it at an alarming rate. I've worked with grown adults who started using this garbage and are now dependent on it to write an email.
It’s literally the only thing I have a stick up my ass about as a professor. I will give all my students A’s and B’s if they submit even remotely decent work. To submit a paper with chatGPT is the most disrespectful thing you can do to waste someone’s time like that.
me too! i'm a computer science professor and it is literally the ONLY thing in my class i give zeros for. I also have a policy that they are allowed to resubmit all assignments with corrections for full points, unless they lost those points for plagiarism
they find out early in the semester how much i mean it, and how much easier it is to just let me teach them and let themselves make mistakes and iterate. Well, some of them do and some just take the L and fail unfortunately
I gave a couple of zeros to my 9th graders for doing this with their final essay. I dont normally give zeros. I do 49% so they don't dig themselves into a hole they can't get out of.
Same! I tell them basically anything in their own words will get you a 50. I even would give someone who plagiarized sections a chance for a rewrite as even they’re at least reading what they’re copying. Just do not use AI, as I’m not interested in reading what a robot thinks.
same. but my students are really smart and thoughtful. I give them a big overview about WHY using these models for schoolwork is harmful & bust myths about what AI can do in the first class. In 2022 I was getting multiple LLM-generated responses in every assignment & I've reduced it to nearly zero
There's no defensible use case for googling anything. Accept that if it doesn't exist in a local library you must forever exist in uncomfortable ignorance.
Like, I get it, you're upset it uses water. So does googling.
I use it frequently at work to proofread and build outlines for web pages. Because I can use AI, or I can work overtime. I prefer to spend the time with my husband and children. 🤷🏻♀️
This. I’m pretty old, but not old enough to remember if this was the same attitude when the calculator, email, or the spreadsheet were invented. It’s just a tool.
If GenAI goes away, I’ll still have a job. I just think it’s silly to dismiss a technology completely a couple of years in. Already ChatGPT and equivalent help non-native English speakers improve their writing. This isn’t like Bitcoin where there’s entirely no legal use case
Keep in mind, that includes image generators and text generators but also protein folding generators, chemical composition generators, etc (GenAI encompasses a lot more than just the arts)
We use GenAI for translation so you can watch a video on YouTube in, say, Portuguese, and it generates English subtitles. Is it a perfect translation? No, but I don’t speak Portuguese and it’s pretty cool for me to be able to watch interviews with Brazilian footballers and understand them
do you think calculators had a consistent hallucination problem where they'd spit out wrong answers so often that you needed to double check anything you put into them
Yes. I suggest you watch the film “Hidden Figures”, which starred some black women whom history mostly erased, who worked as “computers” making the math calculations for orbit, etc., since the electronic computers of the time were unreliable
That's an example of people doing work because giving it to the machine and calling it a day was unacceptable. So you agree, A.I. can't be trusted with human labor right now.
No. The earliest form of a calculator was what we call today a computer, and they made mistakes (mostly programming related), so human calculators were kept on hand to do the math manually. https://en.wikipedia.org/wiki/ENIAC?wprov=sfti1
I have no idea why you’d bring this point up. GenAI uses less energy than Bitcoin mining or social media at this point, much less adding HVAC to developing countries.
There’s a lot of strange hate. Pros and cons-talk is unforgivable.
I use it for proof reading too. And math. Not because I can’t proof read or do math, but because it eliminates boring and time consuming tasks without changing the end result.
Math is math.
That is somehow an even worse use case than using ChatGPT for creative work. It can't do maths, it can't even count. Maths is the one thing computers have always been great at, and LLMs fail even that.
All of the haters crack me up. I sometimes use this one particular tool to do a mindless thing so I can save time and use my brain for more important things.
And now we see why Twitter turned into such a hellscape. People like y’all who’re calling me names and trying (unsuccessfully) to shame me. 🫠
Personally I wouldn’t brag on the internet that I can’t figure out how use a new tool to handle rote tasks so that I have more time to focus on the truly important things. But hey, you do you.
Personally, I wouldn't brag on the internet that I'm so incapable of understanding basic things that I needed to "figure out" how to use a chatbot to do the job I'm bad at for me.
I would never trust a machine to proofread something for me. I would never want to admit that the computer program I used to do my work for me fucked up.
DO NOT OUTSOURCE THINKING!
read the fucking thing. train your neurons. gain skill chatgpt cant do. have reasons for people to pay you over them using that themselves to skip you!
I would argue launching satellites using up massive amounts of rocket fuel and carbon is very damaging, but I guess we forget how damaging basically everything we do is
Ideally humans would all die but we will probably go extinct before we actually create anything cool
To be fair, this process was already well underway. I wonder how many of the Outraged think nothing of using SatNav for directions, Calculators for maths or Autocorrect for spelling. Any external augmentation has the downside of atrophying innate ability.
Map errors are due to human error and can be documented and corrected. The core function of AI is to produce stuff that looks real, but has no way or knowing if it is. That's literally what the tool does.
Early on in automated map directions, there were a ton of errors and people did stupid shit like drive off bridges and into lakes because the machine told them to.
People trusting machines over their own perception is fucking dangerous.
I’m not understanding how a template solves creating policies out of whole cloth. So let’s say I need to draft a policy on use of a personal vehicle for work in one of our overseas offices, and a remote work policy. How does the template + Ctrl+F help me do that
Let’s say all of us individuals stop using AI in all forms tomorrow. Will that solve the problem or will the commercial applications continue to drive the development and use of AI?
This is a very dumb take, ngl. I mean, ChatGPT is a tool. Tools are meant to be used.
I'm pretty sure that 500 years ago, there was some medieval peasant just like you saying: There's no "defensible use case" for a hammer. Do your own work and use your own hands or fuck off.
I'm pretty sure a tool's usefulness depends mostly on its user... I've rarely experienced issues with ChatGPT and its responses. And the times it answers wrong, I notice and change it, since I don't blatantly copy and paste whatever answer it gives me.
My point here is that everything depends on how a user searches and then uses the information given by these tools. You can't just put two rocks against each other and expect fire to happen just with that. But each one has their own opinions, of course.
Sorry, human error. I didn't bother to Google it and fact check the exact timeframe for my answer.
And ChatGPT is, at its core, just electricity and molded silicon. Your point be?
Ah yeah I remember my first hammer that drank hundreds of thousands of gallons of water to make functional & even when I use it properly it has at best a 40% chance of actually hitting the nail I aimed at.
Fuck outta here with this weak ass shit, hammers build shit. LLMs steal and fabricate
honestly insane to me how many of these ai people think they're like, enlightened smart dudes and then come at you with the most dogshit, zero thought argument.
It's so maddening bc they'll be like "I know it's terrible but I use it wokely so it's good!" And like, dog
It's the fruit of a poisoned tree. It ain't good
jean-dominique bauby didn't write the diving bell and the butterfly by blinking his left eye for two months just so some guy named josh could go, "ehhhh, actually expressing my thoughts would cut into my ncis time"
fwiw i have found one (1) use case i feel like i could stand behind. i had a blood test that gave me a red result. I used an LLM to help me find search terms related to that result (would be too expensive to see a doc abt it), then used google scholar for pubmed research around those terms
long story short it was fine and i was glad to have a constellation of "nearest neighbor" search terms to help me track down some relevant scientific literature
Yeah I was torn but it feels like a shitty choice when the first ten results on Google are ad-ridden sites trying to sell me boner pills. I was able to type "what are five medical conditions consistent with x result?" and then go straight to the literature for differential diagnosis
It'd be nice if my healthcare provider--which already provides the results electronically--would provide a little more information but why give it away with the results when you can charge extra I guess
I have to get blood draws every other month to see if my meds are causing organ failure so they've stopped doing the phone call & just give me a "we'll tell you if there's an issue" but I don't like to wait and see bc I don't know how long means I'm fine
Converting dense/deliberately opaque ToS or government documents into Easy Read for use in supported decision making with intellectually or learning disabled people.
Currently, most docs aren’t made in easy read by publishers.
Please don’t dismiss how much AI is already improving disabled lives
I get only a few mins- 1 hr of “cognitive work” before I cannot function due to my disability. Bcuz of AI, I can once again simplify & problem solve important decisions affecting my medical care, finances, & understand/create meaningful communications by myself. Empowering
Whether that's your intention or not, it's putting the text you give it into a mathematical blender and spitting out an output that has words that it thinks appear together most often.
I’m fully aware of that, and capable of discarding or editing “AI’s word salad” output. Again, I think false assumptions about my capabilities and how I have used AI have been made, leading some people to panic unnecessarily here lol
I’m confident I know myself and my abilities, and I don’t relinquish autonomy nor deductive reasoning skills. I’m not using it in a way that puts me at risk. Everyone calm down
Ya no. I didn’t say either of those things. I maintain highly functional cognitive abilities. I just can’t sustain using them for some activities beyond certain time spans. Like a sprinter who collapses after a race: they still maintain the skill to run very fast, but not again until after they rest
hi, I'm disabled too and I am very afraid relying on "AI" is going to end very badly for you. Your community/state has organizations that provide this help from real experts.
do you live in a community or state? I hope I'm not imagining that part. Because there are several organizations that provide free legal and non-legal support. You're not alone.
1. Defining what intellectually disabled is
2. Defining what learning disabled is
3. Providing real case studies where this has helped people in category 1 or 2, significantly and en masse?
Using an AI chatbot to rewrite complex legal documents and contracts for learning disabled people is unethical behavior.
I agree that we need to have processes to guarantee accessibility, but an LLM is not capable of interpreting legal documents consistently and accurately.
Buddy, signing a government document that you didn't read and only had summarized by a program that regularly produces bad information is an extremely bad idea.
And who is making the design decisions that guide such LLMs towards which parts to cut, which to elide, which to retain? There are very few true synonyms - every word has a slightly nuanced meaning - and if you are relying on an AI model to summarise legal documents you might easily get in trouble.
hi I'm disabled and you can gargle my nuts. if I see one more person using me as an ethical use case for the stupid Theft Machine that steals my work and employment out from under me I am going to pop out of one of your air vents
For end users, yes. Not for NGOs already doing this work that could do more proofreading than they can currently write from scratch.
But if you’d prefer: Be My AI uses ChatGPT so I can ask stuff like “has my menstrual cup leaked onto my pants” without having to call a random volunteer.
In regards to your last sentence: wtf. As a be my eyes volunteer theres soooo many of us, just fucking call through that instead of relying on killing the planet robot?
isn't Be My AI the thing Be My Eyes rebranded to when it kept getting things super, dangerously, wrong and then the ghouls running it went 'uh we'll call it a disability care service so people will feel more awkward saying it sucks' and now they're using rando disabled people as test subjects?
personally I'd be embarrassed if I brought that up as some slam dunk because I'd consider that one of the most predatory and cruel uses of chatgpt bullshit yet
yeah it's just a bullshit dodge by the people that made the bullshit machine and anyone falling for it or playing into it when there's better tools is a rube at best
also there should be no use of generative AI for any legal functions at all - it wouldn't hold up in a court to suggest "i had my AI chatbot reinterpret this for me and i adhered to the rules the chatbot laid out" so do not do that
"criticizing intellectual burrito taxis is ableism, actually"
Having people with cognitive disabilities use spicy autocomplete instead of human assistance on legally binding documents is both incredibly stupid and profoundly insulting.
The only remotely defensible use case I've seen is for a friend of mine who struggles with texting and writing due to a traumatic brain injury. And even then text to speech is usually enough for me to understand her without AI assistance
I’ve been writing technical process documentation and training for 20 years. I work with very smart people who struggle to communicate effectively in written form. If these tools can help them, why not?
god this is the funniest honeypot i've ever posted. i thought surely no one will read this and start angry crying in my mentions so i know who to block and it worked so fucking well
One of the miracles of social media is how "this is bad and if you do it you're bad" will draw "not me though! Haha I do it but I'm good right? Haha pat me on the head!!!" Like moths to a flame
people see "big account" and i guess decide that a) everything you say is 100% literal and should be treated as such and b) any semblance of being nice is out the window
"Well I only use it as a tool to help me work faster, and it really helps me on the day to day. But I agree, letting it do all your work for you is bad"
What do you even say to people like this at this point lmao
I mean, it's easier to just type "road" into a search engine and the first suggestion that comes up is "road synonym," and then you click it and it gives you 50 online thesauruses. You don't even have to click; it's right in the search results. They used AI to make something extremely easy harder.
I did hear that the biggest online Thesaurus, https://thesaurus.com added AI and completely broke itself because chatgpt doesn't know the difference between a synonym and an antonym so it just shows a bunch of words it thinks might be related
the problem is we just can't trust search engines anymore
This doesn't work at all, these people are morons. I am incredibly stupid and UNFATHOMABLY lazy and I still get my shit done and produce good work product without the Plagiarism Machine
Was thoroughly bewildered by my friend disagreeing with me at the weekend that a guy I read about getting ChatGPT to write in the anniversary card to his wife for him was completely impersonal - "Some people might not be very good at writing and might need a tool like that" ???
For anyone taking notes, struggling through it because you care but doing your best anyway even if it's obviously flawed is way cooler than giving up at the first minor obstacle.
And you won't ever get good at writing if you don't practice the skill
I would agree except that every year I see my dad give my mom a Hallmark card for their anniversary with a very sincere-sounding but completely pre-written message. Mom always tears up as if Dad wrote it himself. He could never write anything as sincere-sounding himself. Somehow works for them
Fair point. I guess there is something to your dad being the one who chose that particular card with that particular message, and an actual human did write that message.
Exactly, it's the thought that counts... And as far as I'm concerned, pretending you wrote a loving message when you actually just asked Super Autocorrect to poop one out for you is much more thoughtless than signing your name below a prewritten message
The worst ones are the "i asked chatgpt to write something for me and i think you'll be interested" and spoiler, nobody is. If we were interested in chatgpt's output we'd just use the chatbot ourselves
A Redditor will “helpfully” reply with an incorrect AI answer, and then the next generation of AI will be trained on the AI answers that were wrong the first time and get even worse, and then…
That's what your brain is for. If you use AI, they aren't 'your thoughts' any more, because you've outsourced the decision-making about classification, grey areas, etc.
Again, for the most part, i don't disagree, but it can be a place to start. Don't copy the prompt, but you can allow it to turn your brain on. For example, if AI uses a word in a prompt that reminds you of another word more fitting in writing, then great. It's more of a responsibility tool.
You can, but look, all I'm saying is it's a convenient tool. One side of the argument. I also think that AI has been a hindrance in many ways, especially in education, but due to people's reliance on it rather than viewing it as a tool. That and lack of oversight. It isn't inherently good nor bad.
someone on reddit asked chatgpt for pickup lines and was genuinely impressed by this. imagining teens on TikTok flirting like robots is fucking hilarious
I know someone who wrote their wedding vows with ChatGPT and I have stowed it away in my heart for future blackmail material if I ever need it. the most loser shit I have ever encountered in real life
fully cementing my idea that ai fans fundamentally do not understand art. the point of wedding vows and love letters is not to be the most technically impressive or efficient or use the most flowery language. the point is it came straight from your heart
While there are some legitimate uses for some forms of ML, ChatGPT is not one of them, it’s an expensive toy at best and a crutch you don’t really need at worst.
When I say good uses I mean more rapid discoveries in climate science, medications, etc. these are not generative AI.
There've been a few citation-based LLM's (e.g., Scite) released in recent years that help with academic research. They give different results than typical search engines, and they directly cite (w/ working DOI links) their sources
Of course, one still has to check if the sources are used accurately
I'm saying that sometimes it will reference a *real study* but misinterpret what the study says. For example, I was looking for studies that *supported* contact theory as a means of prejudice reduction, but one of the sources it gave me did not find statistical significance for such.
It's good. I've worked on projects that have saved lives, using heat monitors on train tracks to determine when bearings would overheat and the train would derail.
There are others I've worked on, but it all gets lumped together as one thing.
All genAI is, and i say this as a poet who has written many well-regarded poems from a crude machine-learning opt-in bot that used to tweet and make amazing prompts, with buy-in from the creator, who took him offline because of genAI, and am collecting the poems i wrote with him about the experience.
they're all "after notaleptic" and can be found everywhere from Nightmare Magazine to Utopia Science Fiction, and the editors always knew, because it wasn't theft but attribution matters.
I’ll never forget the time I got into an argument with someone about this and they said it was ableist to not allow gpt to be used in a writing competition
Someone who I think of as ordinarily very intelligent confessed to me last night that she has been using it to help mediate a disagreement with friends, and I… did not control my face well.
Ive never ever ever used it or wanted to, but now, for the first time, your friend has sparked my interest!
(for fun & curiosity, wtf could a machine tell me on such a subject???)
As opposed to controlled and censored by people far nearer who have ALSO shown themselves to have nefarious purposes contrary to our best interests, but speak only English?
I used to think cover letters were acceptable. Then I realized there were tons of templates for those online and find+replace is much better than ChatGPT.
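For what it's worth, the template-plus-find-and-replace approach is trivially scriptable too. A minimal sketch in Python using the standard library's `string.Template` (the template text and the placeholder names `company`, `role`, and `skill` are made up for illustration):

```python
from string import Template

# A hypothetical cover-letter template; $-placeholders mark the spots to fill in.
template = Template(
    "Dear $company,\n"
    "I am excited to apply for the $role position. "
    "My background in $skill makes me a strong fit.\n"
)

# substitute() raises KeyError if any placeholder is left unfilled,
# so you can't accidentally send a letter with a blank in it.
letter = template.substitute(company="Acme Corp", role="Data Analyst", skill="SQL")
print(letter)
```

Same outcome as find-and-replace in a word processor, with no chatbot involved.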
"if i find a way to use it ethically, all effects of ip theft, financial and environmental harm vanish i'll bet!"-ass takes from some folks lately. weak.
Don’t forget about all those lazy f***s that use a calculator for all their maths! If you do the research, you’ll find the launch of the TI-30 is really what brought us to the broken state we find ourselves today.
It’s a comment regarding the resistance to the adoption of new technologies. It has nothing to do with ChatGPT’s ability/inability to properly execute mathematical problems.
but chatgpt can't do anything useful, you gave the example of a useful product that also doesn't steal other people's work while wasting energy and giving false results.
"i've invented a new technology that when you press the button smashes your hand with a big mallet. if you ask why you would want that or why your boss is going to make using it mandatory you hate new technology actually"
I'm at the point where i think a butlerian jihad is the only option, if a computer starts asking me to let it answer questions for me or plan things for me we should destroy it immediately.
It's extra irritating that it even got advertised as a math solver, because we've had computer algebra systems for many decades and they actually work. Wolfram Alpha even takes natural language input pretty well, and it was released in 2009.
A calculator uses a fixed set of logic to ensure that it will always give the same results based on the same inputs. A calculator follows strict rules and thus always provides the result it is designed to provide.
ChatGPT is designed to provide something that statistically speaking looks okish.
ChatGPT has a fuckin 50/50 rate of success doing basic math because it's not a calculator made by experts for a specific task but rather a robot that makes up bullshit for stupid people
I have found it surprisingly useful, however, when I have a small code fragment that's not working and I can't work out why — it's like waving over another developer and asking them, which sometimes you just need to do
I think this is a good comparison, because if you wave another developer over, you're not outsourcing your brain to them, or asking them to do your work for you. You don't assume they are 100% correct, you're capable of human reasoning even when using an external tool to help yourself along
They made a widget that massively streamlined coding, realized that “reducing the number of coding jobs” was going to be as popular in SV as hot dog piss, and pivoted to saying it worked for every other job and societal function instead.
I feel like this is because of a group of people who can't imagine anything is harder or more important than coding, and therefore AI must be useful for everything else. which is probably a really good argument for the importance of liberal arts education...
Building a creative skillset, and recognising that inspiration will strike when and where it will and capturing that, serves one far better long term than developing shitty habits like “I’ll just ask the bullshit generator”
I mean yeah, but I still think the whole idea that "AI is always bad and if you use it, you are bad" feels like a weird take. AI is a tool, like anything else. The main problem with AI is the risk it serves to the livelihood of many people (risk of losing their jobs and being replaced by AI).
Companies will use it to replace people's jobs whether you use it or not. AI wasn't made for us, it was made for them. They don't really give a fuck if you use it or not, because they will find ways to cut costs by using it anyways. Again, the problem isn't AI, it's the system we live in.
I haven't used ChatGPT recently, but in the past I've used it to explain some engineering concept I'm trying to wrap my head around. One thing it's (sometimes) good at is explaining complex academic things in ways I can understand. But I'm happy to go to a different place for this information.
i watch some korean vloggers and one of them said she used chatgpt to come up with a recipe and i was like "girl noooooooo". a person should be able to cook without having to use that shit! just wing it!
the thing is i don't like to wing it but there are soooo many recipes out there. written by humans. If you're thinking about a food combination the recipe probably exists.
and the thing about it is she already knew how to cook. it wasn't like she was going in totally blind. she coulda just used any search engine or social media instead of asking chatgpt.
Ugh like a year ago, 2 channels I followed made chatgpt recipes and I had to stop following for a while. Thankfully it seems like the audience agreed against doing that because they both got backlash. One has been basically not active after that and the other moved on begrudgingly
And OpenAI will simply scrape the copyrighted story around the recipe and use it to train their LLMs without ever acknowledging let alone paying the original author
People constantly act like the times before we used ChatGPT for writing were archaic. It’s insane, given that they miss the point of these assignments or why they’re handed out to you in the first place. Papers aren’t supposed to be “busywork,” they’re meant to teach you a life skill used in the workforce. That, along with teaching you proper research skills and honing your discipline. Hell, some are even there to hone your ability to write. So many people don’t realize ChatGPT outlining your paper or writing your paper for you is cheating, and that
As universities have been reduced to incredibly expensive diploma mills, as the consequences of failure are life long debt peonage, I support anything kids do these days to ensure they get a diploma. The university system destroyed academic integrity when these kids were in diapers, they're just
Even with the cost of university, it doesn't make using ChatGPT right. If you're paying to go to school, you are expected to do the work yourself. To actually learn. Otherwise you can skip out on college and live your life just fine. Especially if you go into a trade. If you can't
you "yeah and I didn't even have to pay to do it"
There's so much software we use daily that does work for us, like calculators, autocorrect, translation programs like Google Translate etc.
I don't wanna "use my brain" to figure out what the best route is, I just type in the location, select a suggestion I like, and go.
On your point about Google Translate, I agree, but as you said yourself, there are use cases for it, but when using it for important or complex stuff, it can be inadequate or even risky.
I don't want AI shoved into all kinds of software, I hate 100% of art-related uses of AI, and about 98% of any other uses of it.
My point with the map example was specifically in response to the sentiment, that
Now AI "doing the dishes" is suddenly also an issue?
AI is in its infancy and will be as impactful as the wheel. The solution to climate change is a clean energy transition, not going back to smashing rocks together
And *yes* many criticisms of ICEs were correct. That's my whole point.
The solution wasn't to ban engines 😂
I also think that the entire approach to AI (and even in the past, with ICE) in the workplace has a lot of drawbacks in that, it doesn't benefit workers, just shareholders.
Sure, a lot of stuff got automated, and tons more
open sources though? time will tell
It explains Ed Zitron's "rot economy", Cory Doctorow's "enshittification", and so much more.
All the ridiculous comparisons to shit like the internet, photography, and fuckin typewriters give it away. LLMs are not like any of those things at all
Generative AI can be helpful, but it's just not worth it
GPT mangles the meaning out of things. Even if it's repeating something verbatim that someone else wrote, it strips the context of who wrote it, when, and where.
Folks have done this for centuries without relying on Plagiarism software
B) You claim to be getting valuable feedback from a goddamn *autocomplete* with ideas above its (plagiarism-derived) station
C) See the environmental/computing impact
We can’t entirely eliminate the harm our lives cause, but we shouldn’t be flippant about it either.
However your admonishment does come across as flippant.
"There's no ethical consumption under capitalism" isn't a carte blanche, but it is still useful to remember.
We've long known that chiding consumers isn't the solution to climate change.
When taking part in labor actions, successful organizers didn’t just target the producers but also the consumers of the products that harm the working class. We should do more, but it’s a start.
That quote comes from tumblr, not theory or tested practice.
Because at least they *know* what they're doing is wrong
Idk man. I work in human services. I’ve seen how it can help and hurt.
which explains how one person got this ad for Domino's, and apple intelligence interpreted it as "trade in lemons for free pizza":
Performance reviews are far more personal and generally in-person. What's the point of chatgpt for interpersonal conversations?
Anything I write in a system on performance is just a written record of a conversation.
Their prompt (they told me) was “write a performance review for a sales leader whose team met their assigned metrics, but struggles in interpersonal relationships and developing the sales managers underneath them.”
(I’m going off memory but that was the gist)
What the fuck is wrong with these people.
A lot of people cite their central beef with AI as being the energy usage. Scrolling past does not magically undo the energy expenditure from the search being conducted
Until then, enjoy these:
https://github.com/deanpeters/product-manager-prompts
"Actually bitch we CAN'T all agree on that"
It’s that simple.
I have never read anything and said "ya know what this needs? More filler!"
I have great disdain for the hour a week I have to spend writing up what I did for my boss just to shove it into a robot to feed to his boss and his boss's boss to not actually look at but say they did.
✨metrics✨
it just absolutely drives me up a wall that a lot of people correctly clock the existence of robotic and menial tasks, and then cleverly use robots for Not Those
Until I go back to it two years later and I'm like "a dog wrote this."
Like I hate this fucking future as much as anyone but when troubleshooting a complicated system with a bunch of gatekept documentation… shit has been game changing. Takes 6 hours of scrolling forums and making dents in the wall to “oh shit. Yeah that worked”
The solution did not work, as it required a button the copier did not have, as ChatGPT is actually dogshit at this.
I spent half a day troubleshooting an Avaya IP Office to find out the provided SOP from the manufacturer had issues in it, and ChatGPT was able to find an amended SOP posted on a forum 10+ years ago that never came up on Google once….
Thanks for your anecdotal sample of one though :>
It's soooooooooo bad
Like wtf? Do you want her to accidentally kill people?
functioning ones & ones that can ask a computer how to cross the road
But for cheating, it's a zero.
Fundamentally, what IS GenAI?
i have normal opinions and am engaging in good faith
Do you drive? Eat meat?
It won't randomly hallucinate a wrong answer without telling you.
Your own brain, or a machine?
This "A.I" is just a step
It can actually *do* what it's supposed to do.
"All technology bad?" Is a dumb argument, we're saying that THIS technology is bad and stupid.
Also GPS was SO BAD when it first released
We improve on stuff, but it's insanely slow in this system and like you said
Damaging
Sometimes you have to see past that though, and the future of automation is inevitable
Is that not embarrassing for you? Do you need AI to brush your teeth in the morning
why are you dressed like a date rapist from 2009?
If there's anything I'm unfamiliar with, I prefer to learn more prior to that conversation.
And I almost always need to help when it's certain family members.
I teach research methods so I can look it up
Currently, most docs aren’t made in easy read by publishers.
Please don’t dismiss how much AI is already improving disabled lives
I get only a few mins- 1 hr of “cognitive work” before I cannot function due to my disability. Bcuz of AI, I can once again simplify & problem solve important decisions affecting my medical care, finances, & understand/create meaningful communications by myself. Empowering
1. Defining what intellectually disabled is
2. Defining what learning disabled is
3. Providing real case studies where this has helped people in category 1 or 2, significantly and en masse?
I agree that we need to have processes to guarantee accessibility, but an LLM is not capable of understanding how to interpret legal documents consistently and accurately.
But if you’d prefer: Be My AI uses ChatGPT so I can ask stuff like “has my menstrual cup leaked onto my pants” without having to call a random volunteer.
I'm not going to shed tears for them lol
Having people with cognitive disabilities use spicy autocomplete instead of human assistance on legally binding documents is both incredibly stupid and profoundly insulting.
Signed,
A person with an intellectual disability
I’ve been writing technical process documentation and training for 20 years. I work with very smart people who struggle to communicate effectively in written form. If these tools can help them, why not?
Walk into kitchen, sing song: guess who's fighting on the internet again
Him: -unbothered- “Hopefully not on the company account.”
Spouse, certified white man: when are you not?
What do you even say to people like this at this point lmao
"But I use it to improve my writing, yesterday I needed another word for road and GPT told me to use lane!"
Ok you dork ass loser lmfao
the problem is we just can't trust search engines anymore
ChatGPT I’ll use only to design coursework for myself because school is expensive and anyone I can ask just says “ask chatGPT” and I wanna scream
Me: I’m going to scare you with this
And you won't ever get good at writing if you don't practice the skill
Absolutely incredible
When I say good uses I mean more rapid discoveries in climate science, medications, etc. These are not generative AI.
Of course, one still has to check if the sources are used accurately
There are others I've worked on, but it all gets lumped together as one thing.
Not all AI is theft.
I've never ever ever used it or wanted to, but now, for the first time, your friend has sparked my interest!
(for fun & curiosity, wtf could a machine tell me on such a subject???)
Since it's open source, maybe someone can use it to make a non-government controlled AI? Mayhaps can @mcuban.bsky.social fund the shit out of that?
I want to know WHYYYYY
"Just tryna keep you consistent, maaan. You're not tracking."
Fucking debate bros and their constant yap to maintain the delusion that they're intelligent. Because being smart > being in touch with reality.
Humans > AI, always.
Then it was kids cheating on homework
Then it was teachers checking on whether kids cheated on homework.
But it all stopped being funny when uneducated management started challenging experts and engineers with its inaccurate bullshit. It's anti-science
Fascinating
"But this one doesn't work"
"... And?"
ChatGPT is designed to provide something that, statistically speaking, looks OK-ish.
The other shits out something that probably, but not reliably, resembles what you're looking for
I have found it surprisingly useful, however, when I have a small code fragment that's not working and I can't work out why — it's like waving over another developer and asking them, which sometimes you just need to do