If the information is easily verifiable, then it can be used for this purpose. Coding is one place this works, and asking specific questions about obscure topics can work as long as you verify after.
It is not, I can assure you, a librarian. If you want a librarian, there are librarians in every city and town. Talk to one of us to do the same work with actual understanding of essay structure and more helpful sources. We're happy to help.
Um it’s certainly a search engine, and it gives better info than Google does. I use it for recipes and I have made hundreds of great recipes just trusting it blindly.
I use it to help me write better posts, emails and stories. At times it gives me arbitrary ideas I had not thought of. It’s a great tool. I’m sure the recipes are delicious.
I literally gave the thing the full lyrics to Edge's WWE theme (don't judge) and asked it to tell me the song's name, who made it, and what the lyrics meant.
ChatGPT gave me the wrong title, wrong band, and got some of the lyrics wrong when defending its choice.
Funnily enough, I do believe one of the answers was Breaking Benjamin. That at least was a real band. The song title and lyrics it came up with, however, don't exist.
I asked it what 1+1 is and I'm not fucking lying, it said "according to several articles, 1+1 is 3". Idk why everynyan is so scared about AI taking over the world; I'm scared of people using it.
Reading these answers makes me angry. One of my pet peeves is people adding useless words into conversation/speech, corporate gobbledygook that serves no purpose other than to extend sentences (like customer service reps on a script adding your name to fill air)... looks like ChatGPT has that down. Ugh.
Nobody wants it, dude. Nobody cares what model. It's a scam and energy-wasteful inaccurate piece of junk that the broligarchy is trying to force upon us, whether we want it or not.
Oh, right. You're right, everybody else is wrong. We don't have a choice. You're the vanguard of the future. Us lesser beings should just shut up and let this crap-theft-"technology" be used, despite the cost to us and others. Suuuuuuure.
AI gave me an example of a case study when I was writing something up. I couldn’t find any reference to it. Asked it about it and it admitted to making it up.
When I was marking assignments last year, it was easy to pick the ones written by ChatGPT: no in-text citations, and reference lists full of sources that were completely made up - a plausible title and publisher, with lists of randomly chosen names and an equally random DOI.
An innovation which may either create more work for humans, or so dilute the value of professions, standards, predictability, and compassion that it will constitute throwing sand in the gears of industry, growth, progress, and even wealth accumulation. But wisely used in controlled settings…
Models providing provenance for information is becoming more of a thing. Advanced models can still hallucinate. Keen judgement is wise — we need to treat it like professors did the internet broadly when I was in college (class of 2000).
Absolutely. LLMs are tools, not oracles; they don't know when they're making a mistake.
They can teach you how to think critically and question their answers, they can write out their process and discuss how they arrived there, and rate their confidence level based on sources.
You have to know about their limitations and what they can do, and you have to have a basic idea of how the world works in the first place. You have to know which answers they can fudge. Treat them as a flawed learning partner, not an all-knowing oracle. Ask multiple models; use multiple web sources.
Ngl tho since Google started their Gemini crap I've found 'searching' on ChatGPT and then cross referencing everything myself the only reliable search method.
That, or adding 'reddit' at the end of every Google search.
What about when the AI can't find citations & makes them up? Now you just have to go check all the citations are real, too. You've solved no problems & given yourself extra work.
Or you could just, you know, do the work yourself in the first place, & not use the plagiarism machine. Your brain will thank you, so will the planet, & the end result will be better.
One of my favourite pastimes is to ask ChatGPT a question and then tell it the answer's incorrect, even if it's 100% correct. It's a fun way to pass an hour or so.
This is true, we found. In one instance, it dismissed certain KNOWN atrocities as unfounded "rumors," most likely to downplay the damaging effects on those affected.
Not much research, no. A little bit, like asking the clothesline question and seeing how wrong it was. I was avoiding ChatGPT based mostly on instinct, intuition, gut feeling, whatever you want to call it. Even if I do start using it, which I might, I will likely do my own research.
Google isn't a source. It only returns results that include the words you searched for based on their algorithm. The problem is that the people I'm talking about would rely on that top Google search result the same way they would on whatever ChatGPT spits out without verifying anything.
Your concerns are valid. However, the issue is that plagiarism has existed long before AI. No one here is glorifying ChatGPT, and it's important to avoid either glorifying or vilifying new technologies.
Plagiarism is rightly called out. People use ChatGPT (or Midjourney or whatever flavour-of-the-month "AI" nonsense is in favour) & claim that they made the thing. The "AI" is absolutely, fundamentally a problem.
😉 My apologies, I thought you knew the definition of plagiarism. Plagiarism is the act of copying someone else's work. The "absolutely fundamental" problem is that people engage in this practice. Bonne soirée!
People really need to stop seeing LLMs as database parsing systems and see them as the secondary side character assistants they are. Sometimes they have good ideas, but their entire purpose is to assist you on what you're supposed to already know, not inform you on what you don't.
I’m one of those awful "my mind is nothing but news and politics, it's my job and obsession" people & I routinely get replies that are just straight-up LLM screenshots presented as if they were gospel truth w/ zero reflection.
I hear a lot of people quoting the Google AI summaries as facts too. That should be the most obvious example that it's not facts. It's just a summary of the results that were found.
I was looking for info on an upcoming movie, and Google AI gave me a bunch of plot and casting details that turned out to be from a fake trailer. A regular search would let me identify and weed out results from sketchy sources, but AI summarizes results with zero context.
I just saw a post of Google's AI describing a small Scottish animal called a Haggis. These systems are not smart - they're vast, and can spit out regurgitated info - but they are not intelligent and should not be used to replace using our brains.
Exactly. They are modeled to emulate a human response; yet while we are more than capable of disbelieving an actual human response, because it's a machine the trust is implicit.
We assume the machine knows, and in a sense it does, but it doesn't understand articulation or rhetoric, and is therefore wrong half the time.
Depends on what you’re doing. Copilot is really handy, I only use it to quickly insert code I would have written myself. Code is predictable enough that the computer can correctly infer what I want in many cases.
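To make the commenter's point about predictability concrete, here's a toy sketch (my own example, not from the thread): once you've written one direction of a conversion, the mirrored function is so mechanical that an autocomplete can usually infer it verbatim.

```python
# Toy example: after writing c_to_f, the mirrored f_to_c is the
# kind of predictable boilerplate an autocomplete can infer.

def c_to_f(celsius: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit: float) -> float:
    """Convert Fahrenheit to Celsius."""
    return (fahrenheit - 32) * 5 / 9

print(c_to_f(100.0))  # 212.0
print(f_to_c(212.0))  # 100.0
```

The completion is low-risk precisely because there's only one sensible way to write it, which is very different from asking a model for facts.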
I don't really want to waste either of our time debating the semantics of what, by definition, counts as an idea, when we both clearly already agree that using AI for research and fact-finding is bad.
It does provide citations in the form of links, which you can (and must) confirm with Google. I don't know how anybody could take what it says as fact without checking!
I'm sure. But for each invisible character, it has to remove that character, which takes up tokens. It can adapt, of course; it won't last forever, but it will work for now. The goal is to keep adapting.
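For context, "invisible characters" here presumably means zero-width Unicode code points sometimes hidden in text. A minimal sketch of stripping them (the particular code-point list is my assumption, not from the thread, and is not exhaustive):

```python
# Minimal sketch: strip common zero-width Unicode characters.
# The set below is an assumption about which "invisible"
# code points are meant; it is not exhaustive.
ZERO_WIDTH = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
}

def strip_invisible(text: str) -> str:
    """Return text with the zero-width characters above removed."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

print(strip_invisible("he\u200bllo"))  # hello
```

Each stripped character still had to be read (tokenized) first, which is the token cost the commenter is pointing at.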
Comments
I agree, though they are not marketed that way.
And other reasons I find it hilarious that so many people let it “help” with their jobs now.
It got the song wrong three times, each time with a new flaw. I don't remember the other two bands it named.
https://medium.com/@nturkewitz_56674/copyright-and-artificial-intelligence-an-exceptional-tale-60bdd77a8f13
GPT 4o and 4.5 have the ability to search the web. I’m curious to know which model you’re running.
It's a plagiarism machine on its best day, and not even a good one.
Versioning makes no difference
If you don't understand something, then don't use it, but don't go making shit up.
People are dumb.
I *think* the answer it gave me is correct. 😮
And this is as dumb as it will ever be.
Google, however, sells advertising.
It's brain poison.
Top panel: a city bus driver, driving past a factory, says “Don’t make me tap the sign.”
Bottom panel: close-up of the driver's hand as he taps the sign, which reads "LLMS ARE HALLUCINATION MACHINES WHICH OCCASIONALLY OVERLAP REALITY"
+1 for anecdotal “a lot of people are.”
Google’s search AI isn’t helping.
Trust but verify... everything.
https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/