A subheading "Does AI understand (X)...?" always has the answer "No", because current AI doesn't understand anything. It's a brilliant, statistically-informed guessing tool.
Fascinating. The interesting thing is the bias of different models toward different cultural narratives. Sounds like building out the dataset to get to an expert level is the next best step.
I think we need some hilarious examples to illustrate the point.
Don’t like the “anti-woke” political skew of this, but it conveys the point nonetheless.
Yet, that's precisely how it's marketed, received, and utilized by the end user. So people with expertise in various fields SHOULD remind the public at every available opportunity that "AI is garbage at my expert area".
This is surely the problem - anyone with even a cursory knowledge of a subject area knows the results being produced by AI tech look superficially authoritative while actually being fundamentally and deeply flawed, yet that’s what general users are expecting (or being prompted) to use it for.
The recurring example seems to be that AI can't retain discrete authoritative sources (e.g. specific legal precedent in case law, a report, or a medical study/research paper) without disembowelling the contents and jumbling them into incoherent spaghetti. It looks like food, but you can't actually digest it.
You say that, but I'm hearing from many people in education that students are using ChatGPT for their history coursework and are defiantly refusing to accept from teachers and lecturers that the information they're getting is wrong.
Yeah, it's a "produce a statistically-probable textual response/output" machine, not a "say true things" machine. There's overlap in the outputs, but "is" and "isn't" are quite close, character-wise, so it's a bit messy and limited in terms of "accuracy".
As things like ChatGPT stand, accuracy is luck rather than actually knowing anything. That will improve as various reasoning tech is added to the big LLMs, but even then, it's not knowledge in the way humans have it. This is a really good explainer on how the underlying tech works.
My twins are the only ones in their class who refuse to use it. They worry that they won’t learn higher order/critical thinking skills if they delegate their work to it. The quality of discourse in their generation is pretty low. We are literally making our kids less intelligent 🤯
They are correct! We wouldn't want to give calculators to 6 year olds instead of making them learn arithmetic, or give spell check to 7 year olds instead of teaching them how to spell.
Can't tell if you're being sardonic or not! 😂 I actually think these are good arguments for AI, but not at the expense of the development of human intelligence. What we want is the ability for higher-order analysis. Fact checks online are fine, but NOW we don't know whether we can trust the source.
No, I'm totally serious! Why would we NOT want to teach children arithmetic or spelling? Comp sci students similarly should learn coding, not just how to ask GPT. If you don't know what is correct, or how to arrive at it through human intelligence, you can't identify which GPT output is garbage and which isn't.
It's being bigged up as a knowledge engine by people who haven't bothered getting to grips with even the basics of what it is. Google isn't helping by serving up garbage in its search results.
Teachers should ask for cited sources, which I can never get Co-pilot to give me. I don't know if ChatGPT cites sources, but without that, and without being a subject matter expert, I have no way to judge the quality of the output, so it's worthless.
Had an absolute doozy yesterday when ChatGPT invented a whole-ass quote. Never use generative AI for history, journalism, science, government policy or, in fact, anything that relies on facts.
Jesus, that's even worse than Co-pilot giving me a long-winded explanation about how everything it says comes from a robust online consensus. Bitch please, I've seen StackOverflow arguments over a question last for months over pedantic detail.
I had to explain to my son, who had difficulty believing it, that the answer he had "concocted" for an economics task had not actually answered the question, and that the mark he got reflected it. The future is bright?
We are raising a generation of stupid. These kids will never learn how to read a book, use a library or an index, write an original research paper or essay, or even think critically. Deeply troubling.
Pity they don't give any clues as to how it fails, or what its poor analysis looks like. That, of course, is because the journalists themselves have no interest in history, only in AI.
I think it struggles any time there's a need for quality control of information. You ask AI a history question & it doesn't know it is using film & literature as a basis for an answer. It also can't tell the difference between a professional historian & a mad blog post with zero sourcing.
This is my big fear about AI. They’re closing public libraries and other sources of trusted, properly researched information. People are lazier about checking facts, they just assume something they read online is correct. They share a meme of something Orwell never said instead of reading his books.
AI doesn't "struggle". It doesn't "interpret". It calculates what the next word is statistically most likely to be, given the user's input and the vast and unchecked corpus of internet crap it has been trained on.
Or, AI is a bit shit.
https://www.youtube.com/watch?v=sImKMN4UOoE&t=11s
It turns out there are no cheap shortcuts in life, just hideously expensive ones. Who knew?
LLMs are statistical models of language. They have no "knowledge"; they just produce the most likely next word.
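For anyone wondering what "produces the most likely next word" actually means, here's a minimal toy sketch in Python: a word-level bigram model over a made-up corpus. Real LLMs are transformer neural networks over subword tokens with billions of parameters, so this only illustrates the "statistically most probable next word" idea, not how any actual model is built.

```python
# Toy illustration of next-word prediction (NOT how real LLMs work).
# The corpus and function names here are invented for the example.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent follower of `word`."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "."

# Generate text by repeatedly taking the most probable next word.
word, output = "the", ["the"]
for _ in range(8):
    word = most_likely_next(word)
    output.append(word)
print(" ".join(output))
```

The output reads superficially fluent but just loops through probable phrases; nothing in the counts "knows" anything about cats or mats, let alone whether a sentence is true, which is exactly the point being made above.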
I do wonder how many Hitler's Diaries (or similar) we will see from AI sources in the future.