It isn't useless for teaching basic technical information about things like programming or math concepts. But it's terrible when you actually ask it to DO it.
Nate Silver is just dumb.
He's obviously never tried to find technical answers, solve a problem, or get references for the infantile responses that come up. There are SO many things AI can't even address or speak about.
Yes, doing your thinking yourself is definitely the lazy person’s way out, you utter fuckwit. Though to be fair Nate may be getting it to write his tweets for him, and it’s just blowing its own trumpet?
Sure: The answer to all questions is “no.” Doesn’t matter what the question is, or how many questions you ask per second. I’ll be right about as often as AI, I believe.
I'm sure there are uses for AI. I have little doubt LLMs could be quite useful in the right circumstances. But it's not really my job to figure out how, and I certainly have no faith in how they are being marketed. Right now, I wouldn't touch anything "AI" with a 39.5-foot pole.
it is guaranteed to invent bullshit. It will mislead you every single time if it has to do anything more complicated than look up a fact it ripped off somewhere.
Trumps "Stargate" 😂 team hasn't funded anything, and OpenAI has serious competition.
Deepseek (Chinese) has developed a model superior to OpenAI (for about 5% of OpenAI's cost), and they released the code to open-source. More's on the way.
Microsoft pulled back AI investment Q4 - Pay Attention.
I want to put it on a shelf next to the argument that employees who won't use generative AI are in fact robbing their employers, so that I can admire them like Hummel figurines.
Nate Silver thinks his job is promoting propaganda. For that, ChatGPT may be ideal.
After they throw enough hogwash in the well, then everybody they hope to influence drinks the same swill.
Ah yes, as all consumer-facing tech innovations that have improved the world, users must be forced to install it and guilted into using it before it can work all of its magic good.
That Microsoft are paying companies to replace both the Menu and Command/Super/Windows key with a single, massive Microsoft Copilot key is going to be an annoyance, straight up losing keyboard utility to some trend that will crash soon.
I think the key is still Menu when you install Linux though
i cannot believe microsoft saw samsung do a whole bixby button and get clowned on for years for it and still decided the exact same button for the exact same use case is the way to go
A magic box? Does he not understand how software works? I mean, this is Nate Silver, so he may genuinely think it's magic. Also, ChatGPT can be anywhere from completely accurate to insanely wrong, all presented with the exact same level of certitude.
"If you can't get a bot connnected to a huge database to write stuff for you or do some surface-level research for you, you are just being lazy" might no be the brilliant, galaxy-brained hot take he thinks it is.
“AI” can’t do surface level research. It’s not a bot connected to a database. It would be more useful if it was. It’s just a random text generator with a probability filter.
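The "random text generator with a probability filter" description can be sketched as a toy illustration. This is a made-up bigram model for demonstration, not anything resembling a production LLM, but the sampling loop is the same basic idea: pick the next word in proportion to how often it has followed the current one.

```python
import random

# Toy "language model": for each word, record which words followed it
# in a tiny corpus. Real LLMs learn billions of weights instead of
# counting pairs, but generation is still next-token sampling.
corpus = "the cat sat on the mat the cat ate the fish".split()
followers = {}
for a, b in zip(corpus, corpus[1:]):
    followers.setdefault(a, []).append(b)

def generate(start, n_words):
    """Sample up to n_words continuations, one token at a time."""
    word, out = start, [start]
    for _ in range(n_words):
        options = followers.get(word)
        if not options:
            break  # dead end: the model has no data for this word
        # the "probability filter": more frequent followers are
        # proportionally more likely to be chosen
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

The output is always locally plausible (every pair of adjacent words occurred somewhere in the corpus) but nothing in the loop checks whether the sentence is true, which is the commenter's point.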
lol, did this dipshit just call me lazy because I can write my own emails and my job requires accuracy?
Like, I’m the person people in our org come to for accurate data. Sorry ChatGPT, even for coding it’s about as much work to rewrite what you quickly spew out, and inaccuracy is a hard no.
Or you could just use a normal search engine that gives you the same information and the source that provided it while using a tiny fraction of the energy.
I'm quite well versed in AI and find it extremely attractive, while being aware of its flaws. It's no secret that people are generally not experimental with technology they're not already comfortable with; it will likely be the next generation that masters use of it.
I went looking for a half-remembered book whose title Google couldn't find. I later learned I was off by one word. A friend laughed and said ChatGPT could have found it.
The Internet was supposed to be a revolutionary invention that would instantly make people much smarter than they were before and guess what: it made the dumb people EVEN DUMBER. They only believe they're "smart" now.
Here is what I predict will happen: The billionaires like Elon and China who will control the AI think they are smarter than the AI and will meddle and fine tune the AI's to give answers they feel are more correct.
Hmmm. The answers here are kind of depressing. At the very least, LLMs are much better replacements for Google without the sponsored links, that maintain some kind of context so you can follow up with a deeper question. They make mistakes (like Google does) but they get a lot right too.
Successful LLM use requires treating it as a junior person. Their work must always be vetted. Look at how many human-written articles, papers, and books are retracted because human fact checkers fail.
My experience is not this (coding, health, travel). You can treat it as a relatively senior person who will make mistakes. It is very dangerous to give it to people who don’t know what they are doing and ask them to use it to craft professional answers. Not so for professionals who want to save time
It’s incredibly hard to gauge the total. I would estimate that I am 2-4 times more efficient with an ai coding partner than without. I am delegating: code reviews, syntax look ups, architectural suggestions, bug finding, writing tests
Absolutely not in my experience. I ask it a coding question. It answers; I am suspicious or don't understand; I ask it for the original docs and it provides them.
Yeah, for coding, solving Excel formula issues, even some tech support, it's pretty good. Not always perfect, but that's fine so long as that's your expectation. My guess is most folks just are enjoying dunking on Nate Silver these days and are sick of marketing everything as "AI"
The advent of Deepseek means there's almost no chance of it going away. The main thing holding it back was the cost. If what this latest model claims is true, we now have cheap AI at our fingertips. This also means headroom to increase the accuracy again at a reasonable price.
Yeah, I hear ya. The prompts of what some people are asking for is a bit alarming, especially when it's medical advice. I've found it to be accurate enough in what I use it for (and for generating quick kids stories for my little kids using their names haha)
At its worst, it's a souped up search engine. The problems with gen ai seem to be linked with the problems of capitalism as it better enables scalability of the status quo and since that favors the wealthy, it can create rampant inequality compounding societal problems.
"If you haven't bothered to figure it out" isn't the whole selling point that it doesn't have to be figured out, though? If it requires long, arcane string of text to get anything useful then it's just programming under a different name.
ChatGPT is great at answering questions, so long as you already know the answer to the question so you can tell if ChatGPT is giving you the right answer or just making something up out of whole cloth.
Or if the answer is rapidly verifiable. I've used it or its kin a few times for random tiny code tasks where I can quickly check if it's working and didn't want or need to dig really deep into the documentation.
To be fair, understanding natural language is what chatbots are literally trained to do. I generally wouldn’t trust them more than like, deepl or google translate’s ai but being able to provide additional context can make them useful when a real human is unavailable.
El Goog's translate is painstakingly curated through freely given community translations (uncredited and unrewarded) that bridge the gap between literal and native translation.
So Google translations are generally very «context aware» because it's primarily lookup, whereas ChatGPT goes off Reddit vibes
By "providing context" I mean being able to literally provide it yourself, like in this example. Google translate and DeepL struggle when the text is imperfect and/or contains typos. ChatGPT is sometimes (and confidently) wrong as well, but in simple cases like this it is genuinely better.
They're not trained to "understand" anything. They're trained to guess what word is more likely to come up next. If someone asks for a translation of something that doesn't have tons of examples for it to learn from, it's going to make shit up.
Yes, that is how every machine-learning based translator works. It's true a machine can't "understand" or "guess", but I don't think arguing about the meaning of these words is constructive in this case. In the end, what makes these AIs both dangerous and useful is their convincing guessing ability.
Most other translation MLs are built and trained specifically for the purpose of translation, not to make sure their output sounds convincing. They're allowed to output nonsense if they don't have enough data to translate something, and that lets you know something's wrong.
I had someone come into my job and swear up and down we sold something we didn’t. When I asked where they got that info, they confidently showed me their ChatGPT conversation…
Right? I am far too busy using my brain and education and doing my work correctly to waste a bunch of time coaxing "utility" out of the mediocrity machine.
Right? I truly try to check myself and my own instincts, which are often either incorrect or at least less than helpful.
But I just want to fucking hit him in the face with a pie, or put him in a headlock until he cries. That can't be the correct response, but it FEELS like the correct response.
That man built an entire career out of being right just once, and now he thinks he has a valuable opinion on everything and for some reason everyone should care. Perfect example of a disposable celebrity
I think AI is largely overhyped but as a sort of Google assistant (asking it to compare a few things that previously would've taken multiple Google searches) it can be very helpful.
the real problem is that in 5 years or so, when it is clear that LLMs are useful, in many diff ways, and not useful, in many ways, no one will recall this post of yours, so you can go on making stupid posts year after year after year, with no reputational damage
If I were a relatively useless person like Nate Silver, who could very easily be entirely replaced by an AI, I don’t think I would be so stoked about it.
It can be useful as a tool but it's def not all knowing or right. If you can't remember something and want it summarized in a certain way and have background knowledge. But like normally it's better to just ask your fellow human or other resources. Certainly not for a job, learning, or actual info
I tried to get my ChatGPT to become my DM for DnD. It does an okay job, but... it forgets all the stuff that came before. Other than that... It's okay.
An interesting use. But you are the magic. The machine just regurgitates stuff from the net. Be brave. Steal scenes from books you read. Mash things up yourself. Use tools, but in the end you are the director of the collaborative dream that is rpg. Take pride.
I'm a lonely old nerd and none of my other old friends play so I've tried solo play with this tool. It passes the time for me a bit, but it is imperfect. And not as fun as a group of friends rolling dice and being silly.
If you know any given response might be only "modestly coherent" how can you trust anything it says without manually fact checking? In which case, why not go directly to the sources of information in the first place, and save both the hassle and the electricity? That's not lazy, it's due diligence.
Nate Silver really turned being more correct about politics than other people one time and acting like an insufferable twit about it online forever afterward into a lucrative career.
polling has been cack recently as you can't reliably poll a random person, which has meant outlier polls with bad methodology can come out closest to target. nate and the lads really weighted a shonky new poll in 2024, which is real stopped clock energy, but you're right. they built careers on that
genAI has some uses but it isn't what business is banking on. There are no financially viable genAI products, even before counting environmental costs. Nvidia's current-gen chips will not fix this if the implementation problems cannot be realistically solved. We may already be at an AI dead end.
"ChatGPT WILL make mistakes. If you are not going to review the output for mistakes, or don't know enough about the content to even identify a mistake if there is one, do not use the output in any professional work. Doing so may make you look like an arse"
Paying people to do things correctly is expensive, maintaining a search engine is expensive. What if the computer just did everything and I just collected a check every month. Is that so much to ask?
Hate to play devil's advocate here, but AI is sometimes better for certain questions that are difficult to word in a way that gets good results from a google search.
Let me be honest: if you think that ChatGPT answers your questions better than searching on the internet, it's because you don't know how to search on the internet.
I certainly do know how to make effective queries in a search engine. And don’t get me wrong, search engines are better for 99/100 questions and I only use ChatGPT like once every two weeks.
Again I’m not saying that ChatGPT is some miracle technology that revolutionizes the digital world. I’m not a tech bro. I’m just saying that AI should be rejected on a rational basis, not out of principle
So what's the problem? You make sure that the information you're looking for is 100% true, much better than trusting a program that tells me that eating stones is healthy and that I should put glue on the pizza
1. You’re wildly underestimating ChatGPT, which is dangerous and could let you be caught off-guard. The examples you’re thinking of are from google’s shitty AI that they pushed on everyone like three months ago (which is already much better).
Idk I've never had a problem wording things in a way to get what I wanted out of a search engine. However, I think I was in high school or junior high when Google was first getting popular, so I've had plenty of practice.
Yeah usually it’s fine. But I also feel like search engines have been getting worse, and not just because of AI - even back in like 2018 before all this stuff took off. Often you’ll search something and it’ll just give you a full page of crappy content farm articles instead of actual relevant things
And then you’ll add “reddit” onto the end of your question. In which case you’ll be trusting the word of some random guy on Reddit which is arguably even worse than ChatGPT. I’m not saying ChatGPT is great, I just think it’s often somewhat useful
Like for example, let’s say you forgot the technical term for a really specific phenomenon. You can describe it, but you can’t exactly paste a really long, rambling description of a specific phenomenon into google and expect it to give you the right answer. But you can do that with ChatGPT
Generative AI doesn't "answer" questions. It generates what it thinks the user wants to see based on the prompt. This is an incredibly foolish way to try and get answers.
It’s great at analyzing documents attached to a query. Generating meeting minutes from a Zoom transcript takes seconds. Also saves time w/ literature reviews, summarizing recent developments or trends from scientific publications. But its only as good as the data you feed it, and you must fact check
Exactly. Nate does have a point that there is utility in this tech that most people can find a use for. The problem is that he (like seemingly everyone else using ChatGPT) completely misses what that utility actually IS.
Exactly, the state of the art today are trained to return *plausible* answers, which is different from *correct*. Yes, the two often correlate, and if you don't care, then AI will be a lot cheaper.
What we need to start asking ourselves is: "Do I *trust* this source?"
And what gets filtered out or set aside to get a "plausible" answer, based on probable sequences, is certainly not random. It favors predictability and determinism, and suppresses the unexpected or unusual information that leads to change, challenge, and discovery.
AI are tools still in development and the annoying and worrying thing is that it sucks up all the crap along the way and steals material which is copyrighted.
Creators should be able to opt in to this as it will be practically impossible to opt out.
ChatGPT is garbage in, garbage out. It is created at its root by humans who gave it its DNA code. It doesn't create. It responds to data that already exists.
It's an additional layer of Telephone for people who are too lazy to even use a reasonably well-annotated crutch like Wikipedia as an entry point into really understanding something. The more people in my sphere like it, the less I trust them. Critical thinking is becoming an endangered philosophy.
Someone on reddit asked ChatGPT for hospitals with birthing tubs and ChatGPT just made up answers and provided a list of hospitals that 100000% do not at all have birthing tubs. 10/10 worth the bottles of water.
I hate how it's been sold, and many people assume it's a magic answer box, and like, no, it's a plausibility machine trained on a specific set of expectations. So it's a limited plausibility machine, and like, nothing wrong with starting with plausibility, but a whole sure answer it ain't.
I like how at some point the people obsessed with AI basically reinvented divination. Anytime they talk about how ChatGPT knows everything I kind of just mentally replace it with astrology or haruspicy.
Nah that's something else. "So it goes" is a saying of the inhabitants of Tralfamadore, which feature in Slaughterhouse-Five. It's repeated throughout the book but not the last line of that or any Vonnegut book so far as I know.
It's just tedious how sucky the AI is at some stuff.
Maybe something is gained by humans doing the necessary intellectual legwork in order to arrive at an answer or solution to a problem, as opposed to getting an instant answer that may or may not be correct but that also doesn't flex any of the mind's muscles. Just a thought.
ChatGPT actually seems good for helping you sort ideas and writings you've already done. But the idea that it has thinking and research capabilities is embarrassing.
I don't understand how so many people convinced themselves that ChatGPT is anything more than a shitty search engine. Just googling but with a chance of getting a made up answer
I know this will be lost on lots of people because we're having a performative Nate-bashing session, but he's talking about how unlike a program where you can examine the code, the final LLM product is just millions of weights so you can't really tell why it arrives at any answer it does.
No, that's what you *want* him to talk about. He's talking about how to use this "magic box" to do all your creative stuff for you, even though it's frequently wrong.
That's what he meant by magic box. It's referred to as that and as a black box, even by critics of AI. Actually, especially by critics of AI as that's a common complaint about it.
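The "black box" point is easy to demonstrate even at toy scale: once a network is built, all of its behavior lives in arrays of numbers, with no human-readable rule to inspect. A minimal sketch (illustrative only; the weights here are random stand-ins, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny one-layer "network". Everything it "knows" is these two
# weight matrices. GPT-class models are the same idea with
# hundreds of billions of such numbers.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))

def model(x):
    hidden = np.maximum(0, x @ W1)  # ReLU layer
    return hidden @ W2

x = np.array([1.0, 0.0, -1.0, 0.5])
print(model(x))           # an "answer": two opaque numbers
print(W1.size + W2.size)  # 48 parameters; none of them says *why*
```

Unlike ordinary source code, there is no branch or rule you can point to and say "this is where it decided that," which is exactly the complaint critics make about LLMs as black boxes.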
lmao, even
The faux AI are making everything crappier.
I take psychic damage each time I type "before 2022." Even if I trust a source... what if they slipped up vetting THEIR sources?
These chucklefucks destroy a Library of Alexandria... daily? Hourly? Minute-ly?
But yeah, I'm "lazy." 🙄
It's all the funnier when you realize that LLMs and statisticians use the same toolkit (and apparently have the same lack of imagination).
https://bsky.app/profile/davekarpf.bsky.social/post/3l2ufp7duev2d
I am therefore worth $5 billion.
I choose not to.
(though its answer little meaning, little relevancy bore)
"Four."
The utility there is...really something.
(Somedays I crave for my wit to be at least mid)
insult supplied by ChatGPT
Hard working: Spin the wheel of AI competency and deliver the generated report with blind faith to the Deus Ex Machina.
It will always need verification and you might as well go straight to the reliable source first time.
https://theconversation.com/knowing-less-about-ai-makes-people-more-open-to-having-it-in-their-lives-new-research-247372
So, I tried ChatGPT for the first time.
So, I told it the correct title.
And it *still* couldn't find it. At least Google pointed me to Amazon.
ChatGPT sucks. I don't use it.
The lazy skeptics who say it's "just a toy" are beneath contempt.
Accuracy isn’t even part of the conversation with these things.
Overhyped, underperforming.
It’s going to crash eventually.
“More data” will never be enough.
Anyway
Gave up on him years ago... he had some lovely arguments for epistemological caution and hedging predictions in a couple of books
but seems to have disregarded the scruples of earlier versions of himself a long, long time ago!
So let him fade out with the crumbling remains of Elonfashchat
if you need to figure out how to work a "magic box" that thinks for you, 1) its not sufficiently advanced and 2) neither are you
It's a C- or D-level plagiarism device. The kids don't know the right answer so they just copy/paste a paragraph with fake citations and zero details.
The awfulness is the ubiquity. 19-year-olds think like nate silver. It HAS THE ANSWERS!
I'd rather hope an LLM can do better than 'moderately coherent', however, coherency does not touch on the [lack of] accuracy of those answers!
"I'm sorry I can't share that information with you"
When asked the same of Google's Gemini AI, it replied
No, and went on to explain all the protections of my data while using Gemini.
What???
Maybe I didn't see that tweet for a reason
🙃
and ergo,
all pollsters.
Get yer spankin' new snake oil - now AI improved!
Game recognise game
My immortal hand and I cannot frame this fearful symmetry!
Not only is that not even from Breakfast of Champions, it's not even the last line of Slaughterhouse-Five.
https://www.wired.com/story/anthropic-black-box-ai-research-neurons-features