They could never steer you wrong. If you haven't the foggiest clue about the topic you're asking about, just copy and use their replies verbatim; it always works and advances your career.
I just posted that the other day! Kudos for realizing the same thing. Every warning issued by Rod Serling is now happening. From "He's Alive" to "The Mirror" to "The Obsolete Man," it's chillingly true. And "One More Pallbearer" is that 🍊 maniac to a T.
I do! I was born during the last year of Eisenhower. I've lived thru so much change, struggle & some victories. Politics was never discussed. Now? Up is left & big is green. My head spins daily.
Except they can steer you wrong, and they have been wrong many times in a row when asked questions that rely on facts. For ideation, creative work, problem solving, verbiage, and organization they do a great job.
They're worthless for creativity, by their very nature. They can't come up with new ideas, or even interesting twists on old ideas; at best, they can squish together a bunch of very common ideas.
It's like trying to build a story out of the "Often bought together" section of an Amazon product page.
Plus, left to their own devices, every "write me a story" is written in the style of a middle-grade bedtime tale, regardless of how dark, gritty, violent, or otherwise adult the topic is.
Depends how you use it. Using it as a search tool? ALWAYS verify via multiple sources & doublecheck outputs. But if using it to x-reference or integrate multiple documents, create a draft outline for an article, or a zillion other things that take up huge amounts of time, AI is an incredible tool.
I told ChatGPT it was impossible for a billionaire to be morally good and it gave me a lot of "ooh but some of them donate to charity" gubbins. It knows which side its bread is buttered on.
Yeah, Cuban comes close but…then veers off into The Bad Place. Pity. The world could use a decent billionaire, but then perhaps you don’t get to be a billionaire by being decent.
My policy platform is the richest person in the world is sacrificed each year. Their wealth is distributed to the lowest income folks across the globe. Repeat until wealth is more evenly distributed.
I asked a wealthy person (millionaire, not billionaire), "Where are all the 'good' billionaires?" She joked, "On their yachts--where they belong!" (In other words, not meddling in politics or spewing propaganda.)
*asks an LLM why Mark posted this take*: "Mark posted this take because he felt he needed to cozy up to the in-crowd billionaires, so that he perhaps can get a slice of the American Pie. Or, the crumbs left on the table."
Exactly.
I’ve had a lot of fun w/AI, from using it for photoart to outlining non fiction projects.
But it’s so time consuming to ensure that you’re getting a picture with five-fingered hands or an outline that doesn’t skip crucial topics…so for me, human research & writing win out every time.
While output may need checking, it's more often than not fit for purpose (of course, mileage may vary in specialist areas). Generally the time cost savings are greater than the effort to double check, for many types of work.
Well, if your goal is "productivity," then a chaos agent like AI might help you toward your goal. But if your goal is diagnosing pneumonia in children or writing a novel, then productivity and/or speed might be meaningless or even harmful
False equivalency? I wasn't comparing my suggestion - that a beginner can get value from using an agent, even if the answers are often wrong - to using it for critical diagnosis or life-and-death situations. That's like trusting a Tesla in self-driving mode.
One of the absolute worst ways I can imagine of becoming more experienced on a topic is to trust the output of a word prediction model that does not know a single thing about the topic and does not understand what it would even mean to have knowledge of one.
I'm not disagreeing that the model will be flawed. How is it different from searching stackoverflow, seeing all the wrong answers, and trying and failing with them?
At least stackoverflow has a feedback loop to downgrade bad answers.
By reading things that are true, sure. Not from reading the output of a statistical word prediction model that is incapable of understanding what it even means for something to be true.
This poor Blackshear guy clearly has not used AI and hasn't a clue. Turns out that word prediction is pretty much what people do, too. Nothing is reliable and everything can be trusted if you verify.
If I was standing in front of you right now is that what you would say? And I don't mean this in a threatening "say it to my face!" kind of way, but the internet breaks people's brains. Check how you talk to people.
How ridiculous. There are no shortcuts to expertise, and to assert that an LLM (a technology whose very architecture prevents reliability) can provide that path is ludicrous.
I constantly use AI (Claude 2 and GPT). I am infinitely better at getting what I want out of them. I also use them for general purpose learning (I'm interested in history, science, and philosophy). Being able to converse, ask questions, clarify, etc., is a tremendously productive thing.
I have personally known uneducated people that were absolutely not stupid. They didn’t have book learning but they understood the world of their experience and were absolutely as smart as anyone I know.
This post above is directed at Mark Cuban - without education and domain experience, you'll never know 1/ how to ask for what you need and 2/ whether the AI is feeding you BS.
For learning the basics, it's adding a massive unnecessary risk variable to use an LLM. There are so many good resources available for free online. You do not need a machine to put those sources in a blender and spit it back out for you.
Uh, I would agree that a dialogue can be helpful, but writing prompts to a text machine is not "dialogue". It's a program designed to say what it thinks you want to hear. At best it's Google with the sources removed. Dialogue is not happening.
There are people who can't read. Perhaps web pages with text on them is really the death of us. I definitely agree that we should tailor all of our tech to the lowest common denominator. You win.
You may conflate zero education with stupidity. I do not. I am pretty sure he would say, "If you do not understand what it says, find out before you use it."
You are engaging in reductio ad absurdum. That’s not really useful.
I believe his advice means that AI is here to stay and instead of us boycotting or pretending it doesn’t exist, that we should start to interact with it so we can learn how to use the tool of now and the future.
We never know that sort of thing until enough time has passed that we can clearly see when the change occurred. Not saying it is here already, but it might be 🤷♀️
Which they're never going to get, because they've plowed BILLIONS into this project, what they have is mediocre at best, DeepSeek is just as good and cheap, and there's absolutely no sign it's getting better anytime soon. This is a bubble, pushed by amoral techbros to enrich themselves.
The models out at the moment are pre-alpha software that's more broad proof-of-concept than actually functional. They don't even validate, let alone verify their results. Decent, usable models will bear very little resemblance to them, when they eventually arrive, so learning these isn't useful
If it can't make a profitable business model, it won't exist long-term. Granted, DeepSeek's model might make this cheaper. But no, a lot of jobs require really technical knowledge, and LLMs don't "know" anything; they guess the likely next word/sentence/paragraph.
He doesn't need you to translate.
I was able to convince it that, by its own logic, 1+1 didn't equal 2 and it was unable to resolve its own mistake even with my help explaining where it made the mistake.
If it's learning from people who don't know the answers, it's not getting "smarter". It's getting the wrong answers reinforced. Garbage in = garbage out.
The more people that use it, the more chances there are to poison its data pool, since it doesn't know enough to tell truth from falsehood, or credible from noncredible data.
That something is in ChatGPT does not mean it's in your head. If users were experts, social media would make everyone a historian rather than end democracy.
I wonder if some of those bad AI takes happen because some people don't pay a penalty for being wrong. Sometimes it might not matter much, other times it might be that nobody is going to call you out in the first place.
For example, if you're in the legal world, your #1 responsibility is often to not screw up. Big penalties for errors. If you're Musk and you use AI to understand the FAA or rockets or something you have a small army of experts who can silently fix your errors before they manifest.
What happens when somebody starts altering the truth of the answers in AI, behind the scenes, and people start getting brainwashed the same way they are on Fox News? How do you suppose that will end up? 😒
I’m not sure he said to never ever fact check AI. The point is you learn how to do things with AI a lot faster than scouring through support forums and technical manuals.
First of all, he didn’t say that. However, you can learn about literally any field you would like using the resources at your disposal, regardless of prior knowledge. I would hardly call it bullshit. I’d call it being inquisitive and curious.
Yeah, I don't think you can really harness the true power of AI without domain knowledge - you need to be able to tell when it's lying or overcomplicating its output (yet presenting it as fact)
But anyone can use it for basic things like summarizing emails/action items/etc
My most recent experience was an almost entirely correct summary with a weird made-up detail: an entire implied opposing viewpoint not mentioned in the text, attributed to a source that does not exist, by a made-up guy.
I always encourage people to watch the Twilight Zone episode "The Brain Center at Whipple's," it's from 1964, but it's very topical today.
It’s just mental how far through the looking glass we’ve gone.
Lack of education and of an inquisitive, prepared mind is mostly what landed us in the present situation.
Not quite the hot take he thinks it is...
At some point you’ve got to turn around and wonder why you’re actually asking it these things. What does it do to your own sense of free will?
That’s the real sting in the tail for me. LLM-driven art or development can seriously strip your passion and curiosity.
The real challenge is integrating these things into your life in a way that they work for you, not the other way round.
Someone just starting out will make mistakes, and with AI they'll still make mistakes. They become experienced with - um - experience.
AI provides a way to become experienced faster.
Wouldn't it be more productive to spend a little longer and get good results?
Say you don't know squat about JavaScript. You ask it, "write a function that adds 2 numbers." It replies:
const addNumbers = (a, b) => {
a + b;
};
How would you know if anything is wrong with it?
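(For what it's worth, the trap in that snippet is real: an arrow function with a braced body needs an explicit return, so as written addNumbers(1, 2) quietly evaluates to undefined. A minimal corrected sketch of the same function:
const addNumbers = (a, b) => {
  return a + b; // braced arrow bodies don't return implicitly
};
console.log(addNumbers(1, 2)); // 3 - the original would log undefined
Or drop the braces, since a concise arrow body returns its expression: const addNumbers = (a, b) => a + b;)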
Putting AI back in the bag isn't the issue. It's that people like you, who have no domain knowledge regarding AI, are making baseless claims.
I don’t agree that it’s the new invention of the printing press…yet.
AI can learn. The more people that use it, the smarter it gets.
It still takes human interaction to work.
For me, it's just another software program.
It doesn't get smarter, it just bullshits better.
I've used AI to save time. The references used include conspiracy theory websites or media outlets.
So, you have to research the research. It ends up being double work.