I believe some people are fooled by the IM-style interface into thinking the model should behave like a human. When it doesn't, they conclude that the model is wrong. It's not wrong; their yardstick is wrong.
I think about Dijkstra and submarines in this context. LLM:Thinking :: Submarine:Swimming.
Except it's a very interesting question right now. I'm not smart enough to even guess what they're doing, but I feel like we might need a new word for it.
Article you posted - multiplying 4-digit numbers is hard, but writing a program to multiply 4-digit numbers is easy. LLM==A new tool to learn.
C'mon, it's a toy example, you know that. The point is that it's a tool, and we're still figuring out how to hold it properly. Asking it to write a script to solve the problem is one way to do that.
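To make the toy concrete: a minimal sketch of the kind of script an LLM could write instead of multiplying in its head (plain Python; nothing model-specific is assumed here).

```python
# The program an LLM can write is trivial and exact, even though doing
# the same 4-digit multiplication token-by-token is error-prone.
def multiply(a: int, b: int) -> int:
    """Exact integer multiplication: no sampling, no hallucination."""
    return a * b

print(multiply(4738, 9121))  # 43215298
```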
(Your birdsong images are beautiful. Anyone reading this, go check out aguasonic's post history)
AI is an actor that will try to convincingly play the role of any expert you want. And it'll be as good a doctor/lawyer/engineer as any actor who plays one on TV.
What are the prompts we should enter to use up the maximum amount of computing power from these AI systems? With the goal of maximizing cost to these companies?
Maximising cost to the companies would also be maximising the environmental harm they cause, for which the companies don't pay anything or take any responsibility while marginalised people are more likely to experience the effects, so please don't do that.
Okay, so Claude is the worst AI to ask questions about such stuff. Claude is good at basic-to-intermediate programming. Ask ChatGPT with the new search mode, or whatever it's called. That might help.
Well, yes, but that's where hallucinations come in, especially when you're asking for something that's essentially broad statistical analysis and not general concepts.
This is funny, because earlier today Claude wrote me a script to scrape Fed data; the whole process took ten minutes and saved a couple hours of drudgery. Brilliant example of what it's good at versus bad at.
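For what it's worth, the ten-minute "scrape Fed data" script described above usually boils down to fetching a CSV and parsing it. A hedged sketch follows; the FRED endpoint, the header layout, and the series id are assumptions, so verify them before relying on this:

```python
import csv
import io
import urllib.request

# Assumed FRED CSV endpoint and series id -- verify before use.
FRED_CSV = "https://fred.stlouisfed.org/graph/fredgraph.csv?id=FEDFUNDS"

def parse_series(text: str) -> list[tuple[str, float]]:
    """Parse FRED-style CSV (a header row, then date,value rows) into pairs."""
    rows = csv.reader(io.StringIO(text))
    next(rows)  # skip the header row
    return [(date, float(value)) for date, value in rows]

def fetch_series(url: str = FRED_CSV) -> list[tuple[str, float]]:
    """Download and parse one series (network call; not run in the demo below)."""
    with urllib.request.urlopen(url) as resp:
        return parse_series(resp.read().decode("utf-8"))

# Offline demo on a FRED-shaped sample:
sample = "DATE,FEDFUNDS\n2024-01-01,5.33\n2024-02-01,5.33\n"
print(parse_series(sample))
```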
Education is a lifelong endeavor. That’s my whole point.
Use it to do the mundane work that you understand thoroughly enough to be able to do yourself and can double check the machine’s work. Otherwise you risk the machine confidently lying to you and you’ll be none the wiser.
Because you asked it to write a script and told it what the script should do.
When it decides independently "I need to write a script to answer this" and only notices downstream that it needs data, it hallucinates the data, which only kinda-sorta works if the data was well represented in the training set.
I think in some ways we have a literacy problem similar to Wikipedia when it first came out.
There’s a ton of things it’s bad at. But it’s also uniquely great at things like George’s analysis. If you loaded the data, the Billboard script would probably work, too! (But you have to know that.)
Perfect example of what people on both sides miss about AI. It’s not a miracle machine; it’s a tool. If you don’t know how to use the tool, don’t be surprised when you get bad results.
LLMs are pretty amazing when you pump them full of information and then ask a question based on that. If you just ask a question that doesn't have a lot of info behind it already, they'll just make shit up.
They don't know what they don't know, but they'll try to answer regardless.
I don’t think so; that looks like Claude? AI is a language, and if approached that way a remarkable path awaits. In my opinion (I’m no expert yet, since the first engines only debuted in 2018) my results are astonishing. It’s the new encyclopedia for many like me 👋😊
Well, that answer was trash. LLMs can do very cool things, but you really can't trust them past the point where you can verify what they are telling you.
The fact that such a prompt is required (and to date doesn’t really work) seems to lend credence to the idea that it’s an inbuilt flaw of genAI/LLMs: they may never get this type of “AI” up to any acceptable level of accuracy.
So it doesn’t follow the direction? I really don’t know anything about it and have only used it to proofread and edit cover letters, or ask it random questions 🤷🏼♀️🤣
AI can be a useful tool in specific circumstances, but tech CEOs want it to be used for everything because they think it will make them money to market it as the solution to all problems. The equivalent of when all you make are hammers you need to convince everyone all their issues are nails.
I don't get why "use external applications where appropriate" and "curate from available data" aren't skills incorporated into AI programs. Any reference librarian with a $10 calculator can answer far more questions than the most educated genius working from memory. Real brains use tools.
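The librarian-plus-calculator point above is essentially what "tool use" means in code: route the parts a model is bad at (arithmetic) to an exact external tool instead of answering from memory. A toy sketch; the router heuristic and function names are made up for illustration:

```python
import ast
import operator

# The exact-arithmetic "tool": the $10 calculator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str) -> float:
    """Safely evaluate +, -, *, / over numeric literals (no eval())."""
    def ev(node: ast.AST):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(question: str) -> str:
    """Hypothetical router: numeric questions go to the tool, not to memory."""
    if any(ch.isdigit() for ch in question):
        return str(calculator(question))
    return "consult a curated source"

print(answer("4738 * 9121"))  # 43215298
```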
It emits plausible nonsense that sounds like someone "laying out their reasoning". Even if you know stats well and understand how large language models work, you're still susceptible to them.
True for the first generations of LLMs, which used statistical learning, transformers, and self-attention over hundreds of billions of parameters.
The new generation has chain-of-thought reasoning models, which retrain LLMs with multi-stage reinforcement learning to give correct responses rather than hallucinate plausible nonsense.
You're the second person in a week to tell me this, but I've never seen an example in real life. Gemini and OpenAI's models are hot garbage, but maybe you have a link?
A.I. is something I only intuitively know. What my gut feeling is screaming is this: this human invention is going to be as deadly as the human invention of religion if it is not designed with tamper-proof security measures in place.
Ah, but what's really going to darn your socks is the second response wasn't properly researched either. So perhaps the first one was? err... Hang on a second.
I asked it a question about Clint Eastwood's back catalogue & it got it wrong.
Something available in every film DB.
It's junk.
Like Web 2.0, it's been oversold to justify their rinsing of the Govt & us.
+throw in the Chinese boogeyman.
@alt-text.bsky.social ||| for anyone who is interested: You can set up an #alttext prompt by going to Settings > Accessibility > Require alt text before posting
So AI ventures have allegedly been investing billions and pilfering real people's work, so that an AI admits it's making up its answers because compiling the data is too much work. Sounds like a real slacker to me.
AI is just a super-fast, totally thorough search engine putting together what it is allowed to show. It takes items off the internet to use as templates for resumes, questions, reports, etc. The key is that it is fast; it would take you hours to search.
“On others, they’re shockingly stupid,” Dziri said.
https://www.quantamagazine.org/chatbot-software-begins-to-face-fundamental-limitations-20250131/
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
― Edsger W. Dijkstra
It is not hard. We have a "tool to learn" with. Mostly above the shoulders, and between the ears. 🙂
This is a lie.
https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
Second-order and third-order effects can be far reaching
Systems are complex
A simple answer is not necessarily an elegant answer
"I guessed"
You’re more than just a means to an end. You should do what it takes to keep it that way before you end up being replaced by that machine.
This is exactly what AI should be used for, as a tool to do mundane work. It's not opinion it's providing here.
You're basically saying don't use calculators or buy bread, do the sums by hand and bake your own.
Your take only makes sense in one situation: education.
https://www.npr.org/sections/money/2018/04/04/599560851/stop-collaborate-and-listen
Most intelligent people's thoughts on AI sound like MAGA on reality.
I'm an AI postgrad with Stats/CS degrees and 25+ years work - and don't "know" AI. It's moving so fast.
DeepSeek is better at reasoning/maths/coding/logic than other models - and shows its step-by-step reasoning.
It's not as good at creative tasks - writing, conversation, general knowledge etc.
Real Stupid.
A probabilistic outcome is unlikely to yield deterministic results
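That line can be demonstrated in a few lines of Python: from the same next-token distribution, greedy decoding (argmax) is deterministic while temperature-style sampling is not. The probabilities here are made up for illustration:

```python
import random

# Made-up next-token distribution, for illustration only.
dist = {"42": 0.5, "43": 0.3, "44": 0.2}

def greedy(d: dict[str, float]) -> str:
    """Argmax decoding: deterministic, same token every call."""
    return max(d, key=d.get)

def sample(d: dict[str, float], rng: random.Random) -> str:
    """Probabilistic decoding: can return a different token each call."""
    return rng.choices(list(d), weights=list(d.values()))[0]

rng = random.Random()
print(greedy(dist))                            # prints 42 every run
print({sample(dist, rng) for _ in range(50)})  # typically several distinct tokens
```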
https://bsky.app/profile/zyearth.com/post/3livmscvarc23
Avoid using them for search, and avoid using them for data analysis - they're the wrong tool for the job
It is a modern Philosopher's Stone and Sam A is the stalling alchemist that is running out of time, hoping that bitumen and sulphur will do the trick.
An educated human mind is the key to growth.
Keep a poisoned apple at the ready.