Okay, so, Cuban is getting a fair bit of hate and also a fair bit of support (that's not the point, don't get hung up on it), but there are very few people willing to at least engage with AI systems on an exploratory level.
It starts as play, but it can lead to some serious research.
Reposted from
Mark Cuban
Of course it makes mistakes. And over time those mistakes will lesson.
But you can ask for its sources. You can review those sources. And you can question those sources to the model
You can also take the output of one model and use it as input to another, and ask it questions
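A minimal sketch of that cross-model loop, assuming two hypothetical chat endpoints (the URLs, the prompt/reply JSON fields, and the questions are all placeholders, not any real vendor's API):

```python
import requests  # third-party HTTP library

# Hypothetical endpoints; substitute whichever chat APIs you actually use.
MODEL_A_URL = "https://example.com/model-a/chat"
MODEL_B_URL = "https://example.com/model-b/chat"

def ask(url: str, prompt: str) -> str:
    """POST a prompt to a chat endpoint and return its text reply."""
    resp = requests.post(url, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json()["reply"]

# 1. Ask the first model for an answer with sources.
answer = ask(MODEL_A_URL, "What caused the 2008 financial crisis? Cite sources.")

# 2. Hand that output to a second model and question the sources.
critique = ask(MODEL_B_URL, "Are the sources in this answer real, and do they "
                            "support its claims?\n\n" + answer)
print(critique)
```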
Comments
I use X's Grok even though I hold serious doubts about a Musk information source
But for a rough response to a question it's OK
Then there are sources shown in websites and posts, which have threads, and the threads have their own sources and websites as well
It's all work
It says everything with complete confidence, and it says it like an encyclopedia entry. It always comes across as plausible... and it doesn't know how to say "I don't know."
The first draft takes the longest and always sucks, so the quality doesn't matter... and it's actually pretty reliable at summarizing.
Summarizing transcripts of a meeting that no one reads is pixel and semiconductor genocide.
https://bsky.app/profile/legal.reuters.com/post/3lii3aubnz22o
“Play with, research & develop” isn’t “sink $50bn into while creating massive energy demands before it actually works.”
https://www.bbc.com/mediacentre/2025/bbc-research-shows-issues-with-answers-from-artificial-intelligence-assistants
Tbh the bigger issue is that it's all called AI, when the kind that causes true damage is gen AI.
I'm more interested in what I said 😁
I think it’s important people learn the difference between “ai chats”, “ai LLMs”, “ai art/music/video”, and “ai APIs”
I honestly wish ai wasn’t even called ai because it contains no intelligence, but here we are
As a chat prompt for getting information by parsing the entirety of the internet, for ripping people’s art/voice/music for money (not for laughs), and for nefarious acts with someone’s likeness, it’s bad
AI is nothing more than a sophisticated search engine that is only as accurate as its database.
And in the eyes of regular people who work with their hands, it’s another Ponzi scheme for pump & dump investors. A much bigger worry than learning about another computer program.
The danger of this is that if it's 70% correct (being generous here), then it will cause major gaps in knowledge that don't even look like gaps until bad theory is put into practice.
The Shared-Learning Model (SLM) represents a breakthrough approach, enabling adaptive, context-aware intelligence that evolves alongside user interactions (a toy sketch follows the list below).
1️⃣ Adaptive Coordination – Learns how individuals and networks organize, communicate, and refine their workflows.
2️⃣ Contextual Memory – Retains key discussions, decisions, and strategic shifts across both personal and shared contexts.
4️⃣ Sovereign Intelligence – Ensures that all adaptations and refinements remain user-controlled, with no external mediation.
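"SLM" here isn't a published spec, but as a toy sketch under that reading, the listed properties could reduce to a per-scope memory store where every adaptation needs the user's explicit approval (all names below are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class SharedLearningMemory:
    personal: dict[str, str] = field(default_factory=dict)  # one user's context
    shared: dict[str, str] = field(default_factory=dict)    # network-wide context

    def propose(self, key: str, value: str, shared: bool = False) -> bool:
        """'Sovereign intelligence': no adaptation without user consent."""
        if input(f"Remember {key!r} = {value!r}? [y/N] ").strip().lower() != "y":
            return False  # user declined; nothing changes
        (self.shared if shared else self.personal)[key] = value
        return True

mem = SharedLearningMemory()
mem.propose("standup_cadence", "weekly", shared=True)  # contextual memory
```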
I have used Copilot for coding quite a bit, though, and the results are pretty meh. It is often flat-out wrong. I find that ChatGPT actually does a better job, even though the results are hilariously, obviously stolen.
I have thoughts about proper compensation for folks whose knowledge or skill or talent fueled the training of these machines.
But there are such things as awarding compensation to those harmed or whose work was used. And there are some cool, SUPER COOL, ways of doing it.
Do ya like compound interest, my fluffy friend?
But I do believe there's a way to right the wrongs that have been done, and it involves compound interest and generational wealth.
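For what it's worth, the arithmetic behind that idea is just annual compounding, A = P(1 + r)^t; with made-up numbers, a $10,000 award per affected creator at 5% for 30 years:

```python
def compound(principal: float, rate: float, years: int) -> float:
    """Future value with annual compounding: A = P * (1 + r) ** t."""
    return principal * (1 + rate) ** years

# Hypothetical award terms, purely to show the growth curve.
print(f"${compound(10_000, 0.05, 30):,.2f}")  # -> $43,219.42
```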
It's an inherently inefficient system that necessarily consumes more than it produces in most use cases.
But the “zero education” remark went too far. It suggests informed critical thinking is no longer necessary.
Exercising judgment is requisite when using ANY tool.
What Cuban says influences a large audience. He should know better.
I was concerned with it a little with medical devices, mostly in terms of moving from devices that would alarm and require human interaction/intervention …
Our work was specifically focused on interoperability of networks of devices …
My work focused on risk management and requirements engineering (yes, there is such a thing).
How does someone write testable, verifiable, and validatable specs and requirements for a self-modifiable/configurable menagerie of interoperating …
Like an air traffic control system, for instance?
Think Sean Duffy is working with people who think about such things?
Or is he working with Peter Pan Musk and his Lost Boys who find themselves in their critical IT systems Neverland?
My concern is with people …
Obviously you should double-check anything important, but it has some awesome potential
It doesn't know any sources; it can only hallucinate what they could be.
The minute it queries the internet, it is not functionally an LLM; it's just making a search call, since those results likely aren't part of its weights at all.
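That's essentially the distinction between knowledge baked into weights and retrieval-augmented generation: search results are pasted into the prompt for one call and never touch the weights. A schematic sketch (both helpers are placeholders, not a real API):

```python
def web_search(query: str) -> list[str]:
    """Placeholder for any search API; returns page snippets."""
    raise NotImplementedError

def llm_generate(prompt: str) -> str:
    """Placeholder for pure next-token generation from frozen weights."""
    raise NotImplementedError

def answer_with_search(question: str) -> str:
    # The retrieved text lives only in this prompt's context window;
    # the model's weights are unchanged by the search.
    snippets = "\n".join(web_search(question))
    return llm_generate(f"Using only these sources:\n{snippets}\n\nQ: {question}")
```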
It also admits that other studies show no correlation between AI use and drops in critical thinking. Maybe you are the one using AI too much. 🤔
Most of them aren't. This is controversial within the industry, from what I know. Most of them are, in my experience, user misunderstandings based on how the machines' attention mechanisms function.
There's also research modality questions.
Some of us designed/built expert systems from scratch. Don't need to play when our serious research in numerous areas extends far beyond what AI will ever likely reach.
Every hard question it's been asked, it's failed.
I’ve also had it search resumes to help find someone (among existing employees) with a particular skill who can work on a project. AI works if used correctly.
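One plausible shape for that resume search, assuming some sentence-embedding model and cosine similarity (the `embed` function and the workflow are illustrative, not the commenter's actual setup):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for any sentence-embedding model."""
    raise NotImplementedError

def find_skill(resumes: dict[str, str], skill: str, top_k: int = 3) -> list[str]:
    """Rank employees by cosine similarity between resume text and a skill query."""
    q = embed(skill)

    def score(text: str) -> float:
        v = embed(text)
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))

    return sorted(resumes, key=lambda name: score(resumes[name]), reverse=True)[:top_k]

# e.g. find_skill(all_resumes, "Kubernetes cluster administration")
```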
Sounds weird, I know. But it's not homework. Part of how I do my research is just "playing" with the machine.
I explored them last year at work. At best, they created more work (I needed it bc of work volume); at worst, it was really problematic but I caught it bc I checked.
Do otherwise independent AI systems ever work together on a problem? If so, do they share the same knowledge bases? Do they have their own? Can they modify them? Do they share context as well as info? Etc.
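Those are open design questions, but one common pattern today is independent models cooperating through a shared transcript: each keeps its own weights and knowledge base, and only the running context is shared. A sketch, not any particular framework:

```python
from typing import Callable

def take_turn(name: str, model: Callable[[str], str], transcript: list[str]) -> None:
    """Each agent reads the full shared context, then appends its own turn."""
    context = "\n".join(transcript)  # shared context, not shared weights
    transcript.append(f"{name}: {model(context)}")

# Stand-ins for two independent models; real ones would be separate API calls.
forecaster = lambda ctx: "Ticket history suggests ~12% growth next quarter."
critic = lambda ctx: "Before trusting 12%, check for seasonal spikes in the data."

transcript = ["Task: estimate next quarter's support-ticket volume."]
take_turn("Forecaster", forecaster, transcript)
take_turn("Critic", critic, transcript)
print("\n".join(transcript))
```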
It's a shit tech that requires too much energy for less than mediocre results. It's like defending 1880s electric cars.
1) AI is obvs the future and I can do nothing about it, but
2) THE ENVIRONMENT AND THE ARTS. 😩😭
But that’s unrealistic. They’ll make me work on an “organic produce farm” overseeing unmedicated, depressed teens. 😩 (Maybe they do need some dance, though…)
It's best when used to assist humans doing the cognitive awesomeness we do.
The AIs make mistakes executing, but they understand exactly what you're looking for.
Source: I use AI to segment employees according to how they’ll behave on the change curve.
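The commenter doesn't say how, but one plausible version is prompting a model to place each employee's written reaction on a change-curve stage (the stage names and the `llm` callable are assumptions, not their actual method):

```python
STAGES = ("denial", "frustration", "exploration", "commitment")

def classify(comment: str, llm) -> str:
    """Ask a model to map one free-text reaction onto a change-curve stage."""
    prompt = (f"Classify this reaction to an organizational change as exactly one "
              f"of {', '.join(STAGES)}:\n{comment!r}\nReply with the stage only.")
    label = llm(prompt).strip().lower()
    return label if label in STAGES else "unclassified"  # guard against drift
```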
https://finance.yahoo.com/news/20-ways-mark-cuban-makes-154635906.html
Fan-fucking-tastic.
The technology just fundamentally does not work that way.
https://www.itgovernanceusa.com/blog/moveit-breach-over-1000-organizations-and-60-million-individuals-affected
#ITGovernance
They offer speed and convenience to many people, yes. They do so by gutting the existing businesses and culture, and by offloading all of the true costs.
I disagreed with Cuban about his overselling of LLM capabilities. I’m generally pretty pro AI and think it can be positive for society.
But it’s important to recognize the limitations.
(intentionally or otherwise...)
a workforce who must be mainly employed to write bullspit reports
- which is what you're always going to get from machines that can correct 'lessen' to 'lesson'...
Irony Engine much...?
https://deepmind.google/technologies/alphafold/
You can't just shrug and say "well what's done is done" - there has to be accountability and a serious reckoning.
Make comparisons, do due diligence, practice critical thinking.
AI can do (or be trained to do) those things. It's no replacement for a human mind's ingenuity.
It's a tool/resource, like an encyclopedia, newspaper, or history book that can write for itself.
Here is an example of why it is not trustworthy yet.
https://bsky.app/profile/legal.reuters.com/post/3lii3aubnz22o
People with deep knowledge have engaged, and found it lacking. How much time from experts do we expect to waste doing free testing for companies that are flush with cash?
After a year of use I've only seen it work well for template filling and boilerplate code.
A) The answers are always scraped from the links, so I could have just gone to the links.
B) They're unreliable, so I need to go to the links.
C) Wikipedia is significantly better for learning.
However, Cuban talks about using it as a learning tool, and it's just nowhere near as reliable an information source as an open, massively curated system like Wikipedia.
But I like that Mark engages with AI with curiosity and openness. That's where a lot of brilliant research begins.
I'm only asking because I find it to be an incredible learning tool. Not suggesting it should replace anything else.