I genuinely believe that the single biggest mistake you can make in 2025 is to underestimate or dismiss AI.
The best models are much MUCH better than those of a year or so ago. And they're still improving, fast.
(Let's circle round in 2026 and discuss this post, if you're less than convinced now.)
Comments
I’m not convinced that large language models do a lot for society. And they have very high environmental costs.
Not great at creating stuff but useful for summarising and reviewing.
It doesn't matter if AI is intelligent or not. If it exhibits behaviour typical of intelligence and ends up doing so better than humans, then no matter what the actual underlying mechanism is, that will turn out to be a gigantic deal for our future.
I use tons of AI tools: o1, GPT-4, Claude 3.5 Sonnet, Gemini 2.0 Flash, Grok, Mistral, Llama, DALL·E 3, Stable Diffusion, Suno, Udio, etc.
So you can embrace it, or shun it with eyes wide open, but don't dismiss it as a fad because you will be wrong.
Anyone can get access to the same advanced tools as anyone else, whether for free, $20, or $200 a month.
And there are so many leading models, nobody has a monopoly.
May change. But that's the situation now.
It's already happening: all the major players keep leapfrogging each other with better yet cheaper models.
When it comes to filtering the silly con valley stuff from the useful info, the AI isn't much help...
Well, if they complete the ascent to transcendence in this decade I'll admit I underestimated our new gods.
With any luck the bubble bursts before it drives everyone to destitution at the feet of immortal tech kings, though.