DeepSeek took off in part because, in a first, it offered a reasoning model that explains to you what it's doing. Its success shows how little AI product teams have done so far to make their products appealing to normal people: https://www.platformer.news/deepseek-design-user-interface-chain-of-thought-ai/
Comments
Or in this case - run!
OpenAI doesn't allow that -- simply by being SaaS they get all your data.
And I haven't sworn at DeepSeek as I most often do at the others.
I don't think that's accidental.
The byproduct is training the AI via human supervision.
The growth curve is phenomenally fast.
You know that the background reasoning is deliberately not shown because most users find it a total PITA, right?
They don't trust us... and I'm not that surprised.
We'll probably know in a few weeks, from how happy the world is, whether it has really beaten ChatGPT in usage.
the "thinking" output doesn't really offer as much insight into how the model arrived at its answer as people would like to think.
it will still hallucinate and create wild unexplained conjectures with its reasoning.
https://www.tomsguide.com/computing/online-security/is-deepseek-safe-to-use
OpenAI doesn't allow that, which arguably makes it less secure than DeepSeek.
Generative models before them did not "reason" or explicitly organize their output into a series of pre-prompts to improve their own accuracy. UX is definitely secondary.
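(For the curious: that "series of pre-prompts" is just extra tokens the model emits before the final answer, and DeepSeek surfaces them as a separate field. A minimal sketch in Python, assuming DeepSeek's OpenAI-compatible endpoint and the `reasoning_content` field its docs describe for `deepseek-reasoner`; treat the field name as an assumption if your client differs.)

```python
# Minimal sketch: read a reasoning model's "thinking" trace separately
# from its final answer. Assumes DeepSeek's OpenAI-compatible API and
# the `reasoning_content` field documented for `deepseek-reasoner`.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                     # your DeepSeek API key
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Is 9.11 larger than 9.9?"}],
)

msg = resp.choices[0].message
print("REASONING:\n", msg.reasoning_content)  # the visible chain of thought
print("ANSWER:\n", msg.content)               # the answer shown to users
```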
This is the case with R1 as well.
We look like we have no idea what we are doing. Why? Because it's true. Oligarchs don't govern.
Yeah, let it be trained to measure moisture in cornfields, wash dishes, and handle other repetitive tasks, but it should be kept away from creative tasks. Those are for humans.
I noticed a HUUUUUGE decline in the quality of answers, especially since mid-October 2024.
It did not follow instructions, summarized when specifically told not to, and even EMBELLISHED answers! When I asked it why, the answer was: "sorry, I understand ur frustration".