Happy to see at least some language-critical evaluation of the current AI hype in the discussion around the “abilities” of LLMs. Many of the problems start with using the wrong kind of language to describe what these artifacts supposedly “do”. Human activity is invested at every step of the way.
Reposted from Andrew Mercer
Right. I think the problem is terms like “reasoning” which suggest consciousness and agency.

The latest boosting algorithms also do many of those things, but I wouldn't call that reasoning.

It’s an iterative optimization algorithm with backtracking.
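To make the comparison concrete, here is a minimal sketch of gradient boosting as exactly that: iterative optimization with a backtracking line search on the step size. All names and the toy data are illustrative, not from any particular library; the stump fitter and halving schedule are simplifying assumptions.

```python
def stump(x, residuals):
    """Fit a depth-1 split (decision stump) to the residuals by brute force."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if xi <= t else rm)) ** 2
                  for xi, r in zip(x, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def loss(y, pred):
    """Squared-error loss, the objective being iteratively reduced."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, pred))

def boost(x, y, rounds=20):
    pred = [sum(y) / len(y)] * len(y)   # start from the mean prediction
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        h = stump(x, resid)             # fit the next weak learner to residuals
        step = 1.0
        # Backtracking: halve the step until the loss actually decreases.
        while step > 1e-6:
            trial = [pi + step * h(xi) for xi, pi in zip(x, pred)]
            if loss(y, trial) < loss(y, pred):
                pred = trial
                break
            step /= 2
    return pred

# Toy 1-D regression data with two plateaus.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 5.0, 5.2, 4.9]
pred = boost(x, y)
```

Every move here is mechanical: compute residuals, fit a weak learner, backtrack until the objective improves. No step involves anything one would call consciousness or agency.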
