I think LLMs should be doing program synthesis in about 90% of the cases where labs and companies are instead reaching for “inference-time compute”
Reposted from Gergely Orosz:
The biggest problem is that the tasks where these things "lie" are the ones where they need to output a LOT of tokens.
Midway, they stop.
Call it lazy, or perhaps cost efficiency.
You know what doesn't "stop" midway? Regex, traditional programming & non-LLM operations like find+replace.
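As a rough sketch of what "program synthesis instead of a long token dump" could look like: rather than asking the model to re-emit an entire edited document (where it may silently truncate midway), ask it for a small find+replace program and apply that deterministically. This is a minimal Python illustration, not anyone's actual pipeline; `llm_complete` is a hypothetical placeholder for whatever completion API you use.

```python
import json
import re

def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for a call to your LLM completion API (assumption, not a real library)."""
    raise NotImplementedError

def edit_by_synthesis(document: str, instruction: str) -> str:
    """Ask the model for a short edit program instead of the full rewritten text.

    The model only emits a compact JSON list of regex substitutions; the edit
    itself is applied by re.sub, which never gets "lazy" or stops midway.
    """
    prompt = (
        'Return ONLY a JSON list of {"pattern": ..., "replacement": ...} regex '
        f"substitutions that perform this edit: {instruction}"
    )
    rules = json.loads(llm_complete(prompt))
    for rule in rules:
        document = re.sub(rule["pattern"], rule["replacement"], document)
    return document

# Contrast with the approach the post criticizes: asking the model to re-emit
# the whole document, which can quietly stop partway through a long output.
# edited = llm_complete(f"Rewrite this entire file with the change applied:\n{document}")
```

The point of the sketch: the model's output stays short (a few substitution rules), and the part that has to touch every byte of the document is ordinary, deterministic code.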