@cglambdin.bsky.social had a post recently about the exec excuse "our strategy is fine, we have an execution problem."
In the absence of feedback loops, every strategy appears fine. With ChatGPT, fine strategy can be generated at never-before-seen rates!
Execution is a strategy's first real test.
Comments
To be successful, a product has to work well and appeal to users. But the AI's idea only needs to pass the inspection of a few middle-aged white guys.
So the AI's "pass rate" is high, and the team's pass rate is low.
Abstracted from context, this creates the impression of a competent AI and incompetent employees.
There are two problems here. One is that AI can *do tasks*, but expertise is not just a string of tasks. The other is hallucination: even the tasks it does do, it doesn't do well.
When a manager gives vague instructions to a team, that team can at least use their human brains to extrapolate, fill in the gaps, ask follow-up questions, and push back on requirements. The process of building creates clarity.
With AI output, at *best* you now need a long QA process just to understand what the AI actually made.