A system with strictly defined roles (patron/librarian, boss/employee, or silos like "the business"/"the delivery team") creates a structure where upstream participants are entirely shielded from the consequences of their low-quality outputs, as long as they created *something.*
And the prevailing system allows GenAI users to shove the something downstream without any kind of quality control, consequence-free.
Copilot for Office has been described as producing slides akin to those of a middle schooler, but for a certain class of supervisor, that's an improvement.
In the absence of feedback loops, every strategy appears fine. And with ChatGPT, fine-seeming strategy can be generated at never-before-seen rates!
Execution is a strategy's first real test.
To be successful, a product has to work well and appeal to users. But the AI's idea only needs to pass the inspection of a few middle-aged white guys in a conference room.
So the AI's "pass rate" is high, and the team's pass rate is low.
Abstracted from context, this creates the impression of a competent AI and incompetent employees.