I know people using Copilot extensively in a work environment. But when you say, "The output of AI tools is improving all the time, but errors are still common and increasingly difficult to spot," they aren't errors. LLMs aren't designed to be right or wrong, just to seem human. And that's a worry.