Clio is Anthropic's new system for identifying AI risks it hadn't thought to look for, what it calls the unknown unknowns. I talked with the team that built it, and share for the first time the top three ways people use Claude: https://www.platformer.news/how-claude-uses-ai-to-identify-new-threats/
Comments
A copy-paste error from a draft or another piece of software, maybe?
Would love to know if the other AI companies have something similar. I don't remember seeing any reporting on that, though I'm sure they all analyse usage patterns, and I don't understand why they don't make it public.