Tools trained on past or failed peace agreements tend to recommend approaches that have already been tried & have failed, & offer only shallow explanations of the political barriers to peace.
Tools that predict or recommend are far from objective, “scientific” technologies: they invariably reflect the worldviews of their creators, prioritize some social science theories over others, & are fed biased or inadequate training data.
But these so-called “AI” tools are variants of machine learning & large language models (LLMs) that emerged from the field of natural language processing (NLP).
As @timnitgebru.bsky.social explained: “Even if you could accurately detect markers of emotion, that doesn't translate into being able to detect someone’s internal emotional state. And even if these models could do that, they would be extremely unethical.”
Many of these products (and they are products – most are also used for marketing purposes) tap into a desire for simple, seemingly “scientific” solutions to problems of power, politics, & global inequities.
But human languages are ambiguous, relational, & embedded in diverse cultural contexts.