"Why ‘open’ AI systems are actually closed, and why this matters"
When policy is being shaped, definitions matter.
@davidthewid.bsky.social @meredithmeredith.bsky.social @smw.bsky.social #AI #AITransparency #AISafety #AIEthics #OpenAI
https://www.nature.com/articles/s41586-024-08141-1
Comments
“Open” AI: A term sparking debates about transparency, innovation, & corporate power. Are “open” systems really open, or are they tools for consolidating AI leadership? A deep dive into the structural nuances of AI openness from an intriguing paper. Let’s unpack.
The paper critiques how AI borrows “openness” from open-source software but fails to adapt it fully. Openness in AI is defined by transparency, reusability, and extensibility—but these rarely disrupt corporate control. What does “open” really mean?
Transparency in AI: Publishing weights, data, and model documentation can foster accountability. Yet even "transparent" systems obscure the emergent behaviors of AI, limiting true understanding. Transparency ≠ accountability.
Reusable AI allows third parties to adapt and fine-tune existing systems. Sounds great, right? But market-access bottlenecks controlled by tech giants stifle equitable innovation. Reusability, like transparency, exists within a power hierarchy. Economics can curtail innovation.
The ability to build on AI models? A double-edged sword. While extensibility promotes innovation, it’s often free R&D for big tech as they capitalize on fine-tuning done by others. Whose benefit is it, really?
Big players like Microsoft, Meta, and Nvidia dominate AI. Openness does little to shift this power dynamic. The resources—data, labor, computing—needed to scale AI remain tightly held by these few.