NEW: 'Open' AI systems aren't open. The vague term, combined with frothy AI hype, is (mis)shaping policy & practice, assuming 'open source' AI democratizes access & addresses power concentration. It doesn't.
@smw.bsky.social, @davidthewid.bsky.social & I correct the record:
https://nature.com/articles/s41586-024-08141-1
Comments
Good to see work like OLMo 2 and Nous DisTrO too. Momentum is building.
https://allenai.org/blog/olmo2
https://bsky.app/profile/nousresearch.com/post/3lcdkywrpck2k
Agree, but it's not just rhetoric from well-funded companies training models...
Solutions for reducing concentration might be both political (addressing inequality?) and technical (R&D for efficient models).
Right now, "they" own the rails.
We don't care about what training data or hardware made it. We just want the weights and a license that says we can do our project.
I care about the fact that he made the kernel open, with a permissive license.
And the reality of what 'open source' AI is/isn't doesn't support these bigger claims, even as they're being used to move laws and billion$. So specificity really matters.
If they release the training data, so what? It's not like we could retrain the foundation model anyway. Nor would we get anything from that other than the same thing they released anyway.
We care about what we can actually do with things.
It's not data we lack to make foundations. It's compute.
It just sounds like "free software" to me (with all the associated risks).
Unfortunately #OSS often gets misused (also in the non-AI space), and I think some companies might try to attach themselves to it because of its popularity and wide community.
Help me, Meredith Kenobi: you're my only hope!