I would put this even more strongly: open source AI is probably our only realistic chance to avoid a terrifying increase in concentration of power. I do not want to live in a world where the people with all the money also have all the intellectual power.
Reposted from Nathan Lambert:
The most realistic reason to be pro open source AI is to reduce concentration of power.
Comments
Search for Prime Intellect
That's actually why this is, to me, one of the real present-day risks of AI. I worry less about superintelligence and more about monopoly.
Maybe not if:
1. Another AlexNet moment causes another major hardware/trade-secret switch. And even that would only last a short while.
2. DL gains slow massively and efficient algos make up the difference.
New open GPU algos would just get adopted.
Secondly, this assumes ANNs are the best possible method, forever. We have no evidence for that.
Also I think Alex's code was much faster for deep conv nets.
Beyond that, my concern is parsimony. Backprop on NNs is about the most naive method possible for real learning. Sadly, it sorta works, and scales out, so we stick with it.
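Not anyone's actual code, just a minimal sketch of what "the most naive method possible" means here: compute the error gradient via the chain rule and nudge every weight downhill, over and over. The network size, seed, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR input/output pairs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Tiny 2-8-1 network; sizes and learning rate are illustrative.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule to get squared-error gradients...
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # ...then the "naive" part: step every weight downhill.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # usually converges toward [[0], [1], [1], [0]]
```

That loop is the whole method: no structure, no priors, just gradient descent. It sorta works, and it scales.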
But organizations like AI2 are going further and are sharing data, how-to guides, etc. I don't have a budget in the millions, but I expect to be able to profit from their guidance in building my own models.
I’d rather we invest in better algorithms, but many corporations will resist that: they’re also compute vendors, so it’s against their interests.
A few datasets and models do not a commons make.
OSAI is there to help legitimise private AI, to help "normalise" the theft of IP and breach of copyright, to muddy the waters, and to offer protection to the majors/giants. At the end of the day, resources are needed, and on hardware etc. OS < Private.
If the tech/code is limited to the majors, there are only a few players that need to be regulated. How do you envision regulation for hundreds or thousands, many running tiny rigs?
(I still believe OpenAI was formed solely for this purpose), and the majors pushing OSAI are doing it for such reasons, so we're kind of stuck with it. So... yeah, the best we'll get is OSAI, as we aren't going to get legal protection :(
I'm just concerned that it's normalising stealing content. I've asked numerous folk on Twitter (big and small AI companies), and not one has ever replied to confirm that it's all permitted. :(
* Kill / subjugate
I thought we had a robust consensus that generating words and pixels is ok unless specific dangers are proven.
It's not like they'll just hook up ChatGPT or an off-the-shelf open source model and press go.
A foundation model may not be certified as safe for use in highly regulated industries, but it may be fine for other things, and it can be used as a base model for further fine-tuning to make it "safe".
1/2
The difference I can imagine is that the degree of testing is very different when there's a huge combination of possible outputs (like we see in LLMs). Testing an XOR gate vs. an LLM: you can fully specify all inputs and outputs for the XOR gate. That's hard with an LLM (see the sketch after this comment).
But maybe I'm taking safety a little too literally.
2/2
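To make that concrete, a minimal sketch in Python: the XOR side is a genuine exhaustive test, while the LLM-side numbers (vocabulary size, prompt length) are illustrative assumptions, not from the thread.

```python
import math
from itertools import product

def xor_gate(a: int, b: int) -> int:
    """Unit under test: a plain XOR gate."""
    return a ^ b

# Exhaustive verification: an XOR gate has exactly 2^2 = 4 inputs,
# so the full input/output specification can be checked directly.
spec = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
for a, b in product((0, 1), repeat=2):
    assert xor_gate(a, b) == spec[(a, b)], f"failed on ({a}, {b})"
print("XOR gate fully verified: all 4 cases pass.")

# An LLM cannot be tested this way. Assuming a vocabulary of ~50,000
# tokens and prompts up to 1,000 tokens, the input space is on the
# order of 50,000^1000 distinct prompts, so exhaustive testing is
# impossible and we fall back on sampling and benchmarks.
print(f"LLM input space: roughly 10^{round(1000 * math.log10(50000))} prompts")
```

The asymmetry is the point: four cases close the question for the gate, while no finite test suite closes it for the model.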
https://pluralistic.net/2024/11/18/rights-without-power/#careful-what-you-wish-for
Removing copyright will do exactly nothing to improve this, because the limiting factor is intent, not implementation.
Even if it were technically feasible, the investors who've put billions into AGI will extract everything they can back out of it. That money will come from us.
At the same time, the automated labor will be vastly more efficient.
Yes, the consequences will be dire, and a few places may avoid it, but overall everything will be as bad as you predict.
I have a feeling they're going to be skeptical
"Post-scarcity" is just fancy academia speak for Socialism and Communism