This isn't a black-or-white question either.
Most "ethical AI" companies are "fine-tuning" open source models with conventual data, but the training data of the base model itself is obfuscated.
You can't turn back time, but companies need to address this and be transparent about it.
Most "ethical AI" companies are "fine-tuning" open source models with conventual data, but the training data of the base model itself is obfuscated.
You can't turn back time, but companies need to address this and be transparent about it.
Comments
That's an oxymoron. Those are mutually exclusive terms.
The whole reason I'm in the AI business is to prove that you can make superior models by compensating those involved in creating the data (in both royalties and dividends).
So far the majority of our artists make more from licensing their models than they do from Spotify.
And I know how ML/AI works.
I need to go back home and rethink my life.
My ask is that this gets universally owned and addressed, so that royalties and consent become the standard going forward.