A colleague of mine responded to my previous post: "with closed source, you can only trust, not verify." I admit it's a compelling argument for open-source AI. But what happens when the ability to verify doesn't translate into actual verification?
https://www.dialethics.io/p/the-glass-house-fallacy