More seriously, I get that being responsible is good and virtuous, but with the crazy incentives of competition in AI and tech in general, why would any rational company _not_ exploit the hype?
If we'd avoided the 'learning' label, we'd still be hearing 'but can it think?' Now we just get 'is it conscious?' Guess we leveled up the existential questions! 😃
Yeah, language is key bc it conditions interpretations and the whole POV, but I mean it in both ways, e.g. you can miss the forest (intelligence) by focusing too much on the trees (math/code). M. Levin argues for adopting the agential POV even when there's no true agency, if it helps our predictive skills.