Yes, there's bias from the training data; that will always be there. But ChatGPT also has an additional model stopping it from generating certain things, and it has prebaked responses on some topics.
Comments
https://arxiv.org/abs/2203.09509
It shows that current automated tools often get it wrong, but that accuracy can improve by tweaking the parameters. So some subtle racism that previously went under the radar can now be flagged, which may make some of you uncomfortable.