I appreciate the people doing empirical research that demonstrates the problems with using LLMs as simulated research subjects.

What I’d really love to see is a #philsci take on the question. What is the epistemic value of an LLM-simulated research subject?

Has anyone written anything on this?
Reposted from CREST Sociology
NEW PREPRINT by CREST Sociology's @scoavoux.bsky.social and @ppraeg.bsky.social: "Machine Bias. Generative Large Language Models Have a Worldview of Their Own"

LINK: doi.org/10.31235/osf...
