Emily Bender gave a whole talk about how LLMs are not a valid subject of study for computational linguists.
Reposted from Ethan Mollick
Are linguists paying a lot of attention to LLMs? Because this seems like a fascinating finding with large implications: LLMs share highly abstract grammatical concept representations, even across unrelated languages, so even models trained mostly on English do well in other languages.