Emily Bender gave a whole talk about how LLMs are not a valid subject of study for computational linguists.
Reposted from Ethan Mollick:
Are linguists paying a lot of attention to LLMs? Because this seems like a fascinating finding with large implications: LLMs share highly abstract grammatical concept representations, even across unrelated languages, so even models trained mostly on English do well in other languages.
Comments
IP theft and mass layoffs "because AI will do it all now" also did not help.
I guess for less-online people it's easier to just evaluate the tech on its merits.
https://security.stackexchange.com/a/262579
"feed the machine sentence-shaped encrypted audio garbage and have it spit out an English document" as long as you can split the data into 'word-frames'