I know a lot of the misconceptions about LLMs are just political conflicts playing out, but I wonder if we couldn’t have avoided some of the bullshit if we had understood this technology as “guessing” rather than “answering”
Comments
I reckon a big part is that it was very much intentionally Marketed™ in a way that let laypeople infer it was smartly answering questions when it isn't (without explicitly lying about it). A lotta slightly more savvy people I know are fed up with it on exactly those grounds