i read this less than an hour ago and i just saw someone get a discord LLM summary that misattributed who said something, so now i can more confidently say i'm pretty skeptical of "turn text into less text" too
I see this as a great example of how LLM use is difficult and subtle. Summarizing a longer chat where attribution or particular details aren't important, or where it's okay to elide some info? Probably fine! But where attribution matters, or a subtle point is critical? Probably not so fine!
What's frustrating is that LLMs are pitched as some magic tool where you don't need to think, when effective use is extremely hard because of how wild and wacky and often unintuitive they can be
i think there is a disconnect between people building LLM tools, where from their perspective the appeal might be "i have millions of little problems and can get a 98% success rate here!", versus people having LLMs foisted on them, where the experience is "my phone just lies to me twice a day"
and it's not like it was solving a problem that would be unfeasible for a human team to do on their own. it's trying to save someone from reading a conversation their friends had
Very useful post, Laurie - thanks. I find they are useful at traditional NLP tasks too - entity recognition, sentiment analysis, etc. I suppose it’s a type of “turn text into less text” use case.
It’s not really a refutation of your point, but I think LLMs are quite good at turning a small amount of text (bullet points and short, poorly written sentences) into more structured, complete text. Agreed that you can’t use that directly without human intervention (proofreading at the very least).
Comments
I too wrote something along these lines today 😆
https://bsky.app/profile/nico.santini.nz/post/3lg7nsmvybc2k
Thanks.
Here are my thoughts on it https://bsky.app/profile/nico.santini.nz/post/3lg7nsmvybc2k
🙎🏾‍♀️But…where does data go?
💁‍♂️✨The Cloud✨
🙎🏾‍♀️But what IS The Cloud?
💁‍♂️The server that stores your data!
🤦🏾‍♀️So…then it’s technically *not* serverless?
🙅‍♂️…get out.