This is something I’ve pointed out in conversations about how the affordances of LLMs are currently broken: most “normal person” users I talk to tend to use short, conversational prompts, because the input box is small and the interface looks like a chat.

This does not lead to good output.
Reposted from Ed
"if you do the reverse and ask for more information when giving it little, you are cueing it to draw from the training set, which is all untrustworthy free association that is useful for getting the gist of the universe but not your specific topic"
