LLMs are nothing more than models of the distribution of the word forms in their training data, with weights modified by post-training to produce somewhat different distributions. Unless your use case requires a model of a distribution of word forms in text, indeed, they suck and aren't useful.
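To make that concrete, here's a toy sketch (my illustration, not from the original post): even a bigram model built from raw counts is literally "a model of the distribution of word forms in its training data," and generating text means sampling from that distribution. An LLM makes the same move at vastly larger scale, with learned weights in place of counts.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": the distribution of word forms in its training data.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word form follows which.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_distribution(prev):
    """P(next word form | previous word form), straight from the counts."""
    counts = transitions[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def sample_next(prev):
    """Generate by sampling from that distribution."""
    dist = next_word_distribution(prev)
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'rat': 0.25}
print(sample_next("the"))             # a word form drawn from that distribution
```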
Reposted from
Hank Green
There are a lot of critiques of LLMs that I agree with but "they suck and aren't useful" doesn't really hold water.
I understand people not using them because of social, economic, and environmental concerns. And I also understand people using them because they can be very useful.
Thoughts?
Comments
you seem to think it's not helpful all that often, others think it solves everything; pretty clearly the answer is in the middle… but it's still unintuitive which is which
Truly the biggest trick ever played by the tech industry.
First of all, writing code isn’t writing human communication.
Secondly, a lot of programming is both very similar and very verbose… 1/3
Most code is trivial and frankly doesn’t express much. It’s cooking recipes for computers.
Plus there’s a lot of training data for it. 2/3
A human can write code.
An LLM can write code.
Therefore, an LLM is a human.
Otherwise known as Altman’s Syllogism.
3/3
In plain words, everything refers to something experienced which isn't described.
It's telling the listener how to use their own memories of experiences to recreate what you're trying to tell them in their mind.
Wouldn't be surprised if people treated statues the same way 5,000 years ago, "rubberducking" their problems at them.
The whole point of rubberducking is making you explain stuff _while not getting an answer_.
Basically - overpowered autocomplete
Mostly because it reduces time at the bench and the amount of plastics needed to do the lab work.
It’s good for proteins similar to ones we have structural data for. For things that are hard to crystallize, it’s pretty bad.
But yes, it's bad at a lot.
Your description fits most of philosophy, as well as a lot of other uses of abstract language.
But you undercut yourself with snide remarks like "they suck and aren't useful." I suggest you turn to actual cognitive science rather than the sociology of hype as you seem to be doing.
the thing is, i worked with a product, created by NASA, to do this in 1986. not new tech at all
Maybe the question can be rephrased: why is choice so limited, while at the same time that limited choice is often explained away as simply a misunderstanding, with what's really occurring framed as increasing agency?
E.g., all the lawyers getting in trouble for using it.
I know there are many different kinds of dyslexia, but maybe it can help a bit.
It is slightly easier for me to read, but at the same time it's not very aesthetic like Gill Sans, so I get strangely annoyed with it as well :) I still make mistakes though.
I wonder if it might carry over Comic Sans's ease of not taking writing so seriously, though? Comic Sans is said to help overcome writer's block.
My brain is just different.
Developing mechanisms to work around your problems is what intelligent people tend to do. Unfortunately, it also makes it less likely that you get detected and helped.
1) I need to take a set of notes and turn it into a polished document, and I'm in a position to check that it says what I mean.
Ok fine but writing is thinking and you're letting that muscle atrophy.
>>
DO NOT DO THIS -- unless you don't care whether the summary includes inaccurate information or, more importantly, excludes important points.
>>
You can test whether the code works as you intend, but are you really in a position to catch security vulnerabilities?
https://socket.dev/blog/slopsquatting-how-ai-hallucinations-are-fueling-a-new-class-of-supply-chain-attacks
>>
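A cheap partial mitigation for the slopsquatting risk in the link above (my own sketch, not from that post): before installing a dependency an LLM suggested, look the name up via PyPI's public JSON API. Note the caveat in the comments: existence proves nothing, since squatters register exactly these hallucinated names.

```python
import json
import urllib.error
import urllib.request

def pypi_status(package: str) -> str:
    """Look a package name up via PyPI's public JSON API.

    Caveat: existence is NOT a safety signal. Slopsquatting works precisely
    because attackers register hallucinated names, so a hit still needs
    vetting (age, maintainers, downloads); a 404 means the name is made up.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp)["info"]
            return f"exists: {package} {info['version']} -- vet before installing"
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"NOT on PyPI: {package} -- likely hallucinated"
        raise

# Check every dependency an LLM-generated snippet asks you to install.
for name in ("requests", "definitely-not-a-real-package-xyz"):
    print(pypi_status(name))
```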
I'd also worry about Kernighan's Law; if they can't write the code in the first place, how are they going to maintain it?
Need a different vector to dissuade use for coding.
See also Copilot code reviews, which tend to be trivial and, where they aren't trivial, miss important stuff.
You have to wonder: what is the point of creating the document at all?
DO NOT DO THIS. Chatbots, even if they could reliably return "the" correct answer, are not a good tech for information access.
https://buttondown.com/maiht3k/archive/information-literacy-and-chatbots-as-search/
>>
w/@alexhanna.bsky.social
https://thecon.ai
Something I want to add: these are the use cases from the perspective of an end user of the system's frontend.
The data streams this creates, and the underlying access to both privileged data _and_ aggregate data, are chilling, and make usage even less palatable, even for those use cases.
LLMs are generally useful for Graeber's bullshit jobs, which arguably are numerous.
One use case that has been great for me is the "reverse dictionary" case: There's a concept I'm puzzling over and I don't have any name for it to find prior work. The language weights database matches my ideas to names so effectively.
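For what it's worth, the reverse-dictionary trick can also be approximated without a chatbot: embed a description of the concept and a glossary of candidate terms, then rank by cosine similarity. A minimal sketch, assuming the sentence-transformers package; the model name and the tiny glossary are placeholders:

```python
# Reverse dictionary via embeddings: description in, candidate names ranked out.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

# The concept I'm puzzling over, described in my own words...
query = "a word that imitates the sound of the thing it names"

# ...matched against candidate terms with short glosses.
glossary = [
    "onomatopoeia: a word that phonetically imitates the sound it describes",
    "palindrome: a word or phrase that reads the same backwards",
    "anagram: a word formed by rearranging the letters of another",
]

query_emb = model.encode(query, convert_to_tensor=True)
gloss_emb = model.encode(glossary, convert_to_tensor=True)
scores = util.cos_sim(query_emb, gloss_emb)[0]

# Highest cosine similarity = best-matching name for the concept.
best = max(range(len(glossary)), key=lambda i: float(scores[i]))
print(glossary[best])  # expected: the onomatopoeia entry
```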
It's also really handy for compiling a nice job-seeker letter if you put in the job description and your CV.
They aren't AI.
I find LLMs are pretty good at that.