a gentle reminder that a chatbot does NOT have personality, identity, gender, volition, intentions, or feelings... it is a computer program that's extremely good at predicting the next word in a sequence based on previous words it has "seen"
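(if you want to see the shape of "predict the next word from previous words", here's a deliberately crude toy sketch in Python: a bigram counter over a made-up corpus, nothing remotely like a real transformer, but the generation loop works the same way: look at what came before, emit the likeliest continuation)

```python
# A deliberately crude sketch of next-word prediction (made-up corpus;
# real LLMs use transformers over tokens, not word-pair counts).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the "training data".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Return the continuation seen most often after `prev`.
    return follows[prev].most_common(1)[0][0]

# Generate by repeatedly predicting the next word from the previous one.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the cat"
```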
Comments
It’s a bit more complicated than that. It’s an accurate description of how early chatbots worked, but a single prompt now can yield several pages of text. Pretrained transformers have gone beyond just predicting the next set of words to complete a thought.
Even so, use of a chatbot can reduce meth addiction. I believe that, just as people can be under-touched, people can be under-interacted-with, and nontoxic attention can provide benefit. Ethics issues arise if that's all you give people, or if you lay therapists off and replace them with bots.
Re: no personality, I was proposing a variant of the Turing test: does talking to a bot provide more therapeutic utility than talking to an object? Answer across multiple studies: yes. The line between personality and mimicry might be debated forever, but people are already personalizing bots.
The absurd thing is that this description is still incredibly generous (the analogy "seen" instead of the bald-faced "compressed") about what it is and does, but people are going to get mad at it anyway
I don't want to give them hints on how they might improve things, but real people don't answer every question they're asked. In reading everything that has been said, it won't learn what hasn't been said.
And to quote Greg House, MD: 'everybody lies'. Actually, I guess they got that part perfected.
That’s an interesting point: it has not yet learned (or been programmed) to push back. You put in a prompt and you always get an answer, regardless of whether you’ve asked the right, or even a suitable, question.
When that happens, it’s really going to be interesting, to say the least
Yes, however if it's a deep learning model and not a set of "if then" instructions, it does have a goal function. It tries to optimize its actions to maximize that goal, with all the implications of that notion for its behavior.
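(a hedged toy sketch of what "goal function" means in practice: training is gradient descent nudging parameters to minimize a loss. One made-up parameter here; real models do this over billions, but the mechanic is the same)

```python
# Toy illustration of optimizing a goal function (made-up numbers,
# not an actual training setup).
def loss(w):
    return (w - 3.0) ** 2  # the "goal": drive w toward 3

def grad(w):
    return 2.0 * (w - 3.0)  # derivative of the loss w.r.t. w

w = 0.0  # start far from the goal
for _ in range(100):
    w -= 0.1 * grad(w)  # step against the gradient

print(round(w, 4))  # ~3.0: the parameter was "optimized" toward the goal
```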
I've occasionally seen people say no, generative AI is fine because there's an actual consciousness, and like... that makes you feel BETTER about stealing its artistic output? Bro 😳
Which implies that LLMs are in no danger of taking over the world, as they don't have any motivation to do anything and only 'think' at all when answering a prompt. However, if you want to say they aren't intelligent and don't reason, I've seen them display both too often to agree
It is a set of instructions coded and carried out on a Turing machine; it is exactly and solely computer software, and no amount of advertising obscurantism can change that
And also King Tut invented pizza in an attempt to create a flying carpet, but I accept its flaws. That's what one does in a healthy, loving relationship.
Weirdly, as we’ve (collectively) invested a great deal of emotion in creating the corpus of its training data, treating it “as if it did” seems to improve the quality of its output
Feedback loops are also greatly increasing the apparent understanding it portrays compared to early iterations
Yes, you’re absolutely right: we did not agree to it. Nonetheless, it was still done using materials we’ve collectively created, and therefore it has representations of how we might emotionally react based upon those
There are a lot of people not included, partially included, or inaccurately included in your "we", there. (Even setting aside that the training data was acquired by theft.)
And you’re right, “we” (a bad choice of word that I don’t have any better alternative for yet) really need to define what a good representation of us (?) might look like if we (?) want to be represented in the future
I’m not saying that it represents us, only that it has inferred a sense of emotions based upon the training data sets (whatever they may be) being imbued with our emotions in creating those works in the first place
It does NOT have emotions but it might be good at pretending it does
It still doesn't complete a thought; it still uses predictive text to assemble good-sounding answers, unless I am missing something.
🦜 + 🎲 = 💭❓
|
v
✨ R E A L I T Y ✨
Technically true but meat processors go brr
How can the right-wing achieve anything without distorting the truth?
You're not being fair 🤣🤣
And it's the unintended emotional manipulation of people who think they're too smart to have that happen to them that gives us AI shills.
https://softwarecrisis.dev/letters/llmentalist/
And no, the fact that the training data was acquired without consent should not be put aside
Notice I did say IF