But, but, but Herr Scoble says that it’s just so enlightening and the next wave of therapeutic intervention.
(Meanwhile, I wonder why we continue to platform Muskovian Apple sycophantic abusers in the tech space and am rewarded with this dipshit every. Single. Day with his Sacks-level hot takes)
I will note, however, as someone who has a couple degrees in psych and clinical counseling and has worked as an adolescent therapist & social worker, that regardless of the raging data privacy issues, a good part of the therapeutic milieu IS reflective/responsive listening.
But that’s only PART of the work. The rest is separating signal from noise, the wheat from the chaff (as it were), and AI (ChatGPT, et al.) isn’t skilled at understanding psychosocial nuance and intrapersonal pathology.
“We used the internet to train this model. No, we *didn’t* ensure the data set was clean of pro-ana forums or the general unsafe advice that overflows the internet.
Good luck!”
Honestly? As a machine learning person with a lot of skepticism? I don't think this is the worst use case. I think it should be prescribed, because the scope of "ok" is narrow here, but there are cases where a neutral sounding board could help.
I would only use it for things like "breathing exercises to help me fall asleep," but even that content should be screened. As it is, I get different answers every time I Google, and have to read through crap on my screen exactly when I'm not supposed to be looking at one.
This. Just because it can sound "kind" and "confident" doesn't mean that its autogenerated 'advice' isn't going to be absolutely terrible -- or that your conversation won't be datamined.
I remember some people having the exact same reaction to "conversing" with Eliza. "She's so insightful!" Yeah, she's repeating your own words back at you.
ChatGPT is Eliza with a bigger vocabulary and a statistical engine.
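(For anyone who never played with ELIZA: the whole trick is a small set of pattern-matching rules that hand your own words back to you. A minimal sketch in Python; these rules are made up for illustration, not Weizenbaum's 1966 script:)

```python
import re

# ELIZA-style reflection rules: match a pattern in the user's input,
# then echo their own words back inside a canned therapeutic frame.
# Illustrative only; the real ELIZA had a larger, ranked keyword script.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I),   "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the captured fragment back, minus trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule matches

print(respond("I feel invisible at work"))
# -> Why do you feel invisible at work?
```

Twenty lines, zero understanding, and it still reads as "insightful" to plenty of people. An LLM does the same reflection with a vastly bigger statistical model behind it.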
Venting while an automated voice responds is not therapy. Jeez, the bad faith of “not that I’ve ever gone to therapy!”, as if you know what therapy is and can tell that this is not it.
We don’t need to worry about AI becoming better or even as good as a human therapist (it never will) - but we DO need to worry about it becoming just good enough that insurance companies choose only to cover AI therapy rather than human therapists.
I agree, although we have to note that most people cannot afford therapy, most therapists are no longer accepting insurance, and there are shortages of therapists in any case.
Talking to ChatGPT about my problems of disconnectedness with society, because we are pushing towards more automation and disconnectedness. It sounds about right 😢… and reminds me of this chilling experiment https://www.verywellmind.com/harry-harlow-and-the-nature-of-love-279525. Will we automate love and compassion?
"it's just a computer, but people will tell your secrets"
I was shaking typing my response. It was shared in a community for people with epilepsy which happens to be correlated with every mental health and neurological thing ever. Horrifying.
This having been said, it's a short leap away, time-wise, before totally open-source, runnable-on-your-desktop (or in your own cloud container) variations of ChatGPT arrive (e.g. based on Llama 2 or Falcon).
In which case, all the privacy issues will go away, leaving the... *rest* of the issues.
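(If that sounds abstract, here's a minimal sketch of what "runnable on your desktop" looks like today, using the Hugging Face transformers pipeline. The model name and generation settings are illustrative; the Llama 2 weights require accepting Meta's license, and you'd need a reasonably beefy machine:)

```python
# Local-inference sketch: a chat-style completion that never leaves
# your machine once the model weights are downloaded.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # example model; Falcon etc. also work
    device_map="auto",                      # use GPU if available, else CPU
)

prompt = "I've been feeling disconnected lately. Can you suggest a breathing exercise?"
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

Nothing in that snippet talks to a remote API, which is the entire privacy argument. The quality and safety problems, of course, travel with the weights.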
First, that is extremely not what therapy is. Second, advising people who may be struggling with real, serious, painful issues to spill all of that to a GAI system is irresponsible at best, actively damaging at worst (and this person works on AI SAFETY).
Also, many GAI systems do continuous scraping to expand their data sets and training. Meaning the painful details of your life and mental health revealed here could be PERMANENTLY INCORPORATED INTO MASSIVE DATA SETS. It’s a huge violation.
ChatGPT does not have a duty of care, confidentiality, or basic understanding of how humans work and how to help someone. It has no empathy. It could give you actively harmful feedback that makes things so much worse. This is horrifically bad advice. DO NOT.
I just want to add that while therapy helps with validating certain feelings, it's also about challenging intrusive/unhealthy beliefs. People with severe depression or other conditions often distort reality: an LLM is absolutely incapable of providing proper advice here!
There already have been rug pulls like this, just not in therapy. Replika did it, and the CEO is still trying to convince people to turn themselves over.
I used to use ChatGPT to generate stories, mostly featuring my main OC, who is the narrator of most of my stories and an alternate version of myself. I got addicted because I was so often not truly satisfied with what it generated, even with additional input. The adventures lacked true emotion. So yeah.
This was legit a major plot point in season 3 of Westworld to showcase how cold, uncaring, and mechanized the dark dystopian future of the show had become.
The desire to dunk on them is strong but really, I feel bad. If they get this kind of benefit out of ChatGPT, imagine how an actual therapist could help them.
I am waiting for the person that thinks they are dating a chatbot... I can't wait to see that one. In fact I am gonna Google search for that very thing right after I'm done writing this lol
Comments
I can't even ... 🤯
MY
GOD
this would be super funny if it weren't both tragic and dangerous
https://www.psychiatrist.com/news/neda-suspends-ai-chatbot-for-giving-harmful-eating-disorder-advice/
https://bsky.app/profile/tylerjburch.bsky.social/post/3kadndzn7662k
ChatGPT:
Yeah that's a no from me dawg
Jesus fucking wept.
Do these people even have an inkling of how LLMs work?
https://gizmodo.com/ai-this-week-the-hollywood-writers-strike-may-have-end-1850877755
(Disclaimer: I go to therapy, so I wouldn't be using it for actual therapeutic purposes)