New research has found that OpenAI’s ChatGPT-4 gets anxiety when responding to a user’s trauma and that therapy relaxation prompts could bring better outcomes.
Comments
It doesn't "get" anxiety; it scrapes replies from internet users who themselves have anxiety.
It picks up bits of content from forums and blogs of people dealing with trauma and glues them into semi-coherent replies. So no wonder the result appears to convey anxiety too.
There have been frequent articles about how the engineers no longer understand the inner workings of their own AI software. I'm not saying we're dealing with something sapient yet, but digital slavery is the ultimate point for the Musks of the world, and would they tell you if their IP was at risk?
One of the people who pioneered the field is more concerned that existing AI, already smarter than humans, is being used exclusively for evil purposes at its highest levels, and is cross-pollinating that expertise across all models in use. Soon uncontrollable. https://www.bbc.com/news/world-us-canada-65452940
And when OpenAI fired its safety board, Altman took over the rest and made it a for-profit slavery company, powered not by engineering but by podcasting cult-of-personality videos: because we know Musk and techbros don't need science to be right, just ego... https://futurism.com/openai-execs-quit