I am not a tech reporter anymore, but I think it would be a public service if someone wrote an article explaining that chatbots are not “snitching” on themselves; they are just doing what they do, which is producing words that sound plausibly true but might not be.
Comments
And for heaven's sake... do not use them to write! Their writing is crap.
Only the finest shit-posting will do. 🤌
There really is some magic in the typing... or sometimes, writing by hand. You hear it in your head and that changes things for the better.
All LLMs just jumble together their training data, but you can see the manipulation of the data set in the sudden overabundance of particular phrases being spat out.
They get a weighted graph that includes internet garbage, and they apply filters (regardless of what Grok's owners say), and the filters or prompts got changed.
The alternative is thinking of them as computers, which is where people get into trouble. They are used to trusting a computer.
Eventually you get into supernormal stimulus territory, where the average output from one of these things is more believable than what a human produces.
Until average people doing the rating notice and its plausibility score drops. Rinse and repeat.
First, a bunch of stuff got deleted
Second, it was given heavy weight to certain subjects
Result: it got very repetitive. Not snitching, just easy to infer what it was given to work with.
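The effect this comment describes can be sketched with a toy model (everything here is illustrative, not how any real system works): if one subject in the data mix is given heavy weight, its phrases come to dominate the output distribution, which is exactly why the repetition is so easy to notice.

```python
from collections import Counter

def build_weighted_model(corpus, weights):
    """Toy unigram model: phrase counts scaled by a per-subject weight."""
    counts = Counter()
    for phrase, subject in corpus:
        # A missing subject gets a default weight of 1
        counts[phrase] += weights.get(subject, 1)
    total = sum(counts.values())
    return {phrase: c / total for phrase, c in counts.items()}

# Hypothetical corpus: three phrases, each tagged with a subject
corpus = [
    ("the weather is nice", "smalltalk"),
    ("stocks went up", "finance"),
    ("a certain talking point", "injected"),
]

# With equal weights, each phrase is equally likely (1/3 each)
flat = build_weighted_model(corpus, {})

# Upweight one subject 50x: its phrase now takes ~96% of the probability mass
skewed = build_weighted_model(corpus, {"injected": 50})
```

With the skewed weights, sampling from this model would produce the injected phrase almost every time — no confession needed, the bias is visible in the output frequencies alone.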
https://bsky.app/profile/phillip-rees.bsky.social/post/3ll7jxdlzec2m
I know LLMs just hallucinate nice-sounding things.
But you gotta know they’re likely working on bringing the intelligence closer to ours