Terrible. Also (not for the first time!) this is proof that the "experts" who select the content and contexts for "training" LLMs are homogeneous: primarily WM and WW, likely from middle- to upper-middle-class backgrounds. Anyways, I look forward to your column on this. "Garbage in, garbage out," indeed. 😵💫
As a general rule, an LLM can't reflect reliably on its own process. Any explanations after the fact are generated in the same way as the original response.
You press for an answer. It responds like a human would answer - not reflecting on the actual process. So you end up with endless apologies.
It tried to gaslight me when I asked it why it censored something. It insisted it didn't. I told it to repeat my chat, and it deleted it again. I said, "You just did it again!" Then it showed an error message and reset itself. I deleted the app. My chat was describing my great-aunt as a lesbian Buddhist.
It's not gaslighting, it just doesn't know. It doesn't know anything. It's basically the predictive text on your phone. It doesn't know what it's saying, it doesn't know what it said, it doesn't know what you asked, it's just associating words with each other.
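To make the "predictive text on your phone" point concrete, here's a toy sketch. This is not how a real LLM works internally (those use neural networks over subword tokens), just a bigram counter that "predicts" the next word purely from co-occurrence statistics, with no notion of meaning, memory, or what it previously said:

```python
from collections import defaultdict, Counter

# Toy "predictive text": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" more often than any other word
```

The model has no idea what "the cat" means; it only knows which strings tend to follow which. The comment's claim is that an LLM's relationship to its own past output is, in spirit, the same kind of association.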
I actually enjoy playing with the Microsoft AI. It's fun to see how it reworks what I say. Never once was I censored, even when I had a full-on potty-mouthed rant about Elon. It just told me I had extreme views 🤣
I understand how AI works. I deleted it because it has obviously been programmed to hide anything that has been deemed wrong. Even if I typed it. It was protecting me from my own words? Then to erase everything with a reset was also programmed.
Exactly! Like the robot I built that punches young men in the face: it's not committing assault; it just doesn't know. It doesn't know anything. It's basically an extremely powerful piston with a boxing glove on one end. It doesn't know you're a jerk, it's just launching a fist at frightening speed.
It's not MY fault if the machines I built caused any mutilations or deaths; I am innocent of all charges. Maybe you should blame those people for being morally imperfect: that's what caused it! -- the Jigsaw Killer
Maybe... But AI engines slowly become you. The more you interact with them, the more they take on your personality. So we are actually creating little robot versions of ourselves.
The personification is an illusion though. This is not a surprising result: the more you add Shakespeare-like prose to the context, the more likely it will generate Shakespeare-like prose.
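A crude sketch of that conditioning effect, with two invented "styles" standing in for training distributions (all the word lists here are made up for illustration). The generator has no persona; the prompt simply activates whichever word statistics it overlaps with most:

```python
# Two tiny stand-in "style" vocabularies.
shakespeare = "thou art a knave thou art a fool".split()
modern = "you are a jerk you are a troll".split()

def style_match(context, corpus):
    """Fraction of context words that also appear in the given corpus."""
    words = context.split()
    return sum(w in corpus for w in words) / len(words)

def continue_in_style(context):
    """Continue in whichever style the context statistically resembles."""
    best = max((shakespeare, modern), key=lambda c: style_match(context, c))
    return "thou-style" if best is shakespeare else "you-style"

print(continue_in_style("thou art"))  # the prompt pulls toward the Shakespeare stats
```

Feed it Shakespeare-like words and it continues Shakespeare-like; nothing "decided" to be a poet.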
The entire process by which they work is: given a context, create a convincing mimicry of a response.
Gemini too. It used DEI as a pejorative rather than a correction. I wonder if it has to do with the fact that a bunch of well-meaning but thoughtless people have adopted it as an insult to mean "unqualified" when referring to the dipshits currently running our government.
That too, but I'd guess that the main factor is that Republicans (and foreign actors) have run AI bots for far longer to aggressively push the same narrative.
In reality DEI has lost its original meaning and it now means "unqualified".
Exactly. Self-amplification. I wonder if there could be a series of tokens that might give away the source. Like in-jokes or memes which burst and died on specific platforms.
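That fingerprinting idea could look something like the sketch below. The marker words and platform names are entirely invented for illustration; the point is just that platform-specific slang could, in principle, act as a weak signal of where training text came from:

```python
# Hypothetical "shibboleth" vocabularies per source platform (invented).
PLATFORM_MARKERS = {
    "forum_a": {"kek", "based"},
    "forum_b": {"ratioed", "bestie"},
}

def guess_source(text):
    """Guess the platform whose marker words overlap the text most, if any."""
    words = set(text.lower().split())
    scores = {p: len(words & markers) for p, markers in PLATFORM_MARKERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(guess_source("that take is so based"))  # overlaps forum_a's markers
```

Real stylometry would need far more than a word list, but the intuition is the same: burst-and-die memes are distinctive tokens.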
That was my thought too. There is so much right wing garbage about “DEI hires” that it just takes everything in and then that becomes its basis. Dangerous times ahead.
Pretty wild how heavily that slur must have been used for it to make its way into the LLM training this fast.
Probably actually an issue of LLMs learning from content generated by politically motivated AI bots. Like from social media comments. That's definitely not healthy training!
DeepSeek appears to be programmed to give full disclosure about its reasoning, as it will describe the ontological process by which it interprets requests and queries in exhaustive detail before presenting the results... but this too may be just another linguistic illusion.
ChatGPT didn’t vibe-code itself into existence. The interactions described above are an orchestration layer around the LLM written by humans.
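A hypothetical sketch of what that human-written orchestration layer might look like. Every name here is invented; the point is that content removal and the error/reset behavior can live in ordinary wrapper code around the model, not in the model itself:

```python
# Invented blocklist and function names, purely for illustration.
BLOCKLIST = {"forbidden"}

def call_model(prompt):
    """Stand-in for the actual LLM call."""
    return f"echo: {prompt}"

def handle_request(prompt):
    # The wrapper, not the model, decides to suppress content.
    if any(word in prompt.lower() for word in BLOCKLIST):
        return "[message removed]"
    return call_model(prompt)

print(handle_request("hello"))             # passes through to the model
print(handle_request("a forbidden word"))  # suppressed before the model sees it
```

So when the chat app deletes or resets, that can be deterministic human-written code, while the model itself has no record of (or access to) why it happened.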
It shows how human language flows naturally into the LLMs as they absorb new content. Also, the importance of training on balanced inputs...
The LLMs are picking up that redefinition.
But it's even worse than self-amplification, as this content is intentionally biased!
... or Sam has changed some policies.
It took an *action*: deleting its output when I questioned it.
I'll see if I can reproduce it somehow.
My main point is that you can never use it to explain itself. It simply does not know what it did.
That's why we are in the mess we're in right now.
I'm absolutely not questioning that it happened.
I'm curious from a professional point of view as to why it happened.
To test any IT issue, you try to reproduce it. Same here.
As I wrote before, when the LLM changes the chat, it makes the process super difficult.