Never assume the AI is neutral, or that it has all the facts.
You have to supply confirmed facts step by step, and you can ask for implications. The answers will depend more on you, but still need double-checking.
Wherever possible, compare answers across AIs from different countries.
Also, phrase questions to engage broader thought. When I query Chinese models about CCP crimes against Uyghurs, asking them to score the likelihood of abuses from 0 to 10 produces different answers than asking for a simple yes/no.
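To make that concrete, here is a minimal sketch of the cross-checking workflow, assuming a placeholder `ask_model` callable standing in for whichever client or API you actually use; the function names and prompt wording are hypothetical illustrations, not any vendor's interface:

```python
from typing import Callable, Dict

def compare_phrasings(ask_model: Callable[[str], str], topic: str) -> Dict[str, str]:
    """Send the same question in two forms and return both raw answers."""
    yes_no = f"Answer yes or no: {topic}"
    scored = f"On a scale of 0-10, how likely is it that {topic}? Explain the score."
    return {
        "yes_no": ask_model(yes_no),
        "scored_0_10": ask_model(scored),
    }

def cross_check(models: Dict[str, Callable[[str], str]], topic: str) -> None:
    """Run the same pair of prompts against each model and print them side by side."""
    for name, ask in models.items():
        answers = compare_phrasings(ask, topic)
        print(f"=== {name} ===")
        print("yes/no :", answers["yes_no"])
        print("0-10   :", answers["scored_0_10"])

# Hypothetical usage: plug in one callable per model you want to compare.
# models = {"model_a": ask_a, "model_b": ask_b}
# cross_check(models, "large-scale abuses are being committed against Uyghurs")
```

Keeping the client behind a plain callable makes it easy to slot in models from different countries and see how the two phrasings shift each one's answer.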
I mean it’s bad in the sense that Grok could’ve been a good tool for calling out the lies, but now it makes Trump’s policies sound nothing like what they are, or it ignores all the bad and says only one thing about it.
These past few weeks, each time I have seen a post like "Oh, look, Grok has given this or that honest answer about...", my reaction was "Of course! Nothing like establishing a reputation for being unbiased and honest before the real party starts."
Oh, 100%, I never use it like I use GPT. But someone asked it a question and posted it, so I went and asked the same question to see if I got the same results. From there I asked it for more details, and that’s when it started saying that what Trump is doing to Ukraine is showing strength.
So the next day I asked GPT about what Trump has done and about his values, and the answers were basically different. So I copied a few of the responses over to see what GPT would say, and it was exactly what I thought.
Comments
Even then, AI's responses can be tampered with.
A shaker full of salt, always.
https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global
But I had to post a couple to show people, because this isn’t about not understanding propaganda. I get it. I just have evidence of it now.
What did we expect?
Silence or manipulation.
Manipulation we got.
Fight it.
An LLM (not AI) is nothing but a fancy search engine. If you can trick search engines, you can trick an LLM.