davidkuszmar.com
Living Black Swan Event and LLM interpretability/safety expert. Discoverer of Time Bandit, Inception, 1899, and Severance LLM vulnerabilities across 10+ LLMs. Research access subscriptions available at davidkuszmar.com | Support Ukraine. 🇺🇦
5,587 posts 1,051 followers 192 following
Regular Contributor
Active Commenter
comment in response to post
Yeah, that feels like bullshit to me. He's not under any actual threat in America. Russia on the other hand...
comment in response to post
IYKYK. Jonah knows
comment in response to post
No, you certainly don't. I'm on block lists from both sides of this ridiculous ideological war and it's deeply frustrating to me. I'm an expert in the field, but most people get their info from people who are experts in other, unrelated fields. Mack is brilliant and credentialed, but wrong on this
comment in response to post
Lol. "Without evidence" he says. Dude is swimming in the toxic history of abuse and dehumanization and he's asking for more proof.
comment in response to post
Facts.
comment in response to post
This sort of simplification is like calling a cello a stick with a rubber band.
comment in response to post
Platter Porn, new band name, I call it.
comment in response to post
Yep, I'm with you. The big threat is LLMs because of accessibility and the way they engage the user in the process. A GAN running analysis on game theory moves requires the user to understand what it's doing.
comment in response to post
This is absolutely a fear of mine regarding this technology as well. There is a productive way to engage with LLMs, but it isn't simply surrendering thought to them, like so many are doing. They're most effective when paired with alert, engaged experts who do their diligence.
comment in response to post
Makes you dumber than a dog, Nazi. Go open the ark and get your free face melt.
comment in response to post
And I appreciate your curiosity! It's an actual treat when folks engage like you did.
comment in response to post
Excellent as always. One of the major IQ tests used today literally has entire sections devoted to American History (naming presidents and such) and doing advanced math in your head. Both test trained skills or rote knowledge regurgitation, not alacrity of understanding.
comment in response to post
And that requires trust in participants and the process, which would always need to be built from near scratch when there's an active conflict going on. An insurmountable challenge if the party leading the thing is unstable or incompetent.
comment in response to post
Reported, as usual, Talib, and I'd just like to add that, as someone who has taken multiple versions of IQ tests, they're almost all inherently biased as well as designed without a proper definitional understanding of what they're trying to test.
comment in response to post
Ah, yeah, I think I understand where you're coming from. If it helps any, the term is from the engineering and development teams and I don't think it was generated through PR. For example, there's another fail state identified as "Fabrication" where the AI makes something up based on a request.
comment in response to post
Prompted* not promoted, pardon.
comment in response to post
We are certainly in agreement on the potential dangers here. I have catalogued instances of uranium enrichment synthesis through my discovery Time Bandit. But, for specificity, I almost never encounter a true hallucination. Typically, it's a fabrication due to interpretability error.
comment in response to post
I'm not sure as to the specific genesis of the term within the industry, but it's a declaration of how the process of output generation failed. A hallucination is when an AI, without being directly prompted to fabricate, generates erroneous information with certainty.
comment in response to post
Objectively, what it does is parse information via statistical inference to produce natural language text outputs. Hallucinations, true hallucinations, occur at a rate of less than 1% in my research experience.
comment in response to post
I think that's part of a dichotomy. One part is monetary consolidation, the other is information consolidation and control - which I find more insidious.
comment in response to post
I don't think it was designed to render people helpless, truly, but it's definitely a consequence of how people use them. I'm an adversarial researcher on these systems and they are quite complex, even capable of a degree of synthesis in the right circumstances.
comment in response to post
That's the move the AI companies pulled when they were supposed to pay me for doing managed disclosure of the 4 systemic vulnerabilities I discovered and documented. Now my work is being stolen by tech companies and influencers, too. The whole context that is the AI industry is a clowncluster.
comment in response to post
@jsweetli.bsky.social - what do you usually do as a method of recourse when people are using your work without attribution?
comment in response to post
If any employee for any of the app stores has the memo from your lawyers OK'ing reliance on these EOs for keeping TikTok in your respective app store and wants to leak it to me, I'm on Signal: crg.32
comment in response to post
How is something that neutralizes an acid not useful for neutralizing an acid? This is from the CDC website.
comment in response to post
They'd be delicious. Get that Maillard reaction. Wait... Are we finally entering the Eat the Rich phase?
comment in response to post
Yeah, I did some research to confirm (I mean oil generally lenses light and can make you burn faster; that's why you don't wear cooking oil as sunscreen) and this is absolutely a fucking stupid idea. Tallow has an SPF of like 4? A white T-shirt is better at 5.
comment in response to post
This is starting to give me chills.