allfornaught.bsky.social
55 posts
15 followers
350 following
Active Commenter
comment in response to
post
We are "real mad" because you show up, like countless others before you, claiming to only care about opposing evil, while smearing the only group of people who understand what needs to be done. We have no fucking power over here. What do you think you are accomplishing?
comment in response to
post
People are mad about vulnerable people being harmed relentlessly, and you step in to remind us to be civil. We are "real mad" because addressing evil with civility is what every politician, mainstream journalist, and anyone else with any power has been doing all along, which is how we got here.
comment in response to
post
I guess he had to go fight evil or something.
comment in response to
post
You did also say that you personally take an active stand against every evil you see as best you can. It's just above.
comment in response to
post
No posts about the climate, police, billionaires. But you have the safety of the families of ICE agents locked down.
comment in response to
post
Ok, then what should we infer from the fact that you have only ever posted here about ICE in this context? To say doxxing them is evil. Nothing else in the five months your posts go back. Either you think someone doxxing ICE is worse than the actions of ICE agents, or what you just said isn't true.
comment in response to
post
There are a lot of evil things happening in the world. Do you personally take an active stand against each and every one of them?
comment in response to
post
You should ask those you serve for money; it is tasteless to ask those whom you've abandoned.
comment in response to
post
I don't normally say this, but, lol
comment in response to
post
You're conflating etymology with data storage. An etymological dictionary doesn't contain the texts it references, and LLM weights don't contain training data. Language is complex, but that doesn't put Wikipedia into the model's weights. LLMs are good at language, therefore they're cheating?
comment in response to
post
Humans *are* ridiculously susceptible to cognitive bias. Turing was not trying to define machine intelligence.
He was showing that what we call intelligence is already a behaviorally-interpreted recursive phenomenon. All we have to judge understanding is what it looks like from outside.
comment in response to
post
without introducing things that require sensory input, persistent memory, "grounding", or other things for which an LLM has no mechanism. I am skeptical and I am rigorous. The idea that the output is only random noise is demonstrably false.
comment in response to
post
I can't argue against your subjective experience with this. I can tell you that I have pushed as far as I know how to break the illusion of understanding and I cannot find a meaningful, objective, quantifiable difference between the understanding of an LLM and that of a human...
comment in response to
post
I suspect that people who say this have not pushed much beyond surface-level, one-off questions into dialogue, which regularly displays the "appearance" of complex "understanding". I'm not having long, incoherent dialogue with nothing but noise and not noticing.
comment in response to
post
If I ask it a question, it replies to that specific question. Without "understanding"?
Define "noise". Is every word in this conversation noise? If not, at what threshold does statistical communication become meaning?
Why do actual experts in fields like law, medicine, and coding use them daily?
comment in response to
post
Not at all, I understand frustration with imprecision. But most people think these things run off symbolic logic and a database, which is fundamentally wrong. It's difficult enough to correct this widespread misconception, and nuance about details makes it harder without really affecting my point.
comment in response to
post
The fact that this works is a profound discovery with implications that reach into our very conception of reality. It is rational to assume they work as you described, because the truth is nearly inconceivable. Corporations aren't in a hurry to correct misconceptions about this, obviously.
comment in response to
post
I'll admit I don't have all the details of these things, but my explanation is much closer to the truth than what people commonly assume about how all of this works. It's mostly a black box.
comment in response to
post
I am not trying to defend LLMs regarding copyright. I agree this is deeply unintuitive and would not be possible if our current understanding of the nature of meaning and cognition were fully accurate. That is my point.
comment in response to
post
This analogy is not correct. The data is *not* there in any meaningful way. There was never an index to be lost. If we accept that the data is there, we must also accept that all of this data is also in a dictionary. Maybe that's true in a sense, but it isn't useful to look at it this way.
comment in response to
post
My point is not that an LLM is conscious or "alive" or anything like that. It is that humans might not actually be in control of our own cognition. Whatever it does mean, it means *something* and no one is reckoning with it.
comment in response to
post
We have demonstrated that emulation of understanding and the creation of meaning do not require awareness or intent. There are currently no criteria by which we can definitively distinguish this type of understanding from human understanding except subjective experience.
comment in response to
post
...understanding and meaning, generalized to virtually any subject, is something nearly any expert in any related field would have said was impossible 20 years ago. This deeply undermines current mainstream understanding of cognition and consciousness, and everyone is ignoring this.
comment in response to
post
The entire process is opaque and mathematical. There *is* no program. There are billions of decimal parameters adjusted in "training" with calculus. Afterward these values are static, and, yes, the math is deterministic with pseudo-randomness. The fact that this produces the appearance of...
comment in response to
post
So maybe things can be reproduced if the right pathway is taken by chance, but this is not the same thing as storing metadata or encoding for later retrieval. My point that we should consider what it means that the training data isn't stored and an LLM uses no symbolic logic remains.
comment in response to
post
Ok, then you know that the model is performing approximate reconstruction by statistical correlation (which can result in verbatim reconstruction of segments) and that no specific text is encoded anywhere at all. It doesn't really matter how I know this, because it is correct.
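To make "reconstruction by correlation, not lookup" concrete, here is a toy sketch of my own (not anything from an actual LLM, and a bigram count table rather than learned weights): no sentence is stored as text anywhere, yet following the strongest statistical transitions can regenerate a training segment verbatim. The corpus and function names are invented for illustration.

```python
from collections import defaultdict

# Count word-to-word transitions from a tiny "training corpus".
# Only counts are kept; the original sentence is never stored as text.
corpus = "the cat sat on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def reconstruct(start, steps):
    """Follow the most likely transition at each step."""
    word, out = start, [start]
    for _ in range(steps):
        if not counts[word]:
            break  # no known successor for this word
        word = max(counts[word], key=counts[word].get)
        out.append(word)
    return " ".join(out)

print(reconstruct("the", 3))  # regenerates a verbatim training segment
```

The table holds correlations, not documents, yet segments of the source come back out intact. Scale that idea up by billions of parameters and you get the unintuitive situation described above.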
comment in response to
post
These things don't even have simple parameters you can toggle to control their behavior, because the people who made them *don't know how they work*. The best they can do is completely filter or block a response (obvious), or ask it nicely and repeatedly to behave (flimsy). That isn't a joke.
comment in response to
post
This information is available. Or you can go ask any AI about this, and don't accept the first answer it gives, because it won't fully explain unless you persist.
comment in response to
post
Perhaps there is text encoded into the static decimal weights somehow, but there is *no logic* to do this or retrieve it. This thing turns your words into numbers (tokens), runs them through a math function that "predicts" output tokens (one at a time, in order!) to create understanding and meaning.
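The loop I'm describing can be sketched in a few lines. This is my own toy illustration, not real model code: the vocabulary, score table, and seed are all made up, and a real LLM replaces the lookup table with a frozen function of billions of learned weights. But the shape is the same: words become token ids, a static function scores the next token, and output appears one token at a time.

```python
import random

# Invented toy vocabulary; real tokenizers have tens of thousands of entries.
VOCAB = ["<end>", "the", "cat", "sat", "on", "mat"]
TOKEN_ID = {word: i for i, word in enumerate(VOCAB)}

def next_token_scores(context_ids):
    """Stand-in for the frozen math function. Here it only looks at the
    last token id; a real model conditions on the whole context."""
    table = {
        TOKEN_ID["the"]: [0.0, 0.0, 0.6, 0.0, 0.0, 0.4],
        TOKEN_ID["cat"]: [0.1, 0.0, 0.0, 0.9, 0.0, 0.0],
        TOKEN_ID["sat"]: [0.2, 0.0, 0.0, 0.0, 0.8, 0.0],
        TOKEN_ID["on"]:  [0.0, 0.9, 0.0, 0.0, 0.0, 0.1],
        TOKEN_ID["mat"]: [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    }
    return table.get(context_ids[-1], [1.0, 0.0, 0.0, 0.0, 0.0, 0.0])

def generate(prompt, seed=0, max_tokens=10):
    rng = random.Random(seed)  # deterministic pseudo-randomness, as noted
    ids = [TOKEN_ID[w] for w in prompt.split()]
    out = []
    for _ in range(max_tokens):
        scores = next_token_scores(ids)
        tok = rng.choices(range(len(VOCAB)), weights=scores)[0]
        if tok == TOKEN_ID["<end>"]:
            break
        ids.append(tok)       # each output token feeds the next prediction
        out.append(VOCAB[tok])
    return " ".join(out)

print(generate("the cat"))
```

Nothing here retrieves stored text; the continuation is sampled, one token at a time, from scores produced by a fixed function. Same seed, same output.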
comment in response to
post
You would think that this must be true, but it isn't.
comment in response to
post
People might be more impressed (or horrified) if they understood that an LLM does not have a single word of its training data stored and that it's not a symbolic computer program, but a math equation. I'm not here to defend AI, I am here to ask people to, for the love of god, consider what this means.
comment in response to
post
There must be a way to get him to share this one.
comment in response to
post
They can understand complexity easily and cross reference and synthesize ideas and information in transformative ways without having a single word of text stored. Errors are fixed by saying, "can you fix this part?" Could be using them to produce leftist propaganda instead of voluntarily disarming.
comment in response to
post
Really breathtaking when you consider that the president doesn't have the power to do anything.
comment in response to
post
Would you say it is a bigger fuckup than carelessly handing control of these things over to fascists?
comment in response to
post
They don't fucking care what happens next
comment in response to
post
right-winger occupying all the space on the left: "these leftists keep making me lose!"
comment in response to
post
It seems to me that if you work in politics you should try not to be ignorant of the most basic aspects of the subject. It's a bit telling that all of you who say this can never articulate what the fuck you actually mean by it beyond just making this asinine assertion.
comment in response to
post
I recommend that this shit sack punch himself in the dick for 60 hours a week, can you put that in the paper
comment in response to
post
You fucking morons are as responsible for Trump as Trump voters by never pushing for anything better and refusing to ever learn a single lesson or consider how anything actually works. What are you even saying here? What is your theory on why Trump voters showed up when he ran the worst campaign?
comment in response to
post
wonder why your centrist moderate pals aren't doing this
comment in response to
post
These motherfuckers, throwing the election and then telling us that elections have consequences. We know, that's why we wanted you to try to win.
comment in response to
post
Do you ever get embarrassed about being abjectly incompetent at performing your single professed function? Probably not, because you are performing your actual function very well.
comment in response to
post
Just randomly saw this on my timeline, don't know you, and not certain I'm not missing a joke, but Neosporin Lip Health Overnight Renewal Therapy worked much better than chapstick for me recently.
comment in response to
post
They are all in lockstep with the dumbest opinions you've ever heard, which you have already heard verbatim hundreds of times. Seeing value in this is pathological.
comment in response to
post
I tried to read it and then hit a dead end paywall when it was about to come to a point.