zacis.me
Some guy who programs stuff and sometimes teaches. Father of three. Exiled St. Paulite living in Minneapolis. Co-creator of Taelmoor. He/Him.
313 posts
421 followers
411 following
comment in response to post
Please, please, please join me in contacting your congressional representatives IMMEDIATELY to tell Congress to STAND UP AND ACT.
Call tool: act.pih.org/foreign-aid-...
Email tool: act.pih.org/foreign-aid-...
comment in response to post
Emojis also have no answer for (╯°□°)╯︵ ┻━┻
comment in response to post
I don’t know that I would place much faith in that deduction, and a model’s report of its own prompt should be considered akin to hearsay until otherwise verified, but fair enough.
comment in response to post
You could potentially get Grok to divulge the contents of the prompt.
You could not get Grok to tell you if/when it was changed, or by whom, or for what reason. You also could never be sure if the contents were genuine or a made-up derivation from online conspiracies.
comment in response to post
What data will the LLM use to deduce that its code has been modified?
comment in response to post
The process has gotten more complex as they have tried to eke out small improvements, but I haven’t seen anything to indicate they added a way to evaluate the model’s own code, or a memory of what that code used to be.
comment in response to post
Let's not generalize the discussion. I'm not speaking about the capabilities of future systems or the dangers of AI in general. I am specifically saying that a modern LLM cannot tell you whether or not its code was modified. Full stop. Thinking it can is a complete misunderstanding of what these systems are.
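To make that concrete, here is a minimal sketch (toy Python, every name in it hypothetical, not any real model's API): a model's answer is a pure function of its fixed weights and the prompt it is given. Nothing in the loop reads source files, version history, or deployment configs, so a question about code changes can only be answered from learned text.

```python
# Toy stand-in for an LLM (hypothetical, for illustration only):
# the output depends solely on (weights, prompt).
def toy_llm(weights: dict, prompt: str) -> str:
    # No file reads, no version-control queries, no access to "its own code".
    return weights.get(prompt, "I don't know.")

# "Weights" here are a canned lookup table standing in for learned parameters.
weights = {
    "Was your code modified?": "No.",  # learned/canned text, not an observation
}

print(toy_llm(weights, "Was your code modified?"))  # -> "No."
# Even if the serving code on disk changed a second ago, the answer is the
# same, because the function never looks outside (weights, prompt).
```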
comment in response to post
Sounds like you think of AI/LLMs like magic. They're not. They're not Terminators either. Fancy autocomplete gets closer to the reality.
There may be unexpected consequences, and there are *parts* of the process we understand poorly, but they for sure cannot tell you about a "code intrusion".
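For anyone who wants the "fancy autocomplete" framing made literal, here is a tiny sketch (a hypothetical toy, not any production system): a bigram model that predicts the most likely next word from observed frequencies. A real LLM does this at vastly larger scale with learned weights, but the mechanism is still mapping input text to a likely continuation, not inspecting anything.

```python
from collections import Counter, defaultdict

# A toy bigram "autocomplete": count which word follows which in a corpus.
corpus = "the cat sat on the mat the cat ate".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequently observed next word, if any."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(autocomplete("the"))  # -> "cat" (seen twice, vs. "mat" once)
```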