fpl9000.bsky.social
Retired software engineer. AI enthusiast. Deadhead. I implemented the regex operator (=~) in Bash.
246 posts 1,130 followers 225 following
Prolific Poster
Conversation Starter
comment in response to post
Metaphysical means not governed by the laws of physics, which includes things like magic, spirits, souls, gods/demons, etc. Just because consciousness is complex and emerges from fundamental physics doesn't make it metaphysical. We can lose consciousness from a bang on the head.
comment in response to post
Fair point. I'm open to the possibility that human-level intelligence needs a biological substrate, though I tend to be a substrate "neutralist" on this.
comment in response to post
I notice that, on Windows, Gemini CLI always executes shell commands using cmd.exe, so I'm using Gemini CLI to modify itself to spawn user-configurable shells (e.g., Bash, Zsh, etc.).
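Roughly the idea (a hypothetical illustration, not Gemini CLI's actual code or settings): instead of hard-coding cmd.exe on Windows, the CLI would read the preferred shell from user configuration and route each command through it.
cmd.exe /c "dir src"                        # what I observe today: commands go through cmd.exe
C:\cygwin64\bin\bash.exe -lc "ls -l src"    # what a user-configurable shell setting could spawn instead (example path)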
comment in response to post
"... reduced to material"??? There are persistently difficult problems to solve (e.g., Chalmers' Hard Problem of Consciousness), but materialism has defeated dualism century after century. IMO, to think that there's something metaphysical happening in our brains is just unbelievable at this point.
comment in response to post
Gemini CLI can modify its own instructions to remember things. Here, I tell it to remember to run commands on my Windows system using Cygwin Bash instead of cmd.exe.
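The remembered instruction ends up as plain text in the CLI's context file (GEMINI.md); mine says something along these lines (paraphrased, and the Cygwin path is just an example):
- On this Windows machine, run shell commands through Cygwin Bash, e.g. C:\cygwin64\bin\bash.exe -lc "<command>", instead of cmd.exe.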
comment in response to post
That's just my backup, created by manually copy-and-pasting the list from the web interface. I wish Bluesky let me export/import muted words to/from a text file.
comment in response to post
$ wc -l bluesky-muted-words.txt
240 bluesky-muted-words.txt
comment in response to post
You can also use Claude in VSCode via the Cline extension (cline.bot), which is free and open-source. You need to bring your own API key, but that has the advantage of no extra fees from the agent provider (plus you can use any AI, though I'm partial to Claude).
comment in response to post
Exactly. I totally get how modern reasoning models fail the ARC-AGI 2 challenge due to the uniqueness of its tests and the fact that they are not in the training set of *any* model. I'm sure the authors knew about ARC-AGI 2 when writing this paper, but they still chose those tests.
comment in response to post
I asked Claude Sonnet 4 to do some deep research on the paper's methodology (not leading it in either direction with my prompt), and it had this to say.
comment in response to post
And the authors don't appear to point out that, when unable to complete the task, most of the models provided a correct English description of the recursive algorithm that solves the Tower of Hanoi for any number of discs. Also, the timing relative to a WWDC with zero AI content is weird.
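For reference, the recursive solution the models described is tiny. A quick Bash sketch (my illustration, not code from the paper):
hanoi() {
  local n=$1 from=$2 to=$3 via=$4
  (( n == 0 )) && return                   # base case: no discs to move
  hanoi $((n - 1)) "$from" "$via" "$to"    # park n-1 smaller discs on the spare peg
  echo "Move disc $n from $from to $to"    # move the largest disc directly
  hanoi $((n - 1)) "$via" "$to" "$from"    # stack the n-1 discs back on top
}
hanoi 8 A C B   # 8 discs take 2^8 - 1 = 255 moves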
comment in response to post
A recent episode of "The AI Daily Brief" podcast (from Tuesday, 2025-06-10) pointed out some serious methodology issues, like ignoring the fact that Claude fails at 8 discs in the Tower of Hanoi test because it reached the output token limit. Also, the models were forbidden from writing code. +
comment in response to post
Check out this excellent video by 3blue1brown (Grant Sanderson). This is what convinced me that LLMs predict the next token by understanding the meaning of the words. www.youtube.com/watch?v=wjZo...
comment in response to post
Have you tried Cline? It's exactly that. You use your own API key, so there are no overhead charges. Works great with Claude Sonnet 3.7. cline.bot
comment in response to post
And Claude also writes nearly 100% of the unit tests. All with human code review, of course.