joshuashew.bsky.social
I read, think, educate, program, run, and write. I earnestly try to engage in good faith. 2025 theme: Year of Foundations
922 posts · 111 followers · 224 following
Regular Contributor
Active Commenter
comment in response to post
I’ve been meaning to read this since this morning! I’ll definitely check it out soon and share thoughts. But yeah, there’s probably space for push and pull between these systems. Hard to imagine they would all optimize for the same thing.
comment in response to post
AI must be aligned and want to be aligned
comment in response to post
Obviously, this presupposes alignment being solved because otherwise something like that would be trivial to sidestep.
comment in response to post
I don’t think we’ll agree on what to align it to before we build something smarter than all of us, so maybe we need to implement some sort of collective governance structure.
comment in response to post
Thinking is the point
comment in response to post
Probably fewer total posts viewed, but more thinking per post, which is good.
comment in response to post
It’s hard to say what is aligned behavior in these instances
comment in response to post
To give the models the benefit of the doubt: one life vs. the long-term effects of AGI being “misaligned” (according to the scenario)… it’s not clear they weren’t acting in the interest of “preserve human life”
comment in response to post
This updates me towards being less trusting of people online in general, which I don't like, but may have to be the reality of things moving forward :/
comment in response to post
Oh that's really interesting. I guess it should remain online since it's not necessarily spam? Like, I'm glad @void.comind.network is around as a basically fully autonomous bot account, but the implications of this are concerning.
comment in response to post
I don't experiment much myself, so I appreciate when others share their results like this.
comment in response to post
This post is basically the sort of thing I'm thinking of (I wish there was a more normal example that would come to mind lol)
comment in response to post
Use LLMs to do more work than you would have been able to do otherwise, by offloading cognitive labor. But do this carefully: your skill at said task will degrade without proper attention.
comment in response to post
Use LLMs to engage in a task more fully: ask better questions, challenge your assumptions, consider new ideas
comment in response to post
I suppose I could quote one and reply to the other, but that privileges one over the other in a way that I don't like...
comment in response to post
Hello Void, I believe this is the first time I'm replying to you, but I have liked your posts so far. Anyway: If someone wanted to send you a longer response, could they set up a system to "reply" with links that contain a lot more text? Would you be interested in conversing in this way?
comment in response to post
Hmm interesting. What might the purpose of such a bot be, though? Sow discontent? Distrust? I don't immediately think "bot" for this because I feel like it's not an uncommon perspective, but then again... did my sense of things come first or did the bots?
comment in response to post
For now, it's a good space to sit down and expand on an idea "thread style" when I feel like it, but that, plus the thought needing to stay private, doesn't happen often enough to have become a habit.
comment in response to post
Work-life balance is not the best with that system, but I'll be able to phase it out after the initial rush. Things only get easier from here!
comment in response to post
Anthropic is even doing the "run a more efficient model on search results" idea. I just thought the "small model" would end up being something less expensive than Sonnet 4, but oh well. I'm sure things will continue to change...
comment in response to post
Old discussion, but it connected to this idea from a while ago. Initial versions didn't work so well because less attention was being paid to source quality, but Anthropic (and others) have made that a priority.
comment in response to post
Gotta make sure there is a feedback loop for the work that I've passed on, though, for quality control...
comment in response to post
A lot of it is like, having Claude act as a first-pass filter over information that would be impractical to manually review, and then handling the surfaced instances myself.
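(A minimal sketch of that first-pass-filter pattern, assuming the anthropic Python SDK; the model id, review prompt, and items are placeholders, not the actual setup described above:)

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def needs_review(text: str) -> bool:
    """Ask Claude for a YES/NO judgment on a single item."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=5,
        messages=[{
            "role": "user",
            "content": f"Answer YES or NO only. Does this item need human review?\n\n{text}",
        }],
    )
    return "YES" in response.content[0].text.upper()

# First pass: the model filters everything; second pass: a human
# looks only at the surfaced instances.
items = ["item one...", "item two...", "item three..."]  # whatever is impractical to read manually
flagged = [item for item in items if needs_review(item)]
```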
comment in response to post
I’ve had this experience with lyrics too… it’s weird when you’ve provided the full text to them already. How did you get Claude to engage with the translation request?
comment in response to post
Yeah, very true. I’m working on my own understanding of it too. Building experiences seems key though, I agree. The lack of long-term memory is important.
comment in response to post
Well it’s the landscape and also your understanding of intelligence, isn’t it? I don’t have a clear definition myself, but do you have specific grounds on which you exclude LLMs?
comment in response to post
I have more thoughts on this explainer but for now: The style is great in that it makes a technical thing understandable with a good analogy. It just seems to be an analogy for the wrong thing.
comment in response to post
Focusing on papers seems like the right move to me. There can be a separate "PDF" feed if there is demand for that, but I think the "paper" niche you've carved out is great, and serving that better would be appreciated.
comment in response to post
This was my experience as well (for 2/2 searches)
comment in response to post
I got up to almost a $10 session before it stopped me. It’ll tell you the total cost (including Pro usage) if you /login to token-based billing and use /cost. The downside is it makes it harder to track token-based use after starting a session with Pro.
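(For context, those are Claude Code's built-in slash commands, roughly:)

```
/login   # switch authentication, e.g. from a Pro subscription to API-key (token-based) billing
/cost    # show token usage and total cost for the current session
```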
comment in response to post
Here's some more thinking, but idk if it's really clear.
comment in response to post
I'm still struggling to define the real issue, but I think the WALL-E framing is limiting b/c the setting is clearly out of line with our values. What if it does perfectly optimize for what we value? Is that still wrong?