phild3.bsky.social
257 posts 35 followers 24 following
Regular Contributor
Active Commenter
comment in response to post
that spooks me too. because, in an alternate reality where we _could_ reproduce the fidelity of experience offered by a live orchestra cheaply, should we then _not_ learn music? the benefits of writing are similarly 2nd order(+), harder to intuitively defend against shallow utilitarian replacement 😬
comment in response to post
The “external-ness” of reality? (Ie, that things are true and have implications outside of what we know or want?)
comment in response to post
ah fuck I didn't realize you were the original poster. obviously you're already aware of that. fuck off.
comment in response to post
it is AI
comment in response to post
SOMEONE out there went through the work of creating this graphic, and was just totally uninterested in citing any source. either totally fine making up random numbers in MS paint, OR, even crazier, doing the work of justifying those numbers with sources and just not including them. (it's the former)
comment in response to post
I like the take. My personal [political strategy obvious to me but nobody is doing yet] hot take is: *teach* morals and civics. "nobody likes being lectured to" has too long been a motivating aversion. give me a confident expert that can get us all on the same page. don't pander. address complexity.
comment in response to post
haha it's a fun toy example, but "power used = job done" isn't exactly a great heuristic either: leaves "efficiency" out of the equation (how many of those watts go to "cleaning teeth" vs e.g. "vibrating your hand")? tbh I doubt quip has any revolutionary tech that makes up the difference, but still
comment in response to post
fully agree. my point: I've seen that image 100x over the past weeks, and failed to make the "m=marijuana" connection until now. I'm disappointed that we spread the worst-faith interpretations so widely, when the _actual_ truth is already bad enough. it leaves us misinformed. I applaud your post.
comment in response to post
opposite take: large (majority?) portions of experienced reality are today absorbed as high fidelity fictionalizations (netflix streaming), and so these fictions are the basis for mental models of how the world works. anything happening has "purpose" and "narrative arc"; mere grift doesn't process.
comment in response to post
sources: dqydj.com/net-worth-pe... dqydj.com/net-worth-in... (outdated, but only used for the 99.9% data point) www.forbes.com/real-time-bi...
comment in response to post
can someone give context, so people can actually run with this information? who is "they"? what is this document? there's enough info here to get the hi-fives for a dunk, but not enough to confidently build a useful point of resistance ("hey family member, did you see how in X report they did Y?")
comment in response to post
yes! we absolutely need more of these running aggregates of bullet points. there's so much shit that goes on, and poring over news stories, each with their own introduction-body-conclusion format that gets added to an unscannable pile, is untenable. results in so much going unanswered.
comment in response to post
I think that mindset goes hand in hand with the widespread feeling of being disempowered. The world is a big place that we're lucky if we have the capability to comfortably exist within; but to conform it to our will?! requires a lifetime of cultivating empowerment (through education+?)
comment in response to post
gates/ballmer? Because MSFT, because obscene wealth, or is there something I missed?
comment in response to post
Also holy shit - “[AI is] just a marketing term for a slightly updated version of the automation that has been ruling our lives for years.” That take only makes sense from a person who doesn’t understand computers beyond “magic box that helps me”.
comment in response to post
LOL wth man, are you that far up zitron’s ass? you asked a question, I saw it unanswered and gave a good faith attempt from approximately the perspective you were looking for, and now you’re jumping on me with sass when you see his dismissiveness? Weak.
comment in response to post
So, no elaboration about what’s funny or wrong, then.
comment in response to post
The last point I’ll make: I think “will LLMs be powerful” is an entirely separate question to “are they a force of good”. I think the former is “obv yes” and latter is “prob no”. I think any resistance against them undermines itself by getting the first wrong.
comment in response to post
One example counter to “forever derivative” though: I remember seeing a paper where an LLM was trained to do something very specific, with unlimited training data on that one thing. It was bad. They instead first trained it on “english”, _then_ the specific thing: it was great. Wide cross-domain knowledge helps. /
comment in response to post
Not a scientist, but have both CS and Philosophy degree + 15yrs as pro programmer, no vested interest: IMO (not uniquely so), AGI is essentially “already” here w/ LLMs. It’s “general”, essentially passes Turing test, it’s just not yet “really good” at everything. But: a matter of degree. /
comment in response to post
"well, then the graph wouldn't be easily readable". THAT'S THE POINT MOTHERFUCKER. WEALTH DISTRIBUTION IS SO FUCKED THE GRAPH IS A RIGHT ANGLE. to CUT OUT THE DATA to "make the graph look nicer" is B O N K E R S. EXTREME reality warrants EXTREME graphs, otherwise you're miscommunicating!
comment in response to post
Is this experimentally verified (that either preamble produces similarly effective improvements)? Or are you just considering the possibility? If the latter, feels kinda like saying re: chemo “curing cancer might just as well result from eating more peanut butter” lol
comment in response to post
sure: the turing test is defined to be a _sufficient_ condition, not a _necessary_ one. (that is, if something can pass it, we ought ascribe it intelligence. but we ought _not_ deny anything intelligence just because it _cannot_ pass it)
comment in response to post
I've got a degree in philosophy and computer science. the turing test has been established for decades as the best thing we've got (largely based on the concession that we know humans can think, and all other identified bars for thinking fail). it's only after we passed it that the goalposts moved.
comment in response to post
chatgpt.com/share/67cc92... (don't get me wrong, they're not _great_ puns, but they are puns) also, I tried (and failed) to be clear in my position that I'm NOT arguing that they are good, or ought be trusted as answer boxes. the only thing I object to are incorrect claims that they aren't capable.
comment in response to post
(obviously this is not counter to the thesis of the larger thread: that to treat LLMs as oracles is dangerous and grounded in lack of understanding. but the unreliability of LLMs is similar to the unreliability of a not-especially-considerate uncle. it's the canonicalization that's the problem.)
comment in response to post
yeah I wince at claims like "the models cannot reason" or "it's *just* predicting the next word". deeply reductionist. and you don't even have to go to CoT to see it: the nature of attention involves combinatoric associations similar to anything we'd call reasoning, and the output speaks for itself.
comment in response to post
ftr it is absolutely capable of making never-before-read puns. you've identified that as a meaningful bar- now what? my perspective: AI capability is independent of AI good. it's extremely capable, and arguments (incorrectly) diminishing that also diminish its danger.
comment in response to post
The Katie Johnson thing is widely recognized to be a hoax. We have to be more disciplined with our info.
comment in response to post
for all agents which are *literally* just "a snippet of API description preamble on a prompt", then ok. if that's actually "almost every agent in the biz", then... wow lol. says more about the biz than MCP as new tech.
comment in response to post
?
comment in response to post
if there's secret sauce Z (e.g., some data flow that is context specific, like claude-plays-pokemon), then data <- agent(X, Y, Z) <- agent(X) with implicit Y changes nothing meaningful (data <- agent(X, Z) <- agent(X))
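a toy sketch of the equivalence being claimed, assuming (as the earlier comment puts it) an "agent" is literally just a snippet of context preamble on a prompt, and that Y is "implicit" context present in every call anyway. the names X, Y, Z, and agent() are illustrative, not any real API:

```python
X, Y, Z = "api-description", "implicit-context", "secret-sauce"

def agent(*ctx):
    """Toy agent: a snippet of context prepended to whatever it receives.
    Y is 'implicit': part of every call whether listed or not."""
    def run(prompt):
        return set(ctx) | {Y} | prompt
    return run

# data <- agent(X, Y, Z) <- agent(X)
with_explicit_y = agent(X, Y, Z)(agent(X)({"query"}))
# data <- agent(X, Z) <- agent(X)
without_y = agent(X, Z)(agent(X)({"query"}))

# listing Y explicitly changes nothing meaningful
assert with_explicit_y == without_y
```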