rkeelan.bsky.social
Writer of fantasy and science fiction | Programmer | He/Him
52 posts
19 followers
18 following
comment in response to
post
I use both: Chrome for Gmail, Google Maps, and general searching, and Edge for a variety of websites I regularly open for a specific purpose (e.g., banking and other bill payments)
comment in response to
post
Most of the best parts of Star Wars over the past 40 years come from the books, games, and TV shows. People who aren't fans aren't aware that stuff exists, so they have no idea why the fans have such affection for the franchise
comment in response to
post
I thought that was where the clip was going!
comment in response to
post
Modern LLMs have hundreds of billions of parameters (maybe even trillions by now). That's a lot of space to represent a lot of concepts. No one should be confident that they know when LLM performance and abilities will plateau. 8/8
comment in response to
post
For LLMs to write as coherently as they do on such a broad range of topics requires more than just knowledge of language, because language isn't precise enough.
"I saw a man in a park with a telescope."
Is the telescope in the park, or with the speaker? It's ambiguous. 7/n
comment in response to
post
Here's another intuition pump: how well would you have to know someone in order to predict what they'd say in certain situations? This is possible—maybe you can do this for your spouse or children or siblings—but you need to know them *really* well. 6/n
comment in response to
post
If you throw a ball in the air you can calculate how long it will take to hit the ground using simple math. But not just *any* math. You need the equations of motion. These aren't just random calculations. They are a model of the world encoding facts about reality. 5/n
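To make that calculation concrete: for a ball thrown straight up, the equations of motion give height h(t) = v0*t - 0.5*g*t^2, so the ball lands at t = 2*v0/g. A minimal sketch (the 10 m/s throw speed is made up):
```python
# Time for a ball thrown straight up to land, from the equations of motion:
# h(t) = v0*t - 0.5*g*t**2, so h(t) = 0 again at t = 2*v0/g
# (ignoring air resistance).
G = 9.81  # gravitational acceleration, m/s^2

def flight_time(v0: float) -> float:
    """Seconds until a ball thrown upward at v0 m/s hits the ground."""
    return 2 * v0 / G

print(flight_time(10.0))  # ~2.04 seconds for a 10 m/s throw
```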
comment in response to
post
When I saw ChatGPT, it was obvious there was more going on. The link below shows the kind of thing that was going on: training to predict the next word resulted in LLMs building increasingly detailed, comprehensive, and accurate models of the world. 4/n
transformer-circuits.pub/2025/attribu...
comment in response to
post
You put all the words in a bag and count how often they show up. Maybe you count pairs of words (bigrams) or triplets (trigrams) or some other sequence length (n-grams). I have seen the results from those kinds of systems and they were not good. 3/n
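For concreteness, counting n-grams really is as simple as it sounds. A minimal sketch (the sample sentence is made up):
```python
from collections import Counter

text = "the cat sat on the mat and the cat slept"
words = text.split()

# Bag of Words: throw the words in a bag and count them.
unigrams = Counter(words)

# Bigrams: count adjacent pairs; trigrams and n-grams generalize this.
bigrams = Counter(zip(words, words[1:]))

print(unigrams.most_common(2))  # [('the', 3), ('cat', 2)]
print(bigrams.most_common(1))   # [(('the', 'cat'), 2)]
```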
comment in response to
post
Here is XKCD describing machine learning as a pile of linear algebra (no fault there; it's a 40-word webcomic). The serious version of this description ("it's all statistics") conjures in people's minds the Bag of Words approach to machine learning. 2/n
xkcd.com/1838
comment in response to
post
Sure, the AI translator isn't good enough today, but AI has been improving roughly 10-fold every two years (depending on what you measure).
The trendline is not guaranteed to hold, but it also doesn't need to hold that much longer!
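The arithmetic of that trendline is worth making explicit. A quick sketch, taking the rough 10x-per-two-years figure at face value:
```python
# Growth factor after `years` of improvement at roughly 10x every two years.
def improvement(years: float, rate: float = 10.0, period: float = 2.0) -> float:
    return rate ** (years / period)

print(improvement(2))  # 10.0
print(improvement(4))  # 100.0
print(improvement(6))  # 1000.0, if the trendline holds that long
```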
comment in response to
post
But there's no guarantee that it remains true! If AI gets to the point where it can do any intellectual work better or cheaper than a human, it may just destroy jobs on net.
It will be bad! And insisting that AI is bad at things it is good at, or very close to being good at, doesn't help.
comment in response to
post
If that remains true with LLM-based AI, maybe it's sufficient for the social safety net to catch the people who become unemployed. It will be unfortunate for the people losing jobs they love (and as a programmer, I may well be one of them), but that is perhaps a reasonable price to pay for progress
comment in response to
post
I also worry that AIs will soon be doing a very good job of this sort of intellectual task—so good that they will displace large swaths of the human workforce.
There's a large literature in economics showing that technological progress historically has created more jobs than it destroyed.
comment in response to
post
In a different chat, I asked DeepSeek to suggest a Hungarian title and it thought of referencing the Hungarian version of "That's Amore", which is on the right track, but again not quite there. So I wouldn't rely solely on an AI translator, but only partly because I'd expect it to do a bad job.
comment in response to
post
DeepSeek doesn't seem to be aware of the Hungarian song which the title is referencing, but it comes close:
"Translators often have to adapt titles to make them resonate with the target audience, considering cultural references, idioms, and linguistic nuances."
comment in response to
post
There's more to law and medicine than winning the case or the patient surviving, so maybe the AI lawyers and doctors won't be that good, or will only be useful in limited domains. I expect them to exist as products within a matter of years, though (unless they're regulated out of existence).
comment in response to
post
Law and medicine, for example, can both be operationalized as binary outcomes (was the case won? did the patient survive or improve?), and there's training data out there (court transcripts and decisions, patient records) for companies that are able to negotiate for access (or just steal it).
comment in response to
post
Yes, and it's possible that AI will only ever be good at coding. But I would bet against it.
Instead, I expect performance in other domains to lag behind coding, because there is less data available, because self-play is harder, or because the domain is less legible to the researchers in the AI labs.
comment in response to
post
That is: I, a rando on the Internet, can think of ways to implement a mechanism analogous to real-time, real-life consequences for an AI model, so I assume it's more a matter of engineering effort and/or usefulness than of fundamental breakthroughs.
comment in response to
post
It is true, though, that AIs have distinct training (with feedback) and inference (without feedback) phases, while humans are running both simultaneously.
I think this is more like an implementation detail than a fundamental limitation, though.
comment in response to
post
AIs experience negative feedback (which is at least analogous to pain, though presumably qualitatively different) during training. I'm quite confident that the AI labs try very hard to negatively reinforce outputs that misrepresent reality.
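As an illustration of what "negative reinforcement" means mechanically, here is a toy preference-update loop; it is not any lab's actual training procedure, and all the numbers are made up:
```python
import math

# Toy sketch of training with negative feedback (all numbers made up).
# Rewarded answers get a higher preference score; penalized answers get
# a lower one, which is the "pain" analogue during training.
prefs = {"true answer": 0.0, "false answer": 0.0}
rewards = {"true answer": +1.0, "false answer": -1.0}
learning_rate = 0.5

for step in range(3):
    for answer, reward in rewards.items():
        prefs[answer] += learning_rate * reward  # negative reward pushes the score down

# Convert preference scores to probabilities with a softmax.
total = sum(math.exp(v) for v in prefs.values())
probs = {a: math.exp(v) / total for a, v in prefs.items()}
print(probs)  # "true answer" ends up at ~0.95, "false answer" at ~0.05
```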
comment in response to
post
I am a programmer, and I am not at all optimistic about my 10+ year career prospects. If I have a job programming at that time, I assume I will mostly be wrangling AI coding bots rather than writing code myself
comment in response to
post
But you'd also want to optimize for code that humans like (because, for now at least, code is still primarily meant for human consumption), so I think you'd need humans in the loop somewhere. Unless you just have a human-comprehension model for that.
comment in response to
post
Returning to code, you could have an AI program self-play with code: have it devise problems, code up solutions, then test those solutions. When a solution works, you positively reinforce; otherwise, you negatively reinforce.
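A hedged sketch of what that loop could look like; the `model` object and its methods are hypothetical stand-ins, and only the test-running part is concrete:
```python
import subprocess
import sys
import tempfile

def passes_tests(solution_code: str, test_code: str) -> bool:
    """Run a candidate solution against its tests; the exit code is the signal."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n\n" + test_code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True)
    return result.returncode == 0

def self_play_step(model):
    # `model` and all of its methods are hypothetical stand-ins.
    problem = model.devise_problem()
    tests = model.write_tests(problem)
    solution = model.write_solution(problem)
    reward = 1.0 if passes_tests(solution, tests) else -1.0
    model.reinforce(solution, reward)  # positive if it works, negative otherwise
```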
comment in response to
post
AlphaGo, the Go-playing AI, was taught by playing many games of Go against itself and famously beat Lee Sedol with a move that (reportedly; I'm not a Go player) almost no human would have played.
This is called self-play.
comment in response to
post
This is perhaps off-topic, but AIs can learn what good code is without human supervision (i.e., entirely on their own), except insofar as the modern definition of "good code" includes "easily understood by humans."
comment in response to
post
You could hook a video camera and microphone to an LLM running on one of those Boston Dynamics robot dogs and tell it not to believe our lies, and then it would have its own independent channel to Truth. And you could probably have the LLM move the dog around, too
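Sketched as a loop (every device and model API here is hypothetical; the point is just that the inputs come from sensors, not from a human's claims):
```python
# Hypothetical perception-action loop. None of these device or model APIs
# are real; the point is that the model's inputs come from its own sensors,
# giving it a channel to the world independent of what humans tell it.
def run_robot_dog(camera, microphone, legs, model):
    while True:
        frame = camera.capture()              # what the robot sees
        audio = microphone.record(seconds=1)  # what the robot hears
        action = model.decide(frame, audio)   # e.g., "walk_forward", "turn_left"
        legs.execute(action)                  # the LLM moves the dog, too
```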
comment in response to
post
So I agree that current AI apps are entirely dependent on what we tell them, but that's a feature of the ChatGPT (or Claude or DeepSeek) product, not something fundamental to LLM-based AI models.
comment in response to
post
The GPT-3 and GPT-3.5 models behind early ChatGPT were text-token-only prediction, but more recent models can take audio and visual tokens, and Waymo reportedly has something like car-operation-token prediction for its self-driving cars.
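In token terms, all of these are the same autoregressive loop; only what the tokens denote changes. A toy sketch with an invented vocabulary and transition table:
```python
# Toy autoregressive loop: the model always does the same thing --
# predict the next token -- regardless of what the tokens denote.
# The vocabulary and transition table are invented for illustration.
import random

NEXT = {
    "<image:pedestrian>": ["brake"],
    "brake": ["steer_left", "steer_right"],
    "steer_left": ["accelerate"],
    "steer_right": ["accelerate"],
}

sequence = ["<image:pedestrian>"]
while sequence[-1] in NEXT:
    sequence.append(random.choice(NEXT[sequence[-1]]))
print(sequence)  # e.g. ['<image:pedestrian>', 'brake', 'steer_left', 'accelerate']
```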
comment in response to
post
I can't say for certain, but I think that, mathematically speaking, the human mind does largely reduce to sense-input prediction (plus long-term memory storage and other infrastructure).
comment in response to
post
I cannot do it justice in a Tweet, but there is a theory in neuroscience, called Predictive Coding, which explains the brain's operation in terms of prediction (with apologies, here is someone else's multi-thousand word essay: slatestarcodex.com/2017/09/05/b...).
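The core idea fits in a few lines: keep an internal estimate of the sensory signal and update it in proportion to the prediction error. A toy scalar version with made-up numbers:
```python
# Toy scalar predictive coding: maintain a prediction of a sensory signal
# and update it in proportion to the prediction error. Numbers are made up.
signal = 5.0       # the actual sensory input
estimate = 0.0     # the brain's current prediction
learning_rate = 0.3

for step in range(10):
    error = signal - estimate          # prediction error
    estimate += learning_rate * error  # update the prediction to shrink the error
    print(f"step {step}: estimate={estimate:.2f}, error={error:+.2f}")
# The estimate converges on the signal as the prediction error goes to zero.
```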
comment in response to
post
Maybe there's more going on inside the mind, but that seems a sufficient explanation, and it also fits with your description of what AIs are doing.
So I think it definitely can be as reliable as the average human in an advice-dispensing job (even if it isn't that reliable right now).
comment in response to
post
I try to tell the truth because I feel negative emotions (e.g., shame, anxiety) when I say things that are not true. Meanwhile, I can think of one notable person for whom I am quite sure this is not true, so this tendency isn't some universal, immutable quality of humans.
comment in response to
post
There's no platonic concept of "truth" pre-existing in a baby's mind waiting to be used. Rather, over time, a very strong correlation develops between a learned truth-concept and other concepts labelled as facts.
comment in response to
post
I actually think that children learn both the meaning of words (e.g., truth) and that they should communicate truthfully precisely through slow, laborious positive and negative reinforcement from their family and teachers.
comment in response to
post
In other words, I'm not at all convinced that "they have a desire to say plausible-sounding things without regard for the underlying truth," is an untrue statement about most people most of the time.
comment in response to
post
She often gets different answers from every different healthcare professional. Sometimes contradictory ones. Obviously, she's asking questions of people who don't have answers, but they never say they don't know!
So it's true you can't necessarily trust LLMs, but you also can't trust people! /2
comment in response to
post
I tested this earlier today and got a good answer.
A red? black? pill for me on human vs AI expertise is my wife's experience in the health care system. Whenever she has a question, she'll pose it to whichever nurses and doctors she happens to see that day. /1
comment in response to
post
Yes! I did not have red onions on hand for pickling the last couple of times I made pasta salad and their lack made a big (negative) difference!