jdkite.bsky.social
Academic with Prevention Research Collaboration at Uni of Sydney. Cyclist. Swans fan.
275 posts 1,058 followers 326 following
Regular Contributor
Active Commenter
comment in response to post
Do they say anything about what the tech will actually be or how it will be applied?
comment in response to post
“Comedian ABSOLUTELY DESTROYS comedians absolutely destroying things”
comment in response to post
This article is based on our research in @misinforeview.bsky.social, demonstrating how alternative cancer clinics use Google ads to target people searching sensitive cancer-related queries. misinforeview.hks.harvard.edu/article/goog...
comment in response to post
TBH it’s probably cheaper to do this than to ask everyone to pay the journal subscription fee so that they can access the article online
comment in response to post
Meritocracy in action
comment in response to post
The idea of a context window is something that we should be made more aware of, for sure. There’s also ChatGPT’s overly positive nature - I suspect she could’ve uploaded her shopping list and ChatGPT would’ve told her it was brilliant and to include it in her portfolio
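(To make the context-window point above concrete, here is a minimal, purely illustrative Python sketch. The window size and word-level "tokens" are made up for the example; real models work over thousands of subword tokens. The point is just that anything before the window is not part of what the model conditions on.)

```python
CONTEXT_WINDOW = 8  # illustrative only; real models use thousands of tokens

def visible_context(tokens):
    # The model only conditions on the most recent tokens that fit the window;
    # anything earlier is silently dropped from its input.
    return tokens[-CONTEXT_WINDOW:]

doc = "a very long document pasted into the chat gets silently truncated".split()
print(visible_context(doc))
# ['document', 'pasted', 'into', 'the', 'chat', 'gets', 'silently', 'truncated']
```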
comment in response to post
Interesting. I guessed exactly where that was going fairly early on. My understanding is that ChatGPT doesn’t ‘read’ external input, at least not fully. At best it’s scanning it and generating a likely response from the few words it has picked out, guided by the prompt the user gave it
comment in response to post
Yep. It’s fair to say that we humans think far less about important decisions than we should. “He probably means this thing I agree with, not that thing I don’t”, “sure, he said that but that was just hyperbole, he’s not going to do it”, “I normally vote for his side of politics”…
comment in response to post
I’d add that they probably did not fully think through the consequences of what he was saying he would do
comment in response to post
We find that, in 2023 (the last year we had data for when we did this), nearly one-quarter of all US deaths - and nearly **one-half** of deaths at ages younger than 65 - would not have happened if the US had the death rates of other rich countries, instead of uniquely high American death rates.
comment in response to post
What?
comment in response to post
Some classic A1, too
comment in response to post
Haha! Big nerd energy there Melody!
comment in response to post
So disappointing and yet entirely predictable at the same time. Very ALP areas
comment in response to post
Sure, all the evidence shows cooking classes are a waste of time and money, but what if we lecture people about how they have diabetes because they are stupid and lazy while they are at the cooking class? Has anyone tried that before?
comment in response to post
Exactly. It’s pointless to complain about it because it’s not designed to be correct, it’s designed to produce responses. We should concentrate on fixing the actual issues with AI, not waste time shouting into the void about how it doesn’t do a thing it’s not designed to do
comment in response to post
But pointing out that they make factual mistakes misses the point. Of course they do - they don’t know what truth is. They don’t *know* anything as that is not what they are designed to do. Instead, we should focus on actual problems like fair sourcing of training data and bias in the training data
comment in response to post
It is really important to understand how LLMs work so that, if you choose to use them, you do so effectively. That does mean checking the accuracy of everything in their responses, even if they give you sources. There’s a fair chance those sources aren’t real or don’t say what the LLM claims
comment in response to post
If I asked an LLM to generate a workout plan for me or give me a structure for a presentation, I’d be annoyed if it came back with “sorry, I don’t know how to do that”. That’s hardly a useful tool, is it?
comment in response to post
Equally, they’re never going to tell you they don’t know the answer to a question because they’ve been trained to generate responses to prompts, not to generate factually correct answers to questions
comment in response to post
So, if you say you are a magic fire-breathing dragon with diamond-encrusted wing syndrome and would like treatment advice, they’re not going to tell you that you can’t possibly be a dragon and there’s no such thing as diamond-encrusted wing syndrome - they’re going to play along!
comment in response to post
They are designed to respond to user prompts by arranging words probabilistically. In other words, they are assembling their responses based on what their training tells them is the most likely collection and order of words given the prompt
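(To illustrate “arranging words probabilistically”, here is a toy Python sketch: a deliberately tiny bigram model with made-up counts, nothing like a production LLM’s architecture, but the same generate-the-likely-next-word loop described above.)

```python
import random

# Toy "training data": counts of which word followed which word in some text.
counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "dog": {"ran": 2},
    "sat": {"down": 1},
    "ran": {"away": 1},
}

def next_word(word):
    options = counts.get(word)
    if not options:
        return None
    words = list(options)
    weights = [options[w] for w in words]
    # Sample in proportion to frequency: likelier continuations win more often,
    # but not always, which is why output varies from run to run.
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_words=5):
    out = prompt.split()
    for _ in range(max_words):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Note there is no notion of truth anywhere in that loop, only of likelihood given what came before.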
comment in response to post
So, why is this a misconception? LLM stands for large language model. The key term there is ‘language’. They are not ‘knowledge’ models - they do not ‘know’ anything
comment in response to post
A disclaimer: I’m not saying LLMs or AI generally don’t have problems. They definitely do. But constantly pointing out that they get things wrong is like complaining that the car moved when you put your foot on the accelerator.
comment in response to post
The money is needed for more important things like presidential birthday military parades