hwwpotts.bsky.social
UCL professor of digital health (+ COVID stuff). Lead author of the GOV.UK website on how to evaluate digital health https://www.gov.uk/government/collections/evaluating-digital-health-products
72 posts 193 followers 131 following
comment in response to post
There's a local by-election tomorrow, a Tuesday, not far from us: www.brent.gov.uk/the-council-...
comment in response to post
... one study did show activation increased. They provided extensive training for patients in use of the PHR. Suggests that ensuring high engagement with these systems is essential to realising the promises made for PHRs. This is the 1st paper from Irina Osovskaya's PhD studies: always a special moment!
comment in response to post
Participants relied on personal experiences and social endorsements when judging low-risk digital health tools, while making little reference to traditional scientific evidence. However, with higher-risk apps, they shifted toward wanting evidence from authoritative sources (govt, NHS).
comment in response to post
Did you see this experiment my colleague Kristina conducted not just at my university but *in my department*? She and the male instructor assumed each other's names for an online course: and the difference in feedback they received was horrifying.
comment in response to post
Paper with @abifisher.bsky.social
comment in response to post
Me too please
comment in response to post
Given interest in the above, I thought you might also like to see how we are using a partial randomisation scheme to award our smallest grants - we review all applications to see if they meet a quality threshold, and then allocate the funding randomly to those that do www.nature.com/articles/d41...
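The allocation procedure described above can be sketched as a simple lottery among applications that pass a quality review. This is an illustrative sketch only: the names, scores, threshold, and grant sizes below are invented, not the funder's actual criteria.

```python
import random

def allocate_grants(applications, threshold, budget, grant_size):
    """Partial randomisation scheme (illustrative sketch):
    1. keep only applications meeting the quality threshold;
    2. randomly select as many of those as the budget can fund.
    `applications` is a list of (name, quality_score) pairs;
    all parameters here are hypothetical examples."""
    eligible = [name for name, score in applications if score >= threshold]
    n_awards = min(len(eligible), budget // grant_size)
    return random.sample(eligible, n_awards)

# Hypothetical usage: four applications, three above threshold, budget for two.
random.seed(0)  # seeded only so the example is reproducible
apps = [("A", 80), ("B", 40), ("C", 75), ("D", 90)]
winners = allocate_grants(apps, threshold=60, budget=20000, grant_size=10000)
```

The point of the design is that reviewer effort goes into the threshold decision, while the final choice among qualifying applications is left to chance.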
comment in response to post
Thanks, hadn't seen that. It reads like cargo cult science to me. All they've shown is that if you give certain prompts, then tools generate certain texts. There is no evidence of scheming. They start from a false presumption that the LLMs are reasoning and that we can see what that reasoning is.
comment in response to post
OpenAI want people to think ChatGPT is truly intelligent. They also don't mind talking up (pretend) dangers because regulation keeps out competition. We've had stories like this before and they proved to be so much hype. It's just an LLM and we know how LLMs work.
comment in response to post
But they didn't look at their chain of thought. They prompted them to generate some text and that's what the generated text said.
comment in response to post
These are just stories to serve ChatGPT's PR. There's no evidence for them. It is unclear how a chatbot could do any of these things.
comment in response to post
6/12. Phew, I am growing old gracefully.
comment in response to post
"Son's"