imogen-parker.bsky.social
Associate Director, Ada Lovelace Institute. Mainly posting about AI, society and social policy. Previously IPPR, Citizens Advice, 5Rights, Fabian Society. image cred: Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI 2.0 / CC-BY
41 posts 976 followers 657 following
comment in response to post
The move to 'security' centres a framing of malicious harm - bad faith actors that could use AI. These types of threats need to be addressed. But we also need to mitigate the here-and-now risks where careless AI might replicate inequality, take away jobs, or harm the environment.
comment in response to post
The addition of criminal misuse seems sensible. But I'm interested in whether others see the addition of human influence and societal resilience as making up for the loss of evaluation of 'societal impacts'. Especially as bias has been explicitly cut out of AISI's scope. www.aisi.gov.uk
comment in response to post
Now missing from their site are previous commitments to build and run evaluations for - Societal impacts: How models could affect our social fabric, e.g. by weakening democracy, harming individual welfare and perpetuating unequal outcomes.
comment in response to post
In the list of things they are planning to evaluate, they have made some changes, adding: - Criminal Misuse - Human Influence (how AI systems could influence humans and reduce individual autonomy) - Societal Resilience (how can we make society more resilient to AI risks) But...
comment in response to post
And as ever, the unsung heroes are the @adalovelaceinst.bsky.social comms team, who have done an inconceivable amount of work in the last four weeks alongside Paris prep. Anyone else feel like they've completed 2025 already and it's time for a break?
comment in response to post
And new research on public compute initiatives, with recommendations to ensure policy strategies deliver for society. Authored by @halcyene.bsky.social and @jaivipra.bsky.social www.adalovelaceinstitute.org/report/compu... @adalovelaceinst.bsky.social
comment in response to post
Fundamentally, low transparency has led to a drip feed of stories and concerns about DWP's algorithms. And that undermines public trust in AI across the board. In the Gov's latest survey they came second to last on openness. Hopefully this will make the case to be braver going forward. (5/5)
comment in response to post
It's welcome that DWP did scrutinise them - and pulled the plug if they were failing to be safe, effective or fair. (Not always easy when there are sunk costs or political momentum behind rollouts.) But are those learnings shared across other departments? Do we need a What Works Centre for AI? (4/5)
comment in response to post
The real issue here is transparency. It's not easy for anyone - government departments included - to understand what types of AI are being used where, and to what effect. (3/5)
comment in response to post
These quiet failures sit in stark contrast to the highly optimistic rhetoric around AI. Are the right lessons being learnt and acted upon? The Government shouldn't just be looking for examples to scale (to "mainline"); it needs to learn from failures - and that requires openness. (2/5)
comment in response to post
Strongly agree on the need to do the boring basics, data curation etc., to get this right.
comment in response to post
Finally, we continue to caution that big productivity numbers around the adoption of AI should be taken with a heavy dose of salt. AI is not plug and play, and benefits from its successful adoption will take work and incur costs far beyond the sticker price. (8/8) www.jrf.org.uk/ai-for-publi...
comment in response to post
And we're keen for the Government to expand its focus beyond a narrow subset of extreme risks from AI, and offer clear strategies to mitigate present harms: reinforcing systemic inequalities and accelerating power imbalances arising from AI's adoption and use. (7/8)
comment in response to post
There's real possibility for good in the National Data Library, but the Government is going to have to put public voice and public legitimacy at the heart of its plans for it to succeed and deliver public benefit, rather than trigger anxiety about data sharing. (6/8)
comment in response to post
Greater adoption of AI requires public licence. Ministers should be attentive to the lesson from GPDPR - where we saw 3 million people pull back their health data for fear of how it might be used, particularly by private companies. (5/8)
comment in response to post
Many fall into the trap of seeing AI as needing exceptional freedom to succeed. But as @mbirtwistle.bsky.social has pointed out, it's surely right to want regulation around AI akin to cars and food safety - both to protect the public and to give them confidence to buy and use. (4/8)
comment in response to post
The Government's taking a big bet on AI - hoping it will boost the economy and improve public services. It's much more likely to deliver if it's on firm foundations. And that means ensuring safety and public acceptability. (3/8)
comment in response to post
Much of the plan will require careful implementation to succeed, but I agree with our Director @gaiamarcus.bsky.social that "there will be no bigger roadblock to AI’s transformative potential than a failure in public confidence." www.adalovelaceinstitute.org/news/ai-oppo... (2/8)
comment in response to post
Massive kudos to all the civil servants past and present that have pushed and pushed to get to this point. An incredible achievement and milestone.
comment in response to post
Also, they haven't published all responses. We submitted a return and got an email today to say it will be published in the next tranche in mid-January.
comment in response to post
Great to know!
comment in response to post
8️⃣Being transparent isn’t easy – and it’s especially hard with evolving tools and terminology. But it’s worth monitoring how well the transparency register is working, and where it could be strengthened to ensure both the public and the public sector have confidence in how AI is being rolled out.