imogen-parker.bsky.social
Associate Director, Ada Lovelace Institute. Mainly posting about AI, society and social policy. Previously IPPR, Citizens Advice, 5Rights, Fabian Society. image cred: Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI 2.0 / CC-BY
41 posts 976 followers 657 following

X has become an extraordinary case study in irresponsible innovation. The lesson is that carelessness does not just make a technology less nice, it also makes it less useful.

I know everyone was hoping for a quiet Friday, but... UK Gov has announced a step back from AI safety, by rebranding AISI as the AI 'Security' Institute. Really thoughtful commentary from colleague @mbirtwistle.bsky.social here & short 🧵below on key changes www.ukauthority.com/articles/ai-...

Looking forward to the (only marginally less glamorous) UK leg of the Paris AI Summit today. Speaking with a great line up on AI in public services - come say hi if you're around. aifringe.org @adalovelaceinst.bsky.social

The Paris AI Action Summit is almost upon us, but will it actually move the needle towards AI that works for people and society? In our latest blog post we share some of our key questions for the Summit ⬇️ www.adalovelaceinstitute.org/blog/a-piece...

Two pieces of essential reading from Ada colleagues in advance of the AI Summit. First, the excellently titled 'Delegation Nation', a primer for policymakers on Advanced AI Assistants by @juulsm.bsky.social @harryfarmer.bsky.social www.adalovelaceinstitute.org/policy-brief...

In the run-up to the French Summit, we've released a briefing on advanced AI assistants and why they should be front and center in the discussions around safety, governance and regulation. See below for details ⬇️

In Paris listening to the fifth talk in a row on global AI risks to humanity, and not one mention yet of runaway AI energy demand pushing us over the climate brink

EU ready to use 'bazooka' trade tool against Big Tech in retaliation against Trump www.ft.com/content/7303...

What are we really talking about when we talk about AI? Latest piece for the Global Government Forum 👇 www.globalgovernmentforum.com/what-are-we-...

We are doing an exciting new inquiry looking at 14-24 year olds' experiences of growing up with technology. We are looking for a partner to support a major piece of peer research as part of the programme. Please share www.adalovelaceinstitute.org/our-work/res...

🤔 The 🇩🇰 government asked us: how can democratic control over tech giants be strengthened? We - the government's expert group on tech giants - outline 7️⃣ principles and a set of guidelines for tackling economic, democratic, and security vulnerabilities:

"The regulation and governance of AI in EdTech has not kept pace with the evolution of the products, leaving pupils and schools exposed to potentially risky technologies being deployed." Hugely important work on AI and edtech in schools from @adalovelaceinst.bsky.social and @nuffieldfoundation.org

I am delighted our education and AI landscape review ‘A learning curve?’ has been published. 🎉 @adalovelaceinst.bsky.social @nuffieldfoundation.org I hope this work will generate further research and evidence to support schools, teachers, pupils and policy makers going forwards.

It was a pleasure to be a part of reviewing this report, and the findings could not come at a more important moment. As we head to the AI Action Summit, global governments face a choice - continue to let the evidence of AI's risks and harms pile up unaddressed, or take action to protect people.

You don't have to see this as a bubble to note the astonishing fragility of expectations in the AI industry www.bbc.co.uk/news/article...

Great to see this out in the open. Dropping AI pilots isn't a bad thing. It underscores the importance of testing new tech before rolling it out. Particularly in welfare, where the risks of amplifying inequalities and causing real-world harm are significant. But... (1/5) tinyurl.com/6tn2r24x

this is a dystopia. these are the things that happen in dystopias. if you'd put into an SF novel 20 years ago that people helped others avoid brutal police raids by holding up handmade signs in online video while cheerily talking about cats... that would be a dystopian novel

Interested in the UK govt’s recent moves on AI? Have questions about how we should respond to the harms, risks and problems these systems pose? Come read this briefing by Demos, the Ada Lovelace Institute and Connected by Data

Bookmarking another AI case study which highlights real-world harms arising from a rush to roll out. AI can do incredible things - which can be socially beneficial. We're much more likely to see positive outcomes if we have effective regulation to balance profit incentives. tinyurl.com/2rythyeu

Uhm, sorry, what? Starmer thinks AI is going to DOUBLE UK productivity in less than 5 years?

Does he think you can just rock up to the jobcentre, say you're depressed and get a load of benefits? He knows that's not the case, so he's purely scapegoating

An additional point on AI is that the AI industry, beyond the very few makers of the biggest models, really wants regulation. They want assurance. They need their work to be trustworthy, otherwise they won't have the certainty they need to invest.

The UK AI Opportunity Plan and Gov's response is now live... There's lots to comb through, but a few first thoughts below 🧵(1/8). www.gov.uk/government/p... @adalovelaceinst.bsky.social

right?

Good stuff in the Pat McFadden speech about public sector reform. Test and learn / accepting uncertainty in particular. But 'tour of duty' construct is deeply unhelpful for anyone looking to build a career in government, as I wrote pre-election about my experience: richardpope.org/blog/2024/03...

A window into the black box - longer-form thoughts on the latest batch of algorithms published by the UK Government medium.com/@imogencathe... @adalovelaceinst.bsky.social and ht @halcyene.bsky.social

Pleased to share the latest version of my paper with Arthur Spirling and @lexipalmer.bsky.social on replication using LMs. We show: 1. current applications of LMs in political science research *don't* meet basic standards of reproducibility...