techpolicypress.bsky.social
Technology + democracy. Visit https://techpolicy.press Join our newsletter: https://techpolicy.press/newsletter Opinions do not reflect the views of Tech Policy Press. Reposts do not equal endorsements.
1,408 posts 20,893 followers 1,070 following

"These are not black box algorithms. They are design decisions: conscious choices about what goals to prioritize, what data sources to use, and what safety measures to include."

Despite a massive judgment against NSO Group, developer of the notorious Pegasus spyware, the battle against surreptitious surveillance software is far from over. For Tech Policy Press, Tim Bernard reviews legal and policy avenues for combating spyware:

The public’s waning trust in both government and technology companies has deepened. To rebuild it, public interest technologists need to double down on putting communities at the center of how our digital systems are designed and used, Lilian Coral writes.

When it comes to platform governance, we must reframe the conversation towards mandating transparency and accountability, with a specific focus on the internal recommendation engines, writes Tech Policy Press contributing editor Amber Sinha:

Concerns about AI chatbots delivering harmful, even profoundly dangerous advice or instructions to users are growing. What legal and policy interventions are possible? Justin Hendrix asked Columbia Law School's Clare Huntington, Stanford's Robert Mahari, and Tech Justice Law Project's Meetali Jain:

Personalized power has always been central to capitalist enterprises, writes Andressa Michelotti. The difference today lies in the scale and depth of power. Today’s tech elite… dictate the terms of participation in digital life and shape the public discourse, all without democratic accountability.

Myth-making is a crucial aspect of the AI industry, and ‘black boxes’ are woven into the stories its leaders tell, writes Tech Policy Press fellow Eryk Salvaggio. The problem is that those of us outside of the AI industry don’t know what rules they are following. That’s a policy decision, he writes.

Amidst shifting European and international politics, there is growing uncertainty over the future of Europe’s AI Act and questions over digital sovereignty. Tech Policy Press associate editor Ramsha Jahangir spoke to Kai Zenner, an advisor to German MEP Axel Voss, to learn more about what to expect:

Highly recommended!

My latest @techpolicypress.bsky.social article:

My colleagues and I penned this brief piece on thresholds to help steer better governance as AI capabilities reach worrying milestones. Further recommendations are discussed in our white paper here: arxiv.org/abs/2503.05812

For so long, people thought that requiring logic, accountability, and proof was so inevitable that it needed "disruption." Now we're going to relearn how fragile the architectures for realizing any intention bigger than (or averse to) short-term capital and political power grabs really are.

The US government’s interest in collaboration to advance AI safety is over, writes Serena Oduro. Years of research aimed at addressing well-documented AI harms are being cast by the wayside as innovation is framed as the only concept that matters.

With Bill Gates winding down his foundation’s giving, the era of billionaire philanthropy is drawing to a close. Its likely successor, led by Elon Musk and other tech moguls, could blur the lines between private profit and public interest, Jeremy McKey writes.

AI is moving fast—and we’re already nearing dangerous risk thresholds, write UC Berkeley Center for Long-Term Cybersecurity AI Security Initiative researchers. If frontier models cross the line, we must be ready to say no. Independent oversight is essential to make that call.

Thomas Meier and Kristina Khutsishvili contend that the challenge to democracy posed by concentrated digital power is not merely institutional, economic, or ethical, but a disruption of the very conditions for democratic citizenship.

The emphasis around the globe is increasingly on rapid adoption of AI rather than safety and regulation. Amid that trend, there's a strong case for Canada to use its G7 presidency to push for increased international governance, Matthew da Mota, Christo Hall and Emily Osborne write.

If we care about the future of public technology, we can’t afford to ignore its environmental price tag, writes Pupak Mohebali. It’s time for policymakers to stop seeing AI as an abstract cloud of code and start treating it like the real-world infrastructure it is, she says.

Our weekly newsletter is out, and we covered so much this past week! Subscribe to stay on top of all things global tech policy! buff.ly/P9nhcJn

Anthropic CEO Dario Amodei’s recent Times op-ed on AI regulation seems like a reasonable middle ground. But it is also a reminder of a threat on the horizon: an industry-scripted federal standard that would effectively eclipse state legislation, write Kate Brennan, Sarah Myers West, and Amba Kak.

News organizations are racing to integrate AI—but balancing safety, accuracy & trust is tough. Constrained bots are safer, but less useful. More flexible ones are risky. There are no easy answers, only hard tradeoffs, write Elise Silva, Madeline Franz, and Sodi Kroehler:

"What’s missing is moral clarity—a more profound sense of why we are building these technologies, and for whom. We need something more enduring than a risk framework: a moral lighthouse."

"Democracy does not only require rules and representation; it requires citizens capable of virtue. Reclaiming that capacity is the most urgent countermeasure to the rising empire of the tech elite."

After a recent court order, OpenAI is now required to retain the very data many of its users believed to be most private. This introduces serious privacy risks, especially for vulnerable users like victims and survivors of domestic violence, Belle Torek writes.

AI infrastructure may exacerbate the harms that fossil fuel industries have imposed on communities of color, excusing its extractive and damaging impacts in the name of progress and innovation, writes Kapor Foundation tech policy associate Cecilia Marrinan.

While it is critical to prepare for the effects of AI on employment, these efforts should be grounded in present-day evidence rather than speculative futures, writes Natalia Luka. buff.ly/WYMovSC

A raft of new search powers in a recent bill focused on border security "do more to expand the state’s power to access private data in Canada than any law in the past decade," writes Robert Diab.

Our weekly newsletter drops tomorrow! Subscribe to stay updated on all things global tech policy! buff.ly/P9nhcJn

If AI deepens inequality, disempowers people, or displaces civic participation, it is not delivering the future we want—no matter how advanced the technology may be, or how much money some individuals can make from it, writes Michael L. Bąk. buff.ly/jIps6RS

OpenAI must commit to more than what it has disclosed so far for its restructuring to preserve the public interests that it originally promised to steward, write Michael Dorff and Tyler Whitmer:

With their extreme wealth, control over information infrastructures, and proximity to political power, the billionaire owners of Big Tech companies can shape the information ecosystems that democracies depend on to make decisions, writes Jamie Hancock, digital policy researcher at Demos:

Courts are recognizing that manufacturers of such digital products have the same responsibilities to provide safe products to their users as manufacturers of physical goods, especially when those users are children, writes Ariel Fox Johnson.

Why do Trump and some of his most fervent tech billionaire backers want to take land and create privately-controlled zones? Gil Duran says we’re watching the rise of a new anti-democratic extremism—networked, crypto-financed, and cloaked in the language of freedom. buff.ly/eOk0L2j

Six months into 2025, democracy is under threat. Justin Hendrix, Ramsha Jahangir, and Dean Jackson spoke to experts from around the world to understand how they make sense of this moment. buff.ly/n8zs8y1

Three pieces on the politics of information worth reading in connection to one another - First, political suppression - @prateekwaghre.com in @techpolicypress.bsky.social on how "a large part of information suppression on the internet is state-driven" www.techpolicy.press/indias-infor... 1/3

Interested in reporting on data centers? @pulitzercenter.bsky.social, @lighthousereports.com & @techpolicypress.bsky.social are trying to understand what support journalists need to rigorously report on data centers. Take the survey: https://twp.ai/9PThXW