felixsimon.bsky.social
Research Fellow in AI and News, Reuters Institute, Oxford University | Research Associate & DPhil, Oxford Internet Institute | AI, news, misinfo, tech, democracy | Affiliate Tow Center, CITAP | Media advisor | My views etc… https://www.felixsimon.net/
1,077 posts 7,198 followers 776 following
Regular Contributor
Active Commenter
comment in response to post
🫡
comment in response to post
Fair, but the problem here, in my view, is more often that people share/cite studies that support their argument or point of view REGARDLESS of quality, without having read them properly (and sometimes stretching them beyond what they say), including bad papers that have undergone peer review
comment in response to post
And to the argument: “But what if people share non-peer-reviewed papers or they get picked up in the media?” (the assumption being that low-quality research gets widely shared)…
comment in response to post
And before people say “ah but it’s different in maths and physics”…Econ as a social science does it too and it seems to work? Pol Sci same?
comment in response to post
The two times I’ve tried my hand at a preprint so far were bruising from the “getting it into a journal” perspective, but the papers themselves were better for it because the open feedback was 1) more numerous, 2) more varied and 3) no less scathing/critical
comment in response to post
The fact that some elements of it aren’t flawless, that certain grand predictions haven’t materialised, and that “AI” (as a field and technology) is beset by deep and real issues doesn’t equate to AI being “just hype” – an assumption I encounter far too frequently elsewhere.
comment in response to post
Seeing examples and side-by-side comparisons with what was possible only two years ago and what is possible now really makes this obvious. And the volume of applications, both existing and nascent, is impressive.
comment in response to post
Be that as it may, it was hard not to leave with the feeling that the AI space is undeniably set to expand (and yes, I am aware that this is what happens when you go to an industry event).
comment in response to post
3) The focus is overwhelmingly on broader markets and enterprise applications; news as an industry holds little appeal for the majority (and isn’t really talked about either)
comment in response to post
(2) Regulation and state capacity are viewed with deep skepticism. What academics and other sectors (like creators and others affected by AI, but often with little say) highlight as crucial issues often appear as mere nuisances to many AI practitioners – copyright being an example
comment in response to post
That said, the “vibe” is different: (1) Optimism reigns supreme. The prevailing sentiment is markedly more bullish than what I am used to.
comment in response to post
And lo and behold, the folks I chatted with & the speakers I listened to, from across the U.K. and EU AI ecosystem, were as reasonable and down to earth as the delegates you'd encounter at, say, a journalistic industry conference (contrary to what you read in some news reporting)
comment in response to post
So, what’s the antidote? You go, you listen, you speak to people on the ground. Agreement on all fronts isn’t necessarily the goal, but learning absolutely is.
comment in response to post
Just as parts of the AI community sometimes engage only superficially with, say, social science research, the inverse holds true as well. But interesting as it is to build your picture by mainly following the likes of Sam Altman, this, of course, also fosters a skewed perspective
comment in response to post
There’s a distinct (sometimes snobby) disinterest, it seems, in truly engaging with this community beyond social media follows of top “AI leaders” and reading a bit of tech coverage.
comment in response to post
…often seems to suffer from a lack of exposure to (and engagement with) the actual builders and appliers of AI (with some notable exceptions).
comment in response to post
As in: What motivates someone to move effortlessly (and without deep expertise) from pontificating about e.g. geopolitics to populism, to Covid, to AI? Are they convinced by the wares they sell? And who pays for this so-called "expertise" and why?
comment in response to post
hahahha
comment in response to post
Like sure this is all awful at the end of the day but let the people have some fun for once.
comment in response to post
Affirmative
comment in response to post
(And what Meta can do, Google can do, too. Or at least one would assume so)
comment in response to post
Seems like this is true for advertising with the slight wrinkle that it might be even easier here because “in advertising, we don't have to care about truth” as one executive recently put it to me.
comment in response to post
I argued (in what feels like ages ago) that AI in news has expanded platform control not just over the means of distribution but also, to a degree, over production.
comment in response to post
Network effects are gonna network effect I guess
comment in response to post
The best I could muster on a Thursday eve after a glass of white wine 😅
comment in response to post
A big thanks to our speakers Gill Whitehead, Adam Mahdi, @chrismoranuk.bsky.social , @Jonathan Hvithamar Rystrøm, Zeynep Pamuk & Scott Hale, our many guests, and to @ballioloxford.bsky.social, @michelledisser.bsky.social & @amyrossarguedas.bsky.social for all the logistical support. Stay tuned.
comment in response to post
The upshot? Agentic AI promises dazzling productivity gains (whether they materialise remains to be seen), but without good governance and public-interest guard-rails we could be headed for trouble.
comment in response to post
And the public sphere does not get spared either, with agents potentially personalising news to an ‘audience of one’, threatening publishers’ business models (and perhaps shared democratic deliberation) in the same breath.
comment in response to post
Regulators, meanwhile, are scrambling; a “pick-and-mix” mosaic of principles, sector codes and voluntary standards leaves vast latitude to the very firms building AI agents (and the underlying system).
comment in response to post
Some of the trade-offs involved are no longer theoretical: big banks now run systems that draft complaint letters, chase fraud and may soon pre-empt customers’ gripes. Healthcare providers are thinking about systems that could outperform doctors on some diagnoses…but not on all
comment in response to post
From thermostats to agentic AI systems, we traced the spectrum of what “agentic” means and learned that greater capability often walks hand-in-hand with greater opacity, forcing us to weigh usefulness against a thicket of risks.
comment in response to post
Also I guess everyone can now see what I’m up to next week but hey 🤷‍♂️
comment in response to post
Makes one wonder if the deal was struck with Amazon because: 1) They offered more $$$? 2) They offered better conditions for the NYT? 3) They are not seen as cannibalising the NYT's traffic (unlike e.g. Google, OpenAI, Perplexity)? 4) All or some of the above together