felixsimon.bsky.social
Research Fellow in AI and News, Reuters Institute, Oxford University | Research Associate & DPhil, Oxford Internet Institute | AI, news, misinfo, tech, democracy | Affiliate Tow Center, CITAP | Media advisor | My views etc…
https://www.felixsimon.net/
1,059 posts
7,195 followers
774 following
Regular Contributor
Active Commenter
comment in response to
post
As in: What motivates someone to move effortlessly (and without deep expertise) from pontificating about e.g. geopolitics and populism to Covid to AI? Are they convinced by the wares they sell? And who pays for this so-called "expertise", and why?
comment in response to
post
hahaha
comment in response to
post
Like, sure, this is all awful at the end of the day, but let the people have some fun for once.
comment in response to
post
Affirmative
comment in response to
post
(And what Meta can do, Google can do, too. Or at least one would assume so)
comment in response to
post
Seems like this is true for advertising, with the slight wrinkle that it might be even easier here because, as one executive recently put it to me, "in advertising, we don't have to care about truth".
comment in response to
post
I argued (in what feels like ages ago) that AI in news has not just expanded platform control over the means of distribution but also, to a degree, over production.
comment in response to
post
Network effects are gonna network effect I guess
comment in response to
post
The best I could muster on a Thursday eve after a glass of white wine 😅
comment in response to
post
A big thanks to our speakers Gill Whitehead, Adam Mahdi, @chrismoranuk.bsky.social , @Jonathan Hvithamar Rystrøm, Zeynep Pamuk & Scott Hale, our many guests, and to @ballioloxford.bsky.social, @michelledisser.bsky.social & @amyrossarguedas.bsky.social for all the logistical support. Stay tuned.
comment in response to
post
The upshot? Agentic AI promises dazzling productivity gains (whether these materialise remains to be seen), but without good governance and public-interest guard-rails we could be headed for trouble.
comment in response to
post
And the public sphere does not get spared either, with agents potentially personalising news to an ‘audience of one’, threatening publishers’ business models (and perhaps shared democratic deliberation) in the same breath.
comment in response to
post
Regulators, meanwhile, are scrambling; a "pick-and-mix" mosaic of principles, sector codes and voluntary standards leaves vast latitude to the very firms building AI agents (and the underlying systems).
comment in response to
post
Some of the trade-offs involved are no longer theoretical: big banks now run systems that draft complaint letters, chase fraud, and may soon pre-empt customers' gripes. Healthcare providers are thinking about systems that could outperform doctors on some diagnoses…but not on all.
comment in response to
post
From thermostats to agentic AI systems, we traced the spectrum of what "agentic" means and learned that greater capability often walks hand in hand with greater opacity, forcing us to weigh usefulness against a thicket of risks.
comment in response to
post
Also I guess everyone can now see what I'm up to next week, but hey 🤷‍♂️
comment in response to
post
Makes one wonder if the deal was struck with Amazon because:
1) They offered more $$$?
2) They offered better conditions for the NYT?
3) They are not seen as cannibalising the NYT's traffic (unlike e.g. Google, OpenAI, Perplexity)?
4) All or some of the above?
comment in response to
post
Shocked how well it works
comment in response to
post
The latter
comment in response to
post
Yup. Almost killed two recent publications of mine at two different journals, both of which said they accepted them in their guidance 🙃 They got through thanks to kind editors, but it was a fight.
comment in response to
post
Yup, agree with that, and would argue that both of these things are true at the same time: individual responsibility for your actions AND institutional mechanisms that prevent such cases :)
comment in response to
post
Thanks, although I would say that journalists have some individual responsibility, too, in how they use AI systems. E.g. in this case, the fact that it wasn't (?) intentional & that the freelancer seemingly did not know that AIs could hallucinate does not fully exonerate them from checking the output.
comment in response to
post
Thank you, Jan. That’s very kind of you to say.
comment in response to
post
The Sun-Times at least seems to have drawn lessons from this (chicago.suntimes.com/press-room/2...), which strike me as reasonable and well-advised.
comment in response to
post
…for some tasks – but here is the rub: for some tasks, but not for others. Helping journalists and the public understand and distinguish one from the other needs to be part of the way forward.
comment in response to
post
Railing against their general use, as some do, is fine – we need criticism of the political economy around them (copyright, dependency, and all that jazz) and of their irresponsible use. But at the same time, people (including journalists) are using them, and they are clearly useful…
comment in response to
post
My initial reaction to this case was: "Really, this is still happening?" Quite indicative of my own blind spots. I had simply assumed that most journalists would know by now that this particular use of an AI chatbot comes with risks. Well, I was wrong!
comment in response to
post
What can we learn from this case? We really need better education of journalists at all levels (from freelancers to executives) about how these systems work and for what tasks (and where they fail). This does not just apply to hallucinations but also to questions around sensitive data.
comment in response to
post
Yes, the AI created the errors. But that they were allowed to end up in print is down to oversight structures. This is, btw, not just an AI thing, as e.g. the 2018 case of Claas Relotius and his completely invented magazine pieces in Germany demonstrates: www.bbc.co.uk/news/world-e...
comment in response to
post
It is also too easy to just blame the technology here; the case raises broader questions about why such syndicated material is not acknowledged as originating from a third party and not subjected to more thorough verification and review.
comment in response to
post
Regardless, it demonstrates that journalists working with such tools at such a basic level, and for the generation of information (whatever one's opinion on whether these systems should be used for such tasks in news at all), must double-check the output*
(*exceptions apply)
comment in response to
post
When I was interviewed I argued that while this particular use of AI by the freelancer was concerning, we shouldn't be so quick to assume this was intentional. And by now someone has come forward and admitted that they had made an error (which is good and should be applauded).