giadapistilli.com
Principal Ethicist @hf.co | Philosophy Ph.D. @sorbonne-universite.fr
208 posts 2,363 followers 242 following
Regular Contributor
Active Commenter
comment in response to post
The answer may lie precisely in our ability to ask ourselves this question. AI excels at the "how," but it is humans who define the "why." Our role is not to compete with machines, but to steer their development toward what is meaningful for our society.
comment in response to post
Curious to hear your thoughts: how can we balance technological innovation with meaningful user sovereignty over digital identity?
comment in response to post
Should we reimagine consent for the AI age? Perhaps we need dynamic consent systems that evolve alongside AI capabilities, similar to how healthcare transformed from physician-centered authority to patient autonomy.
comment in response to post
The "consent gap" in AI is real: while we can approve initial data use, AI systems can generate countless unforeseen applications of our personal information. It's like signing a blank check without knowing all possible amounts that could be filled in.
comment in response to post
The most moving moment was when a high school student confided to me that my path in philosophy and AI ethics had inspired her to take the same journey. It was worth it, if only for having lit that spark in a young mind.
comment in response to post
About Open-R1: huggingface.co/blog/open-r1
comment in response to post
Exactly! Very good point.
comment in response to post
Yes on both your fears. Regarding your last point, some people are living a very rich social life but still turn to “artificial relationships” — with all their dangerous pitfalls.
comment in response to post
Looking forward to reading it!
comment in response to post
I suggest you get familiar with my research and my public statements before assuming things from a single post.
comment in response to post
Well, that’s rude. I am just asking questions here. You can scroll away.
comment in response to post
Well, it’s not the case. I am just asking questions; I am not defending one idea or another.
comment in response to post
The thing is, I am not sure that would be enough. People get attached even when they consciously know they're dialoguing with something inanimate. We do agree on the anthropomorphizing stuff; what I was questioning here was the authenticity of those human feelings.
comment in response to post
Fascinating! Can you explain to me what "xorientation" is?
comment in response to post
Maybe. But I am unsure about the "it can't give you anything back" bit. People feel unjudged, and it's literally talking back to you. It is a simulation, but aren't some person-to-person relationships also simulated and unrequited?
comment in response to post
Apparently I am not a competent ethicist if I still do philosophy.
comment in response to post
You assume I don't know things just because I am asking broader philosophical questions, which call for reflection and don't need an engineering answer. Philosophy is meant to question what seems obvious.
comment in response to post
I've been in the AI industry since 2019, thanks for the mansplaining. I am a philosopher first, so of course I am going to question those things. No need to be aggressive here.
comment in response to post
It's exhausting, really.
comment in response to post
Totally! I have been there since 2014, and if I didn't need it for work, I'd have deleted my account already.
comment in response to post
@giadapistilli.com made that point in @kashhill.bsky.social's piece. AI chatbots "string[] words together in an unpredictable manner ... and it’s impossible for moderators to imagine beforehand every possible scenario." Which is why we can't trust them as our agents, let alone agents for our heart.
comment in response to post
As an ethicist, I believe we need to thoughtfully examine how these tools are reshaping human relationships, while keeping in mind there are always companies behind these machines, working to drive engagement and revenue.
comment in response to post
It's fascinating to see Dr. Brandon's insights on the neurological basis of these connections, while Prof. Inzlicht's research reveals how AI can sometimes show more empathy than human crisis responders.
comment in response to post
12/12 Want to know more? Read the full blog post here: huggingface.co/blog/ethics-...
comment in response to post
11/12 This isn't just theoretical - at @hf.co we're already building tools for responsible agent development, including smolagents, the AI Cookbook, and specialized interfaces.
comment in response to post
10/12 What's needed next:
- Rigorous evaluation protocols
- Better understanding of societal impacts
- Improved transparency
- Clear disclosure mechanisms
- Open source and community-driven development
comment in response to post
9/12 Our main recommendation: don't develop fully autonomous agents. The ability to write and execute unrestricted code is too risky. Instead, focus on semi-autonomous systems with clear constraints and human oversight.
comment in response to post
8/12 Particularly thorny: making agents more "human-like" might make them easier to use, but can lead to overreliance and inappropriate trust. Plus, as agents get more interconnected, their actions can have cascading effects.
comment in response to post
7/12 For example, agent consistency could reduce human bias in decision-making. But it might also perpetuate systemic biases at scale, and tracking consistency requires extensive data collection, raising privacy concerns.