evijit.io
Applied Policy Researcher at HuggingFace 🤗 and Researcher at University of Connecticut. AI Ethics/Safety.
167 posts 2,291 followers 661 following
Regular Contributor
Active Commenter

I'll be at the AAAI Conference in Philadelphia this week, where I'm a co-author on two accepted papers: 🧵

I had a couple of press mentions this week 🧵: 1. I spoke to Shraddha Goled for Tech Circle on the harms of openwashing and the need for better open standards! We also talked about @hf.co's Open R1 Project.

New work: Protecting Human Cognition in the Age of AI - with Anjali Singh, Karan Taneja, and Klara Guan. We argue that overreliance on GenAI models disrupts traditional learning pathways. We suggest best practices for better teaching, testing, and learning tools to restore these paths:

Last Friday, I spoke on a panel at the MIT Sloan AI Conference. I discussed the broken AI harm reporting landscape, the importance of evals, safe harbors, structured disclosures, and our proposed Coordinated Flaws Disclosure framework as a path forward. Great questions, and thanks for having me!

Thanks for covering our paper, Jasper! #Agents #Autonomy Read on Machine 👇

Contrary to popular belief, unlike my colleagues I’m not in Paris for the AI Action Summit because I’m in India for my cousin’s wedding and taking (literally) 500 pictures per day

Congrats to the entire @roost-tools.bsky.social team for their successful launch! It's been fantastic to see this project take shape; open tools are very much needed if we're to develop technology that is safer for all. Glad to be a partner with @hf.co 🤗 huggingface.co/blog/yjernit...

I love research from first principles: one of my students was working on an applied research project with limited data and compute, and ended up inventing a cool new fine-tuning scaling lemma as a side quest! Very DeepSeek energy 🙂‍↕️

I’ll be speaking about Coordinated Disclosures at a panel at the @mitsloan.bsky.social AI Conference tomorrow morning! See you there :)

Position: Fully Autonomous AI Agents Should Not be Developed. New paper with @mmitchell.bsky.social, @sashamtl.bsky.social and @giadapistilli.com. Do read the full paper, but I wanted to do a summary thread to talk about what we did :)

A group of us came together to challenge the dominant narrative of AI research - paper out today. We posit that “AGI” is as nebulous to define as its supposed benefits, and we argue that principled scientific, engineering and societal needs should drive AI research instead. Read 👇

New piece out! We explain why Fully Autonomous Agents Should Not be Developed, breaking "AI Agent" down into its components & examining them through ethical values. With @evijit.io, @giadapistilli.com and @sashamtl.bsky.social huggingface.co/papers/2502....

I was quoted in a new article on @businessinsider.com about the Stargate project and competitive hardware moats. I reject the false premise of compute = utility. Teams like DeepSeek have made progress creatively with fewer resources, made possible via openness and collaboration (on @hf.co)!

New article! I had a really nice chat with @sharongoldman.bsky.social about Stargate. We talked about the importance of public AI infra, the concentration of power, the need for openness, and how this mad dash for AGI can siphon resources from issues that can be solved with tech right now.

Just submitted my first ever last author paper to @facct.bsky.social - reviewers please don’t let it flop 🥺❤️

Prima facie, the Stargate Project ($500B in private funding to boost AI infrastructure in the US, with OpenAI, Microsoft, Oracle, SoftBank, and others as key players) seems great...

🧵 1/12 New blog post alert! With @mmitchell.bsky.social, @evijit.io and @sashamtl.bsky.social we talk about what appears to be one of the biggest shifts happening in AI: agents. These aren't your typical chatbots - they're systems that can take autonomous actions based on high-level goals.