notmahi.bsky.social
Building generally intelligent robots that *just work* everywhere, out of the box, at NYU CILVR. Previously at MIT and visiting researcher at Meta AI. https://mahis.life/
13 posts · 1,792 followers · 441 following

Ever struggled with multi-sensor data from cameras, depth sensors, and other custom sensors? Meet AnySense—an iPhone app for effortless data acquisition and streaming. Working with multimodal sensor data will never be a chore again!

We just released AnySense, an iPhone app for effortless data acquisition and streaming for robotics. We leverage Apple’s development frameworks to record and stream: 1. RGBD + Pose data 2. Audio from the mic or custom contact microphones 3. Seamless Bluetooth integration for external sensors

Just found a new winner for the most hype-baiting, unscientific plot I have seen. (From the recent Figure AI release)

One reason to be intolerant of misleading hype in tech and science is that tolerating small lies and deceptions is how you get tolerance of big lies.

Can we extend the power of world models beyond just online model-based learning? Absolutely! We believe the true potential of world models lies in enabling agents to reason at test time. Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.
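To make the "reasoning at test time" idea concrete, here is a hedged sketch (not the actual DINO-WM code) of one common form it can take: random-shooting planning with a learned world model over pre-trained visual features. The encode and world_model functions below are stand-in placeholders, purely illustrative.

# Hedged sketch: test-time planning via random-shooting MPC in a latent space.
# `encode` and `world_model` are placeholders, not the DINO-WM implementation.
import numpy as np

def encode(image):
    # Placeholder for a pre-trained visual encoder (e.g., frozen DINO features).
    return np.asarray(image, dtype=np.float64).ravel()

def world_model(latent, action):
    # Placeholder one-step latent dynamics model: predicts the next latent state.
    return latent + 0.1 * np.tanh(action).sum() * np.ones_like(latent)

def plan(current_image, goal_image, horizon=5, n_samples=256, action_dim=2, seed=0):
    """Return the first action of the sampled sequence whose predicted final
    latent lands closest to the goal latent."""
    rng = np.random.default_rng(seed)
    z0, z_goal = encode(current_image), encode(goal_image)
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    best_cost, best_first_action = np.inf, None
    for seq in candidates:
        z = z0
        for a in seq:                      # roll the world model forward
            z = world_model(z, a)
        cost = np.linalg.norm(z - z_goal)  # distance to goal in latent space
        if cost < best_cost:
            best_cost, best_first_action = cost, seq[0]
    return best_first_action

print("first planned action:", plan(np.zeros((4, 4)), np.ones((4, 4))))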

My advisor warned me that academics trend towards bitterness. He encouraged me to intentionally resist this, remember where I came from, and never forget the privilege of getting to spend a life working with knowledge and ideas. He, too, said that bitterness and resentment are easy.

New paper! We show that by using keypoint-based image representation, robot policies become robust to different object types and background changes. We call this method Prescriptive Point Priors for robot Policies or P3-PO in short. Full project is here: point-priors.github.io

Modern policy architectures are unnecessarily complex. In our #NeurIPS2024 project called BAKU, we focus on what really matters for good policy learning. BAKU is modular, language-conditioned, compatible with multiple sensor streams & action multi-modality, and importantly fully open-source!

Since we are nearing the end of the year, I'll revisit some of our work that I'm most excited about from the last year, and maybe give a sneak peek of what we are up to next. To start off: Robot Utility Models, which enables zero-shot deployment. In the video below, the robot hasn't seen these doors before.

I'd like to introduce what I've been working on at @hellorobot.bsky.social: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration. Check it out: github.com/hello-robot/... Thread ->

Turns out Aria glasses are a very useful tool for demonstrating actions to robots: based on egocentric video, we track dynamic changes in a scene graph and use the representation to replay or plan interactions for robots 🔗 behretj.github.io/LostAndFound/ 📄 arxiv.org/abs/2411.19162 📺 youtu.be/xxMsaBSeMXo

A reminder for folks in financial need: many PhD applications have application fee waivers, those waivers are not super onerous, and they are usually granted (at least at the two schools I'm familiar with). Please take advantage of them.

On one of the first projects I supervised in my PhD, a student repeatedly ignored suggestions to commit and then accidentally deleted the project at the end of the semester. Please use git! There are even "fun" games you can use to learn it: learngitbranching.js.org

Interesting article but the author drank the Kool-Aid and never sought out other viewpoints: “Foundation models like GPT-4 have largely subsumed [previous] models that help robots with planning and vision, and locomotion and dexterity will probably soon be subsumed, too.”

I'll be presenting AnySkin at the Stanford Center for Design Research today at 2pm! Stop by for a chat and try the sensor out! More info: any-skin.github.io

A reminder that many feeds here are non-algorithmic, so reposting is more helpful than it is on Twitter.

This week's #PaperILike is "Robots for Humanity: In-Home Deployment of Stretch RE2" (Ranganeni et al., HRI 2024). This is probably the most inspiring robot video/demo that I've ever seen. Video: www.youtube.com/watch?v=K2U7... Paper: dl.acm.org/doi/abs/10.1...

PSA: Can we use more bar charts in ML papers? I can't recall the last time I wanted to compare dozens of numbers in a table to two decimal places. A visualization makes it much clearer whether claimed differences are significant.
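For illustration, a minimal matplotlib sketch of the kind of plot meant here: a bar chart with error bars, where overlapping bars make it obvious whether a claimed difference is significant. The method names and numbers are made up.

# Minimal sketch: a bar chart with error bars instead of a table of numbers
# to two decimal places. All values below are illustrative, not real results.
import matplotlib.pyplot as plt
import numpy as np

methods = ["Ours", "Baseline A", "Baseline B"]
success_rate = np.array([0.82, 0.78, 0.61])   # mean over seeds (made up)
std_err = np.array([0.03, 0.04, 0.05])        # standard error over seeds (made up)

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(methods, success_rate, yerr=std_err, capsize=4)
ax.set_ylabel("Success rate")
ax.set_ylim(0, 1)
ax.set_title("Error bars show whether differences are significant")
fig.tight_layout()
plt.show()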

Hi, I am new here and would love to interact with folks interested in robotics & AI. A bit about myself: I currently run a robotics lab at NYU, where we are building general-purpose robots. Some of our latest projects are on www.lerrelpinto.com. If you have questions, ask away!

Very excited about this new project, DynaMem. It allows our robots to function in previously unseen environments, performing long-horizon manipulation tasks. Most importantly, it *generalizes*: you can try it out in a wide variety of homes and on different objects. (4x video)