stonet2000.bsky.social
PhDing @UCSanDiego @HaoSuLabUCSD @hillbot_ai on scalable robot learning, reinforcement learning, and embodied AI. Co-founded @LuxAIChallenge to build AI competitions. @NSF GRFP fellow
http://stoneztao.com
78 posts
2,686 followers
196 following
Regular Contributor
Active Commenter
comment in response to
post
(scripted, not autonomous), drawing of our project's logo. doubt any IL algorithm can solve this accurately.
comment in response to
post
both haha. Any problem in RL can also surface in MARL, right? But now combine that with variations in the multi-agent aspect and that’s a lot of different things
comment in response to
post
curse of dimensionality in problem choice
comment in response to
post
their project website: dextrah-rgb.github.io
comment in response to
post
now with this work, mujoco playground, maniskill/sapien etc. I think we might really just GPU sim our way to more general purpose manipulation lol
hope to share our own work doing a similar zero-shot RGB deployment with low-cost robots soon!
comment in response to
post
perhaps not maliciously intentional. E.g. one might comment on an issue and fix it, but then wait for the original poster to close the issue themselves. Inflates the issue count a bit.
comment in response to
post
i know of some instances where people purposely leave issues up to make the GitHub repo look more active
comment in response to
post
a non-empty GitHub issues page is ironically a good indicator that open source is working and people are using it
comment in response to
post
i use lettucemeet usually, will take a look at this one!
comment in response to
post
i’ve been “forced” to include videos of humanoids doing manipulation when in fact it is just standard bi-manual manipulation
as a young phd student it seems i can’t really avoid this…
comment in response to
post
works well for me as a PhD student. As much as I want to collaborate / mentor interns, I still need some days to think for myself!
comment in response to
post
this is pretty cool. Is this chart manually made by you, or AI generated?
comment in response to
post
another is i find very few Chinese students (Chinese nationals / Chinese ethnicity) on bsky. Found this kind of evident when the whole NeurIPS drama with Picard occurred; bsky was pretty quiet
comment in response to
post
Yeah it’s clear it’s not realistic. Their rigid body solver also seems insufficient, although they claim it’s a bug (which suggests their demo manipulation videos do not run on their open source code). That being said, do there exist highly accurate fluid / soft-body sims?
comment in response to
post
It’s true their fluid solver is very simple, but maybe the coupling part is a bit non-trivial. Seems for each object pair they check which solvers are used for the objects and then run specific code to handle their interactions: github.com/Genesis-Embo...
know anything about this part?
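A minimal Python sketch of that kind of per-pair dispatch, purely illustrative (the solver names and handler functions below are hypothetical, not Genesis's actual API):

# Hypothetical sketch of per-object-pair solver dispatch (not Genesis's actual code).
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    solver: str  # e.g. "rigid", "fluid", "softbody"

def handle_rigid_rigid(a, b):
    print(f"contact + friction impulses between {a.name} and {b.name}")

def handle_rigid_fluid(a, b):
    print(f"two-way pressure/drag coupling between {a.name} and {b.name}")

def handle_generic(a, b):
    print(f"fallback coupling for {a.name} ({a.solver}) and {b.name} ({b.solver})")

def couple(a, b):
    pair = {a.solver, b.solver}  # which solvers own this object pair
    if pair == {"rigid"}:
        handle_rigid_rigid(a, b)
    elif pair == {"rigid", "fluid"}:
        handle_rigid_fluid(a, b)
    else:
        handle_generic(a, b)

couple(Obj("cube", "rigid"), Obj("water", "fluid"))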
comment in response to
post
Thanks for sharing! It was initially a bit scary to try and verify their claims on short notice when there were so many senior faculty and researchers involved. I was afraid maybe I messed up my calculations by too much, or misread their documentation, etc.
In hindsight though I am happy I spoke out
comment in response to
post
oops bsky post is this one bsky.app/profile/ston...
comment in response to
post
Unfortunately it severely overstates many benchmark numbers and is actually slower than existing GPU simulators.
Full report of issues here: x.com/stone_tao/st...
comment in response to
post
Some of their prior work is quite nice. They did not need to overstate things here… I and others suspect something else is going on that may have forced them to release it like this
comment in response to
post
This is super helpful for a non-sim person, thanks for the perspective!
comment in response to
post
Thanks for taking the time to do this, Stone! Really helps to hear an expert's perspective on all of this.
comment in response to
post
it’s a travesty Genesis has more github stars than i think all the major simulation repos combined…
comment in response to
post
Full report shared here: bsky.app/profile/ston...
comment in response to
post
One might ask what differentiates this from e.g. Behavior-1K or Robocasa. ManiSkill-HAB supports RL since it runs fast, and comes with the most diverse set of mobile manipulation demonstrations in sim (others use magic grasp, are limited to stationary manipulation, or have limited data diversity)
comment in response to
post
This work builds upon the original M3 work by @Jiayuan_Gu @AIatMeta, now adding actual manipulation instead of magic grasp, faster simulation, tons of demos and baselines, and nice ray-traced/photorealistic renders!
comment in response to
post
unfortunately not yet. It is only fast for simple scenes, but for more realistic scenarios (robust locomotion, manipulation) it seems to be slower (whether rendering or state-only simulation) than NVIDIA Isaac Lab and my lab's simulator ManiSkill. Sharing a report next week with accurate benchmark numbers
comment in response to
post
Moreover some may ask, "Isn't this a NeurIPS challenge, why is it still running?" There were many delays (sickness), but it's done now. We plan to run it to completion and do not plan to end it during NeurIPS; it's already one of the most popular competitions! 8/8
comment in response to
post
Competition Page: kaggle.com/c/lux-ai-sea...
Code: github.com/Lux-AI-Chall...
7/8
comment in response to
post
Lux AI is a passion project started during undergrad with @BovardDT as a more accessible alternative to MIT Battlecode and is now a massive community of competitive AI programmers! I have never had a project run so long even when I'm doing my PhD on unrelated topics 😂 6/8
comment in response to
post
This was led by @BovardDT and me, with contributions from @akarshkumar0101 and my advisor @haosu_twitr on meta learning/RL that shaped the research direction. Finally, a big thank you to @kaggle for our nearly 4-year long collaboration on large-scale multiagent AI competitions 5/8
comment in response to
post
The challenge can also quantitatively (instead of qualitatively) evaluate humans vs AI. How do RL agents behave compared to humans when it comes to meta learning? We often get 10k+ submissions (the majority are rule-based) and millions of games per season, so we have the data! 4/8