atlaswang.bsky.social
https://vita-group.github.io/ 👨‍🏫 UT Austin ML Professor (on leave) https://www.xtxmarkets.com/ 🏦 XTX Markets Research Director (NYC AI Lab) Superpower is trying everything 🪅 Newest focus: training next-generation super intelligence - Preview above 👶
110 posts 1,868 followers 3,227 following

This is going to kneecap science in this country for years. www.nature.com/articles/d41...

One of my PhD students got their visa revoked. I know of other cases amongst my AI colleagues. This is not what investing in US leadership in AI looks like. www.aljazeera.com/news/2025/4/...

🚀 Thrilled to announce SPIN-Bench! 🚀 We all love seeing how smart LLMs can be: solving complex math, crafting beautiful text, and coding effortlessly. But how well do they handle real-world strategic complexity, cooperation, & social negotiation? Can they play well when things get tricky? Not quite!

Just had a meal that gifted me two rare treasures: 1️⃣ Meeting someone infinitely wiser than me. 2️⃣ They weren’t cold or mean—just gently showed me where I could grow. "To learn truth at dawn, I’d die content by dusk." ✨ Humility tastes better with kindness. #Gratitude #LifeLessons

This accidental photo feels distinctly #american —industry and technology advancing, sometimes hesitantly, beneath the weight of religion and firearms

I think some people hear “grants” and think that without them, scientists and government workers just have less stuff to play with at work. But grants fund salaries for students, academics, researchers, and people who work in all areas of public service. “Pausing” grants means people don’t eat.

Since taking leave from my university role, my students have thrived even more: consistent conference productivity (NeurIPS, ICLR, etc., basically on autopilot); two new PhD fellowships (NVIDIA, IBM); multiple paper awards; and most recently, several successful job placements, including a new professor at HKU.

I love this adorable lady’s narrative and hope it’s seen by more people. 【Asians Being Seen, Part 2 – Bilibili】 b23.tv/GZ4h0HO

Current price to buy or invest in a #GenAI startup: is it already too inflated?

Me: I’m feeling like my life is on a busy, shaky ferry 😭 My junior student: Maybe a busy, shaky ferry in new waters is better than a calm ferry always in the same waters. #IAmEducated

medium.com/@brawlingoce...

The not-so-great-yet human intelligence wishes y’all Happy Holidays! 🎁🎄

Hard at work on the AAMAS rebuttals. Yes, Reviewer 2, we are looking at you.

With o3 rocketing to (kinda) super-intelligence overnight, I’m prepping my fallback gig: selling “hand-crafted” research to anyone nostalgic for the pre-AI era. Turns out boutique history may be the only job left! #FutureProof #AIRevolution

Excellent post about the recent OpenAI o3 results on ARC (& other benchmarks). I don't know how @natolambert.bsky.social manages to write these so quickly! I highly recommend his newsletter. www.interconnects.ai/p/openais-o3... I am (more slowly) writing my own take on all this, coming soon.

@ruisicai.bsky.social was awarded an NVIDIA graduate fellowship blogs.nvidia.com/blog/graduat... It marks the 8th PhD fellowship brought to the VITA group in the last 4 years 🎇🎆, alongside NSF GRFP, Apple, Amazon, Adobe, IBM, Qualcomm, and Snap. When students are this compelling, the PI feels useless :(

(1/n) My favorite "optimizer" work of 2024: 📢 Introducing APOLLO! 🚀 SGD-like memory cost, yet AdamW-level performance (or better!). ❓ How much memory do we need for optimizer states in LLM training? 🧐 Almost zero. 📜 Paper: arxiv.org/abs/2412.05270 🔗 GitHub: github.com/zhuhanqing/A...
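To see why "SGD-like memory" matters, here is a back-of-envelope sketch of optimizer-state memory for a standard setup. The 7B-parameter model size and fp32 states are illustrative assumptions, not figures from the paper:

```python
# Rough optimizer-state memory comparison (illustrative assumptions:
# 7B parameters, fp32 optimizer states — not numbers from the APOLLO paper).
params = 7e9
bytes_fp32 = 4

# AdamW keeps two moment tensors (m and v) per parameter.
adamw_state = params * 2 * bytes_fp32
# Plain SGD keeps no extra state; SGD with momentum keeps one buffer.
sgd_state = 0
sgd_momentum_state = params * bytes_fp32

print(f"AdamW states:        {adamw_state / 1e9:.0f} GB")          # 56 GB
print(f"SGD+momentum states: {sgd_momentum_state / 1e9:.0f} GB")   # 28 GB
print(f"Plain SGD states:    {sgd_state / 1e9:.0f} GB")            # 0 GB
```

The gap between that AdamW line and (near) zero is the memory APOLLO claims to close while keeping AdamW-level performance.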

Everyone’s saying #GenAI tools ‘unleash human creativity’— the new politically correct mantra! But are they empowering us, or just tricking us into creating better training data for themselves? 🤔 Who’s really learning here? #HumanVsMachine

fortune.com/2024/12/09/n... It’s exciting to see another pitch on neurosymbolic AI by @fortune.com, after a nice one by @reuters.com last month. I don’t believe we are already on the monotonically decreasing, "convex optimization"-like path of AI. It’s always more fun to bounce back & forth, like a good non-convex problem!

After using ChatGPT Pro ($200 paid) to help me write an ML data loader for a complex format, I’ve realized one thing: there’s no going back… It’s like having a 24/7 genius intern who doesn’t complain, never sleeps, and somehow gets me. 🤖💻 #AIcoding #GPT