ak-nain.bsky.social
Sr. ML Engineer | Keras 3 Collaborator | @GoogleDevExpert in Machine Learning | @TensorFlow addons maintainer | ML is all I do | Views are my own!
141 posts 902 followers 133 following
Regular Contributor
Active Commenter

I want to share my latest (very short) blog post: "Active Learning vs. Data Filtering: Selection vs. Rejection." What is the fundamental difference between active learning and data filtering? Well, obviously, the difference is that: 1/11
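For anyone who prefers code over prose, here is a minimal sketch of my own (not from the post): the same per-example score can drive either mechanism, and whether you select the top or reject the bottom is the whole difference.

```python
# Illustrative sketch only: one scoring function, used two different ways.
import numpy as np

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 16))      # candidate examples
scores = rng.uniform(size=1000)         # e.g. model uncertainty or a quality score

# Active learning: SELECT the k most informative examples to label next.
k = 32
selected = pool[np.argsort(scores)[-k:]]

# Data filtering: REJECT examples below a quality threshold, keep the rest.
threshold = 0.2
kept = pool[scores >= threshold]

print(selected.shape, kept.shape)       # (32, 16) and roughly (800, 16)
```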

What if you want to control the length of CoT sequences? Can you impose a budget constraint at test time on reasoner models while maintaining performance? This latest paper from CMU addresses these two questions via RL. Here is a summary of LCPO in case you are interested:
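As a rough illustration (my own toy sketch; the exact reward and RL setup in the paper differ), a length budget can be folded directly into the reward the reasoner is trained against:

```python
# Toy length-budget reward: reward correctness, penalize deviation from the
# requested CoT length. The paper's exact formulation may differ.
def budget_reward(is_correct: bool, gen_len: int, target_len: int, alpha: float = 0.001) -> float:
    correctness = 1.0 if is_correct else 0.0
    length_penalty = alpha * abs(gen_len - target_len)
    return correctness - length_penalty

print(budget_reward(True, gen_len=1800, target_len=2000))  # small penalty
print(budget_reward(True, gen_len=6000, target_len=2000))  # large penalty
```

The idea, as I read it, is that the budget is stated at test time and the policy has learned to respect it.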

Matryoshka Quantization: Another fantastic paper from GDM! MatQuant came out last week. It was a very refreshing read. Here is a summary in case you are interested:
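Before you dive into the summary, here is how I picture the "Matryoshka" part in code (a rough sketch of the nesting idea, not the paper's actual training procedure): the lower-precision models live inside the int8 weights as their most significant bits.

```python
# Rough sketch: slice int4/int2 weights out of int8 weights via their MSBs.
import numpy as np

w = np.random.randn(4, 4).astype(np.float32)
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -128, 127).astype(np.int8)

w_int4 = (w_int8.astype(np.int16) >> 4).astype(np.int8)   # keep top 4 bits
w_int2 = (w_int8.astype(np.int16) >> 6).astype(np.int8)   # keep top 2 bits

# Each nested level dequantizes with a rescaled step size.
for name, q, s in [("int8", w_int8, scale), ("int4", w_int4, scale * 16), ("int2", w_int2, scale * 64)]:
    print(name, np.abs(w - q * s).max())
```

The actual method, as I understand it, optimizes the int8 representation so that these sliced models stay accurate, rather than just slicing after the fact.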

1/3 Two years ago, we started a series on Diffusion Models that covered everything related to these models in depth. We decided to write those tutorials, covering intuition and the fundamentals, because we could not find any high-quality diffusion tutorials at the time.

JanusPro is here, the next generation of the Janus model, with a few surprises (even for me!). I liked JanusFlow a lot, but the JanusPro 1B is what caught my eye. Here is a summary of the paper in case you are interested:

I read the R1 paper last night, and here is a summary-cum-highlights of the paper (technical report, to be more precise)

Everyone has heard enough about scaling inference-time compute for LLMs in the past month. Diffusion models, on the other hand, have an innate flexibility for allocating varied compute at inference time. Here is a summary of how researchers at GDM exploit this property: 👇
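To make "allocating varied compute at inference time" concrete, here is a generic sketch of my own (sample_fn and score_fn are hypothetical placeholders, and this is not the paper's algorithm): spend extra compute by taking more denoising steps and/or searching over several candidates under a verifier.

```python
# Generic inference-time scaling sketch for a diffusion sampler.
import numpy as np

def scale_inference(sample_fn, score_fn, num_candidates=4, num_steps=100):
    """Draw several candidates (more steps = more compute each) and keep the best-scoring one."""
    best, best_score = None, float("-inf")
    for seed in range(num_candidates):
        x = sample_fn(seed=seed, num_steps=num_steps)   # e.g. a DDIM/DDPM sampler
        s = score_fn(x)                                 # e.g. a reward / verifier model
        if s > best_score:
            best, best_score = x, s
    return best

# Dummy stand-ins just to show the shape of the API.
best = scale_inference(
    sample_fn=lambda seed, num_steps: np.random.default_rng(seed).normal(size=(32, 32)),
    score_fn=lambda x: -abs(float(x.mean())),
)
print(best.shape)
```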

I just finished reading the DeepSeekv3 paper. Here is everything you need to know about it: 👇 x.com/A_K_Nain/sta...

I just finished reading one of the latest papers from Meta Research, MetaMorph. Except for two things (both not good), it is an okay paper, simple, concise, and to the point. Here is a quick summary in case you are interested: x.com/A_K_Nain/sta...

Proud to see the release of Veo V2! deepmind.google/technologies... "Veo has achieved state of the art results in head-to-head comparisons of outputs by human raters over top video generation models"

What if I told you that you can train a SOTA gaze estimation model in 1 hour on an RTX 4090 GPU? Too good to be true? I was also skeptical of that claim in the Gaze-LLE paper, but it is true. DINOv2 FTW! I finished reading the paper, and here is a summary: x.com/A_K_Nain/sta...
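The "DINOv2 FTW" part is easy to appreciate in code. Below is a minimal frozen-backbone-plus-small-head sketch of my own, not the actual Gaze-LLE architecture (which predicts heatmaps, not a 2D point): almost all of the capacity is frozen, so only a tiny head needs training.

```python
# Frozen DINOv2 features + a tiny trainable head (illustrative only).
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")  # ViT-S/14, 384-dim CLS
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Sequential(nn.Linear(384, 256), nn.GELU(), nn.Linear(256, 2))  # toy 2D gaze head
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

images = torch.randn(8, 3, 224, 224)   # dummy batch
targets = torch.randn(8, 2)            # dummy gaze targets

with torch.no_grad():
    feats = backbone(images)           # (8, 384); the backbone is never updated
loss = nn.functional.mse_loss(head(feats), targets)
loss.backward()
opt.step()
print(loss.item())
```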

Can you pre-train and fine-tune your VLMs in FP8? Can you get more than 2x efficiency with some simple tricks? Nvidia presents NVILA, an efficient frontier VLM that achieves all of the above. I finished reading the paper, and here is a summary in case you are interested:
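For a feel of what FP8 gives you, here is a toy cast-and-rescale snippet (PyTorch >= 2.1); real FP8 training as in the paper involves scaled matmuls, careful scaling of activations and gradients, and more, so treat this as dynamic-range intuition only.

```python
# Per-tensor scaled cast to FP8 (e4m3) and back, just to see the rounding error.
import torch

x = torch.randn(4, 4) * 10
scale = x.abs().max() / 448.0                 # 448 = max normal value of float8_e4m3fn
x_fp8 = (x / scale).to(torch.float8_e4m3fn)   # quantize to 8-bit float
x_back = x_fp8.to(torch.float32) * scale      # dequantize
print((x - x_back).abs().max())
```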

I am back to writing math-heavy yet intuitive blog posts. Almost two years ago, I wrote the diffusion tutorials with a similar intention. This time, I am targeting the fundamental concepts of LLMs and MLLMs. And here is the first post in that direction: Rotary Position Encodings. Enjoy reading! 🍻
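If you want a code-level anchor to go with the post, here is a minimal NumPy version of RoPE for one head, using the common split-in-half convention (other implementations interleave the pairs instead):

```python
# Rotate feature pairs of q/k by position-dependent angles.
import numpy as np

def rope(x, base=10000.0):
    """x: (seq_len, dim) with even dim."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)               # theta_i = base^(-2i/dim)
    angles = np.arange(seq_len)[:, None] * freqs[None, :]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = np.random.randn(8, 64)
print(rope(q).shape)   # (8, 64)
```

The property that makes this useful: the dot product between a rotated query and a rotated key depends only on their relative position.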

1/2 Google DeepMind announced PaliGemma 2 last week. It is an upgrade of the PaliGemma open Vision-Language Model (VLM) based on the Gemma 2 family of language models. What does this generation of PaliGemma bring to the table? I finished reading the technical report, and here is a summary:

Gemini 2.0 (if we are calling it 2.0 now) will be an interesting development. Why? It will be a good indicator of "Do we need test-time compute for now, or is there more juice left to squeeze out of transformers with some neat tricks?"

Though the TTT used by the winners of ARC Prize 2024 definitely gave a huge performance boost and is a promising direction, I personally feel that in a few years we will have a solid model that does all the tricks in a single forward pass. And it won't be an LLM.

Launch day! 💥💥 venturebeat.com/ai/emergence...

Nvidia presents Star Attention to improve LLM inference efficiency over long sequences. I was skeptical when I read the abstract the day it was published, but now that I have read the full paper, I think this is another good piece of research: x.com/A_K_Nain/sta...

Okay I like the idea of this app, but TBH this platform needs to step up to become what we need it to be. As of now:
1. Laggy
2. Half the time the tabs don't work
3. Feed is still broken
4. No bookmarks yet
5. Hyperlinks work randomly

The multimodality space is now evolving in a much better way. The focus has shifted to finding the bottlenecks and fixing things at the fundamental level. This paper from Apple introduces AIMv2, an effort in a similar direction, except that they only do it for autoregressive models.

Shameless plug, but this is all you need to understand the fundamentals of diffusion models: magic-with-latents.github.io/latent/posts...

Generative World Explorer from Johns Hopkins University: an egocentric world exploration framework that allows an agent to mentally explore a large-scale 3D world. arxiv.org/abs/2411.11844

Nvidia presents Hymba, another hybrid of attention and SSMs, but for a family of small language models: arxiv.org/abs/2411.13676
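To give a flavor of what "hybrid of attention and SSMs" means structurally, here is a toy PyTorch block of my own where an attention path and a (deliberately naive, sequential) SSM-style scan run in parallel and get fused; Hymba's actual head design and details differ.

```python
# Toy parallel attention + SSM-style block (illustrative, not Hymba).
import torch
import torch.nn as nn

class ToySSM(nn.Module):
    """Bare-bones linear recurrence h_t = a*h_{t-1} + b*x_t (real SSMs use parallel scans)."""
    def __init__(self, d):
        super().__init__()
        self.a = nn.Parameter(torch.full((d,), 0.9))
        self.b = nn.Parameter(torch.ones(d))

    def forward(self, x):                       # x: (B, T, d)
        h = torch.zeros(x.shape[0], x.shape[2], device=x.device)
        out = []
        for t in range(x.shape[1]):
            h = self.a * h + self.b * x[:, t]
            out.append(h)
        return torch.stack(out, dim=1)

class HybridBlock(nn.Module):
    def __init__(self, d=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ssm = ToySSM(d)
        self.proj = nn.Linear(2 * d, d)         # fuse the two paths

    def forward(self, x):
        a, _ = self.attn(x, x, x)
        s = self.ssm(x)
        return self.proj(torch.cat([a, s], dim=-1))

x = torch.randn(2, 16, 64)
print(HybridBlock()(x).shape)   # (2, 16, 64)
```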