rishav84ia.bsky.social
PhD student at NUS; Differential Privacy and Machine (Un)Learning. Trying to stop machines from learning too much about us. comp.nus.edu.sg/~rishav1
84 posts · 151 followers · 470 following

Took an Indian train after nearly a decade—felt like a time machine back to childhood. Nostalgia hit hard!

I wish Bluesky would roll back its TikTok-style scrolling video feed. I come here to read my peers' opinions on things, not to get trapped in doomscrolling.

Good news to start the day: our paper with my student Joy Yang, Themis Gouleakis, and Yuhao Wang got into #AISTATS2025! We give (near-)tight bounds for *Gaussian mean testing under truncation*: given censored data, how do you test whether a high-dimensional signal subject to white noise is significant or not?

Everything is linear time in the long run

Our paper “Laplace Transform Interpretation of Differential Privacy” is on this accepted papers list!

Now folks are waking up to what privacy means, and the tech guys (who have known it all along because they’re the ones that sold the lie) are pissed people are asking for the same data privacy and security they’ve been taking for themselves since the beginning — because it kills their business model

Apple is shady about its data collection practices

Let me tell you about what LLMs in Singapore can do

With @adamsmith.xyz and @thejonullman.bsky.social, we have compiled a set of profiles of 29 people in the "foundations of responsible computing" community ("mathematical research in computation and society writ large") who are on the faculty job market. Link: drive.google.com/file/d/1Hyvg... 1/3

Christmas fruitcake with my fiancée’s family 🥰🥰

📢 Machine unlearning is hyped right now. But guess what? The widely accepted mathematical definition of unlearning doesn't hold up in the real world. Here's a crucial detail that many research papers and surveys overlook: adaptive deletion requests break standard unlearning guarantees! 🤯

This looks interesting 👀! The sample complexity of generating differentially private synthetic data is quite a relevant problem in industry.

New paper on why machine "unlearning" is much harder than it seems is now up on arXiv: arxiv.org/abs/2412.06966 This was a huuuuuge cross-disciplinary effort led by @msftresearch.bsky.social FATE postdoc @grumpy-frog.bsky.social!!!

Skipping #NeurIPS this year for a very special reason: #Engaged 🎉

‘tis the season for self-promotion. If you’re an artist, illustrator, maker, writer, publisher, creator of any kind who DOES NOT and WILL NOT use generative AI, drop your shop links below so people can support real artists this Christmas.

o1 is starting to sound like HAL 9000

Enjoyed reading this paper, which builds on last year's NeurIPS Outstanding Paper by Steinke et al. on one-run (ε, δ)-DP auditing, extending it to the f-DP framework. Excited to see the growing momentum in DP auditing research!

In a couple of years we'll be like ...

Great post that captures the tension between classic ML approaches and modern deep learning while acknowledging the nuances of both. “Working with LLMs doesn’t feel the same. It’s like fitting pieces into a pre-defined puzzle instead of building the puzzle itself.” www.reddit.com/r/MachineLea...

Simple attack to see if DALL-E was trained on a public image:

A short and sweet 3-page paper showing that applying Schrödinger's equation to optimize Fisher-information privacy under a utility constraint leads to an uncertainty principle—just like Heisenberg's, but trading off privacy and utility instead of position and momentum!