juliakruk.bsky.social
NLP, CSS & Multimodality💫 Graduate Researcher @Stanford NLP | Research Affiliate @Georgia Tech | Data Scientist @Bombora 📍New York, NY 👩‍💻 https://j-kruk.github.io/
19 posts 358 followers 869 following
comment in response to post
Come chat with us at NeurIPS 2024 🎉 📍 West Ballroom A-D #5211 ⏰ Wednesday Dec 11th, 11 a.m. — 2 p.m. PST.
comment in response to post
🥳 This work was an amazing collaboration between @gtresearchnews.bsky.social and @stanfordnlp.bsky.social. 🙏 Huge thank you to @judyh.bsky.social, @polochau.bsky.social, and @diyiyang.bsky.social for their guidance!
comment in response to post
In addition to the Semi-Truths dataset, we release our pipeline to enable the community to create custom evaluation sets for their unique use cases! Please interact with our work on: 🤗HF: huggingface.co/semi-truths 👾Github: github.com/J-Kruk/SemiT...
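Since the dataset ships with per-image augmentation metadata, a typical first interaction is filtering records by how much of the image was changed. A minimal sketch of that idea, using made-up field names ("augmentation_method", "area_ratio") that are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical records mimicking Semi-Truths-style metadata; the field
# names and values here are assumptions for illustration only.
records = [
    {"id": "img_001", "augmentation_method": "diffusion_inpaint", "area_ratio": 0.08},
    {"id": "img_002", "augmentation_method": "prompt_edit", "area_ratio": 0.42},
    {"id": "img_003", "augmentation_method": "diffusion_inpaint", "area_ratio": 0.03},
]

def small_edits(records, max_area=0.10):
    """Select augmented images where only a small region was changed."""
    return [r for r in records if r["area_ratio"] <= max_area]

print([r["id"] for r in small_edits(records)])  # ['img_001', 'img_003']
```

Selections like this are what make targeted evaluation sets possible: a custom benchmark is just a filter over the metadata.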
comment in response to post
Every image is enriched with attributes quantifying the magnitude of change achieved. Evaluating detector performance across these attributes reveals biases. 💡 UniversalFakeDetector suffers a >35-point performance drop across different scenes, and >5 points across magnitudes of change.
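The bias analysis above boils down to slicing detector accuracy by an attribute and comparing the slices. A toy sketch (not the paper's evaluation code; predictions, labels, and scene tags are made up):

```python
# Illustrative sketch: per-attribute accuracy breakdown to surface
# detector biases, such as a scene-dependent performance gap.
from collections import defaultdict

preds  = [1, 0, 1, 1, 0, 1]   # detector output (1 = augmented/fake)
labels = [1, 1, 1, 0, 0, 1]   # ground truth
scenes = ["indoor", "outdoor", "outdoor", "outdoor", "indoor", "outdoor"]

def accuracy_by(attribute, preds, labels):
    """Accuracy of preds vs. labels, grouped by an attribute tag."""
    hits, totals = defaultdict(int), defaultdict(int)
    for a, p, y in zip(attribute, preds, labels):
        totals[a] += 1
        hits[a] += int(p == y)
    return {a: hits[a] / totals[a] for a in totals}

print(accuracy_by(scenes, preds, labels))  # {'indoor': 1.0, 'outdoor': 0.5}
```

A large gap between groups (here, 1.0 vs. 0.5) is exactly the kind of scene-dependent drop the post describes.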
comment in response to post
🔧 To control what is changed in an image and how, we use semantic segmentation datasets that provide real images, entity masks, and entity labels. We perturb entity & image captions with LLMs, then apply different diffusion models and augmentation techniques to alter images.
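The pipeline described above can be sketched structurally, with the LLM and diffusion calls stubbed out. Everything below is an illustrative assumption: a real implementation would call an LLM API for the caption perturbation and an inpainting model for the edit.

```python
# Structural sketch of a mask-guided augmentation pipeline:
# (real image, entity mask, caption) -> perturbed caption -> inpainted image.
def perturb_caption(caption):
    # Stand-in for an LLM rewrite, e.g. swapping one entity.
    return caption.replace("a dog", "a wolf")

def inpaint(image, mask, prompt):
    # Stand-in for a diffusion inpainting model; returns a tagged record.
    return {"image": image, "edited_region": mask, "prompt": prompt}

sample = {"image": "scene_042.png", "mask": "dog_mask.png",
          "caption": "a dog sleeping on a porch"}
edited_prompt = perturb_caption(sample["caption"])
augmented = inpaint(sample["image"], sample["mask"], edited_prompt)
print(augmented["prompt"])  # "a wolf sleeping on a porch"
```

The key design point is that the entity mask constrains *where* the diffusion model may edit, while the perturbed caption controls *what* the edit depicts.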
comment in response to post
🚀 We present Semi-Truths, a dataset for the targeted evaluation and training of AI-Augmented Image Detectors. It includes a wide array of scenes & subjects, as well as various magnitudes of image augmentation. We define “magnitude” by the size of the augmented region and the semantic change achieved.
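The "size of the augmented region" half of the magnitude definition can be illustrated as the fraction of pixels covered by the edited entity's mask (a toy sketch; the semantic-change half would need caption comparisons and is omitted):

```python
# Toy magnitude computation: fraction of the image covered by the edit mask.
def area_ratio(mask):
    """mask: 2D list of 0/1 values, where 1 marks the edited region."""
    total = sum(len(row) for row in mask)
    edited = sum(sum(row) for row in mask)
    return edited / total

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(area_ratio(mask))  # 4 edited pixels / 16 total = 0.25
```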
comment in response to post
An attacker may keep most of the original image and change only a localized region to drastically alter the narrative! 🔍 One such case is known as “Sleepy Joe”, where a video of Joe Biden was changed only in the facial region to make it appear as though he fell asleep at a podium.
comment in response to post
Detecting AI-generated images that can be used to spread misinformation is an impactful area of research in Computer Vision. 🤔 However, most SOTA systems are trained exclusively on end-to-end, fully generated images, or on data from very constrained distributions.
comment in response to post
Hi!! Would love to be added, thanks
comment in response to post
If there’s still room, would love to be added! Thanks for creating this
comment in response to post
Hi! I would love to be added! Thanks
comment in response to post
Hi! Would love to be added - thanks so much
comment in response to post
This is such a great resource - thanks so much for creating this!