yes, these are all interesting directions for differential privacy. my point was that since these methods are often trained first on the original data, they are, in some sense, forms of data augmentation. it's more about the terminology and definitions than the technical details of each, though.
Comments
The point is perhaps that augmentations, by themselves, don’t inherently guarantee an increase or decrease in privacy.
Synthetic data algorithms that don't provably account for privacy probably don't provide privacy.
But there are private synthetic data generation algorithms that do, like the ones @gautamkamath.com linked above.
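To make the distinction concrete: one of the simplest provably private generation schemes releases a Laplace-noised histogram of the data and samples synthetic points from it. This is a minimal sketch, not any specific algorithm from the linked thread; the function name and parameters are my own for illustration.

```python
import numpy as np

def dp_synthetic_histogram(data, bins, epsilon, n_synthetic, rng=None):
    """Generate synthetic samples from a Laplace-noised histogram.

    Each record falls in exactly one bin, so the histogram has L1
    sensitivity 1, and adding Laplace(1/epsilon) noise to each count
    gives epsilon-differential privacy for the released counts.
    Everything downstream is post-processing, so privacy is preserved.
    """
    rng = rng or np.random.default_rng(0)
    counts, edges = np.histogram(data, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    # Clip negatives and renormalize into a sampling distribution.
    probs = np.clip(noisy, 0, None)
    if probs.sum() == 0:
        probs = np.ones_like(probs)  # degenerate case: fall back to uniform
    probs = probs / probs.sum()
    # Pick bins by noisy weight, then sample uniformly within each bin.
    idx = rng.choice(len(probs), size=n_synthetic, p=probs)
    return rng.uniform(edges[idx], edges[idx + 1])
```

The privacy guarantee here comes entirely from the noise on the counts, not from the sampling step; a generator trained directly on the raw records without such a mechanism gets no comparable guarantee.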