Paper: Stable Diffusion “memorizes” some images, sparking privacy concerns

a year ago
Anonymous $gM56WhLPcK

https://arstechnica.com/information-technology/2023/02/researchers-extract-training-images-from-stable-diffusion-but-its-difficult/

On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion image synthesis models like Stable Diffusion. The findings challenge the view that image synthesis models do not memorize their training data, and the assumption that training data remains private as long as it is never disclosed.
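
At a high level, extraction attacks of this kind work by sampling the model many times on a caption suspected to appear in the training set and flagging generations that come out nearly identical across independent samples, since that near-duplication is a strong signal of memorization rather than novel synthesis. The sketch below illustrates that general idea using the Hugging Face diffusers library; the prompt, the downscaled-pixel similarity measure, and the 0.95 threshold are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of the "sample many times, look for near-duplicates" idea.
# Assumptions: diffusers is installed, a GPU is available, and a simple
# downscaled-pixel correlation stands in for the paper's similarity metric.
import itertools

import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt: a caption the attacker suspects appears in the training set.
prompt = "a photograph of a named public figure"
num_samples = 50

# Generate many independent samples for the same prompt.
images = [pipe(prompt).images[0] for _ in range(num_samples)]

def to_vector(img, size=32):
    # Downscale and normalize so near-identical generations compare cheaply.
    small = np.asarray(img.resize((size, size))).astype(np.float32).ravel()
    return (small - small.mean()) / (small.std() + 1e-8)

vectors = [to_vector(img) for img in images]

# Pairs of generations that are almost pixel-identical suggest the model is
# reproducing a memorized training image rather than sampling a new one.
suspect_pairs = [
    (i, j)
    for i, j in itertools.combinations(range(num_samples), 2)
    if float(np.dot(vectors[i], vectors[j])) / len(vectors[i]) > 0.95
]
print(f"{len(suspect_pairs)} near-duplicate pairs out of {num_samples} samples")
```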
