AI models spit out photos of real people and copyrighted images

Stable Diffusion is open source, meaning anyone can analyze and investigate it. Imagen is closed, but Google granted the researchers access. Singh says the work is a great example of how important it is to give researchers access to these models for analysis, and he argues that companies should be similarly transparent with other AI models, such as OpenAI’s ChatGPT. 

However, while the results are impressive, they come with some caveats. The images the researchers managed to extract appeared multiple times in the training data or were highly unusual relative to other images in the data set, says Florian Tramèr, an assistant professor of computer science at ETH Zürich, who was part of the group. 

People who look unusual or have unusual names are at higher risk of being memorized, says Tramèr.

The researchers were able to extract only a relatively small number of exact copies of individuals’ photos from the AI model: just one in a million images was a copy, according to Webster.

But that’s still worrying, Tramèr says: “I really hope that no one’s going to look at these results and say ‘Oh, actually, these numbers aren’t that bad if it’s just one in a million.’” 

“The fact that they’re bigger than zero is what matters,” he adds.
