The city of San Francisco on Thursday filed a sweeping lawsuit against 18 websites and apps that generate unauthorized, deepfake nudes of unsuspecting victims.

The complaint—published with the defendants’ service names redacted—targets the “proliferation of websites and apps that offer to ‘undress’ or ‘nudify’ women and girls.” It asserts that, collectively, the sites were visited over 200 million times in the first six months of 2024.

“This investigation has taken us to the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation,” said San Francisco City Attorney David Chiu in announcing the lawsuit. “Generative AI has enormous promise, but as with all new technologies, there are unintended consequences and criminals seeking to exploit the new technology.


“This is not innovation—this is sexual abuse,” Chiu added.

Although celebrities like Taylor Swift have been frequent targets of such image generation, Chiu pointed to recent cases in the news involving California middle school students.

“These images, which are virtually indistinguishable from real photographs, are used to extort, bully, threaten, and humiliate women and girls,” the city announcement said.

The rapid spread of what is known as non-consensual intimate imagery, or NCII, has prompted efforts by governments and organizations worldwide to curtail the practice.


“Victims have little to no recourse, as they face significant obstacles to remove these images once they have been disseminated,” the complaint says. “They are left with profound psychological, emotional, economic, and reputational harms, and without control and autonomy over their bodies and images.”

Even more problematic, Chiu notes, is that some sites “allow users to create child pornography.”

The use of AI to generate child sexual abuse material, or CSAM, is especially harmful because it severely hinders efforts to identify and protect real victims. The Internet Watch Foundation, which tracks the issue, said known pedophile groups are already embracing the technology, and that AI-generated CSAM could “overwhelm” the internet.

A Louisiana state law specifically banning CSAM created with AI went into effect this month.

Although major tech companies have pledged to prioritize child safety as they develop AI, such images have already found their way into AI datasets, according to researchers at Stanford University.

The lawsuit calls for the services to pay $2,500 for each violation and cease operations, and also demands that domain name registrars, web hosts, and payment processors stop providing services to outfits that create deepfakes.
