If those Stable Diffusion models haven't been taken down yet, I suspect they will be very shortly. It appears the damned things were trained on child porn.
AI image-generators are being trained on explicit photos of children, a study shows
Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated.
The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory’s report, LAION told The Associated Press it was temporarily removing its datasets.
....
As an example, Thiel called out CivitAI, a platform that’s favored by people making AI-generated pornography but which he said lacks safety measures to weigh it against making images of children. The report also calls on AI company Hugging Face, which distributes the training data for models, to implement better methods to report and remove links to abusive material.
I think all existing models are going to be yanked offline if they haven't already been. But there are a lot of contaminated ones already circulating out there that they can't take back.