A new open-source AI image generator capable of producing realistic pictures from any text prompt has seen stunningly swift uptake in its first week. Stability AI’s Stable Diffusion, high-fidelity but capable of running on off-the-shelf consumer hardware, is now in use by art generator services like Artbreeder, Pixelz.ai, and others. But the model’s unfiltered nature means not all of its use has been completely above board.
For the most part, these use cases are above board. For example, NovelAI is experimenting with Stable Diffusion to produce art that can accompany the AI-generated stories users create on its platform, and Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.
But Stable Diffusion has also been used for less savory purposes. On the infamous discussion board 4chan, where the model leaked early, several threads have been devoted to AI-generated nude art of celebrities and other pornographic images.
Emad Mostaque, CEO of Stability AI, called it “unfortunate” that the model leaked on 4chan and stressed that the company is working with “leading ethicists and technologists” on safety and other mechanisms around responsible release. One of these mechanisms is an adjustable AI tool, the Safety Classifier, included in the overall Stable Diffusion software package, which attempts to detect and block offensive or undesirable images.
However, the Safety Classifier, while on by default, can be disabled.
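In practice the toggle is a one-line change. The sketch below is a hypothetical illustration, assuming the widely used Hugging Face diffusers packaging of Stable Diffusion (exact argument names vary by library version); it is not the only way the filter can be removed.

```python
# Hypothetical illustration, assuming the Hugging Face diffusers wrapper
# for Stable Diffusion; exact arguments vary by library version.
from diffusers import StableDiffusionPipeline

# By default, the pipeline loads with the bundled safety checker, which
# blanks out images the classifier flags as offensive or undesirable.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Because the model runs entirely on the user's own machine, however,
# nothing enforces the filter. One keyword argument removes it:
unfiltered = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    safety_checker=None,  # drops the classifier from the pipeline
)
```

The specific API matters less than the principle: any open-source release that runs locally can be modified the same way.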
Stable Diffusion is very much new territory. Other AI art-generating systems, like OpenAI’s DALL-E 2, have implemented strict filters for pornographic material. (The license for the open-source Stable Diffusion prohibits certain applications, like exploiting minors, but the model itself isn’t restricted on a technical level.) An unfiltered model that can also depict real people is a risky combination, allowing bad actors to create pornographic “deepfakes” that, at worst, could perpetuate abuse or implicate someone in a crime they didn’t commit.
Women, unfortunately, are by far the most likely to be victimized. A 2019 study found that 90% to 95% of the deepfakes circulating online depict women. That bodes poorly for future AI systems, says Ravit Dotan, an AI ethicist at the University of California, Berkeley.
“I worry about other effects of synthetic images of illegal content: that they will exacerbate the illegal behavior being portrayed,” Dotan told TechCrunch in an email. “Will synthetic child [exploitation] increase the creation of real child [exploitation]? Will it increase violence against girls?”
Abhishek Gupta, principal researcher at the Montreal AI Ethics Institute, shares this view. “We need to think about the lifecycle of an AI system, which includes post-deployment use and monitoring, and how we can envision controls that minimize harms even in worst-case scenarios,” he said. “This is particularly true when a powerful capability [like Stable Diffusion] gets out into the wild, where it can cause real harm to those it might be used against, for example by generating objectionable content in a victim’s likeness.”
A preview of what’s to come played out last year when, on a nurse’s advice, a father took photos of his young son’s swollen genital area and texted them to the nurse’s iPhone. The photos were automatically backed up to Google Photos and flagged by the company’s AI filters as child sexual abuse material, which led to the man’s account being disabled and an investigation by the San Francisco Police Department.
If a legitimate photo can trip up such a detection system, experts like Dotan say, there’s no reason deepfakes created with a system like Stable Diffusion couldn’t, and at scale.
“The AI systems that people create, no matter how well-intentioned, can be used in harmful ways that they don’t anticipate and can’t prevent,” Dotan said. “I think that developers and researchers often underappreciate this point.”
Of course, the technology to create deepfakes, AI-powered or otherwise, has existed for some time. According to a 2020 report from deepfake-detection company Sensity, hundreds of explicit deepfake videos featuring female celebrities are uploaded to the world’s biggest porn websites every month. The report estimated the total number of deepfakes online at around 49,000, more than 95% of which were pornographic. Actresses including Emma Watson, Natalie Portman, Billie Eilish, and Taylor Swift have been the targets of deepfakes since AI-powered face-swapping tools went mainstream several years ago, and some, including Kristen Bell, have spoken out against what they view as sexual exploitation.
But Stable Diffusion represents a new generation of systems that can create convincing, if not perfect, fake images with minimal work from the user. It’s easy to install, requiring no more than a few configuration files and a graphics card costing several hundred dollars at the high end. Even more efficient versions that can run on an M1 MacBook are in the works.
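For a sense of how low the barrier is, the following sketch assumes the Hugging Face diffusers wrapper; the model ID and options reflect commonly documented usage rather than any specific endorsed setup.

```python
# Hypothetical sketch of a minimal local setup, assuming the Hugging Face
# diffusers wrapper; install with: pip install torch diffusers transformers
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,   # half precision roughly halves GPU memory use
)
pipe = pipe.to("cuda")           # a mid-range consumer card is enough
pipe.enable_attention_slicing()  # trades some speed for a smaller memory footprint

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```

Ports that swap the CUDA backend for Apple’s Metal backend are, roughly speaking, what make the M1 MacBook versions mentioned above feasible.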
Sebastian Berns, a Ph.D. researcher in the AI group at Queen Mary University of London, thinks the automation and the possibility of scaling up customized image generation are the big differences with systems like Stable Diffusion, and the main problems. “Most harmful imagery can already be produced with conventional methods, but it’s manual and requires a lot of effort,” he said. “A model capable of producing near-photorealistic footage may give way to personalized blackmail attacks on individuals.”
Berns fears that photos scraped from social media could be used to condition Stable Diffusion or a similar model to generate targeted pornographic imagery or images depicting illegal acts. There is certainly precedent. In 2018, after reporting on the rape of an eight-year-old Kashmiri girl, Indian investigative journalist Rana Ayyub became the target of Indian nationalist trolls, some of whom created deepfake porn with her face on another person’s body. The deepfakes were shared by the leader of the nationalist political party BJP, and the harassment Ayyub received as a result grew so severe that the United Nations had to intervene.
“Stable Diffusion offers enough customization to send automated threats against individuals to either pay or risk having fake but potentially damaging footage published,” Berns continued. “We’ve already seen people being extorted after their webcam was accessed remotely. That infiltration step might no longer be necessary.”
With Stable Diffusion out in the wild and already being used to generate pornographic imagery, some of it nonconsensual, it may become incumbent on image hosts to take action. TechCrunch reached out to OnlyFans, one of the largest adult content platforms, but had not heard back as of press time. A spokesperson for Patreon, which also allows adult content, said the company has a policy against deepfakes and disallows images that “repurpose celebrities’ likenesses and place non-adult content in an adult context.”
If history is any indication, however, enforcement will likely be uneven, in part because few laws specifically protect against deepfake porn. And even if the threat of legal action pulls some sites dedicated to objectionable AI-generated content offline, there’s nothing to prevent new ones from popping up.
In other words, says Gupta, it’s a brave new world.
“Creative and malicious users can abuse the capabilities [of Stable Diffusion] to generate subjectively objectionable content at scale, using minimal resources to run inference, which is cheaper than training the entire model, and then publish it in venues like Reddit and 4chan to drive traffic and grab attention,” Gupta said. “There’s a lot at stake when such capabilities escape out into the ‘wild,’ where controls like API rate limits and safety checks on the kinds of outputs returned from the system no longer apply.”
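To illustrate the contrast Gupta draws, here is a minimal sketch of one such control: a per-user rate limit of the kind a hosted API can enforce but a locally run copy of the model never encounters. The endpoint, limits, and helper names here are hypothetical.

```python
# Hypothetical sketch of a server-side control for a hosted generation API.
import time
from collections import defaultdict

class TokenBucket:
    """Per-user token bucket: `rate` tokens refill per second, up to `capacity`."""
    def __init__(self, rate: float = 0.2, capacity: int = 5):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: float(capacity))
        self.last = defaultdict(time.monotonic)

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[user_id]
        self.last[user_id] = now
        # Refill tokens for the time elapsed since this user's last request.
        self.tokens[user_id] = min(self.capacity,
                                   self.tokens[user_id] + elapsed * self.rate)
        if self.tokens[user_id] >= 1.0:
            self.tokens[user_id] -= 1.0
            return True
        return False

bucket = TokenBucket()

def handle_generation_request(user_id: str, prompt: str) -> dict:
    """Hypothetical request handler for a hosted image-generation service."""
    if not bucket.allow(user_id):
        return {"error": "rate limit exceeded"}  # throttles bulk abuse
    # A hosted service could also screen the prompt and the returned image
    # here; a local copy of the model is subject to neither check.
    return {"status": "queued", "prompt": prompt}
```

None of these checks exist once the weights are on a user’s own machine, which is precisely the gap Gupta is pointing to.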