TO PORN OR NOT TO PORN? Mainstream generative AI companies rely on strict filters and guardrails to stop users from generating pornography and other explicit content, but OpenAI, the developer of ChatGPT and DALL-E, is considering allowing the practice. There would be some restrictions, though, including a ban on deepfake creation.

OpenAI’s recently released Model Spec document reveals that the company’s once-hard stance against generating porn and other NSFW material could soon soften.

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” OpenAI writes. “We look forward to better understanding user and societal expectations of model behavior in this area.”

The company says that the kind of content users would be allowed to “responsibly” create includes erotica, extreme gore, slurs, and unsolicited profanity. Currently, OpenAI’s rules prohibit any sexually explicit or suggestive content.

OpenAI model lead Joanne Jang, who worked on the Model Spec document, emphasized to NPR that users still won't be able to create anything potentially illegal, and that deepfakes such as the explicit Taylor Swift images that spread on X definitely won't be allowed. Microsoft made changes to its text-to-image tool Designer after those fakes of the singer appeared across the internet.

“We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other people’s rights, but enabling deepfakes is out of the question, period,” Jang said. “This doesn’t mean that we are trying now to create AI porn.”

Jang added that OpenAI wanted to start a conversation about whether erotic text and nude images should always be banned from its AI products.

When asked if OpenAI users could one day create images considered AI-generated porn, Jang said, “Depends on your definition of porn. As long as it doesn’t include deepfakes. These are the exact conversations we want to have.”

We’ve seen plenty of examples of people bypassing the security filters and limitations placed on generative AI services, including recent adversarial attacks. The fear is that by weakening the filters in its products, OpenAI will make it even easier to create the likes of deepfakes and illegal material.

Critics say the fact that OpenAI is even considering going down this path makes a mockery of its mission statement to produce safe and beneficial AI. Renee DiResta, a research manager with the Stanford Internet Observatory, put forward an alternative point of view, telling NPR that OpenAI offering legal porn would be a better alternative than people turning to sketchy open-source models that could create illegal content.