DALL·E 3 is now available in ChatGPT Plus and Enterprise
We use a multi-tiered safety system to limit DALL·E 3’s ability to generate potentially harmful imagery, including violent, adult, or hateful content. Safety checks run over user prompts and the resulting imagery before it is surfaced to users. We also worked with early users and expert red-teamers to identify and address gaps in our safety systems’ coverage that emerged with new model capabilities. For example, their feedback helped us discover edge cases for graphic content generation, such as sexual imagery, and stress-test the model’s ability to generate convincingly misleading images.
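The multi-tiered flow described above, checking the prompt before generation and the resulting image before it is shown, can be sketched as a simple pipeline. This is an illustrative toy, not OpenAI's implementation; every name, rule, and threshold here is an assumption made up for the example.

```python
# Hypothetical sketch of a multi-tiered safety pipeline: a text tier runs
# over the prompt before generation, and an image tier runs over the
# output before it is surfaced. All rules here are illustrative stand-ins
# for real safety classifiers, not OpenAI's actual system.

BLOCKED_TERMS = {"violent", "hateful"}  # toy stand-in for a prompt classifier


def check_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) text-safety tier."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)


def check_image(image_labels: list) -> bool:
    """Return True if the generated image passes the (toy) image-safety tier."""
    return "adult" not in image_labels and "graphic" not in image_labels


def generate_safely(prompt, generate, classify):
    """Run both tiers; surface the image only if every check passes.

    `generate` maps a prompt to an image; `classify` maps an image to a
    list of content labels. Both are supplied by the caller.
    """
    if not check_prompt(prompt):
        return None  # refused before generation
    image = generate(prompt)
    if not check_image(classify(image)):
        return None  # refused after generation
    return image
```

The key design point the passage implies is that neither tier alone is sufficient: a benign prompt can still yield an unsafe image, so the output is checked independently of the input.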

As part of the work done to prepare DALL·E 3 for deployment, we’ve also taken steps to limit the model’s likelihood of generating content in the style of living artists and images of public figures, and to improve demographic representation across generated images. To read more about the work done to prepare DALL·E 3 for wide deployment, see the DALL·E 3 system card.

User feedback will help ensure that we continue to improve. ChatGPT users can share feedback with our research team by using the flag icon to report unsafe outputs or outputs that don’t accurately reflect their prompts. Listening to a diverse and broad community of users and building real-world understanding are critical to developing and deploying AI responsibly, and core to our mission.

We’re researching and evaluating an initial version of a provenance classifier, a new internal tool that can help us identify whether an image was generated by DALL·E 3. In early internal evaluations, it is over 99% accurate at identifying whether an image was generated by DALL·E when the image has not been modified. It remains over 95% accurate when the image has been subject to common types of modification, such as cropping, resizing, JPEG compression, or superimposing text or cutouts from real images onto small portions of the generated image. Despite these strong results in internal testing, the classifier can only tell us that an image was likely generated by DALL·E; it does not yet let us draw definitive conclusions. This provenance classifier may become part of a range of techniques to help people understand whether audio or visual content is AI-generated. It’s a challenge that will require collaboration across the AI value chain, including with the platforms that distribute content to users. We expect to learn a great deal about how this tool performs and where it is most useful, and to improve our approach over time.
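The kind of evaluation described above, accuracy on unmodified images versus accuracy after common modifications, can be sketched with a small per-modification accuracy report. The classifier and data below are toy stand-ins invented for illustration; nothing here reflects how OpenAI's actual provenance classifier works.

```python
# Illustrative sketch (not OpenAI's tool): score a provenance classifier
# overall and per modification type, mirroring the evaluation described
# in the text. The "classifier" and samples are toy placeholders.

from collections import defaultdict


def accuracy_by_modification(samples, classify):
    """samples: iterable of (features, is_generated, modification).
    Returns {modification: accuracy}, plus an 'overall' entry."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, is_generated, modification in samples:
        hit = classify(features) == is_generated
        for key in (modification, "overall"):
            correct[key] += hit
            total[key] += 1
    return {k: correct[k] / total[k] for k in total}


# Toy classifier: calls an image "generated" if a watermark-like feature
# survives; some modifications may destroy that signal.
classify = lambda f: f.get("marker", 0.0) > 0.5

samples = [
    ({"marker": 0.9}, True,  "unmodified"),
    ({"marker": 0.8}, True,  "jpeg"),
    ({"marker": 0.2}, True,  "crop"),      # cropping stripped the signal
    ({"marker": 0.1}, False, "unmodified"),
    ({"marker": 0.0}, False, "jpeg"),
]
report = accuracy_by_modification(samples, classify)
```

Breaking accuracy out per modification is what lets an evaluation state claims like "over 99% on unmodified images, over 95% after cropping or JPEG compression" rather than a single averaged number.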
