
Reducing bias and improving safety in DALL·E 2

In April, we began previewing the DALL·E 2 research with a limited number of users, which has allowed us to better understand the system’s capabilities and limitations and improve our safety systems.

During this preview phase, early users have flagged sensitive and biased images, which have helped inform and evaluate these latest mitigations.

We’re continuing to research how AI systems, like DALL·E, might reflect biases in their training data, and different ways we can address them.

During the research preview we have taken other steps to improve our safety systems, including:

  • Minimizing the chance of DALL·E being misused to create deceptive content by rejecting image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and distinguished political figures.
  • Making our content filters more accurate so that they’re more effective at blocking prompts and image uploads that violate our content policy, while still allowing creative expression.
  • Refining automated and human monitoring systems to protect against misuse.

These improvements have helped us gain confidence in our ability to invite more users to experience DALL·E.

Expanding access is an important part of deploying AI systems responsibly, because it allows us to learn more about real-world use and continue to iterate on our safety systems.
