
OpenAI, alongside industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies as articulated in the Safety by Design principles. This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling tech and society’s complex problems, aims to mitigate the risks generative AI poses to children. By adopting comprehensive Safety by Design principles, OpenAI and our peers are ensuring that child safety is prioritized at every stage of AI development. To date, we have made significant efforts to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively engage with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other government and industry stakeholders on child protection issues and enhancements to reporting mechanisms.
As part of this Safety by Design effort, we commit to:
- Develop: Develop, build, and train generative AI models that proactively address child safety risks.
  - Responsibly source our training datasets, detect and remove child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, and report any confirmed CSAM to the relevant authorities.
  - Incorporate feedback loops and iterative stress-testing strategies in our development process.
  - Deploy solutions to address adversarial misuse.
- Deploy: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
  - Combat and respond to abusive content and conduct, and incorporate prevention efforts.
  - Encourage developer ownership in safety by design.
- Maintain: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.
  - Commit to removing new AIG-CSAM generated by bad actors from our platform.
  - Invest in research and future technology solutions.
  - Fight CSAM, AIG-CSAM, and CSEM on our platforms.
This commitment marks an important step in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to release progress updates every year.