Major AI Vendors Commit to Combating Nonconsensual Deepfakes and Child Sexual Abuse Material

The White House has announced that several major AI vendors have committed to taking significant steps to combat nonconsensual deepfakes and child sexual abuse material. Adobe, Cohere, Microsoft, Anthropic, OpenAI, and data provider Common Crawl have pledged to responsibly source the datasets they create and use to train AI, and to safeguard those datasets from image-based sexual abuse.

Commitments by AI Vendors

These organizations, excluding Common Crawl, have also committed to incorporating feedback loops and mitigation strategies into their development processes to prevent their AI models from generating sexual abuse imagery. Furthermore, they have agreed to remove nude images from AI training datasets when appropriate, depending on the purpose of the model.

Notably, these commitments are self-policed, and several AI vendors, including Midjourney and Stability AI, opted not to participate in the initiative. OpenAI’s pledges, in particular, raise questions, since CEO Sam Altman said in May that the company would explore how to responsibly generate AI porn.

White House’s Broader Effort

The White House has touted these commitments as a win in its broader effort to identify and reduce the harm of deepfake nudes. This initiative aligns with ongoing efforts to address ethical challenges and ensure the responsible use of AI technology.

In summary, while the commitments by major AI vendors mark a positive step towards combating nonconsensual deepfakes and child sexual abuse material, the initiative’s success will depend on how rigorously the vendors uphold these self-policed measures.
