Meta’s New Approach to Deepfakes: More Labels, Fewer Takedowns
Meta, the parent company of Facebook, Instagram, and WhatsApp, has announced a significant change to its policies on AI-generated content and manipulated media, in response to criticism from its Oversight Board, the independent body that reviews Meta’s content moderation decisions.
Starting next month, Meta will label a broader range of such content, including adding a “Made with AI” badge to deepfakes. Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness using artificial intelligence techniques. These fabrications can be convincing enough to deceive the public, sowing distortion and confusion.
In addition to the “Made with AI” badge, Meta will provide extra context for content manipulated in ways that could mislead the public on important issues, which could lead to more content being labeled. This matters in a year with major elections worldwide: the goal is to help users make informed decisions about what they see and share.
However, Meta will only apply labels to deepfakes that carry “industry standard AI image indicators” or whose uploader discloses them as AI-generated. This policy change could result in more AI-generated content and manipulated media remaining on Meta’s platforms. It marks a shift from removing such content outright to labeling it and providing additional context.
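As an illustration of what such indicators can look like: one industry-standard signal is the IPTC digital source type `trainedAlgorithmicMedia`, a value some generators embed in an image’s XMP metadata (C2PA content credentials are another). The sketch below is a deliberately naive check, not Meta’s actual detection pipeline, and simply scans a file’s raw bytes for that marker; real systems parse the metadata structures properly.

```python
# Naive sketch (assumption: a platform checking for the IPTC
# "trainedAlgorithmicMedia" digital-source-type marker, one of the
# industry-standard indicators that an image was AI-generated).
# Real pipelines parse XMP/C2PA metadata rather than scanning raw bytes.

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType value

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-source marker."""
    with open(image_path, "rb") as f:
        data = f.read()
    return AI_SOURCE_MARKER in data
```

A byte scan like this produces false positives on files that merely mention the string, which is one reason provenance standards pair metadata with cryptographic signing.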
The change is expected to be fully rolled out by July. It is part of Meta’s ongoing effort to balance the benefits and risks of AI and deepfake technology: while these tools can enable new forms of creative expression, they also pose significant challenges for content moderation and misinformation.
Read More: AI Startups Shine at Y Combinator’s Winter 2024 Demo Day
Read More: Brave Reveals AI Assistant Leo for iOS Users