Meta Announces Plan to Label AI-Generated Content

In a significant move aimed at combating the rising threat of deepfakes, Meta, the parent company of Facebook and Instagram, announced on Friday its intention to introduce labeling for AI-generated content starting in May.

This decision comes amidst growing concerns from users and governments regarding the proliferation of manipulated media and its potential to deceive and misinform.

Addressing the need for transparency and accountability, Meta stated that it would stop removing manipulated images and audio solely for being manipulated, provided the content does not violate its other rules. Instead, the company will apply labels and added context to inform users about the authenticity of the content, while also safeguarding freedom of speech.

This strategic shift follows criticism from Meta’s oversight board, which called for urgent revisions to the company’s approach to manipulated media, given the rapid advancements in AI technology.

With elections taking place around the world in 2024, there are heightened fears of AI-powered disinformation campaigns influencing voter perceptions.

The new labeling system, dubbed “Made with AI,” will identify content created or altered using artificial intelligence across various formats, including video, audio, and images. Content deemed at high risk of misleading the public will receive more prominent labels, enhancing user awareness and trust.

Monika Bickert, Meta’s Vice President of Content Policy, emphasized the importance of providing transparency and additional context to address manipulated content effectively.

The implementation of these labels aligns with an agreement reached in February among major tech companies to combat manipulated content aimed at deceiving voters.

Under the new standard, AI-generated content will remain on the platform unless it violates other Community Standards, such as hate speech or voter interference.

This approach reflects Meta’s commitment to balancing free expression with the need to curb the spread of harmful misinformation.

The rollout will occur in two phases: labeling of AI-generated content begins in May 2024, and in July Meta will stop removing manipulated media on the basis of its previous policy alone.