Meta will be labeling a wider range of video, audio, and image content as “Made with AI” starting in May 2024, when they detect industry-standard AI image indicators or when people disclose that they are uploading AI-generated content.
Meta To Label AI-Generated Content, Pictures, and Videos
Meta is changing the way they handle manipulated media on Facebook, Threads, and Instagram based on feedback from the Oversight Board. The Board recommended that Meta update their approach to cover the broader range of manipulated content that exists today and provide context about that content through labels.
Meta’s existing approach is too narrow: it only covers videos that are created or altered by artificial intelligence to make a person appear to say something they didn’t say. As AI technology quickly evolves, people have developed other kinds of realistic AI-generated content, such as audio and photos.
Meta’s policy review process also suggested these changes, which are based on public opinion surveys and consultations with academics, civil society organizations, and others.
Meta already adds an “Imagined with AI” label to photorealistic images created using their Meta AI feature. Now Meta’s “Made with AI” labels on AI-generated video, audio, and images will be applied when Meta detects industry-shared signals that content is AI-generated, or when people self-disclose that they are uploading AI-generated content.
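Meta does not publish its detection code, but the “industry-shared signals” it refers to include provenance metadata standards such as the IPTC DigitalSourceType vocabulary and C2PA Content Credentials, which generators embed in a file. As an illustration only, and under the assumption that such metadata is present in plain text inside the file’s XMP packet, a naive byte-scan detector might look like this (the function name and approach are hypothetical, not Meta’s implementation):

```python
# Illustrative sketch only, NOT Meta's actual pipeline: scan a file's raw
# bytes for IPTC DigitalSourceType vocabulary terms that image generators
# commonly embed in XMP metadata to signal AI involvement.

# Real IPTC DigitalSourceType terms that indicate AI-generated media.
AI_SOURCE_TYPES = (
    b"trainedAlgorithmicMedia",                # fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",   # media partly made with AI
)

def has_ai_indicator(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain an IPTC DigitalSourceType
    term indicating the content was created or composited with AI."""
    return any(term in image_bytes for term in AI_SOURCE_TYPES)
```

A production detector would parse the XMP/EXIF structures properly and, for C2PA, verify a cryptographically signed manifest rather than substring-matching, but the sketch shows the kind of signal involved.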
The Oversight Board also argued that Meta unnecessarily risks restricting freedom of expression when they remove manipulated media that does not otherwise violate their Community Standards. The Board therefore recommended a “less restrictive” approach to manipulated media, such as labels that add context.
Meta agrees that providing transparency and additional context is now the better way to address manipulated content. If Meta determines that digitally created or altered images, videos, or audio create a particularly high risk of materially deceiving the public on a matter of importance, they may add a more prominent label so people have more information and context.
Meta will still remove content, whether created by AI or a person, if it violates their policies against voter interference, bullying, harassment, violence, and incitement, or any other policy in their Community Standards.
Meta also has a network of nearly 100 independent fact-checkers who will continue to review false and misleading AI-generated content. When fact-checkers rate content as false or altered, Meta shows it lower in Feed so fewer people see it, and adds an overlay label with additional information. Meta also rejects ads that contain debunked content, and since January, advertisers have had to disclose, in certain cases, when they digitally create or alter a political or social issue ad.
Meta plans to start labeling AI-generated content in May 2024 and will stop removing content solely on the basis of their manipulated video policy in July. This timeline gives people time to understand the self-disclosure process before Meta stops removing the smaller subset of manipulated media.
This post was last modified on April 5, 2024 8:38 pm