Breaking News

Meta Announces Implementation of ‘Made with AI’ Labels Starting Next Month

Meta, the parent company of Facebook, announced significant policy changes regarding digitally created and altered media ahead of the upcoming US elections, which are expected to test its ability to combat deceptive content generated by new artificial intelligence (AI) technologies.

Starting next month, Meta will introduce “Made with AI” labels to AI-generated videos, images, and audio posted on its platforms, expanding its previous policy that addressed only a limited range of doctored videos, according to Monika Bickert, Vice President of Content Policy at Meta.

Additionally, Meta will implement separate and more prominent labels for digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether AI or other tools were used to create the content.

This new approach marks a shift in Meta’s treatment of manipulated content: rather than simply removing a narrow set of posts, the company will keep such content accessible while giving viewers information about how it was created.

Previously, Meta announced a plan to detect images created with generative AI tools from other companies by reading invisible markers embedded in the files, but it did not specify a start date. The new labeling approach will apply to content across Meta’s platforms, including Facebook, Instagram, and Threads.

Meta will begin applying the more prominent “high-risk” labels immediately, aiming to address concerns about deceptive media ahead of the November US presidential election. Tech researchers warn that the election could be influenced by new generative AI technologies, as political campaigns increasingly adopt AI tools and push the boundaries of platforms’ guidelines.

In February, Meta’s oversight board criticized the company’s existing rules on manipulated media as “incoherent” after reviewing a video of US President Joe Biden posted on Facebook last year. The altered footage wrongly suggested inappropriate behavior by Biden and was allowed to remain on the platform, as Meta’s existing policy only addressed AI-generated content or videos manipulating speech.