Microsoft, Meta detail plans to fight election disinformation in 2024

Microsoft and Meta have unveiled very different initiatives to combat misinformation in 2024 – slated to be a busy election year around the globe – though whether either effort will prove effective is another question.

For its part, Microsoft announced a five-step election protection strategy it’ll roll out in the coming months “in the United States and other countries where critical elections will take place in 2024,” Microsoft president Brad Smith and Teresa Hutson, corporate VP of Technology for Fundamental Rights, said in a blog post.

More than 40 nations will hold national elections in 2024.

Microsoft’s first initiative is its Content Credentials service, through which Redmond will roll out digital watermarking metadata for images and videos, as designed by the Coalition for Content Provenance and Authenticity (C2PA), of which Microsoft is a member.

“Users can attach Content Credentials to their images or videos to show how, when, and by whom the content was created or edited, including if it was generated by AI,” Microsoft said. Content Credentials will enable campaigns “to assert that an image or video came from them while protecting against tampering by showing if content was altered after its credentials were created.”
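The tamper-evidence mechanism is worth unpacking. What follows is a minimal Python sketch of the underlying idea – a hash of the content is bound into a signed manifest, so any post-signing edit is detectable. This is not the real C2PA format, which embeds binary manifests in the file itself and signs them with X.509 certificates; the shared demo key and JSON structure here are assumptions for illustration only:

```python
import hashlib
import hmac
import json

# Hypothetical signing key standing in for a campaign's C2PA certificate.
SIGNING_KEY = b"campaign-demo-key"

def make_manifest(image_bytes: bytes, creator: str, ai_generated: bool) -> dict:
    """Bind provenance claims to a hash of the content at signing time."""
    claims = {
        "creator": creator,
        "ai_generated": ai_generated,
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Tampering shows up as a bad signature or a hash mismatch."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the claims themselves were edited after signing
    return manifest["claims"]["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()

original = b"...image bytes..."
manifest = make_manifest(original, "Example Campaign", ai_generated=False)
assert verify(original, manifest)          # untouched content checks out
assert not verify(b"doctored", manifest)   # any edit after signing is flagged
```

Flip a single byte of the image after signing and the recorded hash no longer matches – exactly the “altered after its credentials were created” signal Microsoft describes.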

The feature will launch as a private preview in (northern hemisphere) spring 2024, and will initially be offered only to political campaigns.

As The Reg noted previously, Microsoft’s Content Credentials watermarking tech, which it created with Adobe and other organizations in the C2PA, relies heavily on whether apps displaying marked content can process the metadata that labels it as such. Chrome doesn’t recognize it, so the “cr” symbol that marks an image won’t show up in the world’s most popular web browser at all.

Additionally, there’s nothing to stop bad folk from stripping the metadata, which Adobe told us can easily be recovered via its Content Credentials Cloud – assuming the publisher hosting the manipulated content is connected to said cloud, of course. 
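The recovery trick Adobe describes can be sketched as a “soft binding”: a registry maps a fingerprint of the content itself to the manifest originally attached to it, so provenance survives even when embedded metadata is removed. The registry and function names below are illustrative assumptions, not Adobe’s API, and production systems match on perceptual fingerprints or invisible watermarks rather than the exact hash used here for simplicity:

```python
import hashlib

# Hypothetical stand-in for Adobe's Content Credentials Cloud: a registry
# mapping content fingerprints to the manifests originally attached to them.
registry: dict[str, dict] = {}

def fingerprint(image_bytes: bytes) -> str:
    # Real deployments use perceptual fingerprints or invisible watermarks
    # so recovery survives re-encoding; an exact hash keeps this sketch simple.
    return hashlib.sha256(image_bytes).hexdigest()

def publish(image_bytes: bytes, manifest: dict) -> None:
    """Record the manifest against the content at publication time."""
    registry[fingerprint(image_bytes)] = manifest

def recover(image_bytes: bytes) -> dict | None:
    """Look up provenance even after embedded metadata has been stripped."""
    return registry.get(fingerprint(image_bytes))

image = b"...image bytes..."
publish(image, {"creator": "Example Campaign"})

# A copy with its metadata stripped still has identical pixels, so the
# registry lookup recovers the original manifest.
stripped_copy = bytes(image)
assert recover(stripped_copy) == {"creator": "Example Campaign"}
```

The catch, as noted above, is that recovery only works if whoever hosts the stripped copy actually checks against the registry.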

Content Credentials are designed to protect campaigns from having their media misused, but Microsoft made no mention of how it might police the spread of malicious misinformation through its services, including LinkedIn, which has had problems with AI-generated fakery in recent years. We’ve asked the company for clarity.

In addition to giving campaigns the ability to watermark their media, Microsoft has also formed a “Campaign Success Team” that will advise political campaigns on how to “navigate the world of AI [and] combat cyber influence campaigns.”

The Windows maker also plans to create an Election Communications Hub, through which security teams will be made available to election authorities, and to partner with organizations that label news sources as authoritative to ensure “[Bing] queries about election administration will surface reputable sites.”

Additionally, Microsoft plans to “use our voice as a company to support legislative and legal changes that will add to the protection of campaigns and electoral processes from deepfakes and other harmful uses of new technologies.” It’s starting by throwing its weight behind the bipartisan “Protect Elections from Deceptive AI Act” introduced in the US Senate in September, which hasn’t moved since being assigned to committee.

Meta mashing misinformation for the masses, maybe

Microsoft didn’t say whether Content Credentials would be required for political campaigns, and made no mention of policing advertisements on its services. Meta’s announcement, by contrast, was entirely about ads.

“Advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered,” Meta said in a blog post published today.

The move will affect any media that depicts a real person doing or saying something they didn’t actually do, fake or altered images of real events, and/or “realistic-looking” imagery of people or events that didn’t actually happen. Changes that “are inconsequential or immaterial to the claim” in the ad don’t need to be flagged.

“Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered,” the company said. “If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties.”

The change will go into effect in 2024, Meta said, and will apply globally.

Just last month, however, Meta’s Oversight Board ruled that a video of US President Biden, manipulated to make it appear as if he were inappropriately touching his granddaughter, could remain on Facebook because, despite being altered to mislead, it didn’t technically violate the rules.

The company’s manipulated media policy says users shouldn’t post footage created using AI, including deepfakes, that “merges, combines, replaces, and/or superimposes content onto a video, creating a video that appears authentic,” as well as prohibiting content that would “likely mislead an average person to believe a subject of the video said words that they didn’t say.”

Because the Biden clip was edited conventionally rather than created with AI, it fell outside that policy’s narrow scope – so, doctored or not, the fake video was allowed to remain.

As for enforcement of its new rules, Meta said it planned to rely on “independent fact-checking partners” to review media for fake content that slipped past the company’s new disclosure requirement.

Meta hasn’t yet answered our questions about the announcement. We approached other social media firms to learn if they had any new election disinformation initiatives to share, and will update when we hear back. ®
