Facebook starts fact checking photos/videos, blocks millions of fake accounts per day

Facebook has begun letting its fact checking partners review photos and videos in addition to news articles, and to proactively flag stories before Facebook asks them to. Facebook is also now preemptively blocking the creation of millions of fake accounts per day. Facebook revealed the news on a conference call with journalists about its election integrity efforts, which included Chief Security Officer Alex Stamos, who is reportedly leaving Facebook later this year.

Stamos outlined how Facebook is building ways to address false identities and fake accounts; false audiences grown illicitly or pumped up to make content appear more popular; the act of spreading false information; and false narratives that are intentionally deceptive and shape people’s views beyond the facts. “It’s important to match the right approach to each of these challenges,” says Stamos, explaining that Facebook customizes its solutions to these problems for different countries around the world.

Articles flagged as false by Facebook’s fact checking partners have their reach reduced and display Related Articles below, showing perspectives from reputable news outlets.

Samidh Chakrabarti, Facebook’s product manager for civic engagement, also explained that Facebook is now proactively looking for foreign-based Pages producing civic-related content inauthentically. It quickly removes them from the platform if a manual review by the security team finds they violate the terms of service, which he said can prevent divisive memes from going viral. Facebook first piloted this tool in the Alabama special election, but has now deployed it to protect the Italian elections and will use it for the U.S. midterm elections.

Advances in machine learning allow Facebook “to find more suspicious behaviors without assessing the content itself” to block millions of fake account creations per day “before they can do any harm”, says Chakrabarti.
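Facebook has not published the behavioral signals its systems use, but a toy version of content-blind detection might look at signup velocity alone. In this minimal sketch, the `SignupMonitor` class, the window size, and the per-IP threshold are all illustrative assumptions, not Facebook’s actual method:

```python
# Illustrative sketch only: flags bulk account creation from signup
# metadata (IP address and timing) without inspecting any content.
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look-back window (assumed value)
MAX_PER_WINDOW = 5      # signups allowed per IP in the window (assumed value)

class SignupMonitor:
    def __init__(self):
        self.recent = defaultdict(deque)  # ip -> timestamps of recent signups

    def is_suspicious(self, ip, timestamp):
        q = self.recent[ip]
        # Drop signups that fell outside the look-back window.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        q.append(timestamp)
        # Suspicious once this IP exceeds the per-window threshold.
        return len(q) > MAX_PER_WINDOW

monitor = SignupMonitor()
# Six rapid signups from one IP: the sixth crosses the threshold.
flags = [monitor.is_suspicious("203.0.113.7", t) for t in range(6)]
print(flags)  # → [False, False, False, False, False, True]
```

A real system would combine many such behavioral features in a machine-learned model rather than a single hand-set threshold, but the key property is the same: no post content is ever examined.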

Facebook implemented its first slew of election protections back in December 2016, including working with third-party fact checkers to flag articles as false. But those red flags were shown to entrench some people’s belief in false stories, leading Facebook to shift to showing Related Articles with perspectives from other reputable news outlets. As of yesterday, Facebook’s fact checking partners began reviewing suspicious photos and videos, which can also spread false information. This could reduce the spread of false news image memes that live on Facebook and require no extra clicks to view, like doctored photos showing the Parkland school shooting survivors ripping up the Constitution.

Normally, Facebook sends fact checkers stories that users have flagged and that are going viral. But now, in countries like Italy and Mexico that are anticipating elections, Facebook has enabled fact checkers to proactively flag things, because in some cases they can identify false stories that are spreading before Facebook’s own systems do. “To reduce latency in advance of elections, we wanted to ensure we gave fact checkers that ability,” says Facebook’s News Feed product manager Tessa Lyons.

With the midterms coming up quickly, Facebook has to both secure its systems against election interference and convince users and regulators that it has made real progress since the 2016 presidential election, when Russian meddlers ran rampant. Otherwise, Facebook risks another endless news cycle about it being a detriment to democracy, one that could trigger reduced user engagement and government intervention.