As voters cast their ballots in the midterm elections, social media companies have an additional concern: protecting that process.
After reports of fake accounts and false news stories infiltrating social networks during the 2016 presidential election, companies like Facebook and Twitter have doubled down on efforts to prevent election manipulation.
At stake is not only the validity of information found on their platforms, but also the trust of their users.
Business Insider Intelligence’s Digital Trust Report 2018, released in August, reported that more than 75 percent of respondents said Facebook was “extremely likely” or “very likely” to show them deceptive content like “fake news, scams or click bait.” Twitter didn’t do much better, with more than 60 percent of respondents agreeing the platform has deceptive content.
In January 2017, reports emerged that foreign entities like the Russia-based Internet Research Agency used social media platforms to spread false and divisive information throughout the 2016 campaign. By September 2017, Facebook announced it had linked over 3,000 political ads run on its platform between 2015 and 2017 to Russia. Facebook later said over 10 million users had been exposed to the ads.
This past September, Facebook and Twitter executives testified before Congress about accusations that foreign operatives’ use of their platforms could have affected the outcome of the presidential election.
Spokesmen for both Facebook and Twitter said that in the aftermath of the 2016 election, the companies have ramped up efforts to identify and remove fake accounts and protect users from false information.
Yoel Roth, Twitter’s head of site integrity, said the company has cracked down on “coordinated platform manipulation,” or people and organizations using Twitter to mislead other users and spread false information.
During the 2016 campaign, misinformation appeared online in the form of fake accounts and online publications that spread hyper-partisan views, among other tactics. Leading up to the November midterms, experts say the techniques are similar but the people spreading misinformation have gotten smarter.
The social networks have, too.
“We haven’t seen a fundamental shift in what (the bad actors) are doing. But in 2016 it was like breaking into a house with the door wide open, and now there’s at least a dog inside that’s going to bark,” said Bret Schafer, a social media analyst at the Alliance for Securing Democracy, a bipartisan national security advocacy group.
Schafer said social networks’ efforts to protect their platforms and users have created a “layer of friction” that makes it more challenging to carry out misinformation campaigns. Efforts include cracking down on “bad actors” who use fake accounts to spread misinformation and requiring political advertisers to verify their identity by providing a legitimate mailing address.
Facebook has developed a multifaceted approach to elections integrity. The company has nearly doubled its security team ahead of the 2018 midterms, and is taking a more proactive role in identifying “coordinated inauthentic behavior,” according to spokeswoman Brandi Hoffine Barr.
“We now have more than 20,000 people working on safety and security, we have put in place advanced systems and tools to detect and stop threats, and developed backstops … to help address any unanticipated threats as quickly as possible,” Hoffine Barr said.
Many of the company’s efforts begin with detecting and removing false accounts. In May, Facebook said it disabled nearly 1.3 billion fake accounts in the first half of 2018. As those accounts are often the source of false information on the site, Facebook said it’s combating the spread of false news by removing them.
Facebook also announced in October that it had removed 559 pages and 251 accounts for breaking the platform’s rules for “spam and coordinated inauthentic behavior,” which includes creating large networks of accounts to mislead other users. On Facebook, that can look like people or organizations creating false pages or fake accounts.
Hoffine Barr described Facebook’s work as “a continuous effort,” and noted that the company isn’t working in isolation.
“Ahead of the upcoming midterm elections, we are working closely with federal and state elections officials, as well as other technology companies, to coordinate our efforts and share information,” she said.
Two weeks before the midterms, Facebook uncovered a disinformation campaign from Iran that attempted to sow discord over hot-button topics such as President Donald Trump and immigration. There’s currently no evidence the campaign was tied to the Iranian government.
Twitter has also taken action against bad actors, recently purging accounts the company had previously locked for “suspicious changes in behavior.” In an Oct. 1 blog post, Twitter executives detailed three “critical” areas of its efforts to preserve election integrity.
The first, an update to Twitter’s rules, includes expanding what Twitter considers a fake account. The company currently uses a number of criteria to make that determination, including whether the profile uses stolen or copied photos and provides intentionally misleading profile information. The second category, described as “detection and protection,” entails identifying and removing spam accounts, as well as improving Twitter’s ability to ban users who violate its policies.
The most visible efforts fall under “product developments.” From giving users control over the order of their timelines to adding an elections label for candidates’ accounts, this category is all about helping Twitter’s users stay informed.
Roth said the company is also sharing information on “potential state-backed operations” with researchers to learn more about attacks on election integrity.
“Our goal is to try and stay one step ahead in the face of new challenges going forward,” Roth said. “Protecting the public conversation is our core mission.”
Because these efforts by social networking companies are fairly new, it can be hard to gauge their effect. Facebook is highlighting a recent study from Stanford University and New York University, in which researchers found user interactions with fake news sites declined by more than half on Facebook after the 2016 election.
Schafer said one of the big differences from 2016 is the decline in automated activity. He said Twitter in particular has gotten “significantly more aggressive” about shutting down bots.
Still, he said social networks are in a “tricky situation” when it comes to regulating content. Too much regulation and they are criticized for suppressing viewpoints. Too little and their own platform rules aren’t enforced.
“Unless we want to get them into the space of actively regulating content and making judgment decisions on what is and what is not factually accurate, you need to accept there’s a certain amount of ‘bad activity’ that’s going to happen on the platform,” Schafer said.
And while those companies are cracking down on purveyors of misinformation, others are emphasizing tools to help users distinguish fact from fiction.
On Oct. 2, New York-based tech startup Our.News launched a Google Chrome and Firefox browser extension called Newstrition that provides users with background information on the publisher of any given article, as well as third-party fact checking.
The tool was developed in conjunction with the Newseum and the Freedom Forum Institute, both nonprofits dedicated to preserving the First Amendment.
Unlike traditional fact-checking tools that label articles as true or false, Our.News CEO Richard Zack said the Newstrition tool lets people look at validated background information for any given article and make that determination themselves.
“One of the things we found as we did research and talked to people … is that the public feels like they’re not part of the (news) process,” Zack said. “They feel like they’re not being heard in a lot of ways.”
Newstrition also invites users to give their opinion on news articles through a public ratings system. Much like Amazon or Yelp reviews, individual responses are then aggregated to show the “public consensus,” Zack said.
“We’re actually saying, ‘We want to hear your thoughts. We want to know what you think. You are a part of the process,’” he said.
The extension has already received thousands of downloads, as well as attention from media companies. Zack said although Our.News is not at liberty to discuss specifics, the company is talking with several major news publishers about integrating the tool on their websites.
“If people can’t figure out what to believe, then that undermines the entire First Amendment,” Zack said. “It undermines the freedom of the press as an institution.”
Going forward, social networks and experts say one thing is clear: this is far from the end of misinformation campaigns. Schafer said that although the issue is especially timely during an election year, those efforts are going on behind the scenes every day.
“It’s not as if these accounts somehow appear before elections and then they go back into hibernation and they’ll come back in 2020,” he said. “They’re going to work every single day, chipping away at people’s trust in democracies or democratic institutions or just further inflaming partisan debates.”