Could Facebook’s Election Study Shape the 2020 Outcome?




Last month Facebook announced a new initiative in which it is partnering with a hand-selected group of academics to study the influence of social media on the 2020 election. Unlike previous analyses of platforms’ impact conducted after an election, this effort will involve actively manipulating the Facebook and Instagram accounts of volunteers in the lead-up to Nov. 3 with little transparency, raising critical questions about the sites’ growing power over our democratic processes.

It is truly remarkable that, after nearly four years of national outrage over Russian interference in the 2016 election, Facebook would announce this step with just two months to go until one of the most contested elections in decades.

At first glance, Facebook’s new initiative appears long overdue. The company is partnering with a group of academics to conduct “objective, dispassionate, empirically grounded research” into the question: “What impact does [Facebook] have on democracy?”

The rest of its announcement, however, frames the effort not as a traditional dispassionate scientific analysis or an attempt to safeguard elections from foreign interference but rather as one intended “to amplify all that is good for democracy on social media, and mitigate against that which is not.”

What precisely does the company see as behaviors that are “good for democracy”? In Facebook’s eyes, “polarization” and “divisions” are negatives, while being “better informed about politics” is a positive. On its face, this assessment seems benign, but it raises critical questions about just what kind of voter Facebook views as “good for democracy.”

Yet what truly sets this current effort apart from previous research is its focus on active interventions to study which changes affect real voter behavior during a live election, rather than merely observing and documenting outcomes after the fact.

According to the announcement, up to 400,000 users are being recruited to participate in the research, signing a consent form that allows the company to make “targeted changes to some participants’ experiences with Facebook and Instagram. For example, volunteers could see more or fewer ads in specific categories such as retail, entertainment or politics, or see more or fewer posts in News Feed related to specific topics.” Some will also be asked to not log into the site for a period of time or to consent to having their web browsing habits logged.

The company dismisses the possibility that this work might impact the 2020 election, offering in its FAQ, “Is it likely that this research will change the outcome of an election? No. With billions of dollars spent on ads, direct mail, canvassing, organizing and get out the vote efforts, it is statistically implausible that one research initiative could impact the outcome of an election. The research has been carefully designed to not impact the outcome of the election.” As evidence of this, the company notes that it will be targeting 0.1% of eligible voters.

Given projections of the closeness of the 2020 race, however, 0.1% of eligible voters could very well change the outcome of the election, especially in closely fought swing states. Most importantly, even though Facebook will be manipulating the accounts of just 200,000 to 400,000 users, the changes it makes to their behaviors will propagate outward to their friends, friends of friends, friends of friends of friends, and so on. For example, imagine that those 400,000 users are shown a daily stream of anti-Biden content in their news feeds, chronicling a range of alleged misdeeds and criticizing his policies, and that they share or “like” at least some of those stories. If each of those 400,000 users has 200 friends (the median in 2014) and each of those friends has 200 friends, those changes would very quickly reach a much greater percentage of the U.S. electorate, flooding the site with anti-Biden posts. Asked about this, neither Facebook nor the undertaking’s two academic leads, Talia Stroud and Joshua A. Tucker, responded to requests for comment.
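To make the scale of that propagation concrete, here is a minimal back-of-the-envelope sketch. It assumes the 200-friends-per-user figure cited above; the electorate size, share rate, and per-share audience are hypothetical assumptions of mine, not figures from Facebook or the researchers.

```python
# Back-of-the-envelope reach estimate for the scenario described above.
# Assumptions (mine, not Facebook's): ~230 million eligible U.S. voters,
# 200 friends per user (the 2014 median cited above), and a hypothetical
# 10% of participants engaging with the altered content.

participants = 400_000            # upper end of the recruited volunteer pool
friends_per_user = 200            # 2014 median friend count cited above
eligible_voters = 230_000_000     # assumed rough size of the U.S. electorate
share_rate = 0.10                 # hypothetical fraction who share or "like"

# Direct reach: the manipulated accounts themselves.
print(f"Direct reach: {participants / eligible_voters:.2%} of eligible voters")

# First-degree reach: friends who could see the participants' shares and likes.
first_degree = participants * share_rate * friends_per_user
print(f"First-degree audience: {first_degree:,.0f} accounts")

# Second-degree paths (friends of friends). These overlap heavily in real
# networks, but even when discounted sharply they dwarf the 0.1% figure.
second_degree = first_degree * friends_per_user
print(f"Second-degree paths (before deduplication): {second_degree:,.0f}")
```

Even after aggressive discounting for overlapping friend networks, exposure at the second degree is why a target of 0.1% of voters understates the initiative’s potential footprint.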

Facebook appears to acknowledge the potential for inadvertent effects, noting that it “and our research partners will be monitoring the research at every step. In the highly unlikely event they detect unanticipated effects, they will stop the research and take corrective action as needed.” The company and the academic leads did not reply when asked how they would determine whether “unanticipated effects” were occurring, what visibility the public and policymakers would have into those effects, and at what threshold the research would be halted.

If the Russian government was allegedly able to influence the 2016 election with a fairly small amount of effort despite the same “billions of dollars spent on ads, direct mail, canvassing, organizing and get out the vote efforts” that Facebook believes will prevent the current effort from having an effect, why does the company believe this initiative won’t have an impact?

In the academic world, ethics bodies called Institutional Review Boards (IRBs) carefully evaluate research like Facebook’s, weighing its risks and benefits and considering the potential for inadvertent impacts. Facebook’s announcement is notable for declining to name the IRB that approved the research. Neither the company nor the lead researchers responded when asked which IRB approved it and how they weighed the nearly unprecedented risk of undermining confidence in an election. After all, even if Facebook’s effort does not change the voting outcome, its mere existence could stoke concerns of influence in a contested election.

The company confirmed that it has already begun gauging the incentives necessary to get volunteers to participate, with an eye toward beginning at least portions of the undertaking by the end of this month, so the project appears to be moving forward at speed.

What impact could it really have? In 2010, the company claimed that it was able to boost election turnout by 340,000 additional voters by adjusting their news feeds to encourage them to vote. Given the current effort’s emphasis on “whether and how [users] vote,” it is certainly possible that Facebook could repeat this work in 2020, leading to concerns about skewed election turnout. In 2014 the company famously showed that it could manipulate the emotions of its users at scale. That same research showed that those effects could spread from the manipulated user to that user’s Facebook friends, lending credence to the idea that the impacts of Facebook’s current election research effort could stretch far beyond its target of 0.1% of U.S. voters.

Two and a half years ago, when Facebook first announced its academic research partnership, I asked that initiative’s academic and ethical lead, the Social Science Research Council, whether it might ever permit active intervention in a U.S. election and how it would prevent that research from swaying the results. The organization’s president at the time responded that all such matters were “to be determined.” Today we have our answer, but we are no closer to having any form of transparency.

It is a remarkable commentary on the state of our nation that the policymakers, pundits and press members who have condemned Facebook for allowing the Russians to allegedly influence the 2016 election outcome now are expressing no concerns about Facebook turning the 2020 election into a research project to measure its sway over voters. The fact that neither the company nor the researchers involved will answer even the most basic questions about the initiative reminds us just how little transparency there is into a company where even mid-level employees have the ability to censor heads of state. As the Washington Post’s motto laments, “democracy dies in darkness.” Perhaps we should keep that in mind when witnessing Facebook use one of the most consequential presidential elections ever as an opaque social science experiment.

RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.




