How Social Media Shields Itself From Public Scrutiny

Social media platforms are our modern digital public squares in which the public, press, and policymakers come together on what amounts to digital private property to debate our nation’s future. Within these Orwellian walled gardens, ever-changing rules of “acceptable speech” dictate what Americans are permitted to say and see. Dissenting views are often redefined as “misinformation” and “hate speech,” while inconvenient facts disappear into the “memory hole.” As Silicon Valley has become the de facto Ministry of Truth, why have the corporate gatekeepers not encountered greater pushback? The answer is that in the absence of governmental regulations requiring greater transparency about their operations, they are able to shield their growing power from public scrutiny.

These companies increasingly operate as black boxes. In their view, the public has no right to understand how they function, what rules they operate by, how those rules are decided, or even what happens to users’ personal data. When questioned, the companies respond with either silence or carefully worded statements that avoid transparency.

It is a remarkable commentary on just how opaque social media is that we don’t even know how many posts there are each day across the major platforms. While it is possible to estimate Twitter’s size through statistical analysis of one of its developer data feeds, the company itself no longer publishes regular statistics on the number of daily tweets and retweets, and it declines to provide such numbers when asked. User counts are reported through self-defined and regularly changing metrics like “active users” rather than simple standardized measures such as how many distinct users tweeted each day. How could Congress ever hope to regulate social platforms, assuming such an effort would withstand legal challenges, when it doesn’t even know how big those platforms are?

When these companies do release actual statistics, they are worded in careful ways that can cause confusion. Facebook has for several years issued statements such as “99% of the ISIS and al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us” and “we took action on 1.9 million pieces of ISIS, al-Qaeda and affiliated terrorism propaganda, 99.5% of which we found and flagged before users reported them to us.” This statement is frequently misreported, with the New York Times stating that “Facebook said its A.I. found 99.5 percent of terrorist content on the site.” But Facebook’s statement doesn’t say it removed 99.5% of all terror content; it says that of the 1.9 million posts it removed as terror violations, 99.5% were first identified by the company’s algorithms (a spokesperson confirmed this as the correct interpretation of its statement). In reality, it is unknown how much terror content remains on the platform, having either escaped the company’s notice or been deemed not a violation.

What about Facebook’s much-touted fact-checking program? The company goes to great lengths to emphasize the “independence” of its third-party fact-checking partners. Asked in 2018 whether Facebook ever required its fact-checkers to change their ratings, a spokesperson responded that its FAQ page explicitly stated, “Corrections and disputes are processed at the fact-checker’s discretion.” (Interestingly, that FAQ page now redirects to a new FAQ that points to a new disputes page that noticeably lacks that language.)

Last May, a spokesperson confirmed that this was still Facebook’s policy, stating, “Third-party fact-checking partners operate independently from Facebook and are certified through the non-partisan International Fact-Checking Network. Publishers appeal to individual fact-checkers directly to dispute ratings.” Yet last month, Fast Company magazine reported that “Facebook may intervene if it thinks that a piece of content was mistakenly rated, by asking fact-checkers to adjust their ratings, a spokesperson acknowledged to Fast Company.”

How does the company reconcile three years of denials that it pressures fact-checkers to change their ratings with Fast Company’s revelation that it has done so? Asked about this, a spokesperson clarified that the company doesn’t actually change the ratings itself and that publishers can appeal directly to fact-checkers to dispute ratings. Yet the spokesperson added a third acknowledgment missing from the company’s previous responses: that Facebook may also ask fact-checkers to change their ratings when the company believes they are not in keeping with its definitions.

The company never technically lied. Both times it was asked whether it had ever intervened in a rating, it didn’t deny doing so; it merely issued statements that disputes are at fact-checkers’ “discretion” and that publishers must appeal directly to the fact-checkers in disputes. It simply left out the fact that there was a third route: the company requiring a fact-checker to change its rating. Facebook’s notable omission offers a textbook example of how the company’s silence and carefully worded statements allow it to hide its actions from public scrutiny.

What little transparency exists for social platforms tends to come in response to leaks to the media.

Prior to The Guardian’s 2017 publication of a trove of internal Facebook moderation documents, there was little public visibility into what precisely the company considered to be “acceptable” versus “prohibited” speech. Only through the leaked documents did the public learn that the company’s official policy explicitly permitted graphic descriptions of violence against women and the bullying of children, among other surprises, prompting public outcry.

In the aftermath of The Guardian’s release, Facebook bowed to public pressure and published a copy of its moderation guidelines on its website. Yet these rules are apparently unevenly applied. The Wall Street Journal reported that in India the company quietly waived its hate speech rules for important political leaders in an attempt to curry favor with the government. Only after its actions became public was the company forced to reverse course, but it is unclear how many other influential figures in how many other countries may similarly enjoy such exemptions.

Social media companies increasingly partner with academia on research, emphasizing the “transparency” and “accountability” of these collaborations. In reality, there is typically little of either. Take Facebook’s 2014 study in which it partnered with Cornell University to manipulate the emotions of more than 689,000 users. When the researchers submitted it for publication, the journal — Proceedings of the National Academy of Sciences — was initially “concerned” about the “ethical issu[e]” of manipulating users’ emotions without their consent, until it “queried the authors and they said their local institutional review board had approved it” and thus the journal would not “second-guess” the university. Only after public outrage erupted did it emerge that the university’s review board had determined that, since only Facebook employees had access to raw user data, with Cornell University’s researchers having access only to the final results of the analysis, “no review by the Cornell Human Research Protection Program was required.”

This raises concerns over the rigor of the ethical review of Facebook’s latest academic collaboration, which will actively manipulate volunteers’ Facebook accounts in the lead-up to Election Day, Nov. 3. Neither of the two academic leads nor their institutions responded to multiple requests for comment. The University of Texas at Austin told RealClearPolitics that it was “learning about the complex issues you raise” and said it would provide a response, but it never did despite multiple follow-up requests. The ethical review board for the project, NORC (National Opinion Research Center) at the University of Chicago, declined to answer any questions about the project, stating, “All inquiries about this project need to go directly to Facebook,” while the university itself did not respond. Facebook declined to answer any of the detailed questions posed to it about the research. Asked whether the current effort relied on the same “pre-existing data” exemption used for the Cornell University study to avoid ethical review, none of the organizations, including the designated institutional review board, answered.

Thus, what appears at first glance to be an open, “transparent” academic research initiative turns out to be just as opaque as every other Facebook effort. Indeed, by choosing a private university as the ethics review board for its pre-election effort, Facebook is even able to shield the project from Freedom of Information Act requests.

How might Congress help remedy these concerns? Here are some suggestions.

  • The first is to require social platforms to publish standardized monthly statistics on the number of posts, the number of users, and other metrics that would help evaluate their reach.
  • The second would be to convene annual external review panels, chosen by bodies such as the National Academies of Sciences, Engineering, and Medicine without input from the companies, to review a randomized sample of each platform’s moderation and fact-checking actions, including for unconscious racial and cultural bias.
  • The third would be to establish a centralized ethical review board under the National Academies or a similar body that would be required to review and approve each major research initiative, such as Twitter’s “Healthy Conversation” effort or Facebook’s new elections study, ensuring genuine external review without exemptions for “pre-existing data.”

In the end, as the Washington Post’s motto reminds us, “democracy dies in darkness.” It is time to finally shed light on the inner workings of Silicon Valley.

RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.




