Facebook Audit Exposes Algorithm Biases in Policing Speech

Facebook this week released the results of its long-awaited civil rights audit, which was two years in the making. While most of the report focuses on issues of workplace diversity and content policies, an entire chapter is devoted to algorithmic bias, and references to the company’s growing reliance on artificial intelligence to moderate its platform are woven throughout the report’s 89 pages. As Silicon Valley increasingly outsources censorship to opaque AI algorithms, what are the possible inadvertent consequences for democracy?

Like most of Silicon Valley, Facebook has rushed to replace its army of tens of thousands of human content moderators with AI algorithms. In addition to the dramatic cost savings, AI moderation is scalable, allowing the company to review an ever-growing percentage of what its users say and share on its platform.

The word “algorithm” appears 73 times in Facebook’s report, signifying how central AI has become to the company’s future. Just one year ago, 65% of the hate speech posts the company removed were first identified by one of its algorithms before any human reported it. By March of this year, that number had increased to 89%.

According to the report, Facebook “removes some posts automatically, but only when the content is either identical or near-identical to text or images previously removed by its content review team as violating Community Standards, or where content very closely matches common attacks that violated policies. … Automated removal has only recently become possible because its automated systems have been trained on hundreds of thousands of different examples of violating content and common attacks. … In all other cases when its systems proactively detect potential hate speech, the content is still sent to its [human] review teams to make a final determination.”

Automatically removing reposts of previously deleted hate speech works much like the blacklists of known terrorism or child exploitation imagery the company already uses, and it is relatively uncontroversial: it only prevents users from reposting content that was already banned.
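
Facebook has not published how this matching works, but the general technique is well understood: fingerprint each new post and compare it against fingerprints of content human reviewers have already removed. The Python sketch below illustrates that idea for text using character shingles and Jaccard similarity; the blacklist, shingling scheme, and threshold are hypothetical stand-ins for illustration, not Facebook's actual system.

```python
# Minimal sketch of near-duplicate matching against previously removed content.
# The blacklist, shingling scheme, and threshold are illustrative assumptions,
# not Facebook's actual implementation.

def shingles(text: str, k: int = 5) -> set:
    """Break text into overlapping k-character shingles (normalized)."""
    text = " ".join(text.lower().split())  # collapse whitespace, lowercase
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical fingerprints of posts a human reviewer already removed.
previously_removed = [
    "example of a previously removed violating post",
]
blacklist = [shingles(p) for p in previously_removed]

def matches_prior_removal(post: str, threshold: float = 0.9) -> bool:
    """True if the post is identical or near-identical to removed content."""
    post_shingles = shingles(post)
    return any(jaccard(post_shingles, s) >= threshold for s in blacklist)

if __name__ == "__main__":
    repost = "Example of a previously removed violating  post"
    print(matches_prior_removal(repost))        # True: near-identical repost
    print(matches_prior_removal("unrelated"))   # False: goes to human review
```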

Yet the acknowledgement that the company now goes beyond this to remove “content very closely match[ing] common attacks that violated policies” shows that Facebook’s algorithms now actually make their own decisions about what kinds of speech to prohibit. That almost 90% of the company’s hate speech removals were initiated by an algorithm means these black boxes of software code now codify the de facto “speech laws” of modern society.

Despite their enormous power, these algorithms are among the company’s most closely guarded trade secrets, and even U.S. policymakers are kept in the dark about how they work. On even the most basic details, such as how often the algorithms are wrong or how much violating content they miss, social media companies release only carefully worded statements and decline to comment when asked for the kinds of industry-standard statistics that would permit further scrutiny of their accuracy.

Chapter 6 of Facebook’s audit opens with the acknowledgement that “AI is often presented as objective, scientific and accurate, but in many cases it is not. Algorithms are created by people who inevitably have biases and assumptions, and those biases can be injected into algorithms through decisions about what data is important or how the algorithm is structured, and by trusting data that reflects past practices, existing or historic inequalities, assumptions, or stereotypes.” The chapter adds that “as algorithms become more ubiquitous in our society it becomes increasingly imperative to ensure that they are fair, unbiased, and non-discriminatory, and that they do not merely magnify pre-existing stereotypes or disparities.”

While Facebook has built a number of tools and workflows internally to attempt to catch bias issues in its algorithms, the audit report acknowledges that such tools can only help so much. Truly mitigating bias requires a diverse workforce that can see the kinds of biases such tools aren’t designed to capture. As the report notes, “A key part of driving fairness in algorithms is ensuring companies are focused on increasing the diversity of the people working on and developing FB’s algorithms.”

Why are software tools and bias analyses not enough to truly minimize algorithmic bias? Because without a sufficiently diverse workforce, the biases found by those audits might not be seen as problems.
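
Facebook’s report does not describe its internal fairness tools, but the basic check such a tool might run is easy to sketch: measure how often a moderation model flags content from different groups and report any large gap. The example below uses invented groups, decisions, and an arbitrary threshold purely for illustration; its point is that the tool can only surface a number, while people decide whether that number is a bias to fix or a feature to keep.

```python
# Minimal sketch of the kind of disparity check a bias audit might run.
# Group labels, flag decisions, and the threshold are invented for illustration;
# this is not Facebook's internal tooling.

from collections import Counter

# Hypothetical moderation decisions: (author's group, was the post flagged?)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

def flag_rates(records):
    """Fraction of posts flagged, per group."""
    flagged, totals = Counter(), Counter()
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += was_flagged
    return {g: flagged[g] / totals[g] for g in totals}

rates = flag_rates(decisions)
disparity = max(rates.values()) - min(rates.values())

print(rates)                      # {'group_a': 0.25, 'group_b': 0.75}
print(f"disparity: {disparity}")  # 0.5

# The audit can only report the number. Whether a 0.5 gap is a bias to remove
# or an acceptable outcome is a judgment made by the people reviewing it.
if disparity > 0.2:  # arbitrary illustrative threshold
    print("flag-rate disparity exceeds threshold; needs human review")
```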

As Facebook races to narrow its acceptable speech rules and flag posts by President Trump and other Republicans, its algorithms will in turn increasingly learn to flag posts by conservatives as “hate speech.” An AI fairness audit might flag that the algorithms are heavily biased against conservatives, but if Facebook’s workforce skews largely liberal, that bias might be considered a desired feature rather than a problematic one to be removed.

When Facebook proposed a new News Feed algorithm in 2017, its algorithmic bias audit showed that conservative news outlets would be affected significantly more than other outlets. The company viewed this as a positive step towards reining in misinformation on the platform, and only the intervention of the company’s most senior Republican, Vice President of Global Policy Joel Kaplan, convinced its engineers to tweak the algorithm to lessen its impact on conservative news outlets.

In other words, AI bias audits and algorithmic fairness initiatives mean little when a company’s workforce is so homogeneous that the biases those tools uncover are considered positives to be encouraged rather than biases to be removed. This is especially important given that Facebook’s employee base, like the rest of Silicon Valley, skews overwhelmingly liberal.

Moreover, these issues extend far beyond liberal/conservative considerations. Just 1.5% of Facebook’s engineers and 3.1% of its senior leadership identify as black. As the audit notes, the company’s overwhelmingly white workforce presumably has little experience with race-based housing discrimination and thus Facebook’s advertising algorithms were originally designed to permit housing advertisements that illegally excluded certain races. Algorithm audits can’t spot what their designers don’t think to look for — or don’t realize are problems.
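
That failure mode is mechanical: an automated audit checks only the attributes someone listed when building it. The toy sketch below, with invented attribute names and targeting data, shows an exclusion based on a geographic proxy sailing through because the audit was never configured to look for it.

```python
# Sketch of how an audit misses what it isn't configured to check.
# Attribute names and ad-targeting data are hypothetical.

ads = [
    {"excludes": {"age": ["18-24"]}},        # audited attribute: caught
    {"excludes": {"zip_prefix": ["606"]}},   # geographic proxy: never checked
]

AUDITED_ATTRIBUTES = {"age", "gender"}  # whoever built the audit chose this list

def audit(ad):
    """Return the excluded attributes the audit is configured to notice."""
    return [attr for attr in ad["excludes"] if attr in AUDITED_ATTRIBUTES]

for ad in ads:
    issues = audit(ad)
    print(issues or "passes audit")
# The second ad "passes" only because nobody thought to audit geographic proxies.
```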

Orwell had it wrong when he cast the government as the ultimate censor. As Facebook outsources ever more of its future to opaque AI algorithms that even it does not fully understand, the future of our nation is increasingly being decided by a small cadre of unaccountable companies whose secretive AI systems have become the new de facto laws of acceptable speech in the United States.

RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.




