Washington, D.C. – House Judiciary Committee Chairman Bob Goodlatte (R-Va.) today delivered the following statement during the House Judiciary Committee’s hearing on “Facebook, Google, and Twitter: Examining the Content Filtering Practices of Social Media Giants.”
Chairman Goodlatte: Today, we continue to examine how social media companies filter content on their platforms. At our last hearing, which we held in April, this Committee heard from Members of Congress, social media personalities, legal experts, and a representative of the news media industry to better understand the concerns surrounding content filtering. Despite our invitations, Facebook, Google, and Twitter declined to send witnesses. Today, we finally have them here.
Since our last hearing, we’ve seen numerous efforts by these companies to improve transparency. At the same time, we’ve seen numerous stories in the news of content that’s still being unfairly restricted. Just before July Fourth, for example, Facebook automatically blocked a post from a Texas newspaper that it claimed contained hate speech. Facebook then asked the paper to “review the contents of its page and remove anything that does not comply with Facebook’s policies.” The text at issue was the Declaration of Independence.
Think about that for a moment. If Thomas Jefferson had written the Declaration of Independence on Facebook, that document would never have seen the light of day. No one would be able to see his words because an algorithm automatically flagged it, or at least some portion of it, as hate speech. It was only after public outcry that Facebook noticed this issue and unblocked the post.
Facebook may be embarrassed about this example, and this Committee has the opportunity today to ask, but Facebook also may be inclined to minimize its responsibility, in part because it was likely software, not a human being, that raised an objection to our founding document. Indeed, given the scale of Facebook and other social media platforms, a large portion of their content filtering is performed by algorithms without human involvement.
And Facebook is largely free to moderate content on its platform as it sees fit. This is in part because, over twenty years ago, Congress exempted online platforms from liability for harms occurring over their services. In 1996, the Internet was just taking shape, and Congress intended to protect it to spur its growth. It worked: the vibrant Internet of today is no doubt a result of Congress’s foresight.
But the Internet of today is almost nothing like the Internet of 1996. Today, we see that the most successful ideas have blossomed into some of the largest companies on Earth. These companies dominate their markets, and perhaps rightfully so given the quality of their products. However, this raises another question: are these companies using their market power to push the envelope on filtering decisions to favor the content the companies prefer?
Congress must evaluate our laws to ensure that they are achieving their intended purpose. The online environment is becoming more polarized, not less, and there are concerns that discourse is being squelched rather than facilitated. Moreover, society as a whole is finding it difficult to define what these social media platforms are and what they do. For example, some would like to think of them as government actors, as public utilities, as advertising agencies, or as media publishers, each with its own set of legal implications and potential shortfalls.
It’s clear, however, that the platforms need to do a better job explaining how they make decisions to filter content and the rationale for why they do so.
I look forward to the witnesses’ testimony.