Ending Legal Protections for Social Media Platforms That Use Algorithms to Create Biased Newsfeeds (H.R. 492)
Do you support or oppose this bill?
What is H.R. 492?
(Updated January 24, 2022)
This bill, the Biased Algorithm Deterrence Act, would allow social media platforms to be sued for using algorithms to suppress otherwise permissible user-generated content, or to demote it in newsfeeds relative to similar content. It would do so by removing protections under Section 230 of the Communications Act of 1934, which currently shields social media companies from liability for filtering illegal or undesirable content, whether or not that content is constitutionally protected, because they aren't considered the publishers of the content (rather, they're considered providers). This bill would end that protection by treating social media companies as publishers.
Argument in favor
Social media companies have been using algorithms to suppress content from conservative voices. Unless they stop and act like the neutral platforms they claim to be, they should be treated like publishers and lose the related legal protections.
Argument opposed
Social media companies should continue to be free to use whatever algorithms or other process they choose to present content on their platforms. Rather than trying to expose those platforms to lawsuits, users who don’t like how they operate can stop using them.
Impact
Social media platforms; individuals and outlets that are wrongfully suppressed because of bias; and the courts.
Cost of H.R. 492
A CBO cost estimate is unavailable.
Additional Info
In-Depth: Rep. Louie Gohmert (R-TX) reintroduced this bill from the 115th Congress to end social media platforms’ suppression of conservative messages:
“Social media companies like Facebook, Twitter, and Google are now among the largest and most powerful companies in the world. More and more people are turning to a social media platform for news than ever before, arguably making these companies more powerful than traditional media outlets. Yet, social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order to obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity. Representatives of social media companies have testified in Congressional hearings that they do not discriminate against or filter out conservative voices on their platforms. But for all their reassurances, the disturbing trend continues unabated. Employees from some of these companies have communicated their disgust for conservatives and discussed ways to use social media platforms and algorithms to silence and prevent income to conservatives… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable.”
Rep. Gohmert claims that deliberate filtering, called “algorithmic bias,” favors liberal points of view even as social media companies’ executives maintain that their platforms are neutral in order to shield their companies from lawsuits.
Rep. Devin Nunes (R-CA) filed a $250 million lawsuit against Twitter alleging the social media platform systematically shadow-banned conservative users to influence the 2018 elections. The lawsuit alleges defamation, conspiracy, and negligence and seeks an injunction to compel Twitter to turn over the identities of numerous accounts he claims harassed and defamed him.
The Internet Infrastructure Coalition (i2Coalition) opposes this bill. Its Executive Director, Christian Dawson, wrote:
“[This bill] seeks to amend Section 230 of the Communications Decency Act in ways that would change the liability requirements of Internet providers and make them liable for the actions of their customers unless they make a ‘good faith’ effort to deter illegal acts. ‘Good faith’ is such a squishy term that it’s more or less meaningless. Overnight, Internet providers would be subject to endless lawsuits without the strict monitoring of every aspect of their user’s content.”
Of Note: Algorithms are central to how information and communications are located, retrieved, and presented online. They inform Twitter follow recommendations, Facebook newsfeeds, and suggested Google Maps directions. However, Michele Wilson, an associate professor at Curtin University in Perth, Australia, explains that they aren’t objective sets of instructions; rather, they assume certain parameters and values, and are in constant flux, with changes made by both humans and machines:
“Embedded in complex amalgams of political, technical, cultural and social interactions, algorithms bring about particular ways of seeing the world, reproduce stereotypes, strengthen world views, restrict choices or open previously unidentified possibilities.”
In an article on The Conversation, Giovanni Luca Ciampaglia, an assistant professor in the department of Computer Science and Engineering at the University of South Florida, and Filippo Menczer, a professor of Computer Science and Informatics and Director of the Center for Complex Networks and Systems Research at Indiana University, write that social media algorithms are vulnerable to manipulation:
“[T]he fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms are vulnerable to manipulation… Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, it may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation. For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them. Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias. Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the homogeneity bias. Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias, because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality. All these algorithmic biases can be manipulated by social bots, computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s Big Ben, are harmless. 
However, some conceal their real nature and are used for malicious intents, such as boosting disinformation or falsely creating the appearance of a grassroots movement, also called “astroturfing.” We found evidence of this type of manipulation in the run-up to the 2010 U.S. midterm election… [W]e analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting both the cognitive, confirmation and popularity biases of their victims and Twitter’s algorithmic biases.”
Social media platforms have come under fire for bias against conservative news media:
“Testifying before the US Congress, Mark Zuckerberg told the representatives that he wouldn’t be surprised if there was a left-leaning bias in Silicon Valley. Facebook came under fire in May 2016 when a group of former employees told the technology blog Gizmodo that they routinely suppressed news about prominent conservative figures (most notably in the ‘trending’ section of the website). They also claimed stories by outlets like Breitbart or Newsmax were dismissed unless The New York Times or CNN covered the same article, in which case the more left-leaning publications were promoted. Similarly, YouTube has been criticized for using the radically left-wing Southern Poverty Law Center to influence its decisions as to what content is too offensive to be placed on its site – their arbitration has effectively hit only conservative videos.”
Media:
Sponsoring Rep. Louie Gohmert (R-TX) Press Release (115th Congress)
i2Coalition Press Release (Opposed)
The Conversation (Context)
The Boar (Context)
Summary by Lorelei Yang
(Photo Credit: iStockphoto.com / bigtunaonline)