Biden administration urges Supreme Court to narrow Big Tech’s liability shield in pivotal Google case
By Brian Fung, CNN
The Biden administration has told the US Supreme Court that social media platforms can be held liable for recommendations made by their AI-driven content algorithms, weighing in against Google in a pivotal case on digital speech and content moderation.
In a filing to the Court Wednesday evening, the administration argued federal law does not immunize tech platforms from lawsuits that zero in on recommendation algorithms, even when the same law shields the companies from suits about decisions to host or remove actual user content.
The legal brief could prove instrumental in a closely watched case about the regulation of digital platforms, and reflects longstanding calls by President Joe Biden to roll back liability protections for companies such as Facebook and Google.
The case in question, Gonzalez v. Google, offers the Supreme Court its first opportunity to rule on Section 230 of the Communications Decency Act, the liability shield many websites have used to nip content moderation lawsuits in the bud. Several Supreme Court justices, including, most vocally, conservative Justice Clarence Thomas, have expressed interest in hearing a case that may allow the Court to narrow Section 230's broad protections.
Section 230 has been called “the 26 words that created the internet,” and was passed by Congress in 1996 as a way to shield all websites, not just social media platforms, from lawsuits over third-party content. But in recent years it has come under fire from members of both parties, with Democrats arguing it has enabled platforms to escape accountability for facilitating hate speech and misinformation, and Republicans arguing it shields platforms from claims of political discrimination.
Google didn’t immediately respond to a request for comment.
The US government’s brief addresses Google-owned YouTube’s recommendation of videos produced by the terrorist group ISIS. The plaintiffs in the case — the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris — have alleged, among other things, that Google violated a US antiterrorism law with its content algorithms by recommending pro-ISIS videos to users.
Google has argued Section 230 protects the company’s ability to organize and curate content, and that a ruling against it could hurt efforts to remove terrorist content. An earlier appellate court ruling had sided with Google.
“Undercutting Section 230 would make it harder, not easier, to combat harmful content,” José Castañeda, a Google spokesperson, has previously said, “making the internet less safe and less helpful for all of us.”
According to the Biden administration, Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.”
But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.
“The effect of YouTube’s algorithms is still to communicate a message from YouTube that is distinct from the messages conveyed by the videos themselves,” the filing said. It added: “Even if YouTube plays no role in the videos’ creation or development, it remains potentially liable for its own conduct and its own communications.”