How the Supreme Court could reshape the internet as you know it
By Brian Fung, CNN
Justice Samuel Alito of the US Supreme Court asked this week what may be, to millions of average internet users, the most relatable question to come out of a pair of high-stakes oral arguments about the future of social media.
“Would Google collapse, and the internet be destroyed,” Alito asked a Google attorney on Tuesday, “if YouTube and therefore Google were potentially liable” for the content its users posted?
Alito’s question aimed to cut through the jargon and theatrics of a nearly three-hour debate over whether YouTube can be sued for algorithmically recommending videos created by the terrorist group ISIS.
His question sought to explore what might really happen in a world where the Court rolls back a 27-year-old liability shield, allowing tech platforms to be sued over how they host and display videos, forum posts, and other user-generated content. The Google case and a related case involving Twitter, argued the following day, are viewed as pivotal because the outcome could have ramifications for websites large and small — and, as Justice Brett Kavanaugh observed, “the digital economy, with all sorts of effects on workers and consumers, retirement plans and what have you.”
The litigation could have vast implications for everything from online restaurant reviews to likes and retweets to the coding of new applications.
Though the justices this week seemed broadly hesitant to overturn or significantly narrow those legal protections, the possibility remains that the Court may limit immunity for websites in ways that could reshape what users see in their apps and browsers — or, in Google’s words, “upend the internet.”
Nearly 30 years of protections
Passed in 1996, Section 230 of the Communications Decency Act sought to foster the growth of the early internet. Faced with a technological revolution it wanted to nurture, Congress created a special form of legal immunity for websites so they could develop uninhibited by lawsuits that might suffocate the ecosystem before it had a chance to flourish. In the time since, companies ranging from AOL to Twitter have invoked Section 230 to nip user-content lawsuits in the bud, arguing, usually successfully, that they are not responsible for the content their users create.
For decades, courts have interpreted Section 230 to give broad protections to websites. The legislation’s original authors have repeatedly said their intent was to give websites the benefit of the doubt and to encourage innovation in content moderation.
But as large online platforms have become more central to the country’s political and economic affairs, policymakers have come to doubt whether that shield is still worth keeping intact, at least in its current form. Democrats say the law has given websites a free pass to overlook hate speech and misinformation; Republicans say it lets them suppress right-wing viewpoints. The Supreme Court isn’t the only one reviewing Section 230; Congress and the White House have also proposed changes to the law, though legislation to update Section 230 has consistently stalled.
Understanding how the internet may work differently without Section 230 — or if the law is significantly narrowed — starts with one simple concept: Shrinking the liability shield means exposing websites and internet users to more lawsuits.
A lack of oversight or a legal cudgel?
Virtually all of the potential consequences for the internet, both good and bad, flow from that single idea. How many suits should websites and their users have to face?
For skeptics of the tech industry and critics of social media platforms, more lawsuits would mean more opportunities to hold tech companies accountable. As in the Google and Twitter cases, websites might see more allegations that they aided and abetted terrorism because they hosted terrorist content. But it wouldn’t end there, according to Chief Justice John Roberts.
“I suspect there would be many, many times more defamation suits, discrimination suits… infliction of emotional distress, antitrust actions,” Roberts said Tuesday, ticking off a list of possible claims that might be brought.
Roberts’ remark underscores the enormous role Section 230 has played in deflecting litigation from the tech industry — or, as its opponents might say, shielding it from proper oversight. Allowing the courts to scrutinize the tech industry more would bring it in line with other industries, some have argued.
“The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry that wields staggering influence to be protected from judicial inquiry,” wrote the Anti-Defamation League in a Supreme Court brief.
For a moment, Justice Elena Kagan seemed to agree on Tuesday.
“Every other industry has to internalize the costs of its conduct,” she said. “Why is it that the tech industry gets a pass? A little bit unclear.”
Threats to comment sections, Craigslist, even Wikipedia
Exactly how the internet may change if the Supreme Court rules against the tech industry depends heavily on the specifics of that hypothetical ruling, and how expansive or narrowly tailored it is.
But in general, exposing online platforms to greater liability creates incentives for those sites to avoid being sued. That, according to the tech industry, digital rights groups and legal scholars of Section 230, is how the basic look and feel of the internet could change dramatically.
Websites would face a terrible choice in that scenario, they have argued. One option would be to preemptively remove any and all content that anyone, anywhere could even remotely allege is objectionable, no matter how minor — reducing the range of allowed speech on social media.
Another option would be to stop moderating content altogether, to avoid claims that a site knew or should have known that a piece of objectionable material was on its platform. Not moderating, and thus not knowing about libelous content, was enough to insulate the online portal CompuServe from liability in an important 1991 case that helped give rise to Section 230.
The sheer volume of lawsuits could crush website owners or internet users that can’t afford to fight court battles on multiple fronts, leading to the kind of business ripple effects Kavanaugh raised. That could include personal blogs with comment sections, or e-commerce sites that host product reviews. And the surviving websites would alter their behavior to avoid suffering the same fate.
Without a specific scenario to consider, it’s hard to grasp how all this would play out in practice. Helpfully, multiple online platforms have described to the Court ways in which they might change their operations.
Wikipedia has not explicitly said it could go under. But in a Supreme Court brief, it said it owes its existence to Section 230 and could be forced to compromise on its non-profit educational mission if it became liable for the writings of its millions of volunteer editors.
If websites became liable for their automated recommendations, it could affect newsfeed-style content ranking, automated friend and post suggestions, search auto-complete and other methods by which websites display information to users, other companies have said.
Under that interpretation of the law, Craigslist said in a Supreme Court brief, it could be forced to stop letting users browse by geographic region or by categories such as “bikes,” “boats” or “books,” instead having to provide an “undifferentiated morass of information.”
If Yelp could be sued by anyone who felt a user’s restaurant review was misleading, it argued, it would be incentivized to stop presenting the most helpful recommendations and could be left helpless in the face of platform manipulation; business owners acting in bad faith could flood the site with fraudulent reviews to boost themselves, at the cost of Yelp’s usefulness to users.
And Microsoft has said that if Section 230 no longer protected algorithms, the change would jeopardize its ability to suggest new job openings to LinkedIn users, or to connect software developers with interesting and useful projects on GitHub, its online code repository.
Even a ‘like’ could trigger a lawsuit
Liability could also extend to individual internet users. A Supreme Court ruling restricting immunity for recommendations could mean any decision to like, upvote, retweet or share content could be identified as a “recommendation” and trigger a viable lawsuit, Reddit and a number of volunteer Reddit moderators wrote in a brief.
That potential nightmare scenario was affirmed in Tuesday’s oral argument, when Justice Amy Coney Barrett asked Eric Schnapper, an attorney going up against Google, to explore the implications of his legal theory. Schnapper represented the family of Nohemi Gonzalez, an American student killed in a 2015 ISIS attack in Paris; the Gonzalez family has alleged that Google should be held liable under a US antiterrorism law for its YouTube recommendations of ISIS content.
“If you go on Twitter, and you’re using Twitter, and you retweet, or you ‘like’ or you say ‘check this out,'” Barrett said, “on your theory, I’m not protected by Section 230.”
“That’s content you’ve created,” Schnapper agreed.
The sweeping, seemingly unbounded theory of liability advanced by Schnapper seemed to make many justices, particularly the Court’s conservatives, nervous.
Both liberals and conservatives on the Court struggled to identify a limiting principle that could allow the Court to ratchet back the scope of Section 230 without also raising legal risks for innocuous internet use.
Kagan told Schnapper that even if she didn’t necessarily buy his opponent Google’s “‘sky is falling’ stuff… boy, there is a lot of uncertainty about going the way you would have us go, in part, just because of the difficulty of drawing lines in this area.”