Lessons From Making Internet Companies Liable For Users’ Speech: You Get Less Speech, Less Security And Less Innovation

Stanford’s Daphne Keller is one of the world’s foremost experts on intermediary liability protections and someone we’ve mentioned on the website many times in the past (and who has joined us on the podcast a few times as well). She’s just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all; that’s a position no one with any real understanding of these issues considers reasonable. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It’s actually kind of hard to see what the benefit side is on this particular cost-benefit equation.

As the paper notes, it’s quite notable how the demands from people about what platforms should do keep changing. People keep demanding that certain content gets removed, while others freak out that too much content is being removed. And sometimes it’s the same people (they want the “bad” stuff — i.e., stuff they don’t like — removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the questions behind why certain content might be taken down are the same questions that often fuel long and complex court cases, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so:

As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards.

Keller then looked at a variety of examples involving intermediary liability to see what the evidence says would happen if we legally delegate the role of speech police to private internet platforms. It doesn’t look good. Free speech will suffer greatly:

The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours’ notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed.

From there, the paper explores the issue of security. Attempts to stifle terrorists’ use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. And the paper looks at what the real-world impact of these programs has been in the realm of trying to “counter violent extremism.”

The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts.

The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies.

Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative impacts.

The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like “How Law Made Silicon Valley” point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms—which, in the age of Yelp, Grubhub, and eBay, could be almost any business. Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors.

The entire paper weighs in at a neat 44 pages and it’s chock full of useful information and analysis on this very important question. It should be required reading for anyone who thinks there are easy answers to the question of what to do about “bad” content online. It highlights that we actually have a lot of data and evidence to answer these questions, even as many legislators seem to be regulating based on how they “think” the world works, rather than how it actually works.

Current attitudes toward intermediary liability, particularly in Europe, verge on “regulate first, ask questions later.” I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas—from competition to privacy to free expression—but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them.

Demands for platforms to get rid of all content in a particular category, such as “extremism,” do not translate to meaningful policy making—unless the policy is a shotgun approach to online speech, taking down the good with the bad. To “go further and faster” in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them.

On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability.

This is an important contribution to the discussion, and highly recommended. Go check it out.
