By Philip Verveer, Senior Fellow, Digital Platforms and Democracy Project
This Policy Paper is part of the Digital Platforms & Democracy Project’s efforts to explain and disseminate ideas about regulation of major technology and digital platform companies.
The views expressed in Shorenstein Center Policy Papers are those of the author(s) and do not necessarily reflect those of Harvard Kennedy School or of Harvard University.
Policy Papers have not undergone formal review and approval. Such papers are included in this series to elicit feedback and to encourage debate on important issues and challenges in media, politics and public policy. Copyright belongs to the author(s). Papers may be downloaded and shared for personal use under the Shorenstein Center’s Open Access Policy. Please contact the Shorenstein Center with any republication requests.
Abstract: The major digital platforms have generated negative externalities that threaten social cohesion and democratic processes in the U.S. and abroad. Their efforts at prevention or amelioration, despite improvements, continue to lag behind the threats. Equipping a government agency with the ability to require greater prevention-related expenditures should be one element of a larger regulatory regime designed to maximize platform societal benefits and minimize concomitant societal costs. Similar approaches have been employed with respect to other systemically important firms. They should be applied here to compel major platforms to internalize negative externalities and invest in prevention commensurate with both their profitability and the risks their services enable.
For hundreds of years, judicial decisions, statutes, and social norms have addressed and attempted to curb one of capitalism’s worst impulses: the tendency to shift some of the costs of products and services onto others, most often the general public. Lawmakers have tried to find ways to force companies to internalize these costs when they are material and demonstrably harmful.
Based on what we are seeing week by week, it is undeniable that major platform companies have not devoted sufficient investment to prevent or mitigate the negative externalities that flow from their business activities. These externalities adversely affect the American population every day, offsetting the undeniable value that the platform companies’ services also provide. They take the form of dissemination of hate speech, incitements to violence, misinformation, fraudulent schemes, and foreign interference in our democratic processes designed to undermine social cohesion and affect elections, as well as data breaches of personal information entrusted to the platform companies.
Very recently Mark Zuckerberg spoke about Facebook’s efforts to minimize certain negative externalities that are generated by users of its social media site:
We build specific systems to address each type of harmful content — from incitement of violence to child exploitation to other harms like intellectual property violations — about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO earlier this decade. (Mark Zuckerberg, speech at Georgetown University)
This, in many respects, is an impressive deployment of Facebook resources, but it still is not effective in preventing malign uses of the service, as we are reminded virtually every day. Even at the levels Mr. Zuckerberg describes, the investment is less than it needs to be. This follows a consistent pattern in efforts to curb Facebook-hosted malevolence: the investment in screening out the negative has always trailed the need; it has always been disproportionately low.
Ameliorating certain of the negative externalities stemming from the business models of major platform companies, particularly Facebook, Alphabet’s YouTube, and Twitter, is a matter of priority and urgency. The importance to our society of the major platform companies is indisputable, but so is the fact that their products and services have been exploited by geopolitical adversaries, criminals, disaffected individuals, and others in ways that inflict material costs on the public. Those costs can be tangible, as when houses of worship are required to hire security guards because of platform-disseminated hate speech and incitement to violence, or intangible, as when state actors and state-sponsored actors seek to weaken our social bonds and influence our elections.
What types of laws and policies would induce the affected platform companies to internalize more of the costs appropriately associated with their businesses?
The legal domain in which the platforms’ externalities arise is significantly shaped by the First Amendment, a fact which in turn significantly limits the U.S. government’s potential responses. Even though some of the negative externalities (foreign influence and incitement, for example) plainly fall outside of First Amendment protection, there obviously are First Amendment values in this vicinity that need to be respected. Any government-imposed requirements necessarily must be limited to countering platform externalities, not platforms’ editorial judgments.
The U.S. experience in requiring “systemically important firms” to deploy capital where their activities produce significant negative externalities that threaten harm to our society has some salience here. Designed to compel firms to better internalize the costs and risks to the public inherent in their business models, it is one of the remedies imposed on large financial firms by the Dodd-Frank Act following the financial crisis of 2008.
The financial crisis brought home the danger that the failure of certain financial firms, in addition to immediate losses, might bring down the entire financial system. Problems encountered by “systemically important” firms “could create, or increase, the risk of significant liquidity or credit problems spreading among financial institutions or markets and thereby threaten the stability of the financial system of the United States” in the words of the Dodd-Frank Act.
One of the principal lessons from the experience of the financial crisis, then, is that excessive risk taking by systemically important firms threatens harms well beyond their immediate sphere of shareholders, creditors, vendors, customers, and employees. If the risks are realized, the victims will include people outside of the firms’ immediate vicinity, people who had no a priori ability to protect themselves from the losses. Potentially much worse, the damage could reach the country’s underlying economic and social fabric, with losses both material and immaterial spreading in ways that threaten the social stability and cohesion on which democratic societies depend.
One form of excessive risk taking by the affected financial firms involved the maintenance of insufficient capital to withstand shocks, either their own or stemming from other firms that were in some sense interconnected with them. The reason for the insufficient capital was evident enough: it was the pursuit of profit.
How best to protect against recurrences? One of the most important ways, historically, was to require appropriate levels of capital as a margin for error against losses that might come through mistakes or misfortune.
The underinvestment in prevention or mitigation by major platforms is a form of risk-taking for profit, just as surely as was the reckless financial firm conduct that nearly brought down the global financial system a decade ago. The less spent on prevention or mitigation, the more that flows to the bottom line, with the public individually and collectively made to bear risks and costs more appropriately and more efficiently kept within the enterprise.
At least some of our prominent platform companies should be regarded as systemically important. In their case, the importance is not in sustaining the financial system on which our economy is grounded. Rather, it is in their overarching importance in conveying any information anyone wants to insert on the platform, unpaid and generally available or paid and highly targeted. And this is especially so in the case of various forms of political speech, whether electioneering, advocacy, or commentary on socially sensitive issues.
A requirement to invest more in limiting negative externalities is an admittedly blunt and limited instrument. Nevertheless, it deserves serious consideration in connection with efforts to assemble a regulatory package that would reduce the social costs inherent in the major platforms’ offerings as presently practiced.
Just as with Dodd-Frank, a requirement that a systemically important digital platform devote greater resources to prevention and mitigation is warranted. Providing the power to require it should be included in the portfolio of any government regulatory agency tasked with the responsibility of safeguarding the public’s interest in the operation of the platforms.