Jeffrey Rosen

Jeffrey Rosen—The Deciders: The Future of Free Speech in a Digital World

2016 Richard S. Salant Lecture on Freedom of the Press

Jeffrey Rosen, President & CEO of the National Constitution Center, Professor of Law at The George Washington University Law School, and a Contributing Editor of The Atlantic, delivered the ninth annual Salant Lecture on Freedom of the Press at the Harvard Kennedy School’s Shorenstein Center on October 13, 2016. Rosen argues that Twitter, Facebook, and Google are facing increased pressure to moderate content in a way that is inconsistent with First Amendment protections—in the name of promoting civility rather than democracy. He discusses the controversy around Facebook’s removal of a Pulitzer Prize-winning photo of a naked child from the Vietnam War, problems regarding transparency in content moderation, the EU’s right to be forgotten ruling, and the challenges of online mobs and hate speech, among other topics.

Following is a transcript of the speech. Audio is also available on iTunes, Google Play (login required), iHeartRadio, and YouTube.

This transcript has been lightly edited for clarity.

Nicco Mele: It was a Saturday, two weeks before a very divisive presidential election, and Ruth Stanton was on the phone, at home, long distance, talking to one of her friends, when an operator cut into the line and said, “Please hang up the phone. The White House is calling.” She did, but not before wondering out loud, “What kind of rude people are running the White House these days?”

The date was October 28, 1972, and Richard Nixon was running for re-election as president of the United States. Nixon was not happy. The night before, Friday night, during the evening news, CBS had run a long news story—one of the longest segments they had ever run. Walter Cronkite, America’s most trusted journalist, stood there and looked into the camera and uttered these now familiar words: “At first it was called the Watergate caper—five men apparently caught in the act of burglarizing and bugging Democratic headquarters in Washington.”

Today, the phrase “Watergate” carries all kinds of meaning in our culture, about power run amok, about the courage of journalists to hold power accountable. But that Friday night in 1972, most Americans had never heard of Watergate. It’s almost impossible to remember this in the age of the internet, but The Washington Post was really a local paper, and even though it had been reporting on the Watergate burglary, most Americans had no idea what it was. And it was not until CBS aired its story during the Friday night newscast, which a third of American homes tuned in to, that Watergate became national news, two weeks before the presidential election.

So Saturday morning, just 10 days before the presidential election, every American knows what Watergate is, and President Richard Nixon was not too happy about it. So he has his special assistant, Charles Colson, go on the hunt for a man named Frank Stanton. Frank was the president of CBS, and Nixon wanted Frank to know that the White House was not going to stand for this kind of reporting. This was trash. But Frank was conveniently out—perhaps by design—and his wife was on the phone long-distance with her friend. So Colson called Bill Paley [the chairman of CBS]—and really let him have it about the irresponsible reporting of CBS.

On Monday morning, Bill Paley goes to the office and calls two people, and says, “I want you in my office right away.” The first is Frank Stanton, and the second is Richard Salant. Stanton was the president of CBS. Salant was the president of CBS News. And in many ways, Salant was Stanton’s protégé. The three men sat there—Paley, Stanton, and Salant. Paley was very polite and very charming, but very angry about the Friday night news broadcast that had led to the White House interrupting his weekend. And the Friday night show had ended by saying, “This is just part one. Part two is coming.” And Paley made it clear, in a very polite and roundabout way, that he did not want that second segment to ever go on the air. And Richard Salant, who was a lawyer and a corporate executive—and many journalists in the newsroom had their reservations: would this corporate executive stand up for the freedom of the press?—well, Dick Salant said, in his own polite and roundabout way, that Paley could go to hell, and that he’d have to fire him before he’d take the piece off the air.

I want you to try and imagine, for a moment, that you’re sitting there, under pressure from the president of the United States, and your boss’s boss, to pull something. It’s an excruciating amount of pressure. What are you going to do?

In the end, Salant ran the second part of the story. It was cut from 14 minutes to [8 minutes], but it ran, and Paley was apparently not informed it was going on the air, because he was pretty livid when it did go on the air, and called Salant back to his office to let him know. Paley was not the only one who was livid. The Nixon White House was pissed.

After Nixon won re-election, Colson then called Frank Stanton again, and this time reached him. And he said to him, at great length, many threatening things about how the Nixon White House was going to crush CBS. They were going to regulate it out of existence, take its lucrative affiliate stations away, and force advertisers to abandon CBS. And even though that anger and intensity were frightening, Frank Stanton wasn’t that bothered. Just a few months later, it was clear the Nixon presidency was doomed. On April 30, 1973, members of Nixon’s inner circle were forced to resign as the scandal closed in on them, and that date just happened to coincide with the very night of Frank Stanton’s retirement party.

Here we are tonight, at a lecture endowed by Frank Stanton’s estate, in the memory of Dick Salant. In our current diffuse media environment, we have cable news, we have Facebook—it’s really hard to put yourself in the brain of a television executive in the ’60s. But one way of thinking about this is that television had exploded in the United States in about a decade. One of the many things Frank Stanton deserves credit for in the history of broadcasting is the first televised presidential debate between Richard Nixon and John F. Kennedy. The role of television in presidential politics was, until then, almost unimaginable. If you ran for president in 1948, if you were lucky, you reached 50,000 voters a day. Maybe on unusual occasions, you’d speak to larger crowds, and reach more people than that. But by 1960, a presidential campaign could reach millions of Americans on television.

Suddenly, television had this incredible power, and the journalists and the networks who controlled television had enormous decisions to make, really unprecedented in American history. They [were] unelected people with so much power in our politics. It was an excruciating crucible, and it had tremendous impact on what our democracy would look like, and who the leader of the free world would be. That’s where men like Dick Salant and Frank Stanton found themselves. The amazing thing to me is that they were really businessmen. They were executives, but they were also serious intellectuals who cared about the state of the national discourse, and the freedom of the press. Just to illustrate this: as television revenue skyrocketed, William Paley once apologized to shareholders, saying that shares of CBS could’ve earned an additional six cents, if it wasn’t for news. And among other things, he complained that they had to broadcast Winston Churchill’s funeral without commercials.

And so it was against the challenges and the demands of a publicly traded corporation, hungry for growth and profitability, and the arrogance of the political elite, who were not used to being held accountable in front of an audience of the nation—that Frank Stanton and Richard Salant had to navigate a path for television news. Their path was imperfect, but they had a deep-seated sense of responsibility, not only to the public, but to the Republic, to the idea of the country. It’s fitting that Frank Stanton later took an instrumental role in shaping the Kennedy School, and the Shorenstein Center in particular. His legacy lives on today in the students at each table, and their commitment to public service.

In the context of these two men, I’m really proud to introduce you to Jeff Rosen, our speaker tonight. He’s a distinguished lawyer, author, and academic. The Los Angeles Times, my former employer, called him “the nation’s most widely read and influential legal commentator.” He is currently the president and CEO of the National Constitution Center, and also a professor of law at [George Washington University Law School]. You can read all about his bio here, but I want to say a word about why I invited him to speak with us tonight.

That moment in the ’60s, when television audiences suddenly were giant, and the executives—the businessmen running these companies—suddenly had this enormous power to shape what the public talked about, to make Watergate a household word—that is so distant today. But in fact, I would say that our public life today is shaped by large digital platforms. Google, Facebook, Apple, Amazon, Twitter, almost single-handedly control your experience of the world online. And these companies—unlike CBS— insist they are not media companies, they are technology companies. None of them have a news division, yet. But I hope and pray that their corporate executives care about the Republic and the guarantees of our Constitution, including the right to free speech, as much as Frank Stanton and Dick Salant did. I hope that the energy and intensity that Stanton and Salant gave to thinking about their role as businessmen in a corporation, and their role as citizens in the Republic—I hope the executives of our technology companies take that same responsibility just as seriously. And just as those two men found themselves in an unprecedented moment to shape history, by their approach to television news, so too do these big technology platforms of our time shape our politics and culture.

I want to thank you all for coming tonight. I want to note Elisabeth Allison, a good friend of Frank Stanton, is here with us tonight. I want to turn the lectern over to Jeffrey Rosen, and I invite you to join me in giving him your rapt and complete attention. Thank you.

Jeffrey Rosen: “Rapt and complete.” That’s a stern homework assignment. I was so honored when Nicco invited me back to Cambridge, here on the weekend of my thirtieth college reunion, to talk about the most urgent free speech question of our time, namely: how can we protect First Amendment values in an age where, as Nicco said, young lawyers at Google and Facebook and Twitter have more power over who can speak and who can be heard than any king or president or Supreme Court justice?

Yet these digital platforms, as private corporations, are not formally restrained by the First Amendment. They are choosing to apply content policies, often in secret, drafted with some receptivity to First Amendment values, but they face growing public pressure, here in America and around the world, to favor values such as dignity and safety over liberty and free expression. I want to describe to you how these deciders are grappling with this urgent question, the troubling pressures that confront them on the horizon, and what might be done to ensure that the American free speech tradition—which is so necessary for the survival of American democracy—flourishes online, rather than atrophying. I want to begin with a familiar example. First, I’m going to take out my constitutional reading glasses.

In July, Philando Castile, a 32-year-old cafeteria supervisor, died on Facebook Live after being shot by a police officer near St. Paul, Minnesota. As many of us remember from the video, his girlfriend, Diamond Reynolds, began streaming a video from her phone to Facebook Live right after the officer fired shots. She remained calm at first, but she became increasingly emotional, exclaiming at the end, “I can’t believe they did this.”

At first, Facebook briefly removed the video, owing to what the company called a “technical glitch.” It was later reinstated, and has been viewed more than 2.5 million times. Reynolds said that she started filming because she wanted the world to see the truth, and was afraid that police would misrepresent the situation.

Castile’s death is just one in a series of violent encounters between civilians and the police that have been recorded and posted online. After Castile’s death, Mark Zuckerberg wrote on Facebook, “While I hope we never have to see another video like Diamond’s, it reminds us why coming together to build a more open and connected world is so important, and how far we still have to go.” The company stood by its explanation that a technical glitch had taken the video down and that correcting the glitch allowed it to remain online, but it has never explained what the glitch consisted of, or whether the initial decision had been made by one of its content monitors applying Facebook’s community standards, which promise that Facebook will remove posts featuring violence or graphic content when they’re shared for “sadistic pleasure or to celebrate or glorify violence.” But the standards at the same time allow people to use Facebook to call attention to human rights abuses that may involve sharing unsettling photos or videos. The difficulty of applying this very nuanced standard would challenge a Supreme Court justice with months to deliberate, let alone an algorithm. Imagine a content moderator in Islamabad, or Dublin, or Menlo Park, making the decision in a matter of seconds, at a time when over a billion pieces of content are being shared on Facebook every month, and Facebook receives a million requests to remove videos every day.

And what if the video had been permanently blocked by Facebook or YouTube for violating its content policies? Two-thirds of Facebook’s 1.6 billion users say they use the site to get news. Would there be any other way for people to see the videos? More importantly, who should be making the decision? How transparent should the decision be, and what should the procedures for the appeals be? And what I want to argue to you today is that, although the decisions now are being made with some sensitivity to First Amendment values, all the commercial pressures that are driving the companies to try to increase their user base will threaten these First Amendment values, and that we need more transparency and more accountability to ensure that the companies are upholding free speech values, rather than threatening them.

I first became interested in this topic back in 2007, when I had the opportunity to interview, for The New York Times Magazine, Nicole Wong, who was then the deputy general counsel at Google. Her colleagues jokingly called her “The Decider”—this was in the middle of the Bush years—because she was the one who was woken up in the middle of the night to decide whether to remove videos posted by Greek football fans accusing Kemal Atatürk, the founder of modern Turkey, of being gay—which it is illegal to say in Turkey—or videos mocking the King of Thailand, who just died today, by showing feet above his head—which is a crime in Thailand. It’s the middle of the night, and she doesn’t speak Turkish or Thai; multiply that by the 142 countries in which Google does business, and you have some sense of the scale of the problem, back in 2007.

But that was just the dawn of the internet age. I revisited the question in 2012 in a piece for The New Republic, and found that the platforms were evolving from initial content policies that favored liberty, to increasingly restrictive content policies that favored civility. Twitter is the most dramatic example of this. Twitter initially decided only to prohibit direct, specific threats of violence against others, which is a standard that pretty much tracks U.S. First Amendment standards. But then events like the Leslie Jones controversy—where tweeted racial slurs caused the celebrity comedian to publicly leave Twitter—and the fact that Twitter’s usership dropped for the first time in 2016, led the company to make an about-face, and it has now embraced policies that allow the banning of hate speech that would be protected under the First Amendment, although it hasn’t been transparent about what those policies are. Essentially, Twitter, Facebook, and Google are moving from spaces that favored unregulated liberty toward attempts to create a safer space online. In this sense, the debate about free speech online tracks, in interesting ways, the debate about free speech on campus.

In the process of evolving from platforms of democracy to platforms of civility, the companies are understandably struggling with the sheer volume of the speech they’re required to review. In 2016, according to YouTube, 400 hours of video are posted every minute; Facebook is used by over a billion people daily who flag more than a million pieces of content a day as objectionable; and every 24 hours, Twitter publishes more than 500 million tweets.

In an effort to deal with this volume of content, the companies moved away from their initial decider models, where individual content reviewers would decide whether flagged content violated their user policies, toward more algorithmic review. They wanted to turn the task over to computers. But algorithmic review can pose grave threats to free speech. Consider the recent Facebook trending controversy, where a former Facebook news curator reported that fellow curators kept politically conservative news stories from appearing in trending topics. After the backlash that resulted, Facebook fired its team of editors and instituted new algorithms, unveiling the first fully automated Facebook trending model. But the automated model resulted in other content errors, as when false news stories and hoax pieces appeared alongside real news articles, including a September 11th anniversary topic paired with a tabloid article claiming “experts” have footage that “proves bombs were planted in the Twin Towers.” Just as the Supreme Court cannot make algorithmic decisions, so the platforms can’t delegate this inherently human judgment to machines.

I want to argue that the platforms—although not formally bound by the First Amendment—have a democratic obligation to embrace something close to the constitutional standard—first articulated by Justice Louis Brandeis—that speech can only be banned if it’s intended to, and likely to, cause imminent violence. That was Brandeis’s great standard in the Whitney decision; it was embraced by the Supreme Court in the 1960s. It remains the crown jewel of American jurisprudence that distinguishes us from the rest of the world. Unfortunately, social and commercial pressures are pushing the platforms in the opposite direction, toward more moderation, not less. They’re functioning like judges, but they’re refusing to publish the reasoning behind their quasi-judicial decisions, and those decisions need to be more transparent if free speech is to prevail in a digital age.

Let me give an example of how these standards are applied in practice. Both Facebook and Google allow you to criticize religious leaders, but not religions. So you can say, “I hate Mohammed,” but you can’t say, “I hate Muslims.” As a result, when President Obama and the president of Egypt demanded that YouTube and Facebook remove the “Innocence of Muslims” video on the grounds that it was purportedly causing the Benghazi riots, both platforms refused, because they looked at the video and concluded that it criticized Mohammed, but not a religion. That proved to be a good, free-speech-friendly decision in the short run, especially after evidence suggested that the video had been up for months in Arabic, and that the riots had other causes.

But the decision itself was quixotic, to say the least. The distinction was introduced by David Willner, the former head of public policy at Facebook. He had started off at the help desk, basically responding to the email requests for password help. He was promoted to head of public policy in his early twenties after graduating as an anthropology major from Bowdoin. He read John Stuart Mill in college, and he was struck by the distinction between group libel—such as criticizing a religion, which he thought could be banned—and criticism of a religious leader, which he thought was political speech. This distinction is debatable, to say the least, but it’s been adopted by both platforms, and without democratic review. Needless to say, it does not track the Brandeisian standard, which would allow criticism of religions as well as of religious leaders. And yet even those distinctions are embattled. Twitter, in the face of these commercial pressures, is moving toward greater restriction of hate speech, but it’s finding the effort challenging. Even Twitter’s new efforts to ban hate speech are being circumvented by users who substitute anodyne code words for racist epithets, and the struggle to suppress hate speech is being overwhelmed by the sheer volume of content.

Broadly, the platforms are being pushed by commercial pressures into the error of attempting to create what NYU professor Jeremy Waldron has called “a well-ordered society.” Waldron urges the platforms to prohibit the expression of racial and religious hatred, even when there’s no immediate prospect that it will provoke violence. He’s embracing a European tradition that favors dignity over liberty. Instead, as Dean Robert Post of Yale Law School has argued in opposition to Waldron, the platforms should be guided by the free speech rules of democracy, not civility. They should embrace their role as the modern version of Holmes’s fractious marketplace of ideas—democratic spaces where all values, including civility norms, are always open for debate. And in this sense, they should embrace their function as media companies, not simply spaces for safety and civility. The distinction is delicate, because the platforms have to deny that they’re media companies in order to retain their immunity under the Communications Decency Act from liability for illegal content posted on their platforms. Section 230 of the act immunizes the platforms from liability for content posted by their users. But at the same time, they’re exercising more influence as media companies—as Nicco said—than CBS News did in its heyday, and therefore, in order for democratic values to flourish, they need to embrace free speech standards.

As corporate rather than government actors, Facebook and Twitter aren’t formally bound by the First Amendment, but they need to adhere to the Brandeisian view that speech should only be banned when it is intended to, and likely to, provoke imminent violence.

Now, that wasn’t always Brandeis’s view. And I want to tell you about the evolution in Brandeis’s thinking, because Brandeis can tell us more than any other thinker in the twentieth century about how to balance dignity on the one hand against free speech on the other, because he changed his mind so dramatically on this subject. So the young Brandeis, in 1890, as many of you know, wrote the most famous article ever on the right to privacy. It was called “The Right to Privacy,” and it was published in the Harvard Law Review. Brandeis was upset about a new technology—namely the Kodak camera and the tabloid press—that guaranteed that what used to be whispered from the closets was now shouted from the rooftops. Apparently it was a mild society item in one of the Boston tabloids about the friendship between the young wife of Samuel Warren—Brandeis’s law partner—and Grover Cleveland’s young wife. It was a very mild indignity, but the Boston aristocrat resented it, and he demanded legal recourse. But then Brandeis and Warren searched through American law, and they found that U.S. law—unlike European law and Roman law—contained no remedy for what they called “offenses against honor.”

So they set out to propose a remedy for this essentially dignitary injury. It came to be called “the Brandeis tort”—it sounds like a delicious dessert—but it proved to be an unsatisfying area of law, one Brandeis himself came to repudiate. Soon after he wrote the article, he wrote to his wife, Alice, and said, “The article is not as good as I thought it was.” Brandeis became concerned that the central element of the Brandeis tort, which required judges to balance the emotional injuries suffered by celebrities against the value of truthful but embarrassing information, was not a judgment that judges in a democracy should be making, because it was a decision that citizens had to make for themselves. Brandeis thought, and he reflected, and he came to change his mind about the proper balance between dignity and free speech.

And the most beautiful statement of his change of mind comes in Whitney v. California. I’m a law professor, so I’m going to give you some homework. And the homework is, if you haven’t read it, read Brandeis’s path-breaking concurrence in the Whitney case. It’s short; you can find it online, including in the riveting “Interactive Constitution”—search for “Interactive Constitution” in the App Store. You will find leading liberal and conservative scholars in America writing about every clause in the Constitution, describing what they agree about and what they disagree about. It’s thrilling, and there’s a beautiful discussion of Whitney from Geoffrey Stone and Eugene Volokh, nominated by the Federalist Society and the American Constitution Society.

OK, here’s what happened in Whitney. Brandeis spends the summer of 1926 reading a lot of Thomas Jefferson. He reads Jefferson’s second inaugural, and he reads Jefferson’s letter to Elijah Boardman, talking about the importance of completely untrammeled freedom of thought and opinion. And he comes to believe that his earlier decisions upholding convictions under the Espionage Act of 1917—which allowed the criminalizing of speech that might have a bad tendency to lead to illegal action in the future, like Eugene V. Debs standing up and saying “resist the draft,” a statement for which Debs, the Socialist candidate for president in 1920, was put in prison, in a conviction upheld by a Supreme Court with Holmes and Brandeis in the majority—struck the wrong balance. He was influenced by Jefferson and Jefferson’s incredible faith in the natural right of free speech—which Jefferson believed came from God or nature, and not government—Jefferson’s great faith in the power of reason, and the need for citizens to develop their faculties of reason. Jefferson, like the other framers, believed that we all have faculties, ranging from passion at the bottom to reason at the top. And only if we engage in the difficult task of self-education can we cultivate our faculties and be full citizens.

Brandeis was also influenced by fifth-century Athens, and his favorite book was Alfred Zimmern’s The Greek [Commonwealth]—Alfred Zimmern, the great British Zionist—and Brandeis would give this book to everyone he met. For Brandeis, fifth-century Athens was the apotheosis of the engaged, face-to-face democratic deliberation on which democracy thrives. So he encapsulated all this thinking into a few beautiful paragraphs in Whitney, and I’m going to give you one. I think I can, as a party trick, try to do this from memory, and then I’ll read the second one, but here goes…

He starts by saying, “Those who won our independence…” So he’s not talking about Madison and the framers in 1787. He’s talking about Jefferson and the revolutionaries in 1776.

“Those who won our independence believed that the final end of the state was to make men free to develop their faculties, and that in its government the deliberative forces should prevail over the arbitrary. They valued liberty both as an end and as a means. They believed liberty to be the secret of happiness and courage to be the secret of liberty.” That’s a direct quotation from Pericles’s funeral oration as translated by Zimmern. “They believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth; that without free speech and assembly discussion would be futile; that with them, discussion affords ordinarily adequate protection against the dissemination of noxious doctrine; that the greatest menace to freedom is an inert people; that public discussion is a political duty; and that this should be a fundamental principle of the American government.”

I mean, you just say, “Wow, that is constitutional poetry,” and you see in those beautiful words this profound faith in deliberation, and now you understand the source of the Brandeisian standard. The reason that we only [ban] speech when it’s intended to and likely to cause imminent violence is that, as long as there’s time enough to deliberate and to discuss, Brandeis has faith that the best remedy for evil counsels is good ones, that counter-speech is more appropriate than suppression, and that reason will ultimately prevail. It’s this profound Enlightenment faith, Jeffersonian and Athenian, in reason and deliberation, and it’s the essence of our constitutional system. Just because it’s so good, I’m going to read the next paragraph, because it also encapsulates the lesson, and resonates deeply with the online debate, as well as the campus free speech debate today.

“Those who won our independence,” Brandeis said, “recognized the risks to which all human institutions are subject. But they knew that order cannot be secured merely through fear of punishment for its infraction; that it is hazardous to discourage thought, hope and imagination; that fear breeds repression; that repression breeds hate; that hate menaces stable government; that the path of safety lies in the opportunity to discuss freely supposed grievances and proposed remedies; and that the fitting remedy for evil counsels is good ones. Believing in the power of reason as applied through public discussion, they eschewed silence coerced by law—the argument of force in its worst form.”

So there you’ve got it. If you need a defense of the tradition, you just quote Brandeis. Whenever I have a hard question involving free speech and technology, I ask not a simple question, but a basic question: “WWBD: What would Brandeis do?” So what I want to do is try to channel WWBD with you on the questions that Nicco asked me to talk about—free speech on the internet. In particular, I want to first talk about the new European “right to be forgotten” on the internet, and then I want to talk about this question of content moderation: the policies, what they should be, and what we can do about it.

The right to be forgotten story is absolutely fascinating, because it shows what happens when you favor European notions of dignity over American notions of liberty. The European Court of Justice, in the Costeja case a few years ago, recognized a sweeping new right to be forgotten on the internet. If we were in Europe, and one of you were tweeting “Jeff is giving a boring speech right now, and I really would rather be anywhere else,” then after the speech was over, I could sue you in Europe, and I could say that your tweet violated my dignity. And then Google would have to make a decision about whether I was a public figure, and whether your tweet was in the public interest. And if Google guessed wrong, and refused to remove the tweet, and the European privacy commissioner disagreed, Google would be liable for up to two percent of its annual income—which in Google’s case, last year, was $60 billion. That tends to concentrate the mind. As a result, Google has granted 43 percent of the takedown requests it has received—500,000 requests in the past two years, seeking the removal of 1.5 million links, including a link to an article about the right to be forgotten itself. And as a result, a great deal of information very much in the public interest has been denied to citizens in Europe. Now, the problem is about to get even more challenging. It’s likely that the French privacy regulator may order Google to engage in global takedowns. Right now, Google is only removing content that’s deemed illegal in France on google.fr, but if global takedowns are ordered, Google has three options: First, it can take things down globally, which it doesn’t want to do, because that doesn’t coincide with its mission of making information available to the world; second, it can withdraw from France, which it doesn’t want to do, because it’s a profitable market; and third, it could create two search engines—basically google.europe and google.us—and all the right to be forgotten material could be removed from google.europe, while the U.S. one would remain uncensored. That may be the best outcome, but if it happens, you will see very dramatically what happens when dignitary rights trump liberty rights. And I think the mature Brandeis, having initially favored dignity over liberty, would’ve repudiated the right to be forgotten, and preferred liberty over dignity.

Let’s now think about the content policies of Facebook, Google, and Twitter in the U.S.—and is Brandeis’s vision feasible, and can the platforms be persuaded to adopt it? On the one hand, citizens are finding on social media—like on Facebook, and Twitter, and on blogs, and in comments sections—some kind of digital fulfillment of the Periclean commonwealth that Brandeis viewed as the model for public discussion. On the other hand, the same web communities are full of idle gossip that Brandeis lamented in his youthful article, full of cat videos and other trivia, on a scale that makes Brandeis’s concerns about the invasion of social privacy by the Kodak camera look tame.

There’s also a serious concern about whether speech on the internet has become so polarized into communities of interest that Brandeis’s faith in counter-speech no longer applies. At their worst, internet mobs can create echo chambers that represent the death of public reason. Cass Sunstein, here at the law school, has suggested that more speech on the internet may lead to less exposure to competing points of view, less reason and deliberation, and more group polarization. If that’s true, it would call into question Brandeis’s faith in counter-speech, and his conclusion that the fitting remedy for evil counsels is good ones. And the fact that internet mobs can polarize so quickly, exaggerating the influence of extreme views, also challenges Brandeis’s faith in the importance of time as a precondition for reasoned deliberation. Time is crucial: the whole thing hinges on the idea that there’s time enough to expose the falsehoods and fallacies through discussion, and the speed with which internet mobs polarize may call into question Brandeis’s faith that counter-speech is the best response.

On the other hand, Brandeis would have insisted on empirical evidence for what’s been called the “Filter Bubble” effect—the claim that people are building online echo chambers that reinforce rather than challenge their existing points of view. And the empirical evidence, to say the least, is mixed. I won’t give it all to you, but there’s some evidence that there’s less speech polarization on Facebook than people fear.

Faced with the ambiguous evidence, I imagine that Brandeis would have been cautiously optimistic—or maybe “nervously” optimistic is a better word—about the possibility of achieving reasoned deliberation in bounded communities online. He would’ve insisted that individual citizens—not governments or judges or “internet deciders”—bear the ultimate responsibility for using social network technologies to fulfill their political duty of public deliberation, and the recent public outcry and social galvanization on issues involving race and policing as a result of online videos would have pleased him.

For all these reasons, I think Brandeis would have been critical of the recent decision of Facebook, YouTube, Twitter, and Microsoft to sign a code of conduct formulated by the European Commission, in which they’ve agreed to review reports of illegal hate speech on their platforms within 24 hours, and to act on them by removing the content or disabling access to it, as long as the complaints are precise and substantiated. And he would have insisted as much as possible that the content policies of the major platforms track U.S. constitutional standards.

Above all, he would have insisted on greater transparency of content moderation online. The former public policy chief of Google, Andrew McLaughlin, told me back in 2007 that he hoped that growing trends to censor speech at the network level and elsewhere would be resisted by millions of individual users who would agitate against censorship as they experienced the benefit of free speech. And there is a very recent high-profile vindication of that hope. That was the case with Nick Ut’s photo of the naked Vietnamese child running from a napalm attack during the Vietnam War. Facebook initially blocked the photo: its content policies clearly prohibit nude pictures of children, so the photo was a violation of the site’s community standards. But in the face of the public outcry, and an open letter to Mark Zuckerberg written by the editor of the Norwegian newspaper that launched the controversy, Facebook retreated and put the image back up. In his letter, the editor said that if the company failed to correct the error, it would “simply promote stupidity and fail to bring humans closer to one another.” I think that the protest against [the removal of] the photograph and its reinstatement was a heartening example.

The decision to remove it was made, presumably, by human beings; we don’t know whether it was human beings or algorithms, but the photo was a technical violation of the standards. But the decision to reinstate it, because the photograph was so clearly in the public interest, is an example of the triumph of reason. I’m concerned, though, that these decisions are not always transparent. I’ve come to know some of the deciders, having served on a committee convened by the Anti-Defamation League that brought together the leading deciders at Google, Facebook, and Twitter. And broadly, these are American lawyers, in the free speech tradition, who are trying, in the face of great commercial pressures to the contrary, to enforce as much of the American free speech standard as possible. They’re acting like judges, but none of their decisions are transparent. And I think that they and the companies would be well served if their decisions and their reasoning were published, much as U.S. Supreme Court decisions are published. It would redound to the companies’ benefit, and it would also allow a degree of transparency and accountability. There also have to be more transparent appeal mechanisms. Right now, if you’re blocked from Facebook or Twitter or Google for violating their content policies, your account is disabled, and people have trouble getting their accounts reinstated. We need rule-of-law values, just as they exist in real space, that allow citizens who believe that their speech has been wrongly suppressed to appeal.

My great concern, though, is that I’m not confident that the public will demand First Amendment and constitutional values—such as transparency, procedural regularity, and free expression—over dignity and civility. On college campuses in America, and on digital platforms around the world, public pressure is clamoring in the opposite direction, in favor of dignity rather than liberty of thought and opinion.

As public pressures on the companies grow, they may increasingly try to abdicate their role as deciders entirely, to avoid being criticized for making unpopular decisions. I can imagine a future where Google, Twitter, and Facebook delegate their content decisions to government, to users, or even to popular referenda, in order to avoid criticism and accountability for exercising human judgment. The result would be far more suppression of speech, and less democratic deliberation than exists now, making the age of the deciders look like a brief shining age, a Periclean oasis before the rule of the mob and the dictator. Although the current deciders at Facebook and Google are struggling valiantly to resist these pressures, because of their commitment to the American free speech tradition, in for-profit corporations, consumer pressures will ultimately prevail, as Nicco’s story about CBS reminds us. But the consequences of consumer pressures prevailing on the internet, the vast increase in the scope of speech that would be suppressed, and the illiberal standards that would be applied, make the decisions to suppress or not, of Walter Cronkite, or Frank Stanton, or Richard Salant, look tame. That’s why the stakes are so tremendously high, and the market pressures, the mob pressures, the consumer pressures, so troubling.

The values of consumers are not the same as the values of citizens. Both Brandeis and the framers insisted that the right to complete freedom of thought and opinion—a right Jefferson believed came from nature’s God, and not from government—should ultimately be shaped not by mobs or majorities, but by courageous, engaged citizens who take the time to develop their faculties of reason. As Jefferson put it in the beautiful letter about the founding of the University of Virginia, “This institution will be based on the illimitable freedom of the human mind. For here we are not afraid to follow truth wherever it may lead, nor to tolerate any error so long as reason is left free to combat it.” Facebook, Google, and Twitter should be based on the same principle. Like universities and media outlets, online speech platforms should not be safe spaces. They should be democratic spaces, with the ultimate victors in the clash of ideas determined by reason and deliberation, not the urge to avoid offense.

For this reason, something like U.S. constitutional standards, applied by fickle humans, seems to me the best way of preserving an open internet. It’s time, in other words, for some American free speech exceptionalism, if the web is to remain open and free in the twenty-first century. Commercial pressures are pointing in the opposite direction, toward the rule of the mob and the dictator, not the rule of reason. So as the centralizing tendencies of democracies, corporations, and technologies around the world continue to threaten human individuality and liberty, Brandeis’s warnings about the duties of public deliberation must again be heard. As Brandeis wrote, “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants.” Or, as he also put it, “If we would guide by the light of reason, we must let our minds be bold.”

Thanks so much, and I would love to have a conversation with you about this great topic. (Applause)

Nicco Mele: Thank you, Jeffrey. We have time for a few questions, and to get into a bit of a conversation.

From the audience: Thanks so much for an incredible speech. Just applying “WWBD” and the idea of promoting or inciting violence: how do you think governments, in particular, can deal with actors whose strategy might not be immediately promoting violence, but is working toward that goal? I’m thinking particularly of groups like ISIS, or other radical terrorist groups. The online discourse might not necessarily be promoting violence immediately, but cumulatively, over time, that is the goal. How do you think we deal with that, using WWBD?

Jeffrey Rosen: It’s a great question. It’s one of the hardest questions for the platforms, and for us. And there’s no doubt, as you say, that the long-term strategy of these groups is to win recruits that will promote violence down the line. And Facebook has made some very tough calls about this, and allowed some especially brutal execution videos that were soon followed by acts of violence. But WWBD—imminence means imminence. That’s the whole game. It has to be “Go kill Jeff now.” And the threat has to be credible and likely. You couldn’t just be bored by the speech; you’d have to really want to do me in. (Laughter)

I think the bottom line is that the standards for combatting online extremism are many and varied, and counter-speech is very important, and Facebook has been actively involved in winning the hearts and minds of influencers in potentially radicalized regions by trying to engage in counter-speech. But once you abandon the imminence requirement, the entire structure collapses. Drones are the analogue. The Obama administration, justifying drone strikes, suggested that someone who expressed willingness to carry out terrorist acts in the future posed an imminent threat, justifying targeted assassination. And like many American civil libertarians, I found that legal analysis unconvincing, because it defined imminence out of existence. So without in any way denying the seriousness of the problem you identify—the need to combat online extremism—banning is not the answer. It’s also ineffective, and we’ve found that societies that do ban more hate speech, like France and Germany, are hardly immune from attacks; in fact, they have more. So it’s not at all clear, even if you believe in the dangers of this persuasion, that suppression is the best way to combat it. Great question.

From the audience: I have two questions about “What would Brandeis do?” So, one: What would Brandeis do about the mob mentality you see on Twitter, when you have hundreds, thousands of people and bots tweeting slurs and phone numbers at people, when you start to see, maybe, the threat of violence build up, and that drives people off? And second: Even if we move past the banning discussion, what would Brandeis do about things like algorithms that preference particular materials? What does it mean to take a balanced, polis-type view of preferencing certain types of materials so that they show up in newsfeeds? There needs to be a choice made at some point. It doesn’t seem as though the newsfeed can be entirely neutral. Or if it is possible, I don’t know what that neutrality is.

Jeffrey Rosen: Those are two such good and hard questions. So, online mobs are a terrible shaming device, and they represent the death of public reason, and that great New York Times excerpt from the book about people whose lives were destroyed by jokes taken out of context—a single tweet which led them to be fired, which led them to have their careers destroyed—reminds us of just how brutal the unreason of the mob can be. When it comes to targeted and credible threats, those are already forbidden under the speech policies of all the platforms. What would Brandeis make of the mobs? He wouldn’t have liked the mobs. What can be done to stop them? The content policies don’t stop the mobs, because when something goes viral, by definition, it’s shared without restraint.

So I suppose Brandeis would have looked for mechanisms of reintroducing public reason. Here’s a good Brandeisian example. There was a Twitter mob thread called “Un bon Juif”—“A good Jew”—and it was initially anti-Semitic: a good Jew is a dead Jew, is a victim of the Nazis, and so forth. Counter-speech was introduced, and people started saying that “Un bon Juif” is Einstein, is Gershwin, and it switched. And because of the counter-speech, it tipped, and became pro-Semitic, and not anti-. I’m not naïve about the ability of counter-speech to stop mobs. Mobs are a terrible democratic problem of the internet, one that’s having broader political effects than hate speech alone, and Brandeis would have deplored them. I just don’t think, though, that it would lead us to change the legal standards, which focus only on direct and intentional incitement of violence.

The algorithmic question is really good, and really tough as well. Just thinking aloud, I guess I would say, yes, Frank Stanton and Richard Salant remind us of the values of editors, and the need to balance, as Nicco said, what speech is in the public interest, and what isn’t. And the objection to Facebook, first, was that it was making politically slanted decisions through its content moderators—favoring liberals over conservatives—and then it tried to replace that with an algorithm, and the algorithm produced nonsense. I’m very concerned, as I said, about the fear of being held accountable for human discretion—because [people] have such a suspicion of experts now—that Facebook and Google are deathly afraid that their deciders will make the wrong decision. So moving entirely to algorithms is one option. Another is just delegating it, as I said, to foreign governments. Let Germany, or China, or Russia—let the governments tell them what to take down. That would be the death of free speech. Or let’s have a referendum, and have people vote about what speech should be taken off. That’s bad too. So I think there’s no such thing as a neutral algorithm, but algorithms are not the solution; judgment is.

One final interesting First Amendment twist: Google is arguing that its search algorithm itself is protected speech under the First Amendment, and therefore any regulation of Google violates the First Amendment. Eugene Volokh, who you can find on the thrilling “Interactive Constitution” app, argued that very thing, in a very interesting white paper, written for Google. And if that position were carried to its logical conclusion, then all the Federal Trade Commission regulations of Google, all the anti-trust regulations of Google, and so forth, could be violations of the First Amendment.

Some have called this “First Amendment Lochnerism,” referring to the Lochner case from the early Progressive era, where the Court struck down a maximum-hours law for bakers. And the idea is basically that, by protecting everything that the platforms do as corporate speech, it makes them utterly immune to regulation. That’s not a Brandeisian solution. Brandeis was a foe of bigness in business and government. He wrote the most memorable attack on J.P. Morgan for the risks he took with other people’s money, investing in complicated financial instruments the House of Morgan couldn’t possibly understand, leading to the crash of ’29, and also, of course, to 2007. I interviewed Justice Ruth Bader Ginsburg for the Brandeis book, and I asked her what Brandeis would’ve thought of the Citizens United case, and she said, “He would not have been a fan of Citizens United, not at all,” because it implicated his concerns about curbing corporate bigness, and also the dangers of unregulated corporate speech. So I’m not a fan of the algorithmic defense.

From the audience: I just wanted to ask you about any hope you have for the European system versus the American? The hope in the European model is that this doesn’t become the purview of 28-year-old Silicon Valley Stanford graduates, and actually gets to stay in court. Now, you may disagree with the European Court of Justice’s decision, but at least this was done by a court, and it seems like a better procedure. Do you see any hope in that?

Jeffrey Rosen: I don’t. I respect the challenge, and let me tell you why I’m not persuaded, although you did ask me if there’s any glimmer of hope in the European model, and I do see some. I think the European Court of Justice made a category error of cataclysmic constitutional proportions, and Robert Post has elucidated this in a great lecture at Berkeley recently.

They took a data protection act that was supposed to regulate the bureaucratic collection of data—which was essentially a ministerial function—and applied it to the democratic sphere—which is essentially the public sphere of platforms, which is supposed to be open to all. So they confused bureaucratic and civility norms with democratic norms, and came up with this odd solution of delegating the task of channeling European dignity norms to 27-year-old lawyers at Google. Far from taking the decision away from Google, it increased the pressure on Google’s lawyers to expand their team to channel what privacy commissioners might say, and because of their fear of liability, basically, to take most stuff down. It was also an odd conceptual idea. Remember, the decision only applies to Google and Yahoo!, to search engines, but not to the underlying media organizations. You can still find the article about Mr. Costeja’s bankruptcy case in the Spanish newspaper—if you know where to look for it—but not at Google. So the urge to regularize is one I sympathize [with]—but the urge to juridify proves, in this case, to do exactly what the mature Brandeis thought shouldn’t happen: juridifying decisions about what the public should be interested in. That was why he came to repudiate his youthful article about the right to privacy, because he thought judges have no business deciding what speech is in the public interest and what isn’t. That was a decision the citizens in a democracy had to make for themselves.

What’s good in the European model? I think their effort to use anti-trust laws to maybe break up Google and Facebook is Brandeisian. He was a foe of coercive bigness. He would’ve been wanly amused by Google’s rebranding of itself as Alphabet in an attempt to create a disaggregated body. But if we’re concerned—as I have expressed concern—that corporate values are being applied to the free speech marketplace, and in particular that monopoly… this was the great concern of Jefferson and Brandeis. Jefferson introduces an amendment to the Constitution that would’ve forbidden Congress from setting up corporations with exclusive privileges or monopolies. And that anti-monopoly tradition goes from Jefferson to Jackson to Wilson and Brandeis. In the election of 1912, all three major candidates are going against the banks, but Taft wants to prosecute them under anti-trust laws, the European model; Brandeis and Wilson want to break up the banks; and Theodore Roosevelt wants to create big regulatory bodies to oversee the big banks. I was distressed to see, in the campaign, Bernie Sanders attribute his proposal to break up the banks to Roosevelt. Actually, it was Wilson and Brandeis. Roosevelt was a big-government as well as a big-corporations guy.

But I think, if we’re concerned about corporate power over speech, then the European instinct to think about disaggregating might make sense. But I do think they made a basic category error in trying to juridify decisions about what the public should be interested in. I just have to give a shout-out to your great paper, too. We talked before the show about your writing about Hugo Black and William O. Douglas, two other justices from the mid-twentieth century who took a more radical and deregulatory view of speech than Brandeis himself did, and I can’t wait to read what you learned about that. Thanks so much.

From the audience: This is also a question about the glimmer of hope for the European model, but I was wondering if you could address the role of privacy in promoting liberty? Most of us no longer say anything interesting in writing, and I think young kids are being raised not to say anything in writing, because of the permanence of that writing and its ability to spread everywhere.

Jeffrey Rosen: Well, it’s a really important question, and Brandeis had a lot to say about it, and I care a lot about it too. I learned from one of your colleagues here tonight that, on some campuses, there’s pressure to remove articles about what you did in college. I’m here for my thirtieth college reunion. Thank God there was no internet when I was here. I’m not kidding. It’s very different now. And if the right to be forgotten were just limited to removing stuff from before you were 18, that’s the core of the French right. It’s from the French “droit à l’oubli,” or the “right of oblivion,” which is incredibly French. It’s like straight out of Sartre, you know? The French want to be forgotten, and Americans want to be remembered. (Laughter)

But the core of the original French right—and also the German right—is, if the crime took place 20 years ago, or when you were a kid, then it’s expunged from your record. And that makes sense—a second chance for youthful indiscretions. It’s this broad dignitary right—the right to demand the removal of anything that offends your dignity, anywhere—that troubles me. But here are Brandeis’s mature thoughts on privacy: he came to reconceive it from a right of celebrities to keep truthful but embarrassing information out of newspapers into a right of intellectual privacy—a right of all citizens to be free from prying surveillance that could reveal their unexpressed thoughts, sensations, and emotions.

Can I give you a three-minute version on this? Because it’s so cool. You can tell I really like Brandeis a lot. Your other homework, if you choose to accept it, is Brandeis’s dissenting opinion in the Olmstead case. Nineteen twenty-eight, wiretapping: they were enforcing Prohibition, and there’s this big bootlegger who’s importing all this booze from Vancouver. They put wiretaps on the public sidewalks leading up to his office, and they eavesdrop on his phone conversations. They find out he’s a wild bootlegger, they indict him, and they convict him. A majority of the Court, [led by] Chief Justice Taft, upholds the conviction: no trespass, no privacy violation; it was a public sidewalk. Brandeis dissents. He has this incredible, visionary dissent. In his desk is a clipping about a new technology, television, but he misunderstands television—it’s 1927. He thinks it’s a two-way technology, where people on both sides of the camera can see each other. He anticipates Skype and webcams. And his law clerk, Henry Friendly, another Harvard giant, says, “You can’t just look at a television and see someone on the other side.” Now, of course, you can. But Brandeis alludes to it, and there’s this incredible passage—and this is not word for word; I don’t have the party trick. This is just basically what he says:

Discovery and invention are not likely to stop at wiretapping. Ways may someday be developed where it’s possible, without physically intruding into desk drawers, to extract secret papers, and introduce them in court. Advances in the psychic and related sciences may make it possible to reveal unexpressed thoughts, sensations, and emotions.

So he’s anticipating the internet, where we store our papers on third-party servers in the digital cloud, and fMRI technology, which can read minds.

And he says: at the time of the framing, a smaller invasion—one that allowed the king’s agents to rummage through people’s houses to search for the authors of anonymous political pamphlets criticizing the king—sparked the American Revolution. And if that was unreasonable at the time of the framing, then wiretapping has to be, because it can invade the privacy of people on both ends of the wires.

So, WWBD [what would Brandeis do]? I think Brandeis would say that any government-sponsored technology that has the potential to reveal our unexpressed thoughts, sensations, and emotions violates our intellectual privacy, and the anonymity on which liberty and freedom depend. So he would have struck down the GPS [global positioning system] devices that make possible 24/7 surveillance, which the Court struck down. He wouldn’t have liked these new Baltimore Police helicopters that are flying around the city and transmitting, in real time, 24/7 images of all public movements. People have called it “Google Earth meets TiVo.” So if we’re in Baltimore, you can go to the database, take a picture of me, back-click on me to see where I came from, forward-click, and reconstruct my movements 24/7 for a month. That’s a violation of our unexpressed thoughts, sensations, and emotions, with or without a warrant.

So all that is very robust protection for intellectual privacy when it comes to government surveillance, but it doesn’t entirely solve the problem that you rightly identified—and that I’m also concerned about—which is: what do we do for second chances in a world where the internet never forgets? And even limited forms of the right to be forgotten could go some way toward that. Can I just ask—because the usual answer here is that younger people will be more forgiving, and as more comes out, the norms will adjust. We’re in the middle of a political drama where someone’s not-so-youthful private conversations have not been overlooked. Do you agree with the proposition that the norms will change, and people will just come to forgive recorded indiscretions, or not?

From the audience: No, I don’t think so.

Jeffrey Rosen: And what do you think the answer is?

From the audience: I don’t know. It’s a tough problem, but I think that if we don’t have some ability to be forgotten, that works against all the values you’ve been talking about: the ability to develop our own thoughts and our own views of the world.

Jeffrey Rosen: I don’t know if this is too much, but this is the Jewish New Year, and the Talmud is full of injunctions about how to atone for sins and be forgiven and forgotten. And there are all sorts of norms about how, if you’ve wronged someone else, during this time of atonement you’re supposed to go to them and apologize in person. And if you apologize three times, it’s considered bad form for the other person not to accept it. So maybe we’ll both learn to treat each other with greater respect and dignity, and also learn new norms of forgiveness, as well as forgetting.

From the audience: Since the internet doesn’t have geographic boundaries, does there need to be some sort of international legal framework to govern privacy and speech rights, the way that the Geneva Conventions sometimes ineffectively govern certain rights, and the way international trade disputes are handled in international settings?

Jeffrey Rosen: It’s a very good question. It’s worth thinking hard about, and thinking about lots of models. But I am very concerned about international bodies, because the ones that have been created to police internet speech so far have proved to be brutally suppressive of speech rather than protective of it. A leading international convention adopted the standards of China and Russia, not of Europe. It advocated censorship at the network pipe level—in other words, not takedowns in response to individual requests, but identifying entire categories of speech that would be blocked by Comcast and Verizon and France Telecom at the network level.

I’ve just made this very passionate argument for American free speech exceptionalism, and given the fact that America is far more protective of speech than any other country in the world, I would fear, obviously, that any international body would enforce norms that were far less protective than those of the U.S. But there are meaningful differences, not only between Russia and China and the democracies, but even between France and Germany, and among the Western democracies. Privacy and dignity are not universal norms; they mean different things in different contexts. There’s a wonderful article by James Whitman, the Yale law scholar—“The Two Western Cultures of Privacy: Dignity Versus Liberty”—which talks about why Europe embraces dignitary values rooted in its tradition of hierarchy, and America embraces liberty. In Europe, it’s a crime to give someone the finger on the highway. Imagine how that would fare in California. Or in Boston. (Laughter)

Whereas America’s much more liberty-based and property-based. Your colleague argued for juridification, even by the European Union. I think a transnational body would be even more troubling. So my strong impulse here is voluntary adoption of standards that coincide with American constitutional ones, but that’s a tall order, and I know there’ll be very great pressure in the other direction.

Nicco Mele: I’m sure you’ll be available for continued discourse just after we’re done here. I want to do a couple of final thank yous. I’m very grateful to Judge Mark Wolf over here in the corner, for introducing me to Jeffrey some years ago. (Applause)

And a big thank you to you for making the time for us this evening. Thank you so much. (Applause) These events don’t just happen. The Shorenstein Center staff has been working very hard to make it happen, especially Tim Bailey, here in the corner. Thank you. (Applause) And with that, I’ll bid you adieu, with one final reminder: it’s not too late, in some jurisdictions, to register to vote. You can go to turbovote.org, founded by a Kennedy School alum, and register to vote. Thank you very much. (Applause)