Beyond the Gates: Making Speech Free for All
No gatekeepers—but platforms must help us build the world beyond the gates.
In Greek mythology, Pandora receives a sealed box filled with things the gods kept from humanity. She is warned not to open it, but curiosity wins. When the box opens, everything inside escapes: conflict, deceit, jealousy, plague, and all the ills that afflict human life. Pandora panics and slams the box shut, but it’s too late; the world has changed irreversibly. What was contained is now loose, and no amount of regret or force can put it back.
People sometimes compare social media to Pandora. These platforms opened the ‘box’ by dismantling the old media gatekeepers and creating a world in which anyone can publish, speak, organize, and persuade. In an instant, enormous goods escaped: creativity, connection, the democratization of information, and so on. But a lot of harmful speech came out too: trolling, hate, falsehood, and propaganda, to name a few.
It can be tempting to think that we just need to bring back the gates: to figure out how to keep the good that comes with an expansive speech environment while shutting the bad out. I'm not convinced we can do that, and I don't think we should try, for reasons that Dan Williams lays out in his excellent recent essay.
So let’s assume we can’t do that anymore. Instead of trying to reseal the box, the real question is how to deal with what has escaped. In particular: how can we respect free speech while ensuring that everyone can participate on fair and equal terms, without being harmed by its excesses?
This essay has three parts. In Part 1, I argue — with the help of Alexis de Tocqueville — that restrictions on the media, old and new, tend to do too much and too little at the same time. It's remarkable how well a nearly 200-year-old analysis of free speech and the media applies to our present moment.
In Part 2, I ask what follows from this. Many critics of censorship and “cancel culture” stop at saying that the answer to bad speech is ‘more speech’. That is true, but incomplete. We need people who contest harmful speech, but we also need better conditions for so-called counterspeech to be effective. In other words, we need to support those who speak back.
In Part 3, I propose a way for social media platforms to defend free expression while also supporting counterspeech (building on ongoing joint work with Jeffrey Howard).
Simply put: if we’re going to live with more speech, we need to live with more support for those who answer it. A lighter moderation regime doesn’t end responsibility; it shifts it. When platforms moderate less, they owe more to the people who push back. What we should want, ultimately, is to open the gates so that everyone can walk through as equals.
Part 1: “Too Much and Too Little” - On the Limits of Controlling the Media
In the 19th century, the French political thinker Alexis de Tocqueville wrote an epic study of American society and the influence of democratic ideas on its citizens.1 In one memorable chapter of Democracy in America, he examines the Americans’ fascination with press freedom. He does not love freedom of the press because it is intrinsically valuable. He famously says,
“I love it much more from consideration of the evils it prevents, than the good things that it does.” - Democracy in America, p.290.
He goes a step further and says that he would accept an intermediate view between the media's “complete independence” and “total subservience” to the state - if such a position were stable. But, step by step, he argues that every attempt to curb the press ends up doing too much and too little at the same time.
First, he says, let juries judge writers and punish harmful writing without heavy censorship. But juries might sympathize with writers and acquit them, and then what was the opinion of one person becomes the “opinion of the country”. The result? Too little censorship (the idea spreads) and too much dissemination (the trial amplified it).
Second, he suggests handing the task to professional judges, on the assumption that they will be stricter and more consistent. But judges must hear the evidence in open court. The offending views will be read aloud, discussed publicly, and often disseminated more widely than they would have been if left alone. Again: too little and too much.
Third, just hand the writers over to official censors to stop the ideas before they spread. But even if one newspaper is silenced, people can turn to others or simply speak in public. Indeed, a banned idea being spoken by one charismatic speaker may attract more attention precisely because it is forbidden.
To prevent that, Tocqueville concludes, you must simply “destroy freedom of speech along with freedom to write.” Order is restored - but at a cost:
“What point have you reached? You had set out to repress the abuses of freedom and I discover you beneath the boots of a tyrant.” - Democracy in America, p.290.
The logic is clear: to be effective, content moderation must keep expanding. This is why the middle-ground view Tocqueville searches for collapses into one of two extremes: total independence or total control.
Transition to Today
Tocqueville was writing in 1835, long before broadcast media or the Internet. The newspapers of his day didn't exactly represent everyone, and neither did the golden age of 20th-century media gatekeeping. For example, the BBC, The Times, and The New York Times produced incredible journalism, but they also set the boundaries of public debate (helping establish the so-called Overton window). What they ignored, millions never heard.
Still, Tocqueville's core insight transfers quite neatly to today's world: platforms with the potential to be the freest media – because they allow everyone to speak – also face the most pressure to control expression. And, if Tocqueville is right, every attempt to manage speech risks doing too much and too little.
Many people today agree with Tocqueville's argument, but they often don't explain what should come next. What should we do about harmful speech like misinformation and hate speech?2 Sure, there may be good reasons to ‘open the gates’, but what do we do now?
Nearly a century later, Justice Louis Brandeis - in Whitney v. California (1927) - gave an influential answer to the question: “the remedy to be applied is more speech, not enforced silence.” In other words, harmful speech must be met with counterspeech. But what is counterspeech, and why is it valuable?
Part 2: The Value of Counterspeech
Counterspeech can be understood as “any form of communication that tries to counteract potential harm brought about by other speech.” Fact-checking a false claim is counterspeech. Calling out degrading or hateful remarks is counterspeech as well.
The moral appeal of counterspeech is that it reconciles two liberal commitments: respect for free expression and concern for the harms speech can cause. As philosophers like Jeffrey Howard argue, we sometimes have a moral obligation to engage in counterspeech, grounded in a more general duty to prevent harm to others when the personal cost is low. If you see someone being harmed — by actions or by words — and you can intervene safely and effectively, you should. Supporting free speech, Howard rightly notes, does not entitle us to rest on our laurels while ignoring its consequences.
At this point, two objections naturally arise.
First objection: Once someone says something racist, hateful, or wildly untrue, isn’t the harm already done? Why should a chorus of “That’s false/mean/racist” make any difference?
Second objection: Some people tell me — and I’ve heard this directly — that it’s easy for me to say “let’s use counterspeech instead of limits” because I’m a straight, white man and have not been on the receiving end of certain harms. Isn’t counterspeech just a luxury for the privileged?
Both objections are important and worth taking seriously.
Let's start with the second. I can understand where people are coming from. But the objection quietly assumes that I defend counterspeech for my own comfort, and that's wrong. I defend it for two reasons. First, because I care about others. Why else would I suggest a response to bad speech at all? Second, because counterspeech respects two rights at once: the right to speak, and the right not to have degrading speech go unchallenged. Supporting counterspeech is not about sparing myself effort — it’s about defending the dignity and voices of those affected by the excesses of free speech.
This brings us back to the first objection. Once the speech is uttered, offline or online, has it not already harmed or degraded its targets? Isn't that why we should prevent it from being said in the first place? And if so, what is the point of countering bad speech after the fact?
Here's the key mistake with the objection: offensive or hateful speech doesn't harm in isolation – it harms through a social environment that gives it authority and reach. Imagine someone on an island shouting slurs into the ocean, or someone posting bigotry online to an account with two followers that no one reads. In these cases, it’s intuitive that no one has been harmed.3 Some speech causes harm not just by existing but by circulating, being believed, being normalised, or going unchallenged.
So the job of counterspeech is not to erase the initial sting of a remark. Instead, it can shape the downstream social meaning and consequences of what was said. When a community practices counterspeech, hateful or false claims don’t stand uncontested; they lose authority, credibility and their ability to define norms.
Of course, individuals can’t do this alone. Just as a slur shouted into the ocean harms no one, counterspeech that reaches no one has no effect. For counterspeech to work, it needs an environment in which voices can be heard and supported. That’s where social institutions – like social media platforms – come in. And crucially, just because you've decided to remove the gates to free expression doesn't mean that you have no responsibilities at all for what happens beyond the gates.
Part 3: If You Moderate Less, You Must Amplify More Counterspeech
In my view, the core responsibility of platforms can be captured in one slogan:
If you're going to moderate less bad speech, you must amplify more counterspeech.
I think this is very defensible. First, the owners of social media plausibly have a right to manage their platforms in the ways they see fit, subject to the usual caveat of avoiding non-consensual harm. They have ‘curatorial rights’. Meta can decide whether it wants to be ‘family-friendly’ and keep porn off its platforms. Similarly, if a platform decides to engage in less content moderation, it has a right to change how it manages the platform and allow more ‘lawful but awful’ content (provided that it continues to remove illegal content).4
But even if you're justified in making such a change, that doesn't mean you're not responsible for what comes next. Further, even if it is OK to moderate offensive speech less, it doesn't seem acceptable to amplify it beyond a chronological baseline. Some studies suggest that some platforms are, in fact, doing this. Others suggest that this kind of speech is simply more engaging, and that's why it's far more visible. Whether offensive content is algorithmically amplified or simply engages people more, the result is the same: harmful speech becomes disproportionately visible relative to counterspeech.
When hate or other degrading speech becomes disproportionately visible, it doesn’t just offend — it chills. People who are targeted or marginalised understandably withdraw from public participation. Few want to speak back to a hostile crowd, online or offline. And if harmful speech reliably drives vulnerable speakers out of the conversation, free speech is not preserved but hollowed out. On free speech grounds themselves, platforms must act when visibility imbalances effectively silence certain groups. Importantly, if they don't do anything about this, then we have good reason to believe they're defending free speech in bad faith.
This is where counterspeech comes in. If platforms decline to remove harmful but lawful speech — as is their prerogative — they assume a parallel obligation: to ensure that those affected by such speech have meaningful opportunities to respond. Less moderation is not neutrality; it is a choice that increases exposure to harm. And choices that increase exposure generate duties to protect equal standing in the public sphere.5
In the case of hate speech, platforms could algorithmically amplify the comments, replies, or shares that contest such posts. Alternatively, they could have a speech-pair system where the most prominent counter-response is attached to the hate-containing post, so that users can see what was said and how it was responded to. The details are technical, but the principle is simple: these tools would allow platforms to make good on their promise of supporting free speech for all their users while helping users to see the value in counterspeech.
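To make the principle concrete, here is a minimal sketch of how a ranking rule along these lines might look. Everything in it is hypothetical: the `contests_parent` flag, the `COUNTERSPEECH_BOOST` multiplier, and the `speech_pair` function are illustrative assumptions for the purposes of this essay, not a description of any existing platform's system.

```python
from dataclasses import dataclass, field

@dataclass
class Reply:
    author: str
    text: str
    engagement: float       # e.g. likes + reshares (platform-defined)
    contests_parent: bool   # hypothetical classifier output: does this reply contest the post?

@dataclass
class Post:
    author: str
    text: str
    flagged_harmful: bool   # 'lawful but awful' flag from an assumed moderation pipeline
    replies: list[Reply] = field(default_factory=list)

COUNTERSPEECH_BOOST = 2.0   # illustrative multiplier, not an empirically tuned value

def reply_rank_score(reply: Reply, parent: Post) -> float:
    """Rank replies; replies contesting a flagged post get a visibility boost."""
    score = reply.engagement
    if parent.flagged_harmful and reply.contests_parent:
        score *= COUNTERSPEECH_BOOST
    return score

def speech_pair(post: Post) -> tuple[Post, Reply | None]:
    """Attach the most prominent counter-reply to a flagged post, if one exists."""
    counters = [r for r in post.replies if r.contests_parent]
    if not post.flagged_harmful or not counters:
        return post, None
    top_counter = max(counters, key=lambda r: reply_rank_score(r, post))
    return post, top_counter

# Toy example: the contesting reply gets paired with the flagged post.
post = Post(author="a", text="degrading claim", flagged_harmful=True, replies=[
    Reply(author="b", text="agreeing pile-on", engagement=40.0, contests_parent=False),
    Reply(author="c", text="that claim is false, and here's why", engagement=25.0, contests_parent=True),
])
shown_post, pinned_counter = speech_pair(post)
if pinned_counter:
    print(f"Pinned counter-reply: {pinned_counter.text}")
```

The point of the sketch is only that supporting counterspeech can take the form of a small, auditable adjustment to ranking and display, rather than the removal of the original post.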
These are just examples, but they illustrate the broader point: if platforms commit to openness, they also commit to protecting the users whose dignity is at stake in the speech environment. Expression is not just noise; it is how we show who we are, assert our agency, and participate in shared life — even when we sometimes express that agency poorly or harmfully. That is why free speech matters: it reflects the dignity — the special moral status — of human beings. And dignity is powerful: it works both as a sword, justifying wide freedom to speak, and as a shield, ensuring people can defend themselves against the excesses of that freedom.
Clarifications
I want to consider three points of clarification before finishing up.
The first is that even if you stop acting like a gatekeeper, that does not mean that anything goes. Hate speech or misinformation that incites violence, or which could immediately cause harm, shouldn't be allowed on the platform. Again, you can justify this from within the value of free speech itself. Speech that reaches the level of incitement, and whose success depends on means that silence others (e.g., the threat of physical injury or death), is itself a violation of free speech.
The second concerns whether counterspeech even works. While studies suggest that counterspeech rarely persuades committed speakers to abandon hateful views, it can be effective in shaping how such speech is received by others. Visible responses that challenge hate reduce perceived acceptability, limit social endorsement, and discourage downstream sharing. Just as importantly, they reassure targets and bystanders that degrading speech does not speak for the community as a whole. The harm of hate speech often lies less in changing minds than in establishing who belongs; counterspeech works, when it works, by contesting that authority.
Relatedly, there's already some encouraging data on Community Notes, the bottom-up crowd-sourced fact-checking system on X, suggesting that users trust its interventions and reduce their sharing of falsehoods. Community Notes isn't perfect, but it offers a promising example of user-empowered counterspeech, and shows that platforms are already capable of supporting the approach I have defended here.
The final clarification is about whether amplifying counterspeech - however it's done - could itself risk a kind of censorship or intrusiveness on the part of the platform. To some extent, we have to concede it is intrusive. But it is the least intrusive option that is still consistent with responding to the harms of certain speech.
We already accept this sort of necessity principle in the ethics of self-defence, for example. Intuitively, if someone is attacking you, you have a right to defend yourself but only using the least restrictive alternative available to you. If you can stop the attacker either by (i) shooting them in the leg or (ii) killing them, you should go for (i).
Social media platforms, similarly, have a role to play in helping us defend ourselves against harmful speech. In the case of (non-inciting) hate speech, if they can either (i) ban that speech or (ii) give us an effective way to challenge it, they should go for (ii). If they do, they respect both our interests in free speech and our interests in not being subject to gratuitously degrading speech.
Conclusion
I want to conclude by returning to Pandora. People often forget that in Hesiod's original myth, something was left in her box when she closed it: Hope. For centuries, scholars have disagreed about whether hope should be understood as a lost gift to humanity or as a further curse. I'm inclined to think Pandora was right to open the box, and that she couldn't have kept it closed forever anyway. More importantly, I think she should have fished hope out of the box.
Similarly, I commend platforms for wanting to support free speech, and I don't think we could have lived with gatekeepers forever. Censorship can paradoxically heat speech as much as it chills it, making expression more extreme rather than less.6 But, if I'm right, platforms cannot simply open the gates and walk away. Free speech is only one side of the coin; counterspeech is the other.
If gatekeeping was unjust in part because it denied people equal voice, then a permissive speech environment that tolerates hate or misinformation without supporting counterspeech risks recreating that injustice in a different form. Amplifying counterspeech is not a retreat from free speech. It is a way of honouring it under conditions of real inequality.
1. He was originally sent by the French government to study the American prison system, but decided to write one of the most influential treatises about American democracy instead.
2. Hate speech is notoriously hard to define. I prefer a broad definition like Susan Brison's: “Any speech that vilifies individuals or groups on the basis of such characteristics as race, sex, ethnicity, religion, and sexual orientation, which (1) constitutes face-to-face vilification, (2) creates a hostile or intimidating environment, or (3) is a kind of group libel.”
3. By ‘harm’ here, I mean: a way in which you are worse off than you otherwise would be if the speech had not been uttered.
4. This is vastly complicated by the fact that some countries are more respectful of free speech than others, which means that content that is illegal in Germany might not be illegal in America. If you're curious about what happens in those situations, Meta will ‘geoblock’ content that is illegal in Germany. This means users who have a German IP address will not be able to see, for example, any form of speech that amounts to Holocaust denial.
5. I think this same moral logic is what justifies moderating less content and having a system like X’s Community Notes instead. Of course, this system is not perfect and I’ll explain how I think it can be improved in a future post.
6. For the idea of a ‘heating effect’ in the context of self-censorship, see this paper by Robert Simpson.


