and getting cheaper (e.g., DDoS attacks). This allows governments to deflect the blame—they’re not doing the censorship, after all—and it also means that the total amount of censorship in the world is significantly undercounted. In many cases, governments don’t have to do anything at all; plenty of their loyal supporters will be launching DDoS attacks on their own. The democratization of access to launching cyber-attacks has resulted in the democratization of censorship; this is poised to have chilling effects on freedom of expression. As more and more censorship is done by intermediaries (like social networking sites) rather than governments, the way to defend against it is to find ways to exert commercial—not just political—pressure on the main actors involved.
It’s also becoming clear that authoritarian governments can and will develop sophisticated information strategies that allow them to sustain economic growth without loosening their grip on the Internet activities of their opponents. We certainly don’t want to spend all our energy tearing down some imaginary walls—making sure that all information is accessible—only to discover that censorship is now being outsourced to corporations or to those who know how to launch DDoS attacks. This is yet another reason why “virtual walls” and “information curtains” are the wrong metaphors for conceptualizing the threat to Internet freedom. They invariably lead policymakers to opt for solutions aimed at breaking through the information blockade, which is fine and useful, but only as long as there is still something on the other side of the blockade. Breaking through the firewalls only to discover that the content one seeks has been deleted by a zealous intermediary or taken down through a cyber-attack is bound to be disappointing.
There are plenty of things to be done to protect against this new, more aggressive kind of censorship. One is to search for ways to provide mirrors for websites that come under DDoS attacks, or to train their administrators, many of whom are self-taught, to manage such attacks properly themselves. Another is to find ways to disrupt, mute, or even intentionally pollute our “social graph,” rendering it useless to those who would like to restrict access to information based on user demographics. We may even want to figure out how everyone online can pretend to be an investment banker seeking to read the Financial Times! One could also make it harder to hijack and delete various groups on Facebook and other social networking sites. Or one could design a way to benefit from methods like “crowdsourcing” in fighting, not just facilitating, Internet censorship; surely if a group of government royalists trolls the Web to find new censorship targets, another group could also be searching for websites in need of extra protection?
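To make the crowdsourced-protection idea above slightly more concrete, here is a minimal sketch—hypothetical, and not drawn from the text or from any existing project—of what a volunteer-run monitoring script could look like: it checks a shared watchlist of at-risk websites and flags the ones that have stopped responding, a rough signal that they may need mirroring or other protection. The watchlist URLs are placeholders.

```python
# Hypothetical sketch: volunteers check a shared watchlist of at-risk sites
# and flag the ones that no longer respond at all.

import urllib.error
import urllib.request

# Placeholder watchlist; in practice it would be compiled and shared collaboratively.
WATCHLIST = [
    "https://example.org",
    "https://example.net",
]


def responds(url: str, timeout: float = 10.0) -> bool:
    """Return True if the server answers at all, False on timeout or connection failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True  # The server answered, just with an error status code.
    except (urllib.error.URLError, OSError):
        return False  # DNS failure, timeout, connection refused, etc.


if __name__ == "__main__":
    for url in WATCHLIST:
        flag = "ok" if responds(url) else "UNREACHABLE - may need mirroring/protection"
        print(f"{url}: {flag}")
```

Anything serious would of course need distributed vantage points, rate limiting, and a channel for reporting results back, but the point is only that the group “searching for websites in need of extra protection” could start from something this simple.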
Western policymakers have a long list of options to choose from, and all of them should be carefully considered not just on their own terms but also in terms of the negative unintended consequences—often outside the geographic region where they are applied—that each of them would inevitably generate. Of course, it’s essential to continue funding various tools for accessing banned websites, since blocking users from visiting certain URLs is still the dominant method of Internet control. But policymakers should not lose sight of new and potentially more dangerous threats to freedom of expression on the Internet. It’s important to stay vigilant and be constantly on the lookout for new, as yet invisible barriers; fighting the older ones, especially those that are already crumbling anyway, is a rather poor foundation for effective policy. Otherwise, cases like that of Russia, which has little formal Internet filtering but plenty of other methods of flexing the government’s muscles online, will continue to puzzle Western observers.
The main thing to keep in mind, though, is that different contexts give rise to different problems and thus require custom-made solutions and strategies. Clinging to Internet-centrism—that pernicious tendency to place Internet technologies before the environment in which they operate—gives policymakers a false sense of comfort and a false hope that by designing a one-size-fits-all technology that destroys whatever firewall it sees, they will also solve the problem of Internet control. The last decade, characterized, if anything, by a massive increase in both the amount and the sophistication of control, suggests that authoritarian regimes have proved highly creative at suppressing dissent through means that are not necessarily technological. Indeed, most of the firewalls to be destroyed are social and political rather than technological in nature.
The problem is that technologists who have been designing tools to break technological rather than political firewalls—and often have