The First Amendment to the U.S. Constitution reads, “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.” It says nothing about private entities, not-for-profits, or corporations abridging the freedom of speech.
If you step into my bar and start mouthing off, I can escort you back out to the street. If Google, Facebook, or Twitter hear their users promoting racism, murder, or sexism, they too can escort those users away from their services.
Indeed, that is now happening. A number of prominent social media companies and companies with social media products recently laid out plans to limit the reach of terrorists through their platforms.
Terrorists’ Use of Social Media
So far in 2017, there have been more than 800 terror attacks around the world, with more than 5,000 fatalities—about 6 deaths per attack. Some attacks have had dozens of fatalities. The Islamic State group’s attack on a concert in Manchester, U.K., saw 22 deaths and many more injured. Also in the U.K., the attacks near Westminster in March and on the London Bridge in June killed 13 and injured nearly 100, collectively. Even as hundreds die in Somalia, the Democratic Republic of Congo, Nigeria, and Mali, it is the attacks in Europe that bring western media attention to terrorism and therefore bring the attention of western corporations to social media.
The Islamic State group is famous for its slick social media treatment of its cause and ability to sell itself to those looking to go live in and die for its attempt at a caliphate. Politicians have often promoted tighter controls on social media platforms in order to identify terrorists or potential terrorists. After the London Bridge attack, British Prime Minister Theresa May said that internet companies provide a “safe space” for terrorists:
[W]e cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet, and the big companies that provide internet-based services provide.
We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning. And we need to do everything we can at home to reduce the risks of extremism online.
Facebook, Twitter, YouTube, and Microsoft were listening. Soon, they launched the Global Internet Forum to Counter Terrorism to share hash lists among members, to share information on extremist groups with governmental authorities, and to share best practices for identifying and stopping the spread of hate speech with smaller social media companies. Hash lists are databases of unique image identifiers that are also used to trace images of crimes against children. The big internet companies can use them to trace the origin of images generated from terrorist acts and thereby have a better chance of finding patterns and identifying individual criminals responsible for terrorist actions.
Secure hash algorithms can be run on existing blocks of data to add a layer of verifiability to any given document or image. Such systems have been developed by government cryptographic agencies, including the National Security Agency (NSA) in the U.S.
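The basic idea behind a shared hash list can be sketched in a few lines. The snippet below is a simplified illustration, not any company’s actual system: the digest value and the `KNOWN_BAD_HASHES` list are hypothetical, and real industry databases typically use perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding, whereas a cryptographic hash like SHA-256 matches only exact byte-for-byte copies.

```python
import hashlib

# Hypothetical shared hash list: hex SHA-256 digests of files that
# member companies have already flagged. (This digest is simply the
# SHA-256 of the bytes b"test", used here for illustration.)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a block of data."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Check an uploaded file's digest against the shared hash list."""
    return sha256_of(data) in KNOWN_BAD_HASHES

print(is_flagged(b"test"))           # matches the listed digest -> True
print(is_flagged(b"harmless image")) # no match -> False
```

Because only digests are shared, companies can compare notes on flagged material without exchanging the images themselves, which is part of the appeal of the approach.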
Definitions, If You Please
Terrorism is when someone, or some group, uses violence to terrify a group of people, usually toward some political or military effect. But some free speech advocates (and shouldn’t we all be advocates for free speech, really?) find themselves affronted by the new measures that Facebook, Twitter, and the rest are taking against terrorism. And some have, apparently, been shut down for speaking out against speaking out against speaking out.
Jordan B. Peterson, a Canadian psychologist, had his YouTube account suspended for several hours without explanation in early August 2017. Peterson has been an outspoken critic of Bill C-16 (formally “An Act to amend the Canadian Human Rights Act and the Criminal Code,” which became law in Canada on June 19, 2017), a bill he believes would compel speech. He claims the law would treat the failure to use an individual’s preferred pronouns as a form of hate speech, exposing the speaker to liability for abusing the human rights of others. Peterson’s position is that compelling speech from free citizens is wrong. Limiting speech, in the American system (see the First Amendment), is frowned upon; Peterson claims that Bill C-16 not only limits speech but compels certain uses of it, so that novel pronouns for transgender citizens, such as ze and xyr, would be required.
In any case, Peterson’s YouTube account was apparently out of order for some hours, as reported by the right-leaning news source LifeZette (Laura Ingraham is editor-in-chief) and tweeted by Peterson. Correlation does not equal causation, but the temporal proximity to YouTube’s underscoring of its stance against intolerance may raise eyebrows.
Definitions for All
Given that YouTube, Twitter, and Facebook are not the Areopagus, we still find ourselves in a world in which these semi-walled and for-profit gardens are something like public forums. The corporations maintaining these spaces do have an interest in the kinds of information shared and the communication enacted among their users, but as billions use these platforms, we must wonder what “rights,” if any, are due to the users, beyond the boilerplate terms of service we all agree to.
Definitions are required. When does a religio-political debate between Sunni and Shiite clerics become hate-ish? When does pride become hate in the context of orange pennants in Ireland? When does one’s failure to use a neologistic pronoun become an infringement of a trans person’s most basic human rights?
Without clear definitions, we will have trouble moving forward together, but it is not at all clear that the board of directors of Facebook and company are the ones who should be doing the defining.