On May 4, 2017, ProPublica accused network operator Cloudflare of policies that make possible the “operations of such extreme sites” as “the neo-Nazi website The Daily Stormer.” The article notes that Cloudflare, “an otherwise very mainstream internet company,” was providing services to neo-Nazi sites, “including giving them personal information on people who complain about their content.” Titled “How One Major Internet Company Helps Serve Up Hate on the Web,” the article was picked up by other news outlets and has become another part of the ongoing debate about the role of websites and social media in giving a platform to extremism.

In the past few years, internet hate has become a major issue. Charges of fake news, accusations of inciting hate speech and hate crimes, and requests for takedowns of internet content have become common for social media companies. Facebook agreed to update its Trending section—which focuses on popular discussion topics—in reaction to requests. Google removed 200 publishers from one of its ad networks in response to the proliferation of fake news sites.
Governments are also responding to fake news and intentionally offensive content. Germany introduced a bill that aims to stem fake news and hate speech. In England and Wales, laws are focused on content that “stir[s] up hatred on the grounds of” race, religion, or sexual orientation. Australia; Osaka, Japan; China; South Africa; and Russia have sought legal restrictions on the content of websites. In the U.S., any such effort has always had to balance even obscene content against First Amendment rights to free speech.
On May 2, 2017, Will D. Johnson, a member of the board of directors for the International Association of Chiefs of Police (IACP) and chair of its Human and Civil Rights Committee, gave testimony before the Senate Committee on the Judiciary regarding hate crime and the internet: “[T]he Internet provides extremists with an unprecedented ability to spread hate and recruit followers. Individual racists and organized hate groups now have the power to reach a global audience of millions and to communicate among like-minded individuals easily, inexpensively, and anonymously.”
Johnson reminded the committee that “although hate speech is offensive and hurtful, the First Amendment usually protects such expression.” He said, “The ease of sending Internet hate messages and threats across state lines can make perpetrators and victims difficult to identify and locate and creates criminal jurisdictional issues. Criminal cases concerning hate speech on the Internet have, to date, been few in number. The Internet is vast and perpetrators of online hate crimes hide behind anonymous screen names, electronically garbled addresses, and websites that can be relocated and abandoned overnight.”
Protecting Consumer Complaints
The right to free speech is just one of the issues at stake. In the case of Cloudflare, calls for better vetting of content move responsibility for that content into a new realm. Cloudflare, established in 2009, isn’t a social media site or an online consumer sales outlet. The company works behind the scenes, providing a content delivery network for websites—it sits between users and the websites themselves. Its network links more than 100 data centers across the globe, monitoring website operations for potential hacking or other disruptions and supplying consistent service. Cloudflare is part of the internet infrastructure. It offers web hosts protection against denial-of-service and other cyberattacks and can smooth out the load spikes that heavy traffic places on hosting services.
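To make that “sitting between” role concrete, here is a minimal sketch of how a reverse proxy fronts a website: visitors talk to the proxy, the proxy fetches (and here naively caches) pages from the origin, and the origin never faces the visitors directly. This is an illustration only, not Cloudflare’s code; the origin address and the caching behavior are assumptions made for the example.

```python
# Minimal reverse-proxy sketch (illustrative only, not Cloudflare's implementation).
# Visitors connect to this process; only the proxy contacts the origin server.

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

ORIGIN_HOST = "http://example.org"   # hypothetical origin site being fronted
CACHE = {}                           # naive in-memory cache, keyed by request path

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve repeat requests from the cache, so the origin is shielded
        # from the bulk of the traffic (spikes, floods, and so on).
        if self.path in CACHE:
            body = CACHE[self.path]
        else:
            with urlopen(ORIGIN_HOST + self.path) as resp:
                body = resp.read()
            CACHE[self.path] = body
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen where the visitors are; the origin stays behind the proxy.
    ThreadingHTTPServer(("", 8080), ProxyHandler).serve_forever()
```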
The problem for Cloudflare is that it also collects user reports of issues with the websites it serves and passes them along to the web hosts. As Ars Technica reports, “by providing service to any website operator and failing to provide anonymity to people who complain about racist or otherwise abusive online content,” Cloudflare leaves those complainants open to attack from the very sites they report.
Cloudflare reacted quickly to the ProPublica charges with a May 7 blog post that details new policies. Co-founder and CEO Matthew Prince says in the post that “feedback from the article caused us to reevaluate how we handle abuse reports. As a result, we’ve decided to update our abuse reporting system to allow individuals reporting threats and child sexual abuse material to do so anonymously. We are rolling this change out and expect it to be available by the end of the week.” Prince admits that this potential was something the company hadn’t imagined—it was “a blindspot”—and now it has established a process through which “reporters of these types of abuse can choose to submit them and not have their contact information included in what we forward. The person making the abuse report seems in the best position to judge whether or not they want their information to be relayed.”
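As a rough illustration of the kind of change Prince describes, the sketch below shows one way an abuse-report pipeline could drop a reporter’s contact details before forwarding a report when the reporter chooses anonymity. The field names and the function itself are hypothetical and are not Cloudflare’s actual reporting system.

```python
# Illustrative sketch only: honoring an "anonymous" flag before a report is
# forwarded to a hosting provider. All field names here are hypothetical.

def prepare_forwarded_report(report: dict) -> dict:
    """Build the copy of an abuse report that gets sent to the website's host."""
    forwarded = {
        "url": report["url"],
        "category": report["category"],        # e.g., "threat"
        "description": report["description"],
    }
    # Include the reporter's contact details only if they opted to share them.
    if not report.get("anonymous", False):
        forwarded["reporter_name"] = report.get("reporter_name")
        forwarded["reporter_email"] = report.get("reporter_email")
    return forwarded

# Example: an anonymous report is forwarded without any identifying fields.
example = {
    "url": "https://example.net/page",
    "category": "threat",
    "description": "Threatening post targeting the reporter.",
    "anonymous": True,
    "reporter_name": "Jane Doe",
    "reporter_email": "jane@example.com",
}
print(prepare_forwarded_report(example))
```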
Protecting Our Right to Free Speech
However, Prince also restates the company’s firm commitment to its “belief that it is not Cloudflare’s role to make determinations on what content should and should not be online. That belief comes from a number of principles. Cloudflare is more akin to a network than a hosting provider. I’d be deeply troubled if my ISP started restricting what types of content I can access. As a network, we don’t think it’s appropriate for Cloudflare to be making those restrictions either.” He notes that “from time to time an organization will sign up for Cloudflare that we find revolting because they stand for something that is the opposite of what we think is right. Usually, those organizations don’t pay us. Every once in awhile one of them does. When that happens it’s one of the greatest pleasures of my job to quietly write the check for 100% of what they pay us to an organization that opposes them.”
Prince goes on to say that “the best way to fight hateful speech is with more speech.” Hopefully, that’s something most people would agree with.