The BBC’s interpretation of its responsibility continued with a blog post by David Jordan, director of editorial policy and standards, explaining how the BBC believes its archive is “a matter of historic public record,” and therefore, the decision needs to be made very carefully: “We must balance the harm to the individual named with the potential harm to the public interest in the removal of our content—in other words we need to ask is it fair to the person we feature to keep the content and is it fair to our users to remove it? We need to consider whether the information has also been put in the public domain by others such as the courts or the police. Perhaps the information is already circulating widely on the internet—if it is, removal may be ineffective.”
In general, the BBC’s new Editorial Guidelines assert, “Unless content is specifically made available only for a limited time period, there is a presumption that material published online will become part of a permanently accessible archive. … Users should be made aware that published content is part of the historical record and should not normally be removed from the online archive, because to do so may reduce transparency and trust with our users and effectively erases history.”
More National Legislation in the Works
In July 2015, Vladimir Putin signed into law Russia’s version of the right to be forgotten, allowing individuals in Russia to demand removal of search engine links to personal information deemed “misleading and irrelevant,” beginning Jan. 1, 2016. An analysis of the law from the Norton Rose Fulbright Data Protection Report notes that “unlike the European ruling, the law extends the ‘right to be forgotten’ to public figures. It specifically allows public figures to request the removal of their information. Thus the Russian law has been widely criticized by those who argue that it fails to strike a balance between the right to personal privacy and freedom of information.”
According to the Harvard Journal of Law & Technology’s JOLT Digest, although no sanctions have been determined, the law states:
Russian citizens can request that a search engine remove a link if it (1) reveals information that “violates their personal data,” (2) contains “unverified information,” or (3) contains information that is “no longer relevant.” Affected websites include any search engines that serve targeted advertisements to Russian citizens, such as Google, Yahoo!, and Yandex. Search engines will have up to ten days to respond to takedown requests, and failure to respond to requests within the time frame, or an erroneous refusal to remove content, will result in litigation and potential fines.
This mandate has led to further criticism—specifically that it gives search engines the responsibility to decide whether certain behaviors are crimes.
The National People’s Congress (NPC) of the People’s Republic of China has released the draft of a new law on cybersecurity in which the government “claims a need for heightened network security, preservation of cyberspace sovereignty, and protection from cyber attacks. To achieve this, the NPC delegates broad powers to the Chinese Cyberspace Administration to regulate network providers. For example, the draft law requires network providers to mimic the State’s existing tiered network security protection system to ward off cyber attacks and data leaks. The law also requires all sensitive data collected about Chinese citizens to be stored in servers located within China, and subjects those servers to security checks from state regulators.” Although perhaps not an issue for individuals outside of China, this law bodes ill for companies with operations in the country.
Do Americans Have the Right to Be Forgotten Too?
Writing on The Huffington Post’s blog, Lindsay Hoffman reminds readers that “whether the articles that the EU citizens want removed from Google searches is to hide articles they don’t agree with or just don’t like, or to hide articles that are ‘inaccurate, inadequate, or irrelevant’ as the court ruling deems them necessary to be removed, I have no idea. However, we shouldn’t be so quick to judge the ruling without knowing more about what exactly is being removed. How can we judge people for wanting their own ‘clean slate’ without knowing their circumstances?”
The Guardian recently published “new data hidden in source code on Google’s own transparency report … information it has always refused to make public.” More than 95% of requests came from members of the general public concerned about how Google search results affect their privacy; criminals, politicians, and other public figures made up less than 5% of the requests. “Breakdowns for each country reveal that within the primary category of ‘private or personal information,’” The Guardian’s analysis continues, “just shy of half the requests are delisted, more than a third are refused, and the remaining are pending. By contrast, for each of the other categories, around one in five have actually been delisted. The numbers fall evenly between crime, public figures, political and child protection. Around two-thirds of these requests are refused. In many countries, including France, Germany, the Netherlands, Austria, Portugal and Cyprus, 98% of requests concern private information.”
A special issue of Science published in January 2015 discusses whether some form of right to be forgotten legislation would be appropriate in the U.S. In the introduction, Martin Enserink and Gilbert Chin explain that today “vast amounts of information about you are collected with only perfunctory consent—or none at all. Soon, your entire genome may be sequenced and shared by researchers around the world along with your medical records, flying cameras may hover over your neighborhood, and sophisticated software may recognize your face as you enter a store or an airport.” Additionally, “New computational techniques can identify people or trace their behavior by combining just a few snippets of data. There are ways to protect the private information hidden in big data files, but they limit what scientists can learn; a balance must be struck.”
Rights or Wrong?
Jules Polonetsky, executive director and co-chairman of the Future of Privacy Forum, believes this “decision will go down in history as one of the most significant mistakes that Court has ever made. … It gives very little value to free expression. If a particular Web site is doing something illegal, that should be stopped, and Google shouldn’t link to it. But for the Court to outsource to Google complicated case-specific decisions about whether to publish or suppress something is wrong.”
In a recent debate between supporters and opponents of an American right to be forgotten law, Harvard Law School professor Jonathan Zittrain called the EU ruling “a bad solution to a very real problem, which is that everything is now on our permanent records.” He went on to note that removing indexing to information was like “saying the book can stay in the library, we just have to set fire to the catalog.”
The internet may be the information superhighway, but clearly, we are in need of some road repair.