With the midterm elections looming and memories of election interference fresh in their minds, members of Congress are aggressively questioning Facebook, Twitter, Google, and other social media outlets about privacy and the steps they are taking to protect information and provide transparency. At a September hearing before Congress, Facebook and Twitter executives acknowledged the existence of fake accounts and the need for massive shifts in their respective operations. The hearing was followed by separate pressure from the Department of Justice, state attorneys general, and the European Union (EU), along with yet another reported hack of millions of Facebook accounts.

The threats were, and remain, real. In her September testimony, Facebook COO Sheryl Sandberg indicated that her company had found and disabled more than a billion fake accounts in the last year. Previous investigations had revealed several hundred fake accounts tied to a Russian company that were “promoting or attacking candidates and causes, creating distrust in political institutions, and spreading discord.” Those accounts were associated with thousands of Facebook and Instagram ads and more than 200,000 individual pieces of content.
In similar testimony, Twitter CEO Jack Dorsey said his company identified more than 50,000 automated accounts that were “Russian-linked and Tweeting election-related content,” comprising more than 2 million tweets. He noted that since the 2016 election, Twitter has made both technical and policy changes that have doubled the number of accounts removed for policy violations, imposed additional account verification requirements, and taken steps to verify and label legitimate accounts belonging to candidates and parties.
Congress Steps In
Whether these actions will be enough, or whether Congress will impose additional or different requirements, was also a subject of the hearing. U.S. Senate Select Committee on Intelligence vice chairman Sen. Mark Warner (D-Va.) suggested that new regulations from Congress are a distinct possibility. Speaking a week after the hearing, Warner stated that an “overwhelming majority” of members of Congress from both sides of the aisle would likely support social media legislation (available at bloomberglaw.com behind a paywall). At the hearing, he noted, “The era of the Wild West in social media is coming to an end.”
Others are less certain. The Verge columnist Casey Newton writes that he still has his doubts, although he finds that Congress has become more effective at pressing social media executives for specifics about their platforms, how they work, and how they could better limit election tampering and other abuses.
Other commentators said that there are limits to what the companies can do on their own, but that increased scrutiny, verification, and disabling of accounts are all necessary and achievable by the companies without government help. Writing for Bloomberg just before the September hearing, Sarah Frier and Alyza Sebenius pointed to a tension: tech companies need additional, potentially sensitive information from government agencies, while those agencies have security concerns about sharing confidential information. The government, the writers asserted, is taking the position that the companies are in a “better position to assess the risk.” They expressed concern that “nobody has the full picture.”
Stopping Terrorist Content
This leads to an important question: What steps could Congress mandate for social media companies? One idea, of course, is more effective coordination between government and industry. Other suggestions include labeling social media bots, improving consumer rights to data access and portability, and strengthening takedown requirements covering fake accounts and accounts that promote fraud or violence.
Recommendations such as these are making headway in the EU, including a September proposal from the European Commission (EC) that would require the swift removal of terrorist content and a separate challenge from the EC over the allegedly misleading use of personal information.
The anti-terrorism proposal, made as part of the EC president’s State of the Union address, would impose a legally binding 1-hour deadline for removing terrorist content when ordered to do so by a national authority. Terrorist content would be clearly defined, along with safeguards to ensure that only content meeting the definition would be removed and that unjustifiably removed content would be reinstated. Service providers would also be required to take proactive measures, including “the use of new tools” to protect against terrorist abuse, and a framework would be set up for increased cooperation among service providers and between service providers and the EU member states. Penalties of up to 4% of “global turnover” for the previous business year could be imposed on service providers for “systematic failures” to remove content.
As with any such sweeping proposal, it is receiving mixed reviews. Bloomberg reports (the article is behind a paywall) that the Computer & Communications Industry Association, a trade group based in Brussels, Belgium, and Washington, D.C., expressed concern that the proactive measures being discussed could amount to active monitoring, which would raise privacy issues and could violate existing EU laws. It also questioned whether smaller social media companies would have the resources to take the kind of proactive measures being suggested.
The EU’s Role
Separately, in May 2018, the EU’s General Data Protection Regulation (GDPR), an updated data privacy law, took effect. The GDPR expands on the earlier EU Data Protection Directive by making it clear that it applies to all companies that collect data from EU citizens, regardless of where a company is located. This of course includes U.S.-based companies such as Facebook, Twitter, and Google. Under the GDPR, consent request and data breach notification requirements have been strengthened, and companies are “no longer able to use long illegible terms and conditions full of legalese.” Data becomes portable, with expanded rights of access. The GDPR also includes a “right to be forgotten,” or the ability to have certain personal information erased. Penalties for noncompliance with the GDPR can be up to 4% of “annual global turnover” or €20 million (about $22.8 million), whichever is greater.
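To make that penalty arithmetic concrete, the short Python sketch below computes the fine ceiling just described; the turnover figure in the example is purely hypothetical.

    def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
        # Ceiling on a GDPR fine: 4% of annual global turnover
        # or EUR 20 million, whichever is greater.
        return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

    # Hypothetical company with EUR 10 billion in annual global turnover:
    # 4% is EUR 400 million, which exceeds the EUR 20 million floor.
    print(max_gdpr_fine(10_000_000_000))  # 400000000.0

In other words, the €20 million figure operates as a floor on the maximum penalty; for large multinationals, the 4% formula is what drives the exposure.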
Facebook modified its terms and conditions following the implementation of the GDPR, but apparently not enough to satisfy EU Commissioner for Justice, Consumers and Gender Equality Vera Jourová or the government of Ireland. Jourová warned that additional progress needed to be made by December 2018 or “sanctions will come.” Ireland’s Data Protection Commission had already been looking into complaints about the terms and conditions and recently opened a new investigation into Facebook’s September data breach.
Congress is unlikely to act prior to the fall elections, and the election results will undoubtedly have an impact on any future social media regulations. Facebook and Twitter have claimed that their efforts since the 2016 election will bear fruit in the 2018 midterms by reducing foreign and fraudulent accounts, advertising, and influence. But given the divisiveness of modern politics and the influences that come from so many different sources and directions, it may be impossible to tell what social media did right and what it did wrong.