Truth and Trust Online 2022 delivered notable updates and new perspectives on the rapidly evolving fight against digital misinformation. This was the fourth annual conference under this title, shifting to a hybrid format after 2 years as a virtual-only program. It opened with a day of online presentations on Oct. 12, followed by 2 days combining in-person and virtual presentations to live attendees gathered at Boston University and live-streaming to a widely scattered audience.

The speaker roster was dominated by academics and nongovernmental organization-affiliated researchers, with none listed as representing large tech companies or media platforms. As befits an academic conference, submissions were subjected to a peer review process. Four papers were accepted for the peer-reviewed conference proceedings, which are available at truthandtrustonline.com/proceedings. Here’s a sampling of the conference themes and presentations.
Theme 1: Tech Company and Social Media Platform Behavior
The content moderation policies and practices of social media platforms came under more scrutiny this year than at last year’s conference. Rob Hawkins, creator of Reveddit, highlighted “shadow banning,” a practice employed by Reddit and other platforms. Shadow banning involves removing content from view for all users—except the user who posted it. The posting user isn’t notified of the takedown and thus may believe that the content is accessible and being read, when in fact it isn’t. Hawkins’ Reveddit app counters shadow banning, allowing users to see when content they have posted has been removed and to track platforms’ actions over time.
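To illustrate the underlying idea, here’s a minimal sketch of how a shadow removal can be detected using Reddit’s public JSON listings: compare what a user’s profile shows against what other readers actually see in the thread. This is a simplified illustration of the approach, not Reveddit’s actual code, and the username is a placeholder.

```python
# Minimal sketch of shadow-removal detection on Reddit: a comment that
# appears on the author's profile but shows as "[removed]" (or is
# absent) in the thread view was likely taken down without notice.
import requests

HEADERS = {"User-Agent": "shadow-removal-check/0.1"}  # Reddit requires a User-Agent

def profile_comments(username, limit=25):
    """Fetch a user's recent comments from their public profile listing."""
    url = f"https://www.reddit.com/user/{username}/comments.json"
    data = requests.get(url, headers=HEADERS, params={"limit": limit}).json()
    return [child["data"] for child in data["data"]["children"]]

def visible_in_thread(comment):
    """Check whether the same comment is still visible to other readers."""
    url = f"https://www.reddit.com{comment['permalink']}.json"
    data = requests.get(url, headers=HEADERS).json()
    for child in data[1]["data"]["children"]:  # comment listing for the permalink
        node = child["data"]
        if node.get("id") == comment["id"]:
            return node.get("body") not in ("[removed]", "[deleted]")
    return False  # absent from the thread entirely

for c in profile_comments("some_username"):  # hypothetical username
    if not visible_in_thread(c):
        print("Possibly removed without notice:", c["permalink"])
```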
The theme of platform behavior was continued in a presentation by staffers of the European organization Tracking Exposed, which added the concept of “shadow promotion” to Hawkins’ shadow banning. Shadow promotion occurs when content that is supposed to be banned is selectively promoted to a targeted audience. For example, the researchers reported that TikTok selectively promotes supposedly banned content to Russian users. The Tracking Exposed tool allows users to audit the behavior of social media companies’ content-blocking and promotion algorithms by comparing content fed to different accounts. Using the tool, the researchers have found substantial geographic variations in what TikTok promotes and blocks, as well as opacity and a lack of accountability in its algorithms.
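The core comparison behind such an audit is simple to express. Here’s a minimal sketch, assuming feed data has already been collected as lists of item IDs per account; it is not Tracking Exposed’s actual methodology, and the feeds and region labels are hypothetical.

```python
# Minimal sketch of a feed-comparison audit: quantify how much two
# accounts' recommendation feeds overlap. Feeds are assumed to be
# lists of item IDs collected separately for each account.

def feed_overlap(feed_a, feed_b):
    """Jaccard similarity of the items recommended to two accounts."""
    a, b = set(feed_a), set(feed_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical feeds collected from accounts registered in two regions.
feed_region_1 = ["v101", "v102", "v103", "v104"]
feed_region_2 = ["v101", "v201", "v202", "v203"]

print(f"Feed overlap: {feed_overlap(feed_region_1, feed_region_2):.0%}")
print("Served only in region 1:", set(feed_region_1) - set(feed_region_2))
# Consistently low overlap across many paired accounts suggests
# region-dependent promotion or blocking; items served in one region
# but officially banned elsewhere would indicate "shadow promotion."
```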
It would have been helpful to have representatives of major platforms present to discuss their operations. Instead, the in-person program included remarks by Tomer Poran, a VP of ActiveFence, which provides tools and services to social media platforms and other companies to help them moderate content. Without directly responding to the issues of shadow banning and shadow promotion, Poran described a daunting array of challenges for the “trust and safety” officers of social media platforms. These range from the ongoing cat-and-mouse game between platforms seeking to eliminate bad content and purveyors of disinformation finding ways to evade blocking, to the ever more complex and contradictory regulatory environment in which content moderation operates. Citing the contrast between California and Texas, he pointed out that a platform may be breaking the law in Texas if it blocks certain content and breaking the law in California if it doesn’t.
The last word on this theme came from researchers Shayne Longpre and Cameron Hickey, who reported on the early stages of a project to advance transparency in policies and practices for content moderation. Currently, there are widespread calls for transparency, but little shared understanding of how to define and assess it. They envision surveying current efforts, coordinating them, developing a schema and rubric for content moderation policies and practices, and tracking transparency of platforms over time.
Theme 2: The Nature of the Threat—Do People Want Misinformation? And Is Misinformation the Only Problem?
Last year’s conference included a conceptual presentation, “How Misinformation Works for People, Not on Them,” on the idea that sometimes people may be perfectly happy with misinformation—or even prefer it. This year, Piers Howe from the University of Melbourne presented quantitative research to support that point. Hypothesizing an accurate, easy-to-use fact-checker, the researchers found that study participants would use it only some of the time and would share content labeled as false two-thirds of the time. When content carried a warning label, respondents would leave the label in place but share the content anyway. In other words, what people seem to care about when sharing content on social media is interest (i.e., entertainment value), not truthfulness.
Having said that, lots of work continues on developing, deploying, and evaluating fact-checking tools. Some of the initiatives described were:
- Birdwatch, a tool for crowdsourcing the flagging of misinformation on Twitter
- vera.ai (VERification Assisted by Artificial Intelligence), a European project to provide an AI tool to assist human fact-checkers. It includes deepfake recognition capabilities and identification of disinformation campaigns by recognizing similar semantics in posts across multiple languages and platforms.
- CooRnet, a software tool to identify disinformation campaigns by tracking coordinated link-sharing across social media accounts (a simplified sketch of this approach appears below)
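To make coordinated link-sharing detection concrete, here’s a simplified sketch in the spirit of CooRnet. The real tool is an R package that estimates its coordination threshold statistically; the 30-second window, accounts, and share records below are hypothetical illustrations.

```python
# Simplified sketch of coordinated link-sharing detection (in the
# spirit of CooRnet, not its actual implementation): flag pairs of
# accounts that repeatedly share the same URL within a short window.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 30  # assumed coordination interval (CooRnet estimates its own)

# Hypothetical share records: (account, url, unix_timestamp)
shares = [
    ("acct_a", "http://example.com/story1", 1000),
    ("acct_b", "http://example.com/story1", 1012),
    ("acct_c", "http://example.com/story1", 5000),
    ("acct_a", "http://example.com/story2", 2000),
    ("acct_b", "http://example.com/story2", 2015),
]

# Group shares by URL, then count near-simultaneous co-shares per pair.
by_url = defaultdict(list)
for account, url, ts in shares:
    by_url[url].append((account, ts))

pair_counts = defaultdict(int)
for posts in by_url.values():
    for (acc1, t1), (acc2, t2) in combinations(posts, 2):
        if acc1 != acc2 and abs(t1 - t2) <= WINDOW_SECONDS:
            pair_counts[tuple(sorted((acc1, acc2)))] += 1

# Pairs that co-share multiple distinct links in near-real time are
# candidates for a coordinated network.
for pair, n in pair_counts.items():
    if n >= 2:
        print(f"Possible coordination: {pair} co-shared {n} links")
```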
Additionally, a panel of researchers discussed the need to broaden the field of misinformation and disinformation studies in a variety of ways: new tools and methods, as well as greater emphasis on activities in the Global South and state-sponsored disinformation beyond large, well-known actors such as Russia.
Theme 3: Beyond Fact-Checking—Fighting Back
At this year’s conference, several speakers concerned themselves with misinformation and other harms in online behavior—especially those aimed at specific communities or individuals. While information involves statements about factual matters and misinformation involves statements that are incorrect, there is a broader class of harmful behavior that may include misinformation but also encompasses insults, hate, and threats of violence. In their keynote address, professors Sarah J. Jackson, Moya Bailey, and Brooke Foucault Welles discussed their book, #HashtagActivism: Networks of Race and Gender Justice (2020), which focuses on people of color and other marginalized communities. Proceeding from the premises that “communication is politics” and “truth is complicated,” they highlighted the significance of different narratives that arise from different perspectives about an issue or event. As an example, they contrasted mainstream news media reporting on the aftermath of the killing of Michael Brown in Ferguson, Missouri, with perspectives supplied by members of the community where events were unfolding. A panel on the effects of disinformation on people of color presented further insights.
In a panel session, Maria Rodriguez from the University at Buffalo talked about a survey of women of color who ran for office in the 2020 U.S. election. Among other findings, she identified 14 candidates who had reported disinformation and threats to social media platforms. Of those reports, only one eventually resulted in the offensive and threatening content being removed.
Saiph Savage from Northeastern University discussed data voids, which are gaps in publicly available information. Disinformation purveyors identify topics not being addressed and create false narratives about those topics. Dealing with data voids requires not just seeking to eliminate the false and offensive content, but also creating a counternarrative to get the truth on the public record. (For an introduction to data voids, see datasociety.net/library/data-voids.)
In the panel’s final presentation, Kristina Wilfore, co-founder of #ShePersisted, discussed ways that individuals and communities targeted by disinformation and its associated harms can fight back. One approach uses a three-pronged strategy: pre-empting attacks, reporting offenses, and responding by publishing and promoting counternarratives. Groups are starting to publish guidance for fighting back, such as the Digital Resilience Toolkit for Women in Politics.
Conclusion
There’s no more dynamic and important aspect of information study and practice today than the nature of information behavior in the digital sphere. For generalists and specialists alike, the annual Truth and Trust Online conference continues to provide a valuable snapshot of recent developments. Let’s hope it continues to grow as a forum for not only academics and nonprofits, but also industry and government, to make the digital information space more equitable and accurate.