If anyone needed additional proof that the battle against disinformation is being fought hard in the Global South, especially Latin America, the third annual Global Summit on Disinformation provided plenty. A joint project of Argentina’s Proyecto Desconfío and the Foundation for Journalism of Bolivia, together with the Inter American Press Association, the two-day conference (Sept. 27–28) was conducted entirely on Zoom and drew more than 2,000 registrants from 52 countries. Most presentations were in Spanish, with comprehensive simultaneous English translation, and a few were in English. Day 1 focused on describing the problem of disinformation and the harms it causes, while Day 2 was devoted to innovative strategies to combat it.
DAY 1: THE PROBLEM OF DISINFORMATION
The program opened with a panel of three journalists describing three examples of disinformation. Daniela Mendoza Luna of Verificado MX in Mexico explained the various types of disinformation targeted to would-be immigrants headed for the U.S. Jacqueline Sordi of Revista Questão de Ciência in Brazil analyzed an example of health disinformation: a bogus “natural cure” for cancer. Olivia Sohr from the Chequeado media organization in Argentina discussed disinformation related to gender identity and sex education.
The three presenters, plus those in a subsequent panel, highlighted several key commonalities among disinformation campaigns, regardless of their subject matter:
- They’re driven by goals of obtaining money and/or power and use disinformation as a tool.
- Their social media messages are distributed by a small network of accounts but are backed by a complex network of actors and funders behind the scenes.
- They not only push their own false messages, but also seek to discredit truthful sources and to sow mistrust, polarization, and fear. They often promote simple “solutions” to complex problems and issues.
Inevitably, subsequent presenters turned to the question of strategies to counter disinformation. Just as inevitably, two of the most highlighted were fact-checking and information literacy education and training for all audiences. However, two others that are discussed less often were also mentioned prominently:
- Proactive distribution of accurate information (because disinformation loves to fill a vacuum).
- Strong moderation of social media platforms, including warnings and blocking of rogue accounts.
Daniel Suárez Pérez of the Digital Forensic Research Lab in Colombia reviewed findings from the “Digital News Report 2023,” published by the Reuters Institute and the University of Oxford. Among the many interesting and alarming findings: video-oriented platforms like TikTok and YouTube are supplanting both older social media (Facebook) and websites, and a growing number of people overall are just tuning out of news altogether.
DAY 2: INNOVATIVE STRATEGIES
Innovation was the focus of presentations on Day 2. Seizing the hot topic of the moment, the first two presentations emphasized the multiple and complicated relationships between AI and disinformation.
Ignacio Gutiérrez Peña and Manuel Pardo Martín of Europa Press in Spain demonstrated FND, a smart fake news-detection tool built on a generative AI platform named SARAH. It is positioned as an aid, not a replacement, for journalists. Based on its natural language understanding, it highlights possible false segments of text for journalists to review and follow up on. It also incorporates “explainable” AI, including reasons why it flags suspicious passages.
Subsequently, Arkanath Pathak of Google Research in India presented developments in Google’s Fact Check Explorer. Noting the rise of misinformation in digital images, he discussed advances in the tool’s ability to identify falsified images by comparing an image to past images in its vast database. He noted three dominant types of falsification:
- Claiming an image depicts a location that it does not (it has been repurposed from an earlier image of another location).
- Falsifying the date of an image (again, falsely relabeling an earlier image).
- Falsifying the identities of the people (or animals, objects, etc.) depicted, such as inserting the image of someone not present in the original.
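Matching a suspect image against a database of previously seen ones is commonly done with perceptual hashing, which tolerates resizing and contrast changes. The internals of Fact Check Explorer’s matching are not public, so the following is only a generic sketch of one well-known technique, difference hashing (dHash), applied to an already-resized grayscale grid (all names here are illustrative):

```python
import random

def dhash(pixels):
    """Difference hash: one bit per adjacent-pixel comparison.

    `pixels` is a grayscale grid whose rows are one value wider than the
    desired hash width (e.g., 8 rows of 9 values -> a 64-bit hash).
    """
    return [1 if left > right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

def hamming(a, b):
    """Count differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

random.seed(1)
# A random 8x9 "image", a contrast-boosted copy, and an unrelated image.
img = [[random.randint(0, 255) for _ in range(9)] for _ in range(8)]
boosted = [[p * 1.2 for p in row] for row in img]
other = [[random.randint(0, 255) for _ in range(9)] for _ in range(8)]

print(hamming(dhash(img), dhash(boosted)))  # 0 -- scaling preserves ordering
print(hamming(dhash(img), dhash(other)))    # large for an unrelated image
```

Because dHash only records whether each pixel is brighter than its neighbor, a uniform contrast boost leaves the hash unchanged, which is what makes relabeled or lightly edited copies of an earlier image findable.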
Mateo Heras presented a different approach based on his work at the Content Authenticity Initiative in Argentina. Members of this 1,500-strong consortium apply a hashing technique conforming to an open standard to securely identify an original, unaltered image. The hash code also enables an audit trail that tracks any changes.

Ed Dearden of Checkworthy in England returned to the topic of AI’s role in verification and fact-checking in his description of Full Fact AI. Recognizing that the volume of claims and potential disinformation far surpasses the capacity of human fact-checkers to analyze them, Full Fact developed the tool to help fact-checkers identify the claims most in need of checking and prioritize their work. It also draws on a database of past fact-checks to identify similar claims that have already been checked.
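The open standard behind the Content Authenticity Initiative’s approach is C2PA, which in practice relies on cryptographically signed manifests embedded in the media file. As a rough, toy-scale illustration of the hash-and-chain idea only (the function names and fields below are hypothetical, not the CAI’s actual API):

```python
import hashlib
import json

def manifest_for(image_bytes, author, prior=None):
    """Record a content hash plus a link to the prior manifest,
    forming a tamper-evident audit trail of edits."""
    return {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "author": author,
        "prior_sha256": (
            hashlib.sha256(
                json.dumps(prior, sort_keys=True).encode()).hexdigest()
            if prior else None),
    }

def matches(image_bytes, manifest):
    """True only if the bytes are exactly those the manifest describes."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["image_sha256"]

original = b"...original image bytes..."
m1 = manifest_for(original, author="photographer")

edited = original + b" (cropped)"
m2 = manifest_for(edited, author="editor", prior=m1)  # chained edit record

print(matches(original, m1))  # True
print(matches(edited, m1))    # False -- any alteration changes the hash
```

Because each manifest hashes its predecessor, the chain `m1 -> m2` records who changed what and in what order, which is the audit-trail property described above; the real standard adds digital signatures so the records themselves cannot be forged.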
FINALE: A CONFERENCE CONCLUSION
All of the presentations are viewable on the conference YouTube channel (see Day 1 and Day 2). For a North American English speaker, the event provided a valuable snapshot of interesting and potentially valuable efforts to mitigate the malign influence of disinformation in contemporary society in Latin America and elsewhere. For anyone actively working in the field, there were quite a few specific takeaways.
Beyond these results, the Inter American Press Association took the unusual step of releasing a conference summary through its executive director, Ricardo Trotti. The summary was distributed in Spanish; Google’s English translation calls for a “multidimensional posture” in which “governments, platforms, fact-checkers, media and audiences” all must play a role. While recognizing the dangers of government regulation, Trotti warned that “disinformation cannot be controlled if there are no intelligent, efficient and severe public policies.” While noting categories of “bad legislation,” he argued that “intentional disinformation must be attacked as a crime that conspires against democracy, with aggravating factors when disinformation seeks to corrode electoral processes.” He also proposed a three-part framework for legislation:
- [P]rohibit governments and political parties from spreading misinformation
- [D]emand transparency from governments
- [H]old digital platforms and artificial intelligence developers responsible as content editors and not only as distributors of content
Continuing with comments about the responsibilities of nongovernment actors in the disinformation ecosystem, Trotti concluded:
This Third Summit demonstrated that disinformation is undermining democratic pillars and motivating authoritarian governments. Therefore, like never before, there is more awareness that each sector must assume its responsibilities to protect democracy.
The irony is that the conference program emphasized the initiatives of journalists and technologists and made very little reference to governments and politicians. Perhaps future conferences will deal more fully with this most fraught dimension of the disinformation problem.