This year’s fourth annual Global Summit on Disinformation, like its predecessors, presented a vivid picture of the ongoing conflict between the purveyors of disinformation and the defenders of responsible journalism. More than its predecessors, however, it positioned that conflict within the context of the challenges to open, democratic society. Reports focused primarily on Latin America, though developments elsewhere in the world were also covered. The conference, held Sept. 18–19 on Zoom, was bracketed by two outstanding keynotes and supported by trenchant reports from the front lines of battle. Most presentations were delivered in Spanish, with some in English; excellent simultaneous English and Spanish translations were available throughout.
KEYNOTES
Investigative journalist Julia Angwin’s opening keynote highlighted the multiple roles AI plays in disinformation. By enabling just about anyone with an internet connection to create false multimedia content, it supercharges the amount of disinformation available. The spread of false content is fueled by natural human confirmation bias. It’s further enabled by the largest information gatekeepers—Facebook, X, and Google—having abdicated their responsibility to moderate the content on their platforms. And, at their current level of development, AI-powered chatbots such as ChatGPT contribute to the problem by returning incorrect answers to political and election-related questions. A recent study by Angwin and colleagues—“Seeking Reliable Election Information? Don’t Trust AI”—evaluated the responses of generative AI services to election-related questions and found that 51% were inaccurate. Many responses were also judged harmful, incomplete, or biased.
Angwin advocated for both increased scrutiny of AI and initiatives to counter the general erosion of trust in all sources of information. She proposed that responsible journalists take a page from online influencers and follow a three-part strategy to build trust with their audiences: benevolence—demonstrating that they have their audiences’ interests at heart; expertise—displaying not only their credentials but also the application of their skills; and integrity—engaging with audience members and welcoming their feedback and fact-checking. Angwin’s organization, Proof, investigates and reports on these and related issues.
The second-day keynote presented a sharply contrasting view of the role of one of the Big Tech social media platforms. David Agranovich, director of global threat disruption at Meta, oversees the company’s investigation and deterrence of disinformation operations on its platform. Because it’s impossible to monitor every individual posting of questionable content, he noted, Meta has developed a policy of focusing on “coordinated inauthentic behavior”: networks of fake accounts, operating under false identities, that act together to amplify harmful and deceitful content. Entities in these networks may start by promoting innocuous content, in order to gain followers and build trust, before they begin to spread disinformation or malware. When action is taken against these networks, all entities they include, such as personal accounts and webpages, are deleted—which makes it harder for those behind the networks to re-establish them. Agranovich emphasized the distinction between this approach, which focuses on network behavior, and efforts to judge content. He asserted that, with rare exceptions, user postings are not removed due to their content. He added that Meta’s policy toward state-controlled media is to label it and not to promote it, but rarely to take it down—an exception being the recent takedown of Russian-controlled media outlet RT’s accounts.
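Meta has not published the mechanics of its detection systems, but the basic intuition behind flagging coordinated behavior is that many nominally unrelated accounts push the same message at nearly the same time. The sketch below (in Python, with invented post records, function names, and thresholds) is a purely hypothetical illustration of that network-level signal, not Meta’s method.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post record: (account_id, text, timestamp)
Post = tuple[str, str, datetime]

def flag_coordinated_bursts(posts: list[Post],
                            window: timedelta = timedelta(minutes=10),
                            min_accounts: int = 20) -> list[list[Post]]:
    """Flag bursts of identical messages posted by many distinct accounts
    within a short time window -- a crude proxy for coordinated amplification."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for post in posts:
        by_text[post[1]].append(post)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p[2])          # order posts by timestamp
        start = 0
        for end in range(len(group)):
            # Shrink the window from the left until it spans at most `window`
            while group[end][2] - group[start][2] > window:
                start += 1
            burst = group[start:end + 1]
            if len({p[0] for p in burst}) >= min_accounts:
                flagged.append(burst)           # enough distinct accounts: flag it
                break                           # one flag per message is enough here
    return flagged
```

A production system would weigh far more signals, such as account creation patterns, shared infrastructure, and follower overlap, but even this toy heuristic acts on the behavior of the network rather than on the content of any single post.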
REPORTS FROM THE FIELD
One of the recurring themes of the summit was the interplay of government, journalism, social media, and AI. Carlos Eduardo Huertas from Connectas discussed recent events in Venezuela, which held a presidential election this past summer. The incumbent government has claimed victory, but its claim is disputed by the opposition and international organizations. Huertas discussed Operación RETUIT, formed by independent journalists to counteract pervasive disinformation published by government-controlled media. He said that some 235 reporters and staff contributed to the effort, all while hiding their identities due to the threat of arrest. To avoid showing the faces of actual reporters, they used AI-created avatars to report news stories.
Another speaker, Werner Zitzmann, executive director of AMI Comunicación, cited Colombia as a further example of antagonism between the government and independent media. There, government sources have accused independent journalists of being funded by the drug trade and using their funding to undermine the government. In contrast, the government of Chile formed a commission to combat disinformation, but journalists there have decided not to participate due to free speech concerns.
Returning to Venezuela, Iria Puyosa, senior research fellow at the Atlantic Council’s Democracy + Tech Initiative, reported on the use of AI by state media. The government established a YouTube channel, House of News (now shut down), that posted AI-generated disinformation. Puyosa characterized House of News as simply the latest initiative in a long-term disinformation campaign by Venezuela’s government.
Not all of the reports dealt with conditions in Latin America. Victor Rico of the Spanish organization Maldita.es presented findings from a study of the response of social media platforms to complaints of disinformation during the European Parliament elections earlier this year, “Platform Response to Disinformation During the EU Election 2024.” Five platforms were analyzed: Facebook, TikTok, YouTube, Instagram, and X. Responses to 1,321 posts covering 26 European Union countries were tracked, and, overall, 45% of the complaints received no response from the platforms. YouTube compiled the worst record, ignoring 75% of complaints, while X followed closely at 70%. Facebook achieved the best record, responding to 88% of complaints. The nature of platform responses ranged from mild labeling to takedowns.
Yulia Alekseeva of the DW Akademie described a gamification approach to teaching information literacy skills to children who have been displaced and traumatized by the war in Ukraine. And Erich de la Fuente of Florida International University described the migration of disinformation from Latin America to Spanish-speaking communities in the U.S. Noting that the Russian RT outlet has more correspondents in Latin America than any other foreign news agency, he described RT’s focus on reducing support for Ukraine in its war against Russia and how RT’s messages also reach, and influence, Latin Americans who have migrated to the U.S.
Completing the global survey of disinformation and threats to responsible, independent journalism, Cathleen Berger and Charlotte Freihse discussed the work of Upgrade Democracy, a project sponsored by the Bertelsmann Foundation of Germany. Among its other activities, the project has surveyed the struggle between disinformation and open, democratic societies around the world and has produced reports such as “Truth in Turmoil: Countering Disinformation in Latin America.”
AN EVOLVING RECOGNITION OF THE STAKES
In my report on last year’s summit, I commented that the focus was on journalists and technologists and that the role of governments and politicians was largely absent. The examples in this article—just a few of the many excellent presentations—illustrate how this year’s conference addressed that gap. Focusing on Latin America, it vividly showed that free and responsible journalism is threatened both by censorship and by the efforts of state and non-state actors to drown it out with disinformation.
Meanwhile, AI—the newest human technology—is developing in the tradition of the oldest human technology, fire, which can either heat our homes and cook our food—or burn us alive. It’s up to the defenders of democratic, open society to ensure that it does the former, not the latter. The stakes could not be higher.