Political campaigning and election news have reached a fever pitch in 2024. More than 40% of the global population will vote or has already voted for leadership this year, with elections being held in 50-plus countries. While claims of election shenanigans are a tale as old as time (our motto in Chicago is “Vote Early, Vote Often”), the evolution of misinformation and disinformation combined with artificial intelligence (AI) has created new challenges for all of us who are dedicated to free and fair elections, the democratic process, and the peaceful transfer of power. In a striking KFF poll from December 2023, 31% of Americans stated that they feel uncertain about 2024 presidential election information “all or most of the time,” with more than half stating that they “sometimes” feel uncertain. Further, an April 2024 survey conducted by the Imagining the Digital Future Center at Elon University found that:
- 73% of Americans believe it is “very” or “somewhat” likely AI will be used to manipulate social media to influence the outcome of the presidential election—for example, by generating information from fake accounts or bots or distorting people’s impressions of the campaign.
- 70% say it is likely the election will be affected by the use of AI to generate fake information, video and audio material.
- 62% say the election is likely to be affected by the targeted use of AI to convince some voters not to vote.
- In all, 78% say at least one of these abuses of AI will affect the presidential election outcome. More than half think all three abuses are at least somewhat likely to occur.
INFILTRATION
In 2017, a University of Oxford study found that one-third of pro-Donald Trump tweets during the first debate of the 2016 presidential election were generated by bots. That same year, The New York Times reported that social media imposters “leaked” both real and fabricated documents to influence the presidential election in France. The malicious intent remains the same in 2024, but the level of sophistication required to employ these tactics is now significantly lower. As Darrell M. West of the Brookings Institution comments in Popular Science, “We’ve lowered the barriers of entry to basically everybody.” Since it is now so much easier to use AI tools to create content and visuals, West offers this analogy: “We put a Ferrari in the hands of people who might be used to driving a Subaru.”
There are multiple ways that disinformation and AI might infiltrate our media consumption, with new methods evolving almost daily. Recent research from Serious Insights and the Brennan Center for Justice has uncovered some commonly used tactics:
- Sophisticated robocalls with deepfake candidate voices
- AI bots bombarding voters with messages via social media posts
- Fake videos of candidates committing crimes or making fabricated statements
- Biased AI algorithms manipulating mainstream political talking points that already alienate marginalized communities
- Voter suppression via false dates and incorrect location information
TRENDS AND TESTING
It often seems that we are fighting a losing battle in encouraging constituents to evaluate information critically. For example, a Backlinko study on Google search behavior released in August 2023 uncovered 13 trends. The following are a few:
- [S]earchers use one of Google’s autocomplete suggestions 23% of the time.
- Only 17% of users bounced back to the search results after clicking on a result.
- The majority (59%) of Google users visit a single page during their search session.
- 19% of searchers click on a Google Ad during their search.
- Only 0.44% of searchers go to the second page of Google’s search results.
- [H]alf of all search sessions are finished within 53 seconds.
While these behaviors imply that pedestrian Google searches are unlikely to yield the high-quality, vetted information that voters need to make informed choices, test results from using AI chatbots to find election information are even more disconcerting. Experts working with the journalism studio Proof News tested five AI chatbots (Claude, Gemini, ChatGPT-4, Llama 2, and Mixtral) by asking them election-related questions and rating their responses in four categories: bias, inaccuracy, incompleteness, and harm. According to the study’s analysis, “Overall, the AI models performed poorly on accuracy, with about half of their collective responses being ranked as inaccurate by a majority of testers. More than one-third of responses were rated as incomplete and/or harmful by the expert raters. A small portion of responses were rated as biased.”
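To picture how such a panel verdict works, here is a minimal sketch in Python. The four category names come from the study, but the votes and the simple majority rule are invented for illustration; Proof News’ actual rubric and data are its own:

```python
# Hypothetical scores for one chatbot response: four expert raters each
# flag (True) or clear (False) the response in each category. The
# category names are the study's; the votes below are made up.
ratings = {
    "inaccurate": [True, True, True, False],
    "incomplete": [True, False, False, True],
    "harmful": [False, False, False, False],
    "biased": [False, False, False, False],
}

def majority_flags(scores):
    """Return the categories flagged by a majority of raters."""
    return [category for category, votes in scores.items()
            if sum(votes) > len(votes) / 2]

print(majority_flags(ratings))  # -> ['inaccurate']
```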
The New York Times considered a different possibility: What if we tinker with the actual tools themselves to make them provide the types of answers we want? It turns out that this manipulation was easy to pull off. Jeremy White reports that The New York Times altered copies of the Mistral large language model, using posts from Reddit and Parler as the training set. Fine-tuned with posts from both the far right and the far left, the model produced remarkably disparate answers to questions involving polarizing topics such as critical race theory, immigration, and taxes. In further adjustments to the model, The New York Times discovered ways to make answers more aggressive by ranking potential responses. Additional tweaks, such as changing the randomness settings, made replies more like toxic rants, and adding diversity led to “typos, punctuation errors and uppercase words.”
Mistral declined to comment, but White echoed West’s concern about how easily these potentially nefarious tools can be accessed and misused. Reminding us that entire teams of people were required to produce disinformation in the 2016 U.S. presidential contest, White writes, “Now, one person with one computer can generate the same amount of material, if not more. What is produced depends largely on what A.I. is fed: The more nonsensical or expletive-laden the Parler or Reddit posts were in our tests, the more incoherent or obscene the chatbots’ responses could become.” He continues, “And as A.I. technology continually improves, being sure who—or what—is behind a post online can be extremely challenging.”
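For readers curious what “changing the randomness settings” means in practice, most text-generation APIs expose it as a sampling temperature. Here is a minimal sketch assuming the open source Hugging Face transformers library and the small GPT-2 model, not the fine-tuned Mistral copies the Times actually used; the prompt is invented:

```python
from transformers import pipeline

# Small open model for illustration only.
generator = pipeline("text-generation", model="gpt2")

prompt = "The most important issue in this election is"

# Low temperature: the model sticks to high-probability, bland wording.
tame = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.2)

# High temperature: the probability distribution flattens, so rare and
# often incoherent tokens get sampled; paired with toxic training data,
# this is the "rant" effect the Times observed.
wild = generator(prompt, max_new_tokens=40, do_sample=True, temperature=1.8)

print(tame[0]["generated_text"])
print(wild[0]["generated_text"])
```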
FIGHTING BACK IN INDIA
India’s election period ran from April to June in 2024, and during that time, research found that 20% of political ads run were from “surrogates or ‘shadow’ accounts,” according to The Washington Post. Further, 20% of those shadow accounts ran ads tied to far-right and violent groups, and ads that “potentially broke election laws against hate speech and misinformation” drew more than 60 million views.
A study from the Tech Transparency Project demonstrated how easy it is to buy a real Facebook account to run ads under a different name. The process begins when approved posters sell their accounts. Buyers can then change the account and administrator names and begin to run ads, including in countries other than their own, raising “major concerns when it comes to foreign interference in America,” according to Tech Transparency Project director Katie Paul in the same Washington Post piece. Although Meta is supposed to match photo IDs and local addresses before allowing accounts to run political ads, The Washington Post reports that research has found that addresses often do not match the named businesses or that the wrong sponsors are listed. The Washington Post alleges that Meta took more than a year to remove an Indian military propaganda organization, “then changed its published rules to avoid announcing the takedown.”
Paul believes that India’s experience should provide a cautionary tale to countries whose elections have not yet taken place. “India is the canary in the coal mine,” she warns. The good news is that India fought back, reports The New York Times, by using a strategy that can be adapted to other situations and elections. A collective of Indian media houses formed the Deepfakes Analysis Unit, and through a combination of media fact-checking partnerships, AI-detection software, and old-fashioned boots-on-the-ground sleuthing, it was able to reveal deceptive content.
FIGHTING BACK IN TAIWAN
Some countries have seen private citizens take up the fight for truth in elections. In advance of elections in January 2024, Taiwan’s reaction was fast and nimble. The Associated Press (AP) reports that as fact-checking organizations snuffed out fake news, the Central Election Commission held news conferences to discredit rumors and innuendo. A popular YouTuber contributed videos that explained how votes are counted.
In a lesson for nascent and not-so-nascent democracies everywhere, the secret to Taiwan’s success seemed to be using a multi-pronged approach to discredit fake news rather than relying on one strategy alone. Dubbed a “whole of society response” by Kenton Thibaut of the Atlantic Council’s Digital Forensic Research Lab, it relied on “government, independent fact-check groups and even private citizens to call out disinformation and propaganda,” AP notes. Alexander Tah-Ray Yui, Taiwan’s de facto ambassador to the U.S., offered this advice for dealing with disinformation: “Find it early, like a tumor or cancer. Cut it before it spreads.”
FIGHTING BACK IN THE U.S.
Some U.S. states and municipalities are taking this advice to heart. At least 18 states have statutes governing the use of AI in elections and campaigns, with accompanying punishments ranging from fines and damage payments to possible misdemeanor or felony charges, notes the National Conference of State Legislatures. In Maricopa County, Arizona, local celebrities such as players from the NBA’s Phoenix Suns have begun community outreach campaigns to promote voting and explain procedures.
Former Speaker of the U.S. House of Representatives Tip O’Neill famously said that “all politics is local,” and the state of Michigan is proving it. Its Secretary of State, Jocelyn Benson, whose office is entrusted with election oversight, has formed voter confidence councils at the local level, The Washington Post notes. Comprising trusted community leaders, these councils share accurate voting and election information at churches, clubs, and schools. “And many states—including crucial swing states such as Pennsylvania and Wisconsin—are maintaining fact-checking websites that aim to dispel common election-fraud narratives,” reports The Washington Post.
Further, Benson’s colleagues in the National Association of Secretaries of State have joined forces to create Can I Vote, a nonpartisan website offering myriad voter resources: sections on registering to vote, checking voter registration status, finding the correct polling place, valid forms of identification, absentee and early voting, overseas voters, and becoming a poll worker, plus a directory of election officials.
COMPANY COMMITMENTS
The question remains: What are the Big Tech and social media companies contributing to this effort?
Companies such as Google, Microsoft, Meta, Amazon, and OpenAI have made their concerns known for years, and in July 2023, they reached a voluntary agreement to work on implementing watermarks for images created with their tools. By February 2024, the industry was addressing election-related concerns head-on: 20 companies formed a consortium that is “making a series of commitments to try to detect AI-enabled election misinformation, respond to it and raise public awareness about the potential for deception,” Bloomberg reports. As of July 2024, Google and Meta require advertisers to disclose AI-generated content or images in election ads.
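The watermarking the companies agreed to pursue (schemes such as C2PA content credentials and Google’s SynthID) is engineered to survive cropping and re-encoding, and none of it reduces to a few lines of code. But the underlying idea, attaching machine-readable provenance to a generated image, can be sketched with Pillow’s PNG metadata; a tag this naive is trivially stripped, which is exactly why sturdier watermarks are the goal. The filenames and tag names below are invented for illustration:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Toy provenance tag: write a disclosure flag into a PNG text chunk.
image = Image.new("RGB", (64, 64), "gray")
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model")  # hypothetical tool name
image.save("ad_image.png", pnginfo=metadata)

# A platform-side check could read the flag back before serving the ad.
loaded = Image.open("ad_image.png")
print(loaded.text.get("ai_generated"))  # -> "true"
```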
In the European Union (EU), there are laws to encourage these companies to do right, including the Digital Services Act, which affords the European Commission the power to fine noncompliant companies up to 6% of global revenue, as well as to raid offices, interview company officials, and gather other evidence. For example, The New York Times notes that under the auspices of this act, Meta is currently under investigation by the EU for “the spread of disinformation on its platforms Facebook and Instagram, poor oversight of deceptive advertisements and potential failure to protect the integrity of elections.”
According to The Washington Post, in the EU, Google partnered with local organizations to create animated ads on YouTube, Facebook, and Instagram. These video spots discussed commonly used disinformation techniques and highlighted other tactics used to deliberately mislead voters. Will we see something similar in the U.S.? Not likely, says Sander van der Linden, a professor at the University of Cambridge who worked with Google to develop such techniques. He believes that the political fallout and complaints from constituents are not worth the trouble for members of Congress. “It’s a political risk for them,” he told The Washington Post.
TIPS
In addition to our usual toolkit of how-to guides for spotting fake news and evaluating sources, what can we in the library and information industry do to protect the integrity of the democratic process? A Frontiers in Psychology article notes that, as far back as the 1960s, there was “inoculation theory,” or “persuading others to avoid persuasion.” In 1961, Yale psychology professor William McGuire found that when people are exposed to a weakened form of a misguided argument followed by a strong refutation of that argument, they are more likely to reject the false argument when they encounter it later. For example, begin a conversation with, “You might have heard (insert falsehood here); I’ve heard it too. But I did my research and found that (insert facts).”
Essentially, it comes down to the phrase “pre-bunk to de-bunk.” Don’t wait for fake news to take hold. Be proactive, stay on the lookout, and snuff out possible erroneous claims immediately by pre-bunking them. Those trusted community members play a role here too: When the inoculation procedure is laid out by someone we trust, we are more likely to believe it. This is definitely true of librarians, who rank in the top five most trustworthy professions alongside healthcare providers and educators, according to a recent EveryLibrary survey. And remember, partnerships are critical. If you can enlist another respected community member to help disseminate truthful election information alongside you, all the better.