A Deep Dive Into Election Deepfakes
by Anthony Aycock
Posted On June 4, 2024
AI. Have two letters ever looked so sinister together?

“A” is the alpha. The firstborn. Numero uno. Firsts are often a little stiff, burdened as they are with being so visible, so vulnerable—if this whole thing fails, won’t it be their fault? But with great responsibility comes great likeability.

As for “I,” who among us can’t relate to its skyscraper-like appearance, signifying that attention must be paid? Put these revered letters together, however, and they spell D-O-O-M, according to some. Others see the pairing not as dastardly, but as our salvation.

Election season is a new canvas for this world-defining chiaroscuro. Amid the political ads we all know and loathe has appeared a new strain: deepfakes. Librarians are at the forefront of efforts to maintain information integrity, so we should know as much as we can about this new and growing peril.

THE RISE OF DEEPFAKES

First, what is a deepfake? It is “an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name).” Basically, it’s a realistic-looking image, audio clip, or video of a situation that never happened. Deepfakes have been used for years in a number of industries, including politics, with little fanfare. But they made headlines back in January with a series of robocalls in which President Joe Biden seemed to discourage people from voting in the New Hampshire primary, dismissing it as “a bunch of malarkey.”

Except it wasn’t Biden. It was an AI-generated voice that sounded like him.

CONCERNS

Of course, artificial intelligence is neither good nor bad. It is a tool, an apparatus, a thing to be used. No doubt many of those uses are outstanding. But the downsides can be significant too. According to the Brennan Center for Justice, a recent survey found that 85% of Americans were “very concerned” or “somewhat concerned” about the spread of deepfakes.

This concern poses a threat of its own, as venal individuals can exploit it by falsely claiming that legitimate audio or video content is, in fact, artificial, and therefore fake. Law professors Bobby Chesney and Danielle Citron call this the “liar’s dividend.” As deepfakes become harder to spot, they argue, false claims that real content is AI-generated will become more persuasive as well.

You can see how this would be particularly appalling in the political realm.

GOVERNMENT REFORMS

The good news is, those in power are aware of the problem and are trying to do something about it. Last year, a bipartisan bill was introduced in Congress “to prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office,” although it hasn’t become law yet. Not content to wait for legislation, the Federal Communications Commission in February 2024 outlawed the sort of robocalls that had been used in New Hampshire the prior month. The Cybersecurity and Infrastructure Security Agency, a division of the U.S. Department of Homeland Security, has an excellent white paper discussing the electoral risks of AI and recommending strategies to mitigate them.

Reforms are also underway at the state level. According to the National Conference of State Legislatures (NCSL), 16 states have statutes governing the use of AI in campaigns and elections. Most were enacted in 2023 or 2024, although two states—California and Texas—were ahead of the curve, passing their legislation in 2019.

Other states have considered AI-related laws but haven’t yet reached a consensus. One challenge, according to NCSL, is defining precisely what AI entails. Another is determining the threshold for when AI use triggers a legal restriction. There are also First Amendment considerations, such as when AI is used to create, say, political satire. Finally, some states don’t see the need for this sort of legislation, saying, for example, that existing anti-defamation laws should be enough to protect candidates for office.

BEST PRACTICES

Naturally, new laws aren’t the only solution. The Brookings Institution describes eight best practices for state officials to “defend elections from the dangers of AI.” There are also education efforts such as “prebunking,” which rests on a simple observation: Debunks don’t reach as many people as misinformation does, and they don’t spread as quickly. Even when we are told that misinformation is false, research suggests it continues to influence our thinking. Better, then, to get ahead of misinformation, warning people about the false claims and tactics they are likely to encounter before those claims take hold.

Finally, earlier this year, 20 tech companies, including Google, Amazon, Microsoft, Meta, OpenAI, X, and TikTok, signed an accord to try to prevent their software from being used to interfere with elections. The pledge commits the signatories to take eight concrete steps, such as “seeking to detect the distribution of Deceptive AI Election content” and “providing transparency to the public.”


Anthony Aycock is the author of The Accidental Law Librarian (Information Today, Inc., 2013). He is a freelance writer (anthonyaycock.com) as well as the director of the North Carolina Legislative Library.




