This article originally appeared in the September/October 2025 issue of The Information Advisor’s Guide to Internet Research. Learn more at informationadvisor.com.
The language around unintentional and maliciously created news and online content has evolved and may be evolving again. The term “fake news” appeared on the landscape between 2015 and 2016, originating as a comedic phrase Jon Stewart used to describe what was reported on The Daily Show, but it was co-opted by then-candidate Donald Trump to describe the mainstream media and was soon discarded by most observers once it lost any helpful meaning. During the last few years, the term fake news has largely been replaced by “misinformation” to describe honest or inadvertent errors and “disinformation” for maliciously created information intended to cause harm.
However, NewsGuard, one of the most prominent fact-checking operations, recently sent out an email titled “Commentary: Why We’re Moving Beyond ‘Misinformation’ and ‘Disinformation,’” written by McKenzie Sadeghi, NewsGuard’s AI and foreign influence editor. Here is an excerpt:
The words misinformation and disinformation once served a purpose. They gave name to a crisis that many had not yet recognized: an onslaught of falsehoods flooding the digital ecosystem, often with malign intent. These terms helped distinguish between honest mistakes (misinformation) and intentional, agenda-driven lies (disinformation), and helped define a complex information landscape. But as the landscape and political climate have changed, so has the language. These words have now been politicized beyond recognition and turned into partisan weapons by actors on the right and the left, and among anti-democratic foreign actors. …
At NewsGuard, we’re retiring these words as primary labels. Not because the threats they describe have vanished. To the contrary, the threats have increased. But rather because the words no longer help us explain these threats.
We now face an information environment that is more complex, more coordinated, and more technologically sophisticated. AI-generated content, deepfakes, secretly partisan news sites posing as independent local news outlets, and foreign state-linked influence campaigns are reshaping how falsehoods spread. But the language we’ve used to describe those threats hasn’t kept up. It’s vague, overused, and increasingly seen as partisan. When everything is “disinformation,” nothing is, and public trust continues to disintegrate.
It’s no longer enough to call something fake—because those on all sides of the political divide use the term so avidly and casually. Instead, we are turning to language that’s more precise, harder to hijack, and more specific. We will describe what a piece of content actually does, such as whether it fabricates facts, distorts real events, or impersonates legitimate sources. We’ll explain whether a claim is explicitly false, AI-generated, unsubstantiated, or manipulated. …
I had a chance to ask Sadeghi some questions about this decision in an email exchange. Below is an edited summary of our conversation:
Was there a singular development or event that spurred this change in language at this particular moment?
There wasn’t a single event but rather the accumulation of many over the years. We watched “misinformation” and “disinformation” become increasingly politicized and weaponized across the political spectrum and around the world, particularly during major events like the COVID-19 pandemic, the 2020 and 2024 U.S. elections, and the rise of AI-generated content. What ultimately spurred the decision to shift our language now was a recognition that the terms were no longer helping us do what they were designed to do, which is to explain information threats clearly and apolitically. In fact, they were increasingly making that job harder, and the use of such terms would often shut down conversations rather than open them up.
The labels misinformation and disinformation have also become harder to apply in practice over the years, especially as these campaigns have grown more sophisticated in how they’re distributed. What starts out as an innocent rumor online can be picked up by foreign malign actors and vice versa. A state-sponsored narrative can be laundered through influencers, memes, or AI-generated content until it appears organic. In this environment, the original intent behind a piece of content becomes irrelevant, and the lines between accidental falsehoods and deliberate deception blur quickly. Using misinformation and disinformation involves assigning intent and motive, and sometimes that’s impossible to establish.
Beyond the words being politicized by partisan actors on both the right and the left, one thing that also started to play out was how much misinformation and disinformation were being used interchangeably by government officials, reporters, academics, and experts in the industry. It seemed like these terms were often adding more confusion, and even those who were supposed to be using them with precision were blurring the lines between them. At a certain point, the language became more about signaling than clarity, and that’s when the terms stopped being useful.
The Information Advisor’s Guide to Internet Research has been following the rise of “persuasive AI.” Would you categorize this concern in your taxonomy, or is it outside of it?
I think this is part of the reason why we’re moving toward more precise descriptors. Persuasive AI poses a real risk because it can generate false claims and because it can subtly manipulate users by mimicking tone, authority, and emotion. In a case like this, how can we know whether the AI system is intentionally spreading falsehoods to advance an ideological view (such as an AI model developed by China) or simply parroting inaccurate content it was trained on from the web? We can’t determine this, and trying to make that distinction can distract from the more important question, which is the content itself. What would instead be more useful here is to describe what the AI is doing: impersonating a real person, fabricating citations of nonexistent studies and experts, citing unreliable sources, etc. Being able to articulate the mechanism of manipulation is more valuable to readers, especially as persuasive AI becomes more embedded in everyday platforms.
Where does the notion of “propaganda” in general fit into this taxonomy? Is using the term too much like misinformation/disinformation and to be avoided?
I think “propaganda” is most useful in describing state-driven influence operations, particularly those that aim to shape public opinion rather than just spread false claims. I think propaganda covers a broader category of content and isn’t always false. One example that comes to mind is how during the U.S. pro-Palestinian college campus protests, we found that Russian, Chinese, and Iranian state media pumped out hundreds of articles covering these demonstrations. These articles weren’t advancing false claims, but they were designed to amplify domestic U.S. tensions, stoke anti-Western sentiment, and frame the U.S. as unstable or hypocritical. That to me is a case of propaganda: content with a strategic intent, designed to influence perception, often through volume, framing, and repetition, rather than false claims. Like misinformation and disinformation or any other labeling term, I think the term propaganda can be misused if it is applied too broadly or without evidence. I think it’s most helpful when paired with specifics, such as who is behind the content, what the message is trying to achieve, what kind of tactics are being used, etc. Rather than relying on the label alone, it’s more effective to explain how the operation works and what narratives it’s pushing.
Where does the notion of missing/cherry-picked relevant context from a claim fit? We see a lot of this when one side is trying to make a point and leaves out relevant information.
I think part of the reason the terms misinformation and disinformation became so muddled is how often they were applied to claims that were simply missing context or cherry-picked. When I was at USA TODAY working as an independent third-party fact-checker for Meta, we had a range of ratings we could apply to posts, such as “False,” “Missing Context,” “Altered,” “Partially True,” and others. But over time, even content flagged as “Missing Context” was often publicly labeled or interpreted as misinformation, which added to the confusion around the terms.
When we start using terms like misinformation or disinformation to describe content that may be misleading but isn’t factually wrong, the language begins to lose meaning. At NewsGuard, our standard has always been to set a very high bar for the claims we catalog and debunk. We focus on provably false claims that can be demonstrated with clear evidence to be inaccurate.
I think claims that are “missing context” fall into a grayer, more subjective category. What one person sees as lacking key background, another might interpret as a fair simplification. That doesn’t mean it’s not worth flagging (context is, in fact, where a lot of manipulation hides), but it requires a different kind of explanation. Instead of labeling it misinformation, it’s more helpful to describe exactly what’s missing and how that omission could lead to a distorted impression without assuming anything about the reader or why the context was missing.
The problem with using misinformation as a catchall for everything from hoaxes to poorly framed headlines lacking important context or cherry-picking is that it collapses important distinctions. A claim that is technically true but framed in a way that leads to a misleading conclusion isn’t the same as a fabricated, baseless conspiracy theory. And when both get lumped under the same label of “misinformation,” it erodes public trust in the fact-checking process and gives ammunition to critics who cast efforts to correct the record as a sign of bias or censorship. By being precise, we can better communicate the nature of the problem to the public. And I think that precision is especially important now, when bad actors often rely on half-truths, cherry-picked data, and decontextualized quotes to make their narratives seem more credible.
Might the new terms also be subject to the same denials and political polarization?
I think that any term can be politicized if the incentives are strong enough. But the hope is that by focusing on the function of the content rather than assigning intent, we sidestep much of that polarization. “False claim,” for example, is specific and evidence-based—something is factually provable or it isn’t. And descriptors like “AI-generated,” “manipulated,” or “unsubstantiated” refer to observable attributes supported by evidence, not motives. Using these more specific terms shifts the conversation away from who’s right or wrong politically and toward what’s verifiably true or not. I don’t think that any term is completely immune from misuse. Bad actors will still likely claim that something provably false is actually true or dispute whether a video was AI-generated. Or a politician may label a real video as AI-generated, raising the liar’s dividend issue [the liar’s dividend refers to people becoming so accustomed to fabrication that they begin to doubt the veracity of true information]. But by grounding our language in verifiable characteristics rather than motivations, we reduce the room for politicized denial and increase the chances that the public debate stays focused on facts and not feelings.
I see this as similar to the gradual shift away from the label of fake news in 2016 after it was politicized by Trump. The U.K. put out a report at the time stating, “The term ‘fake news’ is bandied around with no clear idea of what it means or agreed definition. The term has taken on a variety of meanings, including a description of any statement that is not liked or agreed with by the reader. We recommend that the Government rejects the term ‘fake news’ …,” and calling the term a “poorly defined and misleading term that conflates a variety of false information, from genuine error through to foreign interference in democratic processes.” I think most people in this space would now look back and agree that abandoning fake news as a primary label was the right move. I see our decision to move beyond misinformation and disinformation in this same light.
Given the incredible sophistication of today’s actors in mimicking news sites, persons, entities, voices, images, identities, and more, how can we teach learners that they can truly trust anything to be what it purports to be—even during a fact-checking process?
I think learners should shift away from the mindset of “What can I trust?” to “What questions should I be asking?” Rather than teaching students to trust or distrust content because it comes from a particular site or platform, we should teach them how to evaluate how the content was created and verified. Even content from trusted institutions can include mistakes, bias, or outdated information. Instead of asking, “Can I trust this?” people can ask questions such as, “Who made this and why?” “What evidence is the source providing and can I verify it independently?” “Is this claim consistent with other known facts?” “Is there anything emotionally manipulative about how this is being presented?” A source earns trust by being transparent about how information was gathered, by citing original evidence, by correcting errors, and by showing a consistent editorial standard.
Readers can look for these signs of accountability rather than relying on brand recognition. I also think that it’s OK to not know right away, and resisting the impulse to immediately believe or share something (especially if it provokes a strong emotional reaction) is part of responsible digital behavior. At a time when virtually anything can be fabricated at a very low cost, being skeptical and questioning content is a form of literacy.