False information spreads rapidly, altering our response to issues like climate change. Learn about the impact of misinformation vs. disinformation and how to safeguard yourself.
There’s a reason why information is often considered the most powerful currency in the digital era. Modern technology has changed the flow of information, “good” or “bad,” to become faster and reach more people than ever. As journalist Natalie Nougayrède wrote in The Guardian: “The use of propaganda is ancient, but never before has there been the technology to so effectively disseminate it.” Social media platforms, in particular, have become a fertile ground where misinformation and disinformation flourish.
Emerging technologies are constantly posing new challenges for senders and recipients of information. In times of crisis, false news travels fast, creating chaos and affecting public opinion with the potential to change the outcome of a situation. This can do incredible damage — on a global political scale and in our social circles.
We experienced this during the height of the COVID-19 pandemic, and we are currently seeing it happen in the Russian war on Ukraine, with false information spreading worldwide. We have also long watched it happen in discourses on climate change, where disinformation has influenced public opinion and delayed political action.
This article explores the differences between misinformation vs. disinformation, how each affects the discourse on climate action and what you can do to stop false information from spreading.
What is Misinformation and How Does It Spread?
Have you ever accidentally shared a piece of information that turned out to be false? The term “misinformation” is commonly used to refer to information that is wrong or inaccurate but shared without the intent to harm.
Examples of misinformation include information that is vague or missing context, false interpretations, or factual errors like getting a name, date or number wrong or misquoting a news piece you’ve read. This can also happen in professional news reporting. Clickbait headlines containing misleading language or unverified facts, as well as wrong connections — titles or images that don’t support the content of an article — are also considered misinformation. Other examples are satire and parody.
According to the Pew Research Center, seven in ten Americans use social media to interact with one another, receive news content or exchange information. This means there is a lot of potential for false information to be passed on from individual users to large groups. Misinformation spreads rapidly through the digital sphere via social networking platforms, private messengers, blogs, forums and other online sites — but also by word of mouth. In fact, a study by MIT has shown that false information spreads much faster than actual news.
Misinformation can be challenging to spot because there usually is some truth to it. Most false information — intentional or unintentional — is shared by people who are unaware that what they are sharing isn’t true and even think that it is helpful, according to Claire Wardle, co-founder of First Draft News, a project that offers guidance on dealing with mis- and disinformation online.
Disinformation: Intent to Deceive
In terms of misinformation vs. disinformation, the latter constitutes a deliberate attempt to deceive its recipients into thinking a claim is accurate, usually to change public opinion and advance specific agendas by causing confusion and spreading fear and suspicion. These claims range from information presented in a false context, for instance, a picture with an altered caption, to otherwise manipulated or entirely fabricated content, like fake websites, deep fakes or hoaxes used to gather personal data (also known as spear phishing).
Like misinformation, disinformation is primarily spread on digital platforms such as social media channels, games, websites and messenger apps, as well as through state-controlled media outlets subject to censorship under authoritarian governments.
Note: such media outlets are to be distinguished from public broadcasting, which is not subject to governmental editorial oversight.
Misinformation vs. Disinformation: How Does Disinformation Spread?
The ways disinformation spreads are complex. As the World Economic Forum states, disinformation is distributed by various online actors, including “governments, state-backed entities, extremist groups and individuals.” These actors use several elaborate methods which often draw on sophisticated technology to achieve their goals, including:
- inauthentic amplification of disinformation through bots, trolls and influencers
- exploiting advertising tools to micro-target users who are likely to share disinformation
- harassment and abuse, manipulating an audience or using bogus accounts to “obscure, marginalize and drown out journalists, opposing views and transparent content” (WEF).
In addition, new kinds of misinformation and deception have emerged as a result of the development of artificial intelligence (AI): The UN Refugee Agency (UNHCR) refers to these technologies as “synthetic media” to denote “the artificial production, manipulation and modification of data and multimedia by automated means, especially AI algorithms, to mislead or change original meaning.”
Synthetic media is feared for its potential to boost fake news, promote misinformation and sow distrust of reality. One type of synthetic media that has received much attention due to its use in hoaxes, financial fraud, false news and revenge porn is “deep fakes” (UNHCR).
On the other hand, the human element in the spread of disinformation should not be underestimated, both in terms of who instigates a “fake news” campaign and who keeps it going. In 2018, three MIT scholars conducted a study on how information spreads on Twitter and found that humans were primarily responsible for the rapid distribution of false or misleading information. Nevertheless, using bots is a powerful way to target, convince and mobilize crowds of real people to deceive others into believing a particular claim.
What Is the Impact of Disinformation?
Disinformation can shift public opinion to the extent that it can influence policy or delay political action, manipulate democratic processes like elections – or decide the outcome of a war. As the European Parliament states in a 2021 study, disinformation undermines several human rights, feeds polarization and erodes trust in institutions and among communities. Moreover, it affects not just political but also economic, social and cultural facets of life.
One of the key findings of a peer-reviewed study published by the Harvard Kennedy School is that exposure to fake news (defined as fabricated information that mimics the format of news content without the editorial standards and practices of legitimate journalism) is associated with a general decrease in media trust.
Plenty of recent examples show the detrimental effects of strategic disinformation campaigns.
As the American Psychological Association writes, social media disinformation plays a role in the psychological warfare in the battle for Ukraine, “causing confusion, fueling hostilities and amplifying the atrocities in Ukraine and around the world.”
Misinformation vs. Disinformation: False News and the Environment
Misinformation and disinformation have significantly affected public discourse on climate change by creating confusion, skepticism and doubt. According to the Under-Secretary-General for Global Communications at the UN, “Climate action is being undermined by bad actors seeking to deflect, distract and deny efforts to save the planet. Disinformation, spread via social media, is their weapon of choice.”
For a long time, these “bad actors” — among them stakeholders with significant influence in the fossil fuel industry, politics and throughout civil society — chose to deny climate change outright, preventing any action toward a climate-friendly future. So it may seem like a good sign that many of these players have stopped denying the existence of climate change in recent years. Unfortunately, this reflects a shift from an unfeasible strategy in the face of overwhelming scientific evidence to other, more promising ploys.
Read “The Great Climate Change Hoax? How to Fight Climate Denial” to learn more about climate change denial and how to react to it.
How Disinformation and Misinformation Shape the Climate Discourse
Those actively working against climate-friendly policy have been focusing on another form of denial: negating or diminishing the negative repercussions of fossil fuels to support their idea that human behavior doesn’t impact climate change. Trivializing the climate crisis has also become a popular strategy, even as its effects grow more pervasive.
There is a term for these kinds of narratives: “discourses of climate delay” acknowledge the reality of climate change while defending inaction or insufficient action, for instance by raising doubt that mitigation is possible, as described in an article published in Global Sustainability. These narratives pervade current discussions about climate action by drowning out trustworthy sources with disinformation, which ranges from data presented in a false context to smear campaigns intended to discredit or discourage activists, to framing climate action as a threat to a certain way of life. A 2021 report by the Center for Countering Digital Hate showed that just 10 publishers are responsible for 69 percent of all Facebook interactions with content promoting climate change denial.
Misinformation also factors into the problem. As Claire Wardle writes, “When disinformation is shared, it often turns into misinformation … Often a piece of disinformation is picked up by someone who doesn’t realize it’s false, and shares it with their networks, believing that they are helping.”
The Severe Impact of False Information on Climate Action
The consequences are severe. According to the UN, rampant disinformation is delaying climate action. And it is not a new phenomenon: NPR has written about how the fossil fuel lobby has been able to influence lawmakers for years by “sowing public doubt about the science of climate change” and thereby halting US climate policy. Recent studies have confirmed this, showing how the fossil fuel industry’s misinformation strategy has contributed to “everything from political inaction to the rejection of mitigation policies” (NRDC). The latest report by the Intergovernmental Panel on Climate Change also explicitly addressed the issue of climate misinformation, claiming that a “deliberate undermining of science” was causing “misperceptions of the scientific consensus, uncertainty, disregarded risk and urgency, and dissent.”
Recognizing and Addressing Misinformation vs. Disinformation
What are the best strategies for recognizing and addressing misinformation and disinformation and countering discourses of climate delay? Experts agree that media literacy and critical thinking skills are key to combating the spread of false information. Here are four tools to help you stay safe.
1. Familiarize yourself with common strategies of deception
Clickbait, fake websites and trolls: Knowing about the different kinds of misinformation and disinformation is a crucial first step towards recognizing and exposing them. Media literacy skills — learning to analyze and evaluate information online critically — can help you stay safe.
Fostering your intellectual humility is likely to help you think more critically about the things you think you know — and recognize deception.
2. Check your sharing habits
The widespread explanation that people are likely to believe false information because it aligns with their convictions (confirmation bias) is not as significant as you might think. Several studies show that spreading false information is more closely linked to a lack of critical thinking. Research conducted by MIT suggests that online misinformation can be reduced by “shifting attention to accuracy.”
In a new study (2023), researchers conclude that the architecture of social media platforms that reward users for habitually sharing information is a major contributor to the circulation of “fake news.” Many experts agree that concerted efforts to restructure the design of social media networks are necessary to substantially reduce the risk and spread of false information.
Messenger apps pose a particular risk because information is exchanged privately. So it is up to those participating in a conversation to ensure these spaces don’t go unchecked. This can be challenging since we receive information from people we trust and don’t necessarily question its veracity. Before passing on any information, remember that hitting send is easy, but reversing the effects of misinformation is nearly impossible, since corrections typically don’t spread as widely as the initial false information. Make sure to let your friends know if they’ve (inadvertently) sent you a piece of incorrect information.
3. Avoid misinformation and disinformation by paying attention to the source
Get your news from trustworthy sources and check those you are unfamiliar with. Consider potential biases and dependencies. Can you find more than one source to back up the claim? Is it even a real source? You can consult independent fact-checking organizations to help you distinguish legitimate from illegitimate sources. Look at the International Fact-Checking Network founded by The Poynter Institute or the toolkits offered by First Draft. The American Psychological Association also provides a lot of resources on this topic.
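Fact-checking can even be done programmatically: Google’s Fact Check Tools API lets you search claims that have been reviewed by fact-checking organizations like those in the IFCN. As a rough sketch, the snippet below builds a request URL for that API’s claim search; the endpoint and parameter names follow the public v1alpha1 API, and `API_KEY` is a placeholder you would replace with your own key.

```python
from urllib.parse import urlencode

# Public endpoint of Google's Fact Check Tools claim search (v1alpha1)
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_fact_check_query(claim: str, api_key: str, language: str = "en") -> str:
    """Build a request URL that searches fact-checkers' reviews of a claim."""
    params = urlencode({"query": claim, "languageCode": language, "key": api_key})
    return f"{FACT_CHECK_ENDPOINT}?{params}"

# Example: look up reviews of a common climate-denial claim
url = build_fact_check_query("climate change is a hoax", "API_KEY")
print(url)
```

Fetching that URL (with a valid key) returns a JSON list of matching claims together with the publisher and rating of each fact-check, which you can use to judge whether a viral claim has already been debunked.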
4. Be vigilant! New technologies demand new media skills
Learn how to spot fakes by questioning their authenticity and examining details.
How to recognize a manipulated image:
Perhaps you’ve seen the recent image of Pope Francis sporting a giant white puffer coat, and maybe something felt off. The viral photo has taught us a valuable lesson by reminding us to trust our gut and be especially careful when encountering sensational news. Sometimes, questioning whether something makes sense and taking a closer look is the only way to expose a fake: Would the Pope really appear in such dress? Is that the signet ring he’s wearing, or is it something else? Are those more than five fingers on his hand? At the current stage, AI-generated images often still show signs of manipulation. Though the technology will keep improving, the episode shows that we can train ourselves to think and look critically at the information we come across.
If you’re uncertain whether an image really depicts what the source is claiming, you can always try a reverse image search on search engines like Google or websites like TinEye as a first step to uncovering foul play. That way, you can find out if professional news sites have already analyzed the image for authenticity or if it has shown up in other places — and was perhaps taken out of context, at another time or place. This Medium article explains how it works.
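A reverse image search can also be scripted. Both TinEye and Google Lens accept an image URL as a query parameter on their public search pages; the sketch below assembles those lookup URLs. Note that the URL patterns are assumptions based on the services’ public search forms, not an official API, so they may change.

```python
from urllib.parse import quote

def reverse_image_search_urls(image_url: str) -> dict:
    """Return lookup URLs that run a reverse image search for image_url."""
    # Percent-encode everything, including "/" and ":", so the image URL
    # survives intact as a single query-parameter value.
    encoded = quote(image_url, safe="")
    return {
        "tineye": f"https://tineye.com/search?url={encoded}",
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
    }

# Example: check where a suspicious viral image has appeared before
urls = reverse_image_search_urls("https://example.com/viral-photo.jpg")
for name, url in urls.items():
    print(name, url)
```

Opening either URL in a browser shows where else the image has surfaced, which can reveal whether it was taken at another time or place and shared out of context.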
Beware of bots and trolls:
If you ever find yourself in a discussion in the comments section, try to ensure you aren’t arguing with a bot or another fake identity. Sometimes you can tell by studying a profile’s pictures, bio, number of followers or content posted. If unsure, follow your instincts and engage in constructive conversations only. Abstain from useless discussions. Be careful when encountering users trying to provoke or bully others, and always report hate speech to the platform.
Read more:
- The Keystone Pipeline: Myths and Facts of a Climate Disaster
- Ecopsychology: What Is it & Can I Study it?
- What Is Sustainable Development and Why Is It Necessary?