Caroline Logan, Manager, CollaborateUp
In early 2020, dozens of 5G towers were set on fire across Europe.1 The cause? False information spread online claiming that the millimeter-wave spectrum used by 5G technology causes COVID-19. The falsehoods were bolstered by the fact that Wuhan had installed 5G towers before the COVID-19 outbreak. The claims spread online like wildfire, motivating some to burn down the very towers meant to boost connectivity.
The destructive nature of mis- and disinformation extends far beyond the well-meaning aunt or uncle sharing conspiracy theories on Facebook. The threat posed by misinformation (an untruth) and disinformation (an untruth, deliberately spread) is growing in scope and scale. Mis- and disinformation erode trust in public institutions, exacerbate class conflict, cultivate fear and hatred, embolden hostile actors, and jeopardize democracy.
Impact of misinformation
Like a nuclear blast, misinformation has some big, visible impacts that are easy to see and understand. At the same time, like the radiation poisoning that follows the initial blast, countless invisible and insidious effects lurk unseen, continuing to spread and unleash harm on the population.
Consequences manifest in the realms of politics, public health, the environment, and technology – causing harm, and even death. If individuals are misinformed, they may make decisions for themselves or their families that are not in their best interest. Compound misinformed decisions across the globe, and the magnitude of the consequences and their ripple effects on society grows exponentially.
The spread of misinformation is not a new phenomenon. In 1622, London printer Nathaniel Butter started the first British newspaper, which quickly gained popularity. The paper was embellished with inaccurate details and, in some cases, included stories that were simply made up. By the 19th century, members of the British Parliament lamented that they no longer had much power, given the influence of the newspapers on British subjects.2
While misinformation has been around for centuries, the technologies and platforms that now connect billions of people globally have magnified the threat. The acceleration of digital technologies has made information more accessible and shareable, increasing the speed at which lies spread. To make matters worse, a well-constructed lie often spreads faster than a complicated truth.
We have more information at our fingertips than ever before, and yet in the swirl of abundant noise we all find it increasingly challenging to discern fact from fiction. The internet does not have conventional gatekeepers such as professional editors and fact-checkers, making us our brothers’ and sisters’ keepers when it comes to sharing news and information on social networks.
Defining the issues
To explore the impacts of mis- and disinformation and potential ways to mitigate its spread, CollaborateUp convened a series of consultative roundtables in regions all over the world to better understand how governments, companies, and civil society in these geographies experience and approach this growing phenomenon. The report3 published following our research identified trends and recommended solutions outlining how we might understand and combat the effects of misinformation.
One of the most profound trends revealed in our research was our continuing failure to account for human psychology in technology platforms and regulatory frameworks. Our brains are built for an analog world, with instincts, autonomic responses, and cognitive coping mechanisms ill-suited to a digital one. Social platforms intentionally take advantage of these features, manipulating our own human wiring. As a species, we have not yet had time to adapt to the way information is shared and exchanged in a digital world.
Studies show that social media can be as addictive as gambling or drugs.4 Social media platforms track individual behavior, creating a tailored experience – engineered with feedback loops that promote addictive use of the platform. Unfortunately, negative or inflammatory content is more memorable than objective, fact-based content, which reinforces greater use of the platform and results in more frequent sharing of content unsupported by data or facts.
Tech platforms use algorithms that tailor content largely to each user's past interactions in order to consume our attention. As a result, algorithms are not objective sources of balanced information but are custom designed around the user's own behavior – and therefore biased. Exacerbating the problem, in many cases even reporters for traditional media outlets are incentivized based on the number of clicks their articles receive, reinforcing the use of inflammatory and misleading headlines.
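To make that feedback loop concrete, here is a deliberately simplified sketch of engagement-based ranking. It is not any platform's actual algorithm; the function name, post fields, and scoring rule are hypothetical, invented only to illustrate how ranking on past interaction and provocation narrows what a user sees next.

```python
from collections import defaultdict

def rank_feed(posts, user_click_history):
    """Order posts by a toy 'predicted engagement' score for one user.

    posts: list of dicts such as {"id": 1, "topic": "5g", "outrage_score": 0.9}
    user_click_history: list of topics the user has clicked in the past.
    """
    # Every past click strengthens the user's affinity for that topic.
    topic_affinity = defaultdict(int)
    for topic in user_click_history:
        topic_affinity[topic] += 1

    def predicted_engagement(post):
        familiarity = topic_affinity[post["topic"]]   # content that "feels familiar"
        provocation = post["outrage_score"]           # inflammatory content holds attention
        return 2 * familiarity + provocation

    return sorted(posts, key=predicted_engagement, reverse=True)

if __name__ == "__main__":
    posts = [
        {"id": 1, "topic": "5g", "outrage_score": 0.9},
        {"id": 2, "topic": "gardening", "outrage_score": 0.1},
    ]
    history = ["5g", "5g"]  # the user has already clicked two 5G stories
    for post in rank_feed(posts, history):
        print(post["id"], post["topic"])  # the 5G post ranks first
```

Each new click appended to the user's history skews the next call to rank_feed() further toward the same topics, which is the self-reinforcing loop described above.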
Tech platforms are fertile ground for the spread of mis- and disinformation for a number of reasons. Information can now be manipulated more easily than ever before with digital tools. Studies show that “shallow fakes” created by amateurs are more common, and more impactful in the spread of mis- and disinformation, than “deep fakes” created by sophisticated malign actors.5 New formats such as memes, which capture an emotion in a simple picture or short clip, have redefined the way a message can spread through powerful sentiment.
Inside the echo chamber
As information consumers and platform users, we are collectively impulsive and inattentive. After conducting a study on the impact of a “like” or “retweet,” researchers at the University of Notre Dame found that many social media users will press share based on partial information, such as a headline, without actually clicking the link or reading its content.6 Research shows that while people claim to value accuracy first and foremost when sharing information, the most significant factor contributing to sharing is signaling group belonging.7 As a result, social media has quietly and effectively become a form of tribalism.
We process information most fluently, and are most likely to believe it, when it feels familiar. Whether we process something analytically or intuitively, our mental models favor accepting messages compatible with pre-existing beliefs: such a message does not contradict current knowledge and as a result “feels right.”8 Whether consciously or not, people are drawn to social media because it often reconfirms their pre-existing beliefs, and many find themselves in an echo chamber where sources of information feel familiar.9 Before the birth of the internet, most people got their news from neighbors and familiar faces in their community. The “Metaverse” intentionally simulates this same sense of community, tricking our brains into believing we are chatting with familiar neighbors when in many cases we are online with a vast network of strangers all over the globe. The result is that we are more likely to believe misinformation when its sources feel familiar than to approach it with a skeptical eye.
Can we correct misinformation?
Although there has been a significant emphasis placed on fact-checking misinformation and disinformation in traditional media, on social media platforms, and in closed networks — and these efforts should continue — it is essential that we focus more heavily on producing and distributing fact-based information in the first instance.
This is critical because we possess cognitive traits that often render misinformation resistant to correction.10 Research has shown that once individuals read, hear, or see false information, its influence is extremely difficult to erase. Further, many correction attempts repeat the false information in the process of debunking it, heightening the visibility of the myth and possibly spreading it to people who might otherwise never have seen it.11 In this way, even if false information is corrected and we believe the correction, the human psyche is forever reconfigured.
A study by Schwarz et al. examined our memory for misinformation, seeking to explore why retractions of misinformation are so ineffective. The study showed that efforts to retract misinformation in some cases backfire and, ironically, increase misbelief. Schwarz and colleagues make an important distinction between ignorance and misinformation, noting that ignorance rarely leads to strong support for a cause. In contrast, false beliefs based on misinformation are often held with strong and infectious conviction. For example, those who most vigorously reject the scientific evidence for climate change also believe they are the best informed about the subject.12
Misinformation retractions can also be ineffective because of our innate reactions to certain sources.13 People generally do not like being told what to think and how to act, so they may reject particular retractions. Considerable research has examined misinformation effects in courtroom settings in which mock jurors are presented with a piece of evidence that is later ruled inadmissible. When jurors were asked to disregard the tainted evidence, conviction rates were higher when the judge accompanied the “inadmissible” ruling with extensive explanations, thick with legal jargon, than when he or she left the inadmissibility unexplained.14
Under this “continued influence effect,” people will continue to rely on the misinformation to which they have been exposed, even after retractions or corrections.15
How might we begin to make strides in countering misinformation when our human wiring is stacked against us? Start by adopting a “harm reduction mindset.” One promising tactic: train fact-checkers on the best ways to correct false claims while avoiding unintended consequences that may actually reinforce the “stickiness,” or believability, of false information. Correction strategies should focus on emphasizing what is true without repeating the details of the false information. Furthermore, fact-checkers should provide a simple, brief rebuttal in order to retain the attention of information consumers. To create common ground and avoid patronizing consumers, it is also useful to frame evidence in a worldview-affirming manner, reinforcing the values of the audience when correcting misinformation.16
A plan to tackle online misinformation
Taking a “harm reduction mindset” to decrease the impact of mis- and disinformation and increase the spread of accurate information starts with recognizing that disinformation has become endemic to society. We cannot eradicate it, so we must learn to live with and manage it. We must also recognize that disinformation has multiple causes and therefore requires multiple solutions. For example, to make information more useful, it must be interoperable, usable by multiple institutions regardless of origin. In addition, information must be verifiable by independent sources so that more people can rely upon it. Governments can play a role in ensuring that information is gathered according to commonly agreed-upon standards and in establishing processes for verification and oversight to reduce misinformation.
While social media companies have started taking greater responsibility, their platforms, as currently constructed, play to the worst of the human psyche, and legislators and regulators lag in their ability to provide guidance and oversight. Left alone, social media companies will continue to prioritize profit over the public good. Just as tech platforms have leveraged human behavioral science to increase their user base, they have the opportunity to apply that same science to mitigate the spread of mis- and disinformation. This potential has not yet gained significant traction, leaving individual users overburdened with responsibility for judging the accuracy of information.
There have, however, been some encouraging initial experiments and proposals for how to tackle this problem:
- Twitter is piloting a “pause button” that encourages people to click and read before sharing. This could be scaled, as could other “autonomic nudges” that prompt us to make different choices. These “nudges” add an extra step to the sharing process, disrupting the instinct to share immediately; a minimal sketch of this kind of friction follows this list. Such “strategically introduced friction” might be the interruption needed to break the dopamine-seeking reward loop and our addiction to sharing mis- and disinformation.
- Experts recommend some form of media literacy education, coupled with tech-enabled nudges, so that resisting misinformation becomes a more automatic, unconscious response. Media literacy is defined as “the ability to determine what is credible and what is not, to identify different types of information, and to use the standards of authoritative, fact-based journalism as an aspirational measure in determining what to trust.”17
- Local journalists and civil society organizations (CSOs) can play a role in supporting fact production and media literacy at a more local level, especially in communities where these organizations have credibility rooted in the fabric of the local ecosystem. Local journalists and CSOs can become conveyors of truth, correcting misinformation, ideally before it takes hold.
- Regulators have an opportunity to intervene, introducing different incentives to influence the behavior of information consumers, providers, and distributors. Platforms need to strike a balance between meeting their business needs and answering to the public and its concerns, without stifling freedom of expression. The development of technology should not be hindered, but rules and guidelines can help create incentives for these companies to better protect users and make the internet a safer place.
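Referring back to the pause-button idea above, here is a minimal sketch of what “strategically introduced friction” might look like in code. The function, fields, and prompt are hypothetical illustrations, not Twitter's actual implementation.

```python
import time

def share_with_friction(post, user_opened_link, confirm):
    """Insert a pause-and-confirm step before a share goes through.

    post: dict with at least a "headline" field (hypothetical shape).
    user_opened_link: True if the user actually opened the article.
    confirm: callable that asks the user a yes/no question and returns a bool.
    """
    if not user_opened_link:
        # The extra prompt interrupts the reflexive tap-to-share behavior.
        if not confirm(f"You haven't opened '{post['headline']}'. Share anyway?"):
            return False
    time.sleep(2)  # a brief, deliberate delay to break the instant-reward loop
    return True

if __name__ == "__main__":
    post = {"headline": "5G towers cause illness, experts say"}
    # Stub prompt that always declines, standing in for a real UI dialog.
    shared = share_with_friction(post, user_opened_link=False,
                                 confirm=lambda question: False)
    print("Shared." if shared else "Share cancelled.")
```

The point of the sketch is the added step itself: whether the interruption is a prompt, a delay, or both, it gives the instinctive response a moment to lapse before the share completes.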
Media outlet leadership needs to be convinced to invest in long-term, harm-reducing efforts that will help combat mis- and disinformation. No single entity will be able to eradicate mis- and disinformation alone. Successfully overcoming this formidable challenge requires a global, multi-stakeholder approach – uniting individuals, civil society, business, and government behind this common purpose.
2 “A Very Short History of Mis- and Disinformation,” John Maxwell and Heidi Tworek, in https://collaborateup.com/news-literacy-and-misinformation-disinformation-in-the-era-of-covid-19/
3 https://collaborateup.com/news-literacy-and-misinformation-disinformation-in-the-era-of-covid-19/. The study was supported by Philip Morris International (PMI). CollaborateUp retained full independence in the research, writing, and editorial and peer review phases of the study.
4 Busby, M. (2018, May 8). Social media copies gambling methods “to create psychological cravings.” The Guardian.
5 Yankoski, Michael, Walter Scheirer, and Tim Weninger. “Meme Warfare: AI Countermeasures to Disinformation Should Focus on Popular, Not Perfect, Fakes.” Bulletin of the Atomic Scientists 77, no. 3 (May 4, 2021): 119–23. https://doi.org/10.1080/00963402.2021.1912093.
6 Glenski, Maria, Corey Pennycuff, and Tim Weninger. “Consumers and Curators: Browsing and Voting Patterns on Reddit.” IEEE Transactions on Computational Social Systems 4, no. 4 (December 2017): 196–206. https://doi.org/10.1109/TCSS.2017.2742242.
7 Kim, Grace. “Fake News: Analyzing News Sources.” Notre Dame de Namur University, n.d. https://library.ndnu.edu/fakenews/identifying.
8 Lewandowsky, S., Ecker, U. K. H., Seifert, C., Schwarz, N., & Cook, J. (2012). “Misinformation and its correction: Continued influence and successful debiasing.” Psychological Science in the Public Interest, 13, 106-131. https://journals.sagepub.com/doi/pdf/10.1177/1529100612451018
9 GCFGlobal. “Digital Media Literacy: How Filter Bubbles Isolate You,” n.d. https://edu.gcfglobal.org/en/digital-media-literacy/how-filter-bubbles-isolate-you/1/.
10 Lewandowsky, S., Ecker, U. K. H., Seifert, C., Schwarz, N., & Cook, J. (2012). “Misinformation and its correction: Continued influence and successful debiasing.” Psychological Science in the Public Interest, 13, 106-131.
11 Schwarz, N., Newman, E.J., & Leach, W. (2016). “Making the truth stick and the myths fade: Lessons from cognitive psychology.” Behavioral Science & Policy, 2(1), 85-95.
12 (Leiserowitz, Maibach, Roser-Renouf, & Hmielowski, 2011).
13 Brehm & Brehm, 1981
14 Pickel, 1995; Wolf & Montgomery, 1977.
15 Lewandowsky, S., Ecker, U. K. H., Seifert, C., Schwarz, N., & Cook, J. (2012). “Misinformation and its correction: Continued influence and successful debiasing.” Psychological Science in the Public Interest, 13, 106-131. https://doi.org/10.1177/1529100612451018
16 Lewandowsky, S., Ecker, U. K. H., Seifert, C., Schwarz, N., & Cook, J. (2012). “Misinformation and its correction: Continued influence and successful debiasing.” Psychological Science in the Public Interest, 13, 106-131. https://journals.sagepub.com/doi/pdf/10.1177/1529100612451018
17 need-for-teaching-news-literacy-in-our-schools
Caroline Logan is a Manager at CollaborateUp where she supports CollaborateUp’s portfolio of collective impact programs. In this capacity, she accelerates cooperation among governments, companies, and nonprofits as they tackle some of our world’s toughest challenges. Alongside the CollaborateUp team, Caroline has co-created solutions across a wide range of issues, from combating misinformation to combating wildlife crime. She also leads the firm’s private sector new business development. Over the last decade, Caroline has led the design and delivery of tailored workshops across industries. Prior to her role with CollaborateUp, she served as a Project Lead with McChrystal Group wherein she designed and delivered leadership development programs for participants across all levels – from corporate executives to emerging leaders. Caroline also served as a staff writer with BORGEN magazine, where she published on international development topics. Caroline is a Board Member of Globally, a nonprofit focused on building communities of impact and developing emerging leaders in global affairs. Caroline graduated with merit from The London School of Economics with a Master of Science in International Relations. She also earned a Bachelor of Arts in Political Science and English Literature from The University of North Carolina Wilmington.