The spread of false information is a leading contributor to infighting, and it’s epidemic. When we’re better able to assess the credibility of information and its sources, we’re better positioned to help prevent the spread of false information.
“False information” refers to two kinds of information: misinformation and disinformation. Misinformation is information that’s simply inaccurate. For example, misinformation might be an Instagram post in which someone who believes the earth is flat states that belief, or a tweet circulating a rumor that an NGO leader has misused funds, posted by someone who doesn’t know better. Disinformation is intentionally false information that’s designed to deceive or mislead.
False information campaigns
Among justice advocates, false information often takes the form of discrediting a movement influencer, such as an organizational leader—framing them as, for example, in opposition to the cause they’re working for.
Examples include unfounded claims that an LGBTQ+ leader is running a campaign simply because they want to make a name for themselves, that an environmental organization doesn’t care about its mission but just wants to garner donations, or that a BIPOC TikTok influencer is motivated by the money they get from corporate sponsorships rather than by a commitment to anti-racism.
Someone who spreads false information about other advocates in their movement may bully people into supporting their false-information campaigns. This person may, for instance, threaten to “expose” people as being aligned with the supposedly offending advocates if these people don’t comply with demands.
For example, animal rights conference organizers may be told that if they host a particular speaker who doesn’t support the outreach strategy the mis- or disinformers believe in—like a vegan who advocates meat reduction rather than elimination—then the organizers will be painted as “enemies of the movement.” This can, of course, be extremely intimidating.
(It’s important to note that, in some situations, making others aware that someone isn’t aligned with a cause they claim to be an advocate for—such as a lobbyist for intensive animal agriculture who claims to be a climate advocate—is valid.)
Not only does false information create division among advocates, it’s also a key reason why counterproductive outreach strategies (such as the shaming, aggressive confrontational approach) continue to be used, and why productive strategies (such as encouraging people to make smaller changes that can lead to larger ones, like reducing their meat consumption over time) continue to be trivialized and even ridiculed.
Identifying false information
Because most false information is spread online, where it falls outside the jurisdiction of any single country’s laws, those who are directly impacted are often powerless to have the information removed. Since false information is like a virus with no known cure, our best bet is to inoculate ourselves against it and to commit to not spreading it to others. Learning to identify false information is therefore key.
According to Brian Southwell, a director at RTI International (Research Triangle Institute), a global nonprofit research institute, we should be especially skeptical of the validity of information that has the following characteristics:
It seems too good to be true. For example, is an organizational leader claiming that using a particular outreach method will instantly convince anyone, no matter how resistant, to support the cause?
It plays to our own biases. Research shows that we’re much less critical of information that’s in alignment with what we already accept as true, so we should be extra critical of information that reinforces our beliefs rather than challenges them.
It’s also important to understand the psychological and social factors that contribute to the spread of false information. In Belonging Is Stronger Than Facts: The Age of Misinformation, Max Fisher explains how our group identity—the identity we hold as members of an ingroup, such as Democrat, Republican, feminist, queer, or vegan—plays a key role in how we relate to information. We’re more likely to believe false information if we’re told that people in our ingroup believe it. This is in part because, as mentioned above, we’re less critical of information that reflects our beliefs and biases, and in part because we’re sensitive to rejection and fear that if we don’t buy into the party line, we’ll be rejected by other members of our group.
So, for example, imagine that a blogger expresses a different opinion about the language used to describe pregnancy (“people who can get pregnant” versus “pregnant women”), and a prominent feminist responds by publicly shaming and berating the blogger, calling them “anti-feminist.” Other feminists might then feel silenced, holding back their own viewpoints for fear of being berated or rejected by others in the movement.
Brendan Nyhan, a political scientist and author of a 2021 study on false information, says that in attempting to prevent the spread of false information, “the best approach is to disrupt the formation of linkages between group identities.” In other words, it’s important not to reinforce the association between a particular attitude or behavior and an identity. For example, if we say, “Even though wearing masks helps prevent the spread of disease, conservatives are opposed to wearing masks,” we reinforce the link between mask rejection and conservatism—so someone who identifies as conservative assumes that they should be opposed to wearing masks.
Polarization and false information
Nyhan also points out that an individual with more polarized views is more prone to believing falsehoods. So, for example, an advocate who holds a more fundamentalist position in their group or movement (in that they strive to adhere more strictly to the founding principles on which the group or movement is based, and they interpret its ideology more literally) may be more likely to believe false claims, such as the claim that a leader who doesn’t hold a fundamentalist position is harming the cause.
Research also shows that the more polarized a system (e.g., a country, community, or movement) becomes, the more its members feel distrustful of others, of information, and of authorities—and, therefore, the more prone they are to believing false information and the more they cling to their group identities.
For example, a conservative person who lives in a politically polarized town might feel distrustful of their liberal neighbor. As a result, they may be more likely to believe a malicious rumor being spread about this neighbor, and they may also feel the need to be more brazen about their beliefs (such as by displaying conservative lawn signs).
Interestingly, the more we cling to our group identities, the more we crave and seek out information that affirms that we’re in opposition to those on the other pole. We’re less concerned with accuracy than we are with maintaining our sense of “us versus them” and proving that “we” are right and “they” are wrong.
And when leaders in a group encourage their followers to adopt an ingroup mentality—the belief that the values of the ingroup are superior to those of other groups—and they feed the group members’ craving for false information that affirms this mentality, the problem is exacerbated. One way to interrupt the flow of false information, then, is to make sure that influencers, including media sources, don’t reinforce problematic false messages.
Whatever form false information takes, it’s vital that we do all we can to prevent its spread. By sharing only information that we know to be accurate, we can decrease our contribution to infighting and also help make our group or movement more impactful.