Use Change Strategies That Are Proven to Be Effective
A lot of infighting stems from disagreements over which change strategy, or method for getting others to change, advocates should use.
Advocates often argue passionately, based on their own intuition, in favor of a change strategy that has little or no scientific support. We may think, “I was shamed and changed, so shaming must be effective,” or, “Being a part of this protest feels so powerful that protests are surely effective,” and so on.
But study after study has shown that, in many circumstances, our intuition can’t be trusted. All of us are subject to a variety of biases that can cause us to hold strong opinions that simply aren’t rational.
When it comes to choosing a change strategy, advocates can avoid a lot of arguing by asking the question “Has the strategy been proven to be effective?”
Although much more research needs to be done, there is a lot of information on what change strategies have been shown to work. Knowing what these strategies are can go a long way toward ending arguments among advocates. For more information, we recommend the Effective Activist Guide.
However, sometimes advocates resist using scientifically supported strategies. So it’s important to understand why this resistance occurs and how to navigate it.
Why advocates may reject research findings
One reason advocates may reject research findings is that the findings don’t align with their ideology. For example, an advocate who believes in promoting the immediate, total abolition of a harmful practice may reject findings that demonstrate the effectiveness of promoting incremental change.
Another reason advocates may reject research findings is that they don’t trust the science.
Distrust of science
Advocates may understandably be skeptical of science, given our knowledge of how institutions—including scientific institutions, such as nutrition, medicine, and psychology—often reflect and reinforce the very injustices we’re working to transform.
We know, for instance, that when people study nutrition, they study carnistic nutrition, just as at one point in time (and, to some degree, still today) people who studied medicine and psychology inevitably studied racist medicine and heterosexist psychology.
But these same institutions have also played (and continue to play) a critical role in social transformation.
For example, pro-plant-based food scientists are at the forefront of shifting public attitudes toward eating animals, and medicine and psychology have played leading roles in debunking racist theories and depathologizing nondominant sexual and gender orientations and identities.
Although it makes sense to be skeptical of science, problems arise when this skepticism turns into denial. Science denial is the rejection of facts that are well supported empirically and that are a matter of consensus among those in the scientific community. Science denial tends to go hand in hand with support for unfounded, alternative explanations of phenomena. Common examples include the denial that climate change exists or that it’s driven by human activity.
The three false arguments underlying science denial
The first argument is that a particular scientific conclusion isn’t credible, either because of flawed methodology or because it’s not universally accepted.
Of course, sometimes research methods are flawed, and some research doesn’t have widespread acceptance. However, arguments that attempt to deny science often claim flawed methodology and nonuniversal acceptance when, in fact, the methodology is sound and there’s consensus on the matter among those in the scientific community. These arguments sow seeds of doubt in nonscientists’ minds. That doubt is then exacerbated, and sometimes deliberately exploited, because competent scientists present their conclusions as always open to reexamination, so that the conclusions can change if new evidence is uncovered.
The second argument is that the researchers are not objective—that they have a personal agenda (to make money, to promote an ideology, etc.). This is sometimes the case, of course, such as when research that’s funded by Big Oil claims that climate change isn’t largely driven by the burning of fossil fuels. But this argument is often used without any basis for believing that the research is compromised by self-interest.
The third argument is that the view being presented is unbalanced—that alternative views aren’t given equal attention in, for example, the media or educational curricula. This argument makes it seem like there are two equally valid claims when, in fact, one claim is based on a scientifically validated conclusion and scientific consensus, while the other, such as the claim that the earth is flat or that the climate crisis has nothing to do with human activity, is based on neither.
The main tactics of science denial
According to the National Center for Biotechnology Information, some of the main tactics that support science denial—which overlap with the arguments described above—are the following:
Drawing on conspiracy theories—theories claiming that scientific evidence and consensus are a conspiracy to mislead the public. Conspiracy theories question the quality and validity of scientific research and conclusions. For example, Dr. Andrew Wakefield and his colleagues authored a 1998 paper, based on their fraudulent study, in which they claimed that the MMR vaccine may cause autism. Wakefield, whose claims have been disproven by a number of large epidemiological studies and who was barred from practicing medicine in the UK, nevertheless galvanized the anti-vaccination movement and continues to be heralded as a hero among those who are skeptical of vaccines.
Invoking fake experts—unqualified individuals or institutions who promote views that contradict those based on established knowledge and who often attack the legitimacy of established experts by, for example, questioning their credentials, ethics, and motives. The tobacco industry, for instance, hired scientists to attack research demonstrating the connection between smoking and cancer.
Cherry-picking data—looking at only the few studies that challenge the established view, while ignoring the many studies that support it. For example, the Wakefield et al. paper that galvanized the anti-vaccination movement was based on a single (fraudulent) study.
Holding impossible expectations for research—focusing on and exaggerating the uncertainty in research. For example, people who don’t believe in human-caused climate change will highlight any reservations expressed by scientists, even though competent scientists are careful to always acknowledge that there’s some degree of uncertainty in any given research.
Misrepresenting established science and scientists—making the science or scientists appear in opposition to the values or needs of the public. This is done by, for example, suggesting that a policy that would be adopted due to scientific evidence, like prohibiting people from smoking in cars with young children, is against personal freedom and that the scientists’ personal views are in alignment with oppressive restrictions.
Trusting the scientific method
Of course, ideology can and does influence scientists, sometimes leading to inaccurate and dangerous conclusions. The Nazis’ scientific conclusions about craniometry and eugenics are perhaps the most commonly cited examples of this phenomenon.
The practice of science, however, is not the problem; the scientific method has been designed with the explicit intention to ensure objectivity. It’s when science isn’t practiced as intended that problems arise. So the way to effectively challenge scientific conclusions is by using the scientific method. If, for example, people who claim that climate change isn’t caused by human activity want the scientific community to take that theory into consideration, they need to show evidence that’s based on verifiable and replicable research.