Selective Sharing

Don’t miss the forest for the trees…

What is It?

Selective sharing involves promoting genuine, independent research to frame an issue in a way that is congenial to the propagandist. Drawing on independent research avoids people’s natural suspicion of interest-group-funded or -produced research (see: biased production). It also makes the strategy harder to detect as propaganda.

How Does Selective Sharing Work?

Selective sharing does two things:

  1. It creates the appearance of disagreement/uncertainty in the scientific literature.
  2. It frames information in a way that distorts its relevance or meaning.

Selective sharing is particularly effective for complex scientific and policy issues. For complex issues where there is a consensus of experts, there will usually be a deep and broad literature exploring the issue in a variety of ways. Also, in just about every body of scientific literature there are inconclusive studies, ranges of effect sizes, and ranges of predictions from different models. This means there will always be some studies whose findings run contrary to the general trend in the literature.

If I tell you that 50 studies found no relationship, or only an inconclusive relationship, between X and Y, you’d reasonably take that as strong evidence that there is no relationship. However, now suppose I also reveal that there are 2,500 studies demonstrating a strong relationship between X and Y, and that many of the positive studies are of better quality than the previous 50. Now what should you believe?
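To see why context matters so much, here is a minimal sketch in Python of how a Bayesian reasoner’s confidence shifts between the two scenarios. The likelihood ratios are invented purely for illustration; the point is only that the selectively shared slice of the evidence and the full body of evidence support opposite conclusions.

```python
import math

# A toy Bayesian update, purely for illustration. The likelihood ratios below are
# made-up assumptions, not estimates from any real literature: a positive study is
# assumed to be 3x more likely if the relationship is real, and a null/inconclusive
# study 1.5x more likely if it is not.
LR_POSITIVE = 3.0       # likelihood ratio contributed by one positive study
LR_NULL = 1 / 1.5       # likelihood ratio contributed by one null study

def posterior(prior, n_positive, n_null):
    """P(the relationship is real) after seeing the given study counts."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += n_positive * math.log(LR_POSITIVE)
    log_odds += n_null * math.log(LR_NULL)
    return 1 / (1 + math.exp(-log_odds))

# What the selectively shared slice of the literature suggests...
print(posterior(0.5, n_positive=0, n_null=50))      # ~0: looks settled against a relationship
# ...versus what the full literature suggests.
print(posterior(0.5, n_positive=2500, n_null=50))   # ~1.0: overwhelmingly in favor
```

Whatever specific numbers you plug in, the lesson is the same: the 50 null studies only look decisive when they are presented without the rest of the literature.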

Propagandists seize on these outlier independent studies and push them to the media through their PR institutions, natural allies, and online channels. Essentially, they are boosting the prevalence of the outlier studies in the public information environment (see: Frequency bias). This in turn gives the public the impression that an outlier study is representative of the scientific literature on the issue. The public (or subgroups of the public) frequently hears about the outlier studies, but those studies are rarely presented within the context of the full body of literature.

Selective sharing takes other forms too, but the basic strategy is the same: misleading framing of information. Sometimes a subsection of a study will be cherry-picked while its broader conclusion is ignored. One of the most common methods, however, is to present other possible causes of a phenomenon in addition to the one established in the literature.

Let’s take a historical example. By the early 1950s it was well established that tobacco caused lung cancer. By the mid-1970s even (unpublished) internal tobacco industry research acknowledged this relationship. Nevertheless, the tobacco industry worked hard to create the illusion of scientific uncertainty and disagreement. A major part of this strategy involved selective sharing. This is how they did it:

Anytime an independent study found other causes of lung cancer (for example, asbestos, various industrial chemicals, or pollution), the Tobacco Institute (the PR arm of the tobacco industry) would spend a lot of money publicizing the study and promoting it to the media. The purpose was to create doubt and confusion about the scientific consensus: if many other things cause lung cancer, then how can we be sure that any given case of lung cancer was caused by tobacco and not one of these other variables? (If you ignored the vast literature and the consensus among every major scientific body at the time, that is.)

We can see this strategy all over the place once we recognize it. Here are some common instances:

  1. Anytime there are wildfires, you will see a massive increase in online articles pointing to all the other causes of and contributing factors to wildfires besides climate change. Yet they never share the tens of thousands of studies finding that human-caused CO2 emissions are the primary driver of climate change, nor the articles arguing that climate change creates the conditions for larger and more frequent fires, regardless of how they’re started.
  2. Climate change deniers will also share and publish articles on the other variables that can affect climate: “It’s the sun cycles! The climate has always changed!” Again, they never share the tens of thousands of studies finding that human-caused CO2 emissions are the primary driver.
  3. Opponents of vaccines fill our newsfeeds anytime there is even the weakest correlation between a child getting sick and their having been vaccinated. Yet we don’t see them posting articles showing that the vast majority of people have no adverse effects or articles documenting the virtual elimination of many previously deadly diseases.
  4. The fossil fuel industry (and its allies) will often share memes or studies showing how many birds die from windmills. The implied message is that people who support wind energy are hypocrites: they purport to care about nature, yet look at all the bird deaths caused by wind energy. Surprisingly, these same people never share the studies showing how many birds die from fossil fuel pollution.

Again, what makes selective sharing such a successful strategy is that everything the propagandist shares usually comes from a credible independent source and, most importantly, nothing the propagandist selectively shares is false. However, selective sharing misleads because it frames the issue in a way that distorts what’s really going on. (See the last section of this page for a more detailed explanation of how the framing is misleading).

Modeling Selective Sharing in an Information Ecosystem

In The Misinformation Age, O’Connor and Weatherall (2019) used a Bayesian model (the Bala-Goyal model; Bayesian models represent the way we update our beliefs, or credence levels, as we become aware of new evidence) to see how beliefs spread from a scientific community to policymakers and the public. They also modeled what occurs when propagandists infiltrate the information ecosystem. They found that in a wide variety of cases, a propagandist using selective sharing alone (without biased production) can cause naive policymakers to converge on the false belief despite there being a scientific consensus to the contrary (pp. 112-113). The effect is even stronger when biased production and selective sharing are combined.

So far, the Bala-Goyal model of the epistemic community includes policymakers who have some background knowledge of the science. However, when we extend the model to members of the public, who have little working background knowledge and little to no direct contact with the scientific community, outcomes are even worse with respect to converging on the false belief. Selective sharing is particularly effective at manipulating the public because the shared research comes from independent sources, giving it the veneer of legitimacy. The public’s guard is down.
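To make the mechanism concrete, here is a heavily simplified sketch of a Bala-Goyal-style community with a selective-sharing propagandist. This is not O’Connor and Weatherall’s actual code, and none of the parameters come from the book; they are assumptions chosen only to make the dynamic visible.

```python
import random

# Illustrative sketch: scientists test a new action B whose true success rate is
# unknown; a propagandist relays only the genuine results that look bad for B to a
# policymaker who runs no experiments of their own.
random.seed(0)

P_GOOD, P_BAD = 0.6, 0.4        # the two rival hypotheses about how well B works
TRUE_P = P_GOOD                 # in fact, B really is the better action
N_TRIALS, ROUNDS, N_SCIENTISTS = 10, 50, 20

def bayes_update(credence, successes, n):
    """Update P(B is good) after observing `successes` out of `n` real trials."""
    like_good = (P_GOOD ** successes) * ((1 - P_GOOD) ** (n - successes))
    like_bad = (P_BAD ** successes) * ((1 - P_BAD) ** (n - successes))
    numerator = credence * like_good
    return numerator / (numerator + (1 - credence) * like_bad)

scientists = [random.uniform(0.01, 0.99) for _ in range(N_SCIENTISTS)]
policymaker = 0.5               # only hears what the propagandist passes along

for _ in range(ROUNDS):
    # Scientists who think B is promising test it and publish their real results.
    results = []
    for credence in scientists:
        if credence > 0.5:
            results.append(sum(random.random() < TRUE_P for _ in range(N_TRIALS)))
    # Every scientist updates on every published result.
    for r in results:
        scientists = [bayes_update(c, r, N_TRIALS) for c in scientists]
    # The propagandist relays only the genuine results that look unfavorable to B.
    for r in results:
        if r / N_TRIALS < 0.5:
            policymaker = bayes_update(policymaker, r, N_TRIALS)

print(f"scientists' average credence that B is better: {sum(scientists) / N_SCIENTISTS:.3f}")
print(f"policymaker's credence that B is better:       {policymaker:.3f}")
```

With these (arbitrary) settings, the scientists’ average credence climbs toward 1 while the policymaker’s collapses toward 0, even though every result the policymaker saw was a real experimental outcome. Nothing shared was false; it was just selected.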

How to Avoid Falling Victim to Selective Sharing

To avoid falling victim to the distorting effects of selective sharing, we need strategies that properly contextualize the information being shared. This is the main problem for the non-expert: if you aren’t an expert, then almost by definition you don’t have a deep working knowledge of the relevant body of scientific literature that would allow you to properly frame the shared studies.

Luckily, there are a few shortcuts you can take:

  1. Find out whether there is a strong, medium, or weak consensus among relevant experts. If there’s a strong or moderately strong consensus, the odds favor the expert view.
  2. Find out whether there’s a trend in the literature. Look at meta-analyses and systematic reviews rather than individual studies.

Critical Thinking Bonus

Selective sharing is a subspecies of slanting by omission: important evidence is left out that would allow someone to properly evaluate the matter. To avoid falling prey to slanting by omission (and hence selective sharing) we must employ the total evidence requirement. The total evidence requirement means answering, “What are all the variables we would need to know about to make a reasonable judgment on this issue?” This is tricky and takes training. It’s especially hard if you aren’t already an expert on the issue, since you won’t always know what questions to ask. But this doesn’t mean we can’t make some progress.

To illustrate how to use the total evidence requirement, let’s use the common example of selective sharing used by the fossil fuel industry against wind energy. The selective sharing strategy makes the following inference:

Windmills kill birds, therefore we should not use windmills.

The first thing we need to do is extract the implied premise:

If a power source kills birds, then we should not use it.

Is this true? Is this the only variable that matters in selecting a power source? Let’s turn our brains off for a second and suppose it is: the only variable that matters for selecting a power source is whether it kills birds.

The total evidence requirement demands that we also learn how many birds per GWh (gigawatt-hour) our other energy alternatives kill. Knowing about a single source isn’t enough; we need to make comparisons. It turns out that wind power kills roughly five times fewer birds per GWh than fossil fuel power stations do. So, by the fossil fuel industry’s own lights, wind power is better than fossil fuel power.

Of course, bird deaths per GWh aren’t the only thing that matters. To meet the total evidence requirement we’d need to make a list of all the relevant metrics we’d have to compare in order to decide which power source is best: cost per GWh, CO2 emissions per GWh, particulate emissions per GWh, environmental costs per GWh, health costs per GWh, and so on.
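For readers who like to see the checklist made explicit, here is a minimal Python sketch of the idea. The sources, metrics, and numbers are placeholders invented for illustration, not figures from any real study; the point is only that the judgment should come from the whole table, not from a single cell of it.

```python
# A sketch of the total evidence requirement as a comparison table. The per-GWh
# values below are hypothetical placeholders chosen only to show the structure of
# the comparison.
sources = {
    "wind":        {"bird deaths/GWh": 1.0, "CO2 tonnes/GWh": 10,  "cost/GWh": 50},
    "fossil fuel": {"bird deaths/GWh": 5.0, "CO2 tonnes/GWh": 900, "cost/GWh": 60},
}

def better_on(metric, a="wind", b="fossil fuel"):
    """True if source `a` scores lower (i.e., better) than `b` on a single metric."""
    return sources[a][metric] < sources[b][metric]

def better_on_total_evidence(a="wind", b="fossil fuel"):
    """Only call `a` better once it has been compared with `b` on every listed metric."""
    return all(better_on(m, a, b) for m in sources[a])

# The selective-sharing framing stops at one number for one source ("windmills kill
# birds"). The total evidence requirement forces the comparison across sources and
# across every metric we think is relevant.
print(better_on("bird deaths/GWh"))   # True (with these placeholder values)
print(better_on_total_evidence())     # True, but only after checking the whole list
```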

There’s a lot we need to know before we can make any reasonable judgment about which power source is preferable. In other words, merely knowing that windmills kill birds doesn’t come remotely close to meeting the total evidence requirement. But without applying the total evidence requirement, it’s very easy to be misled.

For more on slanting by omission, the total evidence requirement, and some practice exercises, go to this module in my free online critical thinking course.