People who watched the videos were better able to identify misinformation techniques than those who didn’t see the clips, as the team reports in a study published in the journal Science Advances today. “It’s very possible on social media to reduce vulnerability and susceptibility to being manipulated,” says Jon Roozenbeek, a postdoctoral fellow at the University of Cambridge and the lead author of the study. “Maybe not all misinformation, but you can demonstrably improve people’s ability to detect when they’re being manipulated online.”

Misinformation happens when people spread false information, even if they don’t intend to mislead others. It crops up regularly in daily life, says Sabrina Romanoff, a clinical psychologist who was not affiliated with the study, and it can be something as small as misremembering something you saw on television and telling someone else the wrong information. “You can think of it as analogous to the childhood game of ‘telephone,’” explains Romanoff, in which small errors become magnified through repetition. But through the megaphone of social media, wrong or misleading claims can become a harmful way to distort the truth.

Anyone can fall prey to misinformation online, Romanoff says, though people who click on stories consistent with their pre-established beliefs are more susceptible. Being prone to impulsivity and feeling overloaded with information could also make you more likely to spread fake news.

The current study focuses on inoculation theory, in which people learn to recognize these misinformation techniques ahead of time. Roozenbeek compares the approach to a vaccine: introducing a weakened virus or virus-like material primes your immune system to recognize and destroy the pathogen in the future. Unlike fact-checking, which takes a more retroactive approach, inoculation aims to keep people who encounter misinformation from spreading the content in the first place. “The idea was to inoculate people against these tropes, because if someone can successfully recognize a false dichotomy in content they’ve never seen before, they’re more resilient to any use of that particular manipulation technique on social media,” Roozenbeek says.

Roozenbeek and his team created five 1.5-minute videos covering common tactics used in online misinformation. To avoid bias toward any one group of people, the videos were designed to be nonpolitical, fictitious, and humorous. In the lab, the team invited over 6,000 participants and randomly assigned each to watch either a video showing how to identify misinformation techniques or a neutral video that acted as a control. Afterward, the participants were shown 10 made-up social media posts that were either manipulative or neutral.

Roozenbeek then partnered with Google to expand the study. As part of a public ad campaign on YouTube, nearly 23,000 people watched one of two anti-misinformation videos. One covered the use of negative and exaggerated emotional language to encourage clicks and belief in fake news (sample headline: “Baby formula linked to horrific outbreak of new, terrifying disease among helpless infants. Parents despair.”). The other covered presenting two points of view or facts as the only available options (the headline: “Improving salaries for workers means businesses will go bankrupt. The choice is between small businesses and workers. It’s simple mathematics.”).
Within a day of seeing the video ads, a random one-third of the people who had watched them were given a test question on YouTube asking them to identify the manipulation technique used in a headline or sentence. Those who had watched the videos were better able to pick out misinformation techniques and misleading content. “Finding a significant effect was actually quite surprising,” Roozenbeek says, because unlike in a controlled laboratory setting, people on the internet can easily be distracted by other ads and videos. There is also no guarantee people actually watched the videos: although the clips could not be skipped, viewers could have turned off the sound or switched to another tab. “But despite all that, we still found a large and robust effect.”

Roozenbeek and other psychologists are wrapping up another study that looks at how long it takes people to forget what they’ve learned from the videos. “It’s not reasonable to expect someone to watch a video once and remember the lesson for all eternity. Human memory doesn’t work that way,” he says. Early results suggest people might need a ‘booster shot,’ in the form of repeated video reminders. Another project in the works will use Twitter to see how watching these videos affects people’s behavior, specifically how much they retweet misleading content.

To stay vigilant against misinformation as you scroll through the internet, Romanoff warns about these six common tactics:
Fabricated content: Completely false or made-up stories
Manipulated content: Information is intentionally distorted to fit a person’s agenda
Misleading content: A person deceives others, such as by presenting an opinion as a fact
False context of connection: A person strings together facts to fit the narrative they are trying to convey, such as news stories using real images to create a false narrative of what happened
Satire content: A person creates false but comical stories as if they were true
Imposter content: A story is created with the branding and appearance of a legitimate news story but is false, such as someone creating a video using someone else’s logo to seem legitimate