Russia Used Twitter to Spread Misinformation during the 2015 MU Protests

At the height of the 2015 protests, Columbia provided a perfect breeding ground for a Kremlin Twitter disinformation campaign, said Mike Kearney, an assistant professor at the Missouri School of Journalism.

The campaign used the same strategy later employed to influence the 2016 presidential election, according to a report in the winter 2017 issue of Strategic Studies Quarterly, a peer-reviewed journal of the U.S. Air Force. It was deployed to incite confusion and polarization across the United States and to degrade public trust in the news media.

The effect in Columbia, though difficult to quantify, appears to have recast the legacy of the peaceful protests as a fictional one of violence and rioting, causing lasting damage to MU's enrollment, funding and public perception.

Kearney, who wrote his doctoral dissertation on Twitter behavior during the 2016 presidential election, said news of Kremlin interference at MU is not surprising. 

The Kremlin propaganda effort would have been ineffective if it had attempted to invent an entire narrative, according to the report, "Commanding the Trend: Social Media as Information Warfare," by Air Force Lt. Col. Jarred Prier. Instead, the campaign latched onto existing extreme viewpoints and made them seem more palatable to moderates, then slowly wove more malevolent fiction into public understanding.

"People are trying to cause chaos and are pandering to our fears and misunderstandings about race," said Berkley Hudson, 2015-2017 chair of the MU Faculty Council Race Relations Committee and a scholar of American media history. "I just think they're exploiting fear and misunderstanding that already exists here, and there's plenty of it without them."

"People wouldn't be surprised to see people have racist thoughts about the world," Kearney said, but when there's many more of them than expected, it skews perceptions of what others think about the topic. That shifts the larger public understanding of a topic in a more extreme direction.

There was general racial uneasiness in the U.S. after a string of police shootings of unarmed African-American men, including in Ferguson, two hours away, which led to riots. Columbia is a heavily media-saturated town for its size, has a high percentage of people ages 18 to 29, the demographic most likely to use social media, and is generally a Democratic-leaning island in a mostly Republican state.


News media and young people are more likely to be social media savvy than the average adult, Kearney said, but during an event like the 2015 protests, when everyone is turning to Twitter to understand what's going on, people become vulnerable to misinformation. Despite their experience on social media platforms, young people and journalists are, on balance, more likely than others to be affected by targeted disinformation, Kearney said, largely because of how much content they are exposed to.

People usually build an understanding of the world gradually through social media as they take in content, Kearney said, but during crises Twitter takes on outsized influence. People are confused and checking Twitter for breaking news and firsthand accounts from the scene, so such situations are inherently more vulnerable to manipulation.

Journalists are also most likely to turn to social media in pursuit of breaking news, according to "The American Journalist in the Digital Age," a 2014 Indiana University study cited in Prier's report, and they grow more likely to use social media posts as sources when they lack official or corroborated ones.

When what's happening is especially emotionally charged, Kearney said, people are even more likely to quickly hit "retweet" without doing their due diligence. When there's momentum and a growing chorus of people agreeing with a post, people are naturally inclined to think of that post as more reliable. This creates an expanding loop of disinformation.

Actual protests on campus combined with fraudulent stories of riots, looting and violence to give the appearance of chaos. The tension on campus created a highly emotionally charged environment in which disinformation was able to take root.

Prier's report looked back on events at MU in early November 2015:

A Twitter account with the handle "Fanfan1911" told the world that at MU the "cops are marching with the KKK" and that they had beaten up his little brother, according to Prier's report. Fanfan's tweet included a photo of a bruised African-American child as evidence.

Payton Head, then president of the Missouri Students Association, posted the same day that the KKK was "confirmed" on campus and that he was working with the MU Police Department, the Missouri State Highway Patrol and the National Guard. Head soon deleted his post, apologizing for the misinformation — but it was enough for people across the country to think there was widespread violence at MU.

Missouri University of Science and Technology student Hunter Park posted on Yik Yak, a social media app where posts are typically anonymous, saying, "Some of you are alright. Don't come to campus tomorrow," and that he planned to "shoot every black person I see." The language was inspired by a mass shooter in Oregon who had posted a similar message on the website 4chan the night before committing violence.

No violence occurred. The only related arrest was of Park, who was in Rolla. The photo of the child in Fanfan's tweet was taken from a 2014 Huffington Post story about police brutality in Ohio. The KKK was nowhere to be seen. And the Fanfan account was a bot, an automated account not operated by a human being.

Along with about 70 other accounts Prier monitored, Fanfan posted divisive, false information about trending news stories. The network of automated accounts reposted one another, lending an increased appearance of legitimacy: users could see under each tweet that dozens or hundreds of others had cosigned it. The accounts swam in streams of preexisting trending topics, so Twitter could not easily identify them as fake users.

From 2015 through 2016, the Kremlin bot accounts posted about neo-Nazis and the KKK at MU, fictional rapes and murders committed by Muslims in Germany, and right-wing conspiracy theories such as Pizzagate, in which John Podesta, former Hillary Clinton presidential campaign chairman, was falsely said to be running a child sex trafficking operation out of a pizza parlor, according to the report.

"They throw the kitchen sink at the wall and something sticks," Kearney said. Handlers of the bots aim for stories that are extreme enough to be inflammatory, he said, but grounded enough to be believable. 

As Prier monitored them, the network of bot accounts posted in English, then German, then back to English. They posted false information on explosively controversial trending social issues such as police violence, anti-Obama sentiment and right-wing conspiracy theories in the United States, as well as Islamic immigration in Germany. Their campaigns to shift public narratives "happened to align with noted Russian influence operations in Europe and eventually in the U.S. presidential election," the report stated, leaving little doubt of the Kremlin as the source.

After the protests, there was a failure to strongly and clearly communicate what actually happened and "who we are at Mizzou," Hudson said, which led to the emergence of a false narrative that students took over the school, ultimately causing blame to land on the protesters. For many people, Hudson said, this had the effect of delegitimizing entirely legitimate complaints of racism and mistreatment.

The report "provided some clarity as well as confirmation of the origin of the fake news out there at that time," MU spokesman Christian Basi said. "The vast majority of rumors were coming from people not on campus just trying to create a divisive atmosphere."

"If my mother were alive, she'd say it was the devil," Hudson said of the disinformation effort. 

"This could’ve all been over in, like, three days," said Rep. Donna Lichtenegger, R-Jackson, chair of the Higher Education Committee, in a November interview about the protests.

The leadership resignations, the sustained critical press coverage and the football team strike should never have happened, Lichtenegger said, and false reports throughout the protests were never corrected, which caused lasting damage to the school.

"The students on campus had no idea what was going on — they were never in danger," she said. "So, I think a lot of it was leadership, a lot of it was the press, and I think everything was handled wrong. ... The legislature never listened to me so that I could tell them exactly what did happen."

In spring 2016, millions of dollars were cut from the University of Missouri System budget as punishment for its handling of the protests.

"Since the beginning of the story, there have been distortions of what happened," Hudson said.

There seem to be two basic schools of thought on how the public can fight disinformation efforts going forward: Individuals can develop their ability to read the news critically and skeptically, and social media platforms can work to filter out disinformation and bots.

"Ernest Hemingway said the best tool a writer can have is a B.S. detector," Hudson said. "Well, that's the best tool a reader can have, too." 

"Not everyone is an equal purveyor of news and information — and you're a fool if you think they are," he said.

Some of the best ways to tell if something is fake are to evaluate the source, read past the headline, seek corroborating sources and consider whether your own biases are tinting your perception, according to the International Federation of Library Associations and Institutions.

"People are just getting more savvy," Kearney said of recognizing fictional news. "In the future, we'll just get better at it."

It's also important for journalists to hold themselves to a higher standard when evaluating a source's truthfulness, he said, because disinformation spread from their position is far more damaging.

"There's pressure on journalists to be responsible in their social media use," Kearney said. "There's not pressure on the average citizen to do the same."

It could be particularly important for the public to learn to combat this problem going forward, he said. "If you're a politician or business owner, you now have a playbook," he said. "If someone wanted to repeat what happened, they could." 

Some responsibility ultimately falls on Twitter and Facebook to prevent that from happening, Kearney said. "I think social media companies should own up to that." 

Twitter has grown more diligent at identifying and banning fake accounts since 2016, Kearney said, mostly as a result of all the negative attention following the election. Ultimately, however, it's in Twitter's interest to be able to manipulate, to some extent, what people discuss on its platform. 

The social media platform will work hard to stop third parties from manipulating conversation without paying for advertising, Kearney said, but it doesn't have much incentive to go beyond that.

"That's how they make money." 