
Why Ukraine war misinformation is so hard to police

IB Photography/Adobe Stock
The visual nature of much of the misinformation spreading about the war in Ukraine makes it especially hard to detect and counter

By Clare Duffy and Rachel Metz, CNN Business

Russia’s invasion of Ukraine is unfolding online like no war in history, providing a real-time stream of information on social platforms like Twitter, Facebook and TikTok. But with that unique view into the conflict comes a flood of misinformation that’s especially hard to root out — effectively creating a digital fog of war.

In recent weeks, for example, clips from video games and scenes from old wars presented as views from Ukraine’s front lines have gone viral alongside legitimate images. Heart-wrenching videos of families torn apart have been shared thousands of times, then debunked. And official government accounts from Ukraine and Russia have each made unfounded or misleading statements, which quickly get amplified online.

In some ways, it’s the latest in a long list of recent crises — from the pandemic to the Capitol riot — that have spurred the spread of potentially harmful misinformation. But misinformation experts say there are key differences between the war in Ukraine and other misinformation events that make false claims about the conflict especially insidious and difficult to counter.

Perhaps most notably, Ukraine-related misinformation has been highly visual and is spreading faster across borders, misinformation experts told CNN Business. The direct involvement of Russia — which is known for spreading misinformation online aimed at sowing discord and confusion — adds an extra layer of complexity. The emotional and visceral nature of the content also makes social media users quick to hit the share button, despite the complex misinformation landscape.

“People feel helpless, they feel like they want to do something and so they’re online scrolling and they’re sharing things that they think are true because they’re trying to be helpful,” said Claire Wardle, a Brown University professor and US director at misinformation-fighting nonprofit First Draft News. “[But] in these moments of upheaval and crisis, this is the time that we are worst at figuring out what’s true or false.”

A ‘torrent of images and videos being shared’

Unlike the ongoing Covid-19 pandemic, during which many viral false claims have been text-based, much of the misinformation about the war in Ukraine has been in the form of images and videos. And those visual formats are harder and more time-consuming for both automated systems and human fact checkers to evaluate and debunk, to say nothing of everyday social media users.

To vet an image or video, fact checkers typically start by searching the web to see if it has been posted previously, indicating that it is not from the current crisis. If it does appear to be recent, they can use tools to do things such as analyze shadows or compare the terrain shown to satellite images to confirm whether it was truly shot in the location it purports to show.
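That first step, checking whether a viral image has circulated before, can be approximated in a few lines of code. Below is a minimal sketch using the open-source Python imagehash library, which uses perceptual hashing to catch near-duplicates that survive resizing or recompression; the file names and the tiny reference index are hypothetical stand-ins for the large archives professional fact checkers query.

# A minimal sketch of near-duplicate image detection via perceptual hashing.
# Requires: pip install pillow imagehash. File names here are hypothetical.
from PIL import Image
import imagehash

# Hashes of images already verified or debunked (in practice, a large index).
known_images = {
    imagehash.phash(Image.open("archive/2018_marines_homecoming.jpg")): "2018 US Marines homecoming video",
}

def find_prior_match(path, max_distance=8):
    """Return a label if the image is perceptually close to a known one."""
    candidate = imagehash.phash(Image.open(path))
    for known_hash, label in known_images.items():
        # Subtracting two hashes gives their Hamming distance; a small value
        # means near-duplicates, even after re-encoding or slight edits.
        if candidate - known_hash <= max_distance:
            return label
    return None

print(find_prior_match("viral_frame.jpg") or "No prior match found")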

“Obviously, that’s going to be much more time-consuming,” said Carlos Hernández-Echevarría, public policy and institutional development coordinator at the Spain-based fact-checking organization Maldita.es. By comparison, he said, “plainly false narratives about vaccination, say, like, ‘They create autism’ … all that stuff is pretty easy to debunk.”

And while anyone can run a photo through a reverse-image search engine like Google Image Search or TinEye to see where it may have popped up online in the past, it can be a lot harder for people to find tools to verify videos, noted Renée DiResta, technical research manager at the Stanford Internet Observatory. You might be able to track down the display thumbnail that shows up with the video, she said, but it’s trickier to find an entire video via reverse image search.
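One common workaround is to grab a representative still from the clip and run that through a reverse-image search instead. A minimal sketch of the frame grab, assuming the opencv-python package and a hypothetical file name:

# Extract a single frame from a video so it can be fed to a reverse-image
# search engine such as TinEye or Google Image Search.
# Requires: pip install opencv-python. File names here are hypothetical.
import cv2

def extract_frame(video_path, out_path, second=1.0):
    """Save the frame at `second` seconds into the video as an image file."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, second * 1000)  # seek by timestamp
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read a frame from {video_path}")
    cv2.imwrite(out_path, frame)

extract_frame("viral_clip.mp4", "viral_frame.jpg")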

This difficulty is clear in the deluge of videos moving through apps such as TikTok. The clips include not just misinformation in its original form but also reaction videos from users that perpetuate it.

“I’ve opened TikTok a few times and the video that pops up is something that is not an accurate presentation of what it claims to be,” DiResta said. “Facebook and Twitter have had some rather extensive experience in content moderation during crises; I think TikTok is finding itself having to get up to speed very quickly.”

The speed with which false claims and narratives are spreading from one country to the next has also increased — from several weeks in the case of the pandemic and other recent crises to just a matter of days or, in some cases, even hours now, Hernández-Echevarría said. This may be due in part to the fact that so much of the content is visual, and thus less reliant on a shared language. Images and videos also often have a more emotional appeal than text-based posts, which experts say makes users more likely to share them.

“Right now there’s this torrent of images and videos being shared,” said Brandie Nonnecke, director of the Center for Information Technology Research in the Interest of Society (CITRIS) Policy Lab at UC Berkeley. “The more the imagery moves you, the quicker it’s going to move through social media networks.”

In one recent example, a video purporting to show Ukrainian soldiers saying emotional goodbyes to their families was viewed thousands of times on Instagram and was shared across various Facebook pages. However, AFP Fact Check found that the video was from 2018 and showed US Marines returning home to their families. Instagram and some pages on Facebook have since placed a label on the video warning users that it is “partly false information,” but the video is available on at least one other Facebook page without a label. (Facebook-parent Meta did not immediately respond to a request for comment.)

Coordinated efforts by Russia to spread false narratives have also become more overt and prominent since the war began. A false claim by Russia that the United States is developing bioweapons in Ukraine, and that Russian President Vladimir Putin has stepped in to save the day, has recently reemerged and gained traction, first among QAnon adherents and, more recently, on more mainstream platforms and even among some lawmakers. There is also a new, troubling trend of videos that purport to debunk pro-Ukrainian images and videos but are themselves fabricated, designed to sow confusion and doubt about Russia’s actions, ProPublica reported last week.

Some on the Ukrainian side have spread misleading information. Earlier this month, as Russian forces were firing on Ukraine’s Zaporizhzhia nuclear power plant, Europe’s largest, Ukrainian Foreign Minister Dmytro Kuleba tweeted that “if (the plant) blows up, it will be 10 times larger than Chernobyl,” referencing the largest nuclear power disaster in history. But while experts expressed serious concerns, they also said that the more modern plant was built differently and more safely than Chernobyl, and was unlikely to be at risk of blowing up.

In many cases, false or misleading narratives are spread through mildly conspiratorial videos or images. Each individual piece of content might not be harmful enough to violate platforms’ guidelines, but when users watch hundreds of videos a day, they may walk away with a skewed idea of what’s happening on the ground, according to Wardle.

“The wider narratives here that are shaping the war, shaping people’s ideas of Europe and NATO and Russia, it’s less about an individual TikTok video. It’s like the drip, drip, drip of what those narratives are doing and the way that they’re making people shape their understanding,” she said.

Platforms fighting back against misinformation

Big social media platforms have taken steps to provide users with context around the Ukraine-related content they see. Twitter and Meta-owned platforms Instagram and Facebook, for instance, have begun removing or labeling and demoting content posted by or linking to Russian state-controlled media, including television network Russia Today (Facebook had said in 2020 that it would start labeling state-controlled media).

TikTok said earlier this month it would pilot a similar effort to label “some state-controlled media accounts.” TikTok also says it prohibits “harmful misinformation,” although it’s not clear how it defines that phrase. All three platforms also work with independent fact-checking organizations to identify false content that violates their policies and to surface accurate information.

Twitter and Meta have also said they are working to enforce their policies related to coordinated inauthentic behavior — which refers to bad actors using networks of fake accounts to spread falsehoods online — for potential Ukraine-related activity. Meta recently detailed a pro-Russia disinformation network that it removed, which included fake user profiles complete with AI-generated profile pictures and websites posing as independent news outlets to spread anti-Ukraine propaganda.

Some of these efforts have landed the tech companies in hot water with Russia, resulting in their platforms being restricted or banned in the country and showing the tightrope they must walk as they manage the use of their platforms during the crisis. And the continued rapid spread of misinformation online shows that none of these methods can fully stanch the flow of falsehoods.

Even if a piece of content is labeled on one platform, content is often repurposed on others that may not have equally robust fact-checking practices. When social networks host misinformation, the platforms’ algorithms can quickly amplify its reach so it’s seen by thousands or millions of users.

There are now some efforts underway to use social media platforms to spread accurate information and teach users how to avoid amplifying falsehoods.

The White House held a briefing last week with top TikTok influencers to answer questions about the war in Ukraine and the United States’ role in the conflict, according to the Washington Post. And Hernández-Echevarría’s Maldita.es has worked with more than 60 other fact-checking organizations from around the world to create a database of debunked misinformation related to the war, which can be used by social media platforms and users.

In order to cut down on the spread of misinformation online, and in light of constantly changing rules at social media platforms, Nonnecke would like to see a set of standards or best practices that platforms must follow during times of war, enforced by an outside group. “They shouldn’t just be deciding on a whim what they want to do,” she said.

Major social media platforms must also boost their content moderation capabilities in languages other than English — in this case, especially in Eastern European languages such as Polish, Romanian and Slovenian, Wardle said.

“My friend who’s from Romania, she’s like, ‘This whole narrative around Putin coming to save Ukrainians from the Nazis, in the West you’re all kind of laughing at it,’” she said, referencing the Russian president’s claims, made without evidence, that the Ukrainian government is a “gang of drug addicts and neo-Nazis.” “But she’s like, ‘Here, it’s everywhere.’”

The-CNN-Wire™ & © 2022 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.
