In February 2022, a video circulated depicting aerial gunfire raining down on civilians in the street. Shared more than 20,000 times, it purported to show a military helicopter attacking civilians in Kyiv during the Russo-Ukrainian War. But the video was not taken in 2022. It was recorded six years earlier in Turkey, and it actually depicted violence by the Turkish military against supporters of President Recep Tayyip Erdogan during a failed coup d'état.
During times of international crisis, there is greater risk of misinformation spreading through social media. But just as technology amplifies misinformation, it’s also the only weapon to combat it.
Misinformation has existed forever, but social media has amplified it. The inception of the internet and the rise of accessible social media platforms have fueled the spread of misinformation through carefully curated posts and colorful infographics. The ephemerality of social media content and its repost features allow users to spread information without generating their own ideas and grant them a false sense of freedom, knowing a post will only stay up for a short period of time.
Additionally, during crises, there is often a culture of shame for not promoting activism through social media — people can feel powerless when a conflict is across the world and resort to internet activism to alleviate their helplessness. Even more significant, however, is social media algorithms’ promotion of misinformation over credible facts. A study by the Integrity Institute found that a “well-crafted lie” gets more engagement than a truthful post.
The study also found that platforms such as Facebook and X, formerly Twitter, host a higher number of misinformation posts, while platforms such as TikTok and X have a higher misinformation amplification factor, largely due to a lack of fact-checking safeguards and content restrictions.
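To make the metric concrete, one plausible reading of an amplification factor (not necessarily the Institute's exact formula) is the ratio between the engagement a misinformation post receives and the engagement a typical post receives on the same platform. The sketch below uses invented numbers purely for illustration.

```python
# Illustrative (assumed) reading of a "misinformation amplification factor":
# how much more engagement misinformation draws than an average post.
def amplification_factor(misinfo_engagement_per_post: float,
                         baseline_engagement_per_post: float) -> float:
    return misinfo_engagement_per_post / baseline_engagement_per_post

# Hypothetical numbers: misleading posts averaging 1,500 interactions
# versus a platform-wide average of 300 interactions per post.
print(amplification_factor(1500, 300))  # -> 5.0, i.e. amplified five-fold
```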
International crises that stem from geopolitical issues can be difficult to understand. Posts containing misinformation about complex issues, amplified by algorithms that favor misleading content, create the perfect conditions for misinformation to spread. Information about conflicts overseas also faces a language barrier, which makes tracking and interpreting misinformation more difficult.
Misinformation amplified on social media during times of international crisis can be condensed into three broad categories: accurate content presented with inaccurate context or labels; completely distorted or false content; and misleading information or missing context.
For instance, an Instagram post claimed to show Russian jets flying over Kyiv hours after Russian President Vladimir Putin announced an invasion of Ukraine, implying Russian troops had already entered Ukraine's capital; the footage was actually taken in 2020 during a practice flyover in Moscow. A Facebook photo of an explosion was claimed to show a Russian attack on Ukraine but actually depicted an Israeli airstrike on the Gaza Strip in May 2021. A video of Putin speaking in Russian with inaccurate English subtitles surfaced in the wake of the Israel-Hamas War. These are examples of the first category: accurate photos and videos presented with inaccurate context.
Examples of the second category, distorted or false content, include a misquotation of Turkish President Recep Tayyip Erdogan threatening to intervene in the Israel-Hamas War, a fake social media post attributed to the Israel Defense Forces confirming it bombed a hospital in Gaza, and a fabricated BBC News report claiming Ukraine provided weapons to Hamas.
These examples are just a few of the thousands of pieces of misinformation circulating online. The Israel-Hamas War has been accompanied by an unprecedented amount of misinformation in the wake of irresponsible social media leadership.
Other international conflicts in the last 10 years that have received mass social media attention, including the Russo-Ukrainian War, have arguably been worsened by the increase in misinformation spread through more accessible technology and social media. Increasingly sophisticated fabrications can prolong wars as public opinion is shifted by false information.

Influencing public opinion through misinformation is a dangerous weapon. Coordinated disinformation campaigns about the Israel-Hamas War have garnered millions of views on X for inaccurate content. Under the leadership of Elon Musk, who bought the company last year, lax content restrictions and cuts to the personnel who work to curb misinformation have contributed to the rise of these campaigns.
While technology fuels both coordinated and uncoordinated misinformation, it is also the best tool to uncover and prevent inaccurate content. Access to the internet provides users with a breadth of information; education is more accessible than ever, especially for people with disabilities.
Forensic journalists can use geolocation to pinpoint exactly where an image or video was recorded, dissecting content pixel by pixel to uncover landmarks, distinctive geographic features, or silhouettes. Tools such as Google Earth or Yandex, its Russian equivalent, allow people to cross-check the location shown in a photo or video.
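Some of this verification can even be automated. As a simple first pass before any pixel-level analysis, the metadata a camera embeds in an image can reveal when and roughly where it was captured. The sketch below is only illustrative: the file name is hypothetical, and the Pillow library is just one of several that can read this data.

```python
# Minimal sketch: read the capture time and GPS coordinates a camera embeds
# in a photo's EXIF metadata. The file name below is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def capture_info(path):
    """Return (timestamp, (lat, lon)) embedded in an image, if present."""
    exif = Image.open(path)._getexif() or {}
    labeled = {TAGS.get(tag, tag): value for tag, value in exif.items()}

    timestamp = labeled.get("DateTimeOriginal")  # e.g. "2016:07:15 23:41:02"

    gps = {GPSTAGS.get(tag, tag): value
           for tag, value in (labeled.get("GPSInfo") or {}).items()}

    coords = None
    if "GPSLatitude" in gps and "GPSLongitude" in gps:
        def to_degrees(dms, ref):
            degrees, minutes, seconds = (float(x) for x in dms)
            sign = -1 if ref in ("S", "W") else 1
            return sign * (degrees + minutes / 60 + seconds / 3600)

        coords = (to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N")),
                  to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E")))
    return timestamp, coords

print(capture_info("suspect_frame.jpg"))
```

In practice, most social platforms strip this metadata when a file is uploaded, which is why the pixel-level visual comparison against satellite and street-level imagery described above remains the more reliable route.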
In addition, artificial intelligence is a key tool for combating misinformation. Large language models can be fed posts containing misinformation and trained to recognize signs of inaccurate or sensationalized content. They can then power fact-checking software on social media platforms and help detect false news articles.
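The sketch below illustrates the underlying idea of training on labeled examples, using a deliberately simple classifier rather than a large language model. The training posts and labels are invented for illustration; a real system would be trained on large, professionally fact-checked datasets.

```python
# A toy, trainable "misleading post" detector. The example posts and labels
# are invented; 1 = flagged as misleading, 0 = credible reporting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "SHOCKING video PROVES the attack was staged, share before it's deleted!!!",
    "Officials confirm the strike hit a residential block; casualty figures pending.",
    "Leaked memo shows the media is hiding the REAL death toll, wake up!",
    "Reporters verified the footage using satellite imagery from the same day.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier: it learns which word patterns
# co-occur with the posts labeled as misleading.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new, unseen post on how closely it resembles the misleading examples.
new_post = ["EXPOSED: this clip proves the invasion never happened, spread the word!"]
print(model.predict_proba(new_post)[0][1])  # probability the post looks misleading
```

A production fact-checking system would pair such a model with human review, since automated flags are themselves error-prone.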
Technology has ultimately worsened international conflicts by enabling the creation and spread of misinformation. But it also serves as a vital tool to detect misinformation and increase education about geopolitical issues and international conflicts.