With major elections in more than 70 countries and regions in 2024, vigilance against influence operations is mounting. These operations use various practices to manipulate public opinion, including disseminating false information through social media and artificial intelligence (AI).
The use of AI to convincingly alter the faces and voices of politicians in audio and video is increasing. With the November United States presidential election approaching, there are fears that such malign activities will intensify. In January, New Hampshire voters received hoax calls impersonating President Joe Biden, urging them not to vote in the primary elections. "Your vote makes a difference in November, not this Tuesday," the mimicked voice told recipients. US authorities suspect the calls were AI-generated and are investigating.
Misinformation was rampant during Taiwan's January presidential election, as the findings from the Taiwan FactCheck Center attest. According to the nonprofit organization, nearly half of the election rumors echoed or closely resembled those from previous elections.
Creating Voter Uncertainty Through Misinformation
False claims suggested that the ruling party used special ink or switched ballot boxes to inflate votes. Such disinformation aimed to sow doubt among voters about the integrity of the electoral process.
Japan, too, holds Lower House by-elections on April 16. Minister for Internal Affairs and Communications Takeaki Matsumoto addressed misinformation concerns. "We recognize that false or misleading information may be impacting elections overseas," he stated.
Matsumoto emphasized the need for comprehensive measures to address the issue, including institutional ones. Spreading false information and rumors via platforms such as social media to manipulate public opinion is known as an influence operation. These tactics aim to deliberately alter the views and actions of targeted individuals or groups for specific political, social, or economic goals.
Cyber attacks have traditionally been characterized by visible personal or material damage. Influence operations, by contrast, are harder to detect, and advances in AI now make false information easy to create.
Bot Facebook Accounts
In October 2023, the Canadian government warned that Chinese bots were behind "Spamouflage," a campaign ongoing since 2018. US internet technology (IT) company Meta subsequently deleted around 7,700 associated Facebook accounts. These accounts were also linked to the spread of false information about the release of treated water from the Fukushima Daiichi Nuclear Power Plant.
Furthermore, in February 2024, 20 IT companies, including Microsoft, agreed to collaborate on preventing election interference by AI-generated misinformation.
Naoto Narita from the information security firm Trend Micro noted the difficulties in distinguishing true information from falsehoods. "It's challenging for recipients to discern manipulated information due to the skillful blending of various components," he explained. Narita further stressed the importance of "enhancing information literacy and fostering an environment less conducive to false information dissemination."
Three Methods of Interweaving Information
Influence operations weave together various kinds of information, using "MDM" [Misinformation, Disinformation, Malinformation] to accomplish their purpose.
"Misinformation," which is the first "M," refers to intentionally using and spreading incorrect historical facts or misleading statistics. "D" stands for "Disinformation," which is false information created intentionally with malicious intent.
Finally, the second "M" stands for "Malinformation." This refers to information based on truth but presented with harmful intentions. By exaggerating facts or spreading negative information, malinformation can potentially deepen divisions and conflicts.
There are well-known examples. During the initial stages of Russia's invasion of Ukraine, a fake video of Ukrainian President Volodymyr Zelenskyy circulated on Facebook. In the footage, Zelenskyy appeared to urge soldiers and civilians to surrender to Russia. Similarly, in 2023, a fake video of Prime Minister Fumio Kishida making crude statements was disseminated online in Japan.
Fact-checking Challenges
Shinichi Yamaguchi, Associate Professor at the Center for Global Communications at the International University of Japan (internet media theory), comments:
Some countries have enacted laws regulating fake news. However, journalists criticizing the government have become investigation targets under these laws. Therefore, Japan has refrained from attempting to suppress misinformation and disinformation through legislation. Instead, it has pursued multifaceted measures such as collaboration with platform operators and media literacy education.
Foreign platform operators dominate the industry, but they need to be capable of supporting users in Japanese. On this point, a Ministry of Internal Affairs and Communications panel interviewed several operators in March and found that each responded differently to the same questions.
Japan is also behind on fact-checking. Compared to the volume of false or misleading information circulating, the country's fact-checking capacity is inadequate. Still, three Japanese organizations have recently joined the International Fact-Checking Network.
Indonesian fact-checking organizations, for example, conduct approximately 10,000 fact-checks annually, while Japan's combined efforts across all organizations amount to only several hundred. There have been improvements, but numerous challenges persist.
RELATED:
- The Future of Newspapers in the Age of Generative AI
- ALPS Treated Water Disinformation and Fake Photos Surge on Social Media
- EDITORIAL | Purge Social Media of False Information on Earthquake Rescue
(Read the report in Japanese.)
Author: Misaki Owatari