Meta, the parent company of Facebook and Instagram, has identified Russia as the top source of “coordinated inauthentic behavior” globally, raising concerns as the U.S. presidential race approaches.
At a Glance
- Russia has been deemed the leading source of coordinated inauthentic behavior on Facebook and Instagram
- Russian operators are allegedly using generative AI in online deception campaigns
- Meta has been able to disrupt these deceptive influence operations
- Concerns are rising about AI-powered election interference in the U.S. and other countries
- Meta anticipates Russia-backed campaigns will target U.S. political candidates who support Ukraine
Russia’s Coordinated Inauthentic Behavior Efforts
Meta’s recent security report has called out Russia as the primary source of global “coordinated inauthentic behavior” (CIB) efforts. The tech giant has identified at least 39 covert influence operations originating from Russia, highlighting the scale of the issue. These operations have allegedly been particularly focused on undermining Ukraine and its allies since Russia’s 2022 invasion.
Russian operators have allegedly been employing generative AI to create fake news articles and personas for fictitious journalists. However, Meta reports that these AI-powered tactics have only provided “incremental productivity and content-generation gains” for the bad actors involved.
Meta’s Approach to Combating Disinformation
Meta has a track record of disrupting deceptive influence operations. In 2020, the company said its strategy for detecting and countering such threats focuses on account behavior rather than content.
“When we investigate and remove these operations, we focus on behavior, not content — no matter who’s behind them, what they post or whether they’re foreign or domestic,” said Meta at the time.
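To make the behavior-over-content idea concrete, here is a minimal, hypothetical sketch of coordination-based flagging: accounts are scored on behavioral overlap, such as shared infrastructure and synchronized posting bursts, without ever inspecting what they post. The signals, thresholds, and data shapes below are invented for illustration; Meta's actual detection systems are not public.

```python
# Hypothetical sketch of behavior-based detection: flag pairs of accounts
# that act in coordination (shared infrastructure, synchronized posting)
# without looking at content. All signals and thresholds are illustrative.
from itertools import combinations

accounts = {
    "acct_a": {"ips": {"203.0.113.7"}, "post_minutes": [5, 65, 125]},
    "acct_b": {"ips": {"203.0.113.7"}, "post_minutes": [6, 66, 126]},
    "acct_c": {"ips": {"198.51.100.2"}, "post_minutes": [300]},
}

def coordination_score(a, b):
    """Score two accounts on behavioral overlap, ignoring what they post."""
    shared_ips = len(a["ips"] & b["ips"])
    # Count posts landing within 2 minutes of each other (synchronized bursts).
    synced = sum(
        1 for ta in a["post_minutes"] for tb in b["post_minutes"]
        if abs(ta - tb) <= 2
    )
    return shared_ips * 2 + synced

flagged = [
    (x, y) for x, y in combinations(accounts, 2)
    if coordination_score(accounts[x], accounts[y]) >= 3
]
print(flagged)  # [('acct_a', 'acct_b')]
```

The appeal of such a design, per Meta's stated approach, is that it can catch coordinated networks regardless of whether their content is true or false, foreign or domestic.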
The tech giant describes Russia’s alleged operations as “low-quality, high-volume,” with frequent lapses in operational security. Notably, Meta has observed that real users often identify and call out these networks as trolls, suggesting the operations struggle to engage authentic audiences.
Concerns for Upcoming U.S. Elections
As the U.S. presidential election approaches, concerns are mounting about the potential use of generative AI to mislead voters. Meta anticipates that Russia-backed deception campaigns will target U.S. political candidates who support Ukraine.
These concerns are particularly pressing given Facebook’s history of being used for election disinformation, notably in the 2016 U.S. election. The ease of access to generative AI tools such as ChatGPT and DALL-E has experts worried about a potential surge in disinformation.
Collaboration and Challenges in Combating Disinformation
Meta has emphasized its collaboration with other internet firms to combat misinformation. However, changes at other platforms, such as X (formerly Twitter), have raised concerns about the overall effectiveness of these efforts.
“As far as Twitter (X) is concerned, they are still going through a transition,” David Agranovich, Meta’s security policy director, said. “A lot of the people we’ve dealt with in the past there have moved on.”
The reduction of trust and safety teams and content moderation at X has made it more susceptible to disinformation, potentially compromising the broader efforts to combat online deception, The Guardian reported.
As the battle against coordinated inauthentic behavior continues, Meta’s efforts serve as a crucial line of defense against foreign interference in democratic processes. However, the evolving nature of these threats, particularly with the integration of AI, underscores the need for ongoing vigilance and adaptation in the fight against online disinformation.
Sources
- Russia’s AI tactics for US election interference are failing, Meta says
- Russia’s high-tech AI efforts to influence US election are failing: Meta
- Removing Coordinated Inauthentic Behavior