Russia, Iran and Saudi Arabia are the top three proliferators of state-linked Twitter misinformation campaigns, according to a report released Wednesday by the Australian Strategic Policy Institute (ASPI).
The think tank’s International Cyber Policy Centre report and corresponding website examined datasets in Twitter’s Information Operations Archive to understand states’ willingness, capability and intent to drive disinformation campaigns.
Russia, Iran and Saudi Arabia ranked first, second and third, respectively, by number of campaigns among the 17 countries examined, with China and Venezuela rounding out the top five.
Most of the countries’ efforts (nine of the 17) reached their apex in 2019. China peaked in May of that year at 158,611 tweets, and Saudi Arabia in October at 2.3 million. A Serbian operation sent the most tweets in a single month: 2.7 million in February 2019.
The datasets ASPI analysed ran into the terabytes, so the think tank’s researchers restricted their work to tweets published within 90 days of an account’s previous tweet. Because many accounts were repurposed or purchased partway through their lives, this measure helped isolate the narrative pushed by each account.
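The report doesn’t spell out the exact mechanics, but a plausible reading of the 90-day rule is that an account’s timeline is split into activity runs wherever the gap between consecutive tweets exceeds 90 days, so each run can be treated as one coherent narrative. Here’s a minimal sketch of that idea in Python, assuming a simplified `(account_id, timestamp, text)` record rather than Twitter’s actual archive schema:

```python
from collections import defaultdict
from datetime import timedelta

GAP = timedelta(days=90)  # threshold described in the ASPI report

def segment_timelines(tweets):
    """Split each account's tweets into runs separated by >90-day gaps.

    `tweets` is assumed to be an iterable of (account_id, timestamp, text)
    tuples -- a simplification, not Twitter's real archive format.
    Returns {account_id: [run, run, ...]}, each run a list of (ts, text).
    """
    by_account = defaultdict(list)
    for account_id, ts, text in tweets:
        by_account[account_id].append((ts, text))

    segments = {}
    for account_id, rows in by_account.items():
        rows.sort()                      # chronological order
        runs, current = [], [rows[0]]
        for prev, cur in zip(rows, rows[1:]):
            if cur[0] - prev[0] > GAP:   # long silence: likely a new owner or purpose
                runs.append(current)
                current = []
            current.append(cur)
        runs.append(current)
        segments[account_id] = runs
    return segments
```

On this reading, an account that went dormant for a year and then resumed under new ownership would contribute two separate runs rather than one muddled narrative.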
Predictably, these narratives matched geopolitical concerns. Russia-linked accounts discussed the US more than any other country, with tweets about QAnon, anti-Islamic sentiment, or a certain resident of Florida who previously spent four years in public service. Tweets from Iran, which bans Twitter domestically, focused on managing international perceptions and stirring up adversary nations.
China also blocks its residents from using Twitter. Tweets originating from the Middle Kingdom mostly discussed Hong Kong-related matters or sought to influence Chinese citizens living abroad with messages encouraging them to favour the Chinese Communist Party.
“Twitter has been perhaps the most forward leaning entity in the social media industry in terms of its public engagement on information operations,” wrote the study’s authors. However, the team lamented that Twitter had recently signalled it would discontinue the archive on which the study relied. The ASPI crew called on social media platforms to continue providing transparency and access to data.
“We need a combination of cross-sectoral collaboration and societal resilience to defend against information operations,” argued the think tank.
While a comprehensive cross-platform approach to cracking down on misinformation online is prudent, focusing on Twitter – as opposed to, say, more video-focused sites – is an understandable strategy: research has shown it is easier to spread misinformation effectively via text than via video. Humans, apparently, can still identify a deepfaked video.
Conversely, other studies have shown that humans can no longer reliably tell the difference between a real human face and an image of a face generated by artificial intelligence.
But while spotting a deepfaked video or an AI-generated image of a face is one thing, spotting a fake personality through limited one-way interactions may be another.
ASPI’s researchers noted that Iran’s use of fake personas in particular was at times very convincing, with well-rounded characters that gave the appearance of concerned locals – a product that takes commitment and consistency to engineer. ®