Think tank calls for monitoring of Chinese AI-enabled products

Chinese-made AI-enabled products should raise the same concerns as Middle Kingdom-sourced 5G equipment, and should therefore be regulated, the Australian Strategic Policy Institute (ASPI) think tank said on Thursday.

Government bodies across the West have rated Chinese-made 5G equipment in national systems a security risk since at least 2019, resulting in bans and expensive rip-and-replace programs. Concerns include possible backdoors that could enable espionage, and the fact that Chinese companies are obliged to comply with whatever Beijing requests. Huawei has always denied these claims.

In a report titled “De-risking Authoritarian AI,” ASPI’s Simeon Gilding argued that AI-enabled products present a risk perhaps even greater than that of 5G, and one that is harder to mitigate.

While vital, 5G is still mostly confined to telecoms, so its scope is limited and its costs bounded. AI, meanwhile, is approaching a point where it will touch every aspect of human life, in ways people will simply grow accustomed to.

Once installed, AI-enabled technologies and systems will most likely receive automatic internet and software updates that place their behavior beyond the user's line of sight.

And precisely because AI will be implemented in so many ways, it will be impossible to control. It will lurk in the background, potentially shaping the way societies think by influencing behavior online; it will gatekeep access to jobs and credit; and it will be embedded in systems like traffic grids, maritime operations, and rail networks.

And although governments are currently rushing to regulate AI itself, AI-enabled products and services from authoritarian countries are likely to be overlooked, said the think tank.

“A general prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive,” said Gilding.

ASPI therefore recommends a three-part framework of auditing, red teaming, and regulating AI-enabled products.

Auditing would involve evaluating how critical a system is to essential services, public health and safety, democratic processes, open markets, freedom of speech, and the rule of law, as well as the scale of exposure to the product or service.

Red teaming would then be used internally to identify risks in the system. Gilding uses TikTok as an example of a product that could be red-teamed, with cybersecurity professionals exploring whether they could use it to jump onto connected mobiles and IT systems to plant spyware.

“If the team revealed serious vulnerabilities that can’t be mitigated, a general ban might be appropriate,” said the report.

Proposed treatment measures include prohibiting Chinese AI-enabled technology in some parts of a network, banning government procurement or use, or imposing a general prohibition. Redundancy arrangements and public education efforts are also options.

If there was any doubt about the source of the threat, Gilding makes clear the concern is the People’s Republic of China, which he calls “a revisionist authoritarian power demonstrably hostile to democracy and the rules-based international order, which routinely uses AI to strengthen its own political and social stability at the expense of individual human rights.”

But why the concern over China and not Russia, Iran and North Korea?

Because, as ASPI explained, “China is a technology superpower with global capacity and ambitions and is a major exporter of effective, cost-competitive AI-enabled technology into democracies.”

The ASPI document also states that the closer a country is to the PRC, the more immediate the threat.

“Japan and South Korea should be vigilant, but India even more so,” said the think tank. Gilding gave two reasons: because India is still industrializing, price-competitive, effective Chinese gear will be a “tempting default first choice for its critical-infrastructure requirements”; and India shares a border with China, a nuclear power.

“For India, this is a ground game with trigger fingers, so perhaps the regulation threshold should be lower for India,” recommended ASPI. ®
