Australia’s eSafety commissioner has told social media operators it expects them to employ multiple age assurance techniques and technologies to keep children under sixteen off social media, as required by local law from December 10th.
The Land Down Under decided to prevent social media platforms, including Facebook, Instagram, Snapchat, TikTok, X, and YouTube, from offering their services to kids on the grounds that their products are harmful. That decision went down well with many Australians, riled Big Tech, and earned scorn from the technical community because the relevant laws passed before a full assessment of age assurance technology had been completed.
A preliminary report on tests of the tech found it works imperfectly. Justin Warren, founder and principal analyst of Australian firm PivotNine and a technology rights advocate, summarized the final report's findings as follows: “Theoretically, if you pick a specific set of tools, and use them under carefully controlled conditions, you can do age assurance sometimes.”
Australia is plowing ahead regardless, and on Tuesday issued guidance [PDF] on how to implement age assurance.
The core requirement is to take “reasonable steps” to ensure that kids can’t use a platform, which means not relying on users to self-declare their age, nor guessing their age only after letting them use the platform.
However, the guidance warns “There is no one-size-fits-all approach for what constitutes the taking of reasonable steps.”
Australia instead wants platforms to adopt a “waterfall approach” in which they use “multiple independent age assurance methods sequentially to establish an age assurance result.”
Techniques that eSafety, Australia’s cyberspace regulator, feels are useful include the following; a rough sketch of how a platform might chain them appears after the list:
- Age of account (e.g. the account has existed for 10 or more years)
- Engagement with content targeted at children or early teens
- Linguistic analysis/language processing indicating the end-user is likely a child
- Analysis of end-user-provided information/posts (e.g. analysis of text indicating age)
- Visual content analysis (e.g. facial age analysis performed on photos and videos uploaded to the platform)
- Audio analysis (e.g. age estimation based on voice)
- Activity patterns consistent with school schedules
- Connections with other end-users who appear to be under 16
- Membership in youth-focused groups, forums, or communities.
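To illustrate the “waterfall approach” the guidance describes, here is a minimal, hypothetical Python sketch: it runs several independent checks in sequence and moves to the next only when the previous one is inconclusive. The signal names, thresholds, and data structures are invented for illustration and are not taken from eSafety’s guidance.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Possible outcomes of a single age assurance check.
LIKELY_UNDER_16 = "likely_under_16"
LIKELY_16_PLUS = "likely_16_plus"
INCONCLUSIVE = "inconclusive"

@dataclass
class UserSignals:
    # Hypothetical signals a platform might already hold about an account.
    account_age_years: float
    estimated_age_from_face: Optional[int]  # from visual content analysis, if any
    follows_school_schedule: bool           # activity-pattern signal
    in_youth_focused_groups: bool

def check_account_age(user: UserSignals) -> str:
    # An account that has existed for 10+ years strongly suggests an adult holder.
    if user.account_age_years >= 10:
        return LIKELY_16_PLUS
    return INCONCLUSIVE

def check_facial_estimate(user: UserSignals) -> str:
    # Facial age analysis on uploaded photos or videos, when available.
    if user.estimated_age_from_face is None:
        return INCONCLUSIVE
    if user.estimated_age_from_face < 16:
        return LIKELY_UNDER_16
    if user.estimated_age_from_face >= 18:
        return LIKELY_16_PLUS
    return INCONCLUSIVE  # borderline estimates fall through to the next check

def check_behavioural_signals(user: UserSignals) -> str:
    # Activity patterns and group memberships consistent with a school-age user.
    if user.follows_school_schedule and user.in_youth_focused_groups:
        return LIKELY_UNDER_16
    return INCONCLUSIVE

def waterfall_age_assurance(user: UserSignals,
                            checks: list[Callable[[UserSignals], str]]) -> str:
    # Run independent checks sequentially; stop at the first decisive result.
    for check in checks:
        result = check(user)
        if result != INCONCLUSIVE:
            return result
    # No check was decisive: a platform would escalate to a stronger method here.
    return INCONCLUSIVE

if __name__ == "__main__":
    user = UserSignals(account_age_years=2.5,
                       estimated_age_from_face=14,
                       follows_school_schedule=True,
                       in_youth_focused_groups=True)
    checks = [check_account_age, check_facial_estimate, check_behavioural_signals]
    print(waterfall_age_assurance(user, checks))  # -> likely_under_16
```

The point of the sequence is that each method is independent: a decisive answer from one signal short-circuits the rest, while inconclusive results cascade down to progressively stronger checks.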
Platforms get to choose their own adventure, but if their preferred age assurance tech blocks substantial numbers of adult Australians they will fail the reasonableness test.
Communications minister Anika Wells has acknowledged this arrangement won’t keep all kids off social media. “We are not anticipating perfection here,” she told local media.
But Australia does expect social media platforms to act with “kindness, care and clear communication” when they prevent kids from signing up for accounts, or deactivate accounts held by underage users. One suggested act of kindness is giving underage users the chance to suspend their accounts and preserve their data, so they can return to a platform once they turn 16. Helping users move to alternative services that aren’t required to block under-16s is another option.
Platforms that don’t take reasonable steps to prevent under-16s from accessing their services face substantial fines. ®