Age Check Please – Australia’s Social Media Age Trial Steps Up
If you thought “what’s your date of birth?” was just an annoying formality, think again. Australia is now deep into a world-first trial of age verification tech for social media — and the implications for platforms, privacy, and policy will be real.
It’s official: Australia is no longer just talking about age restrictions on social media, it’s testing them. In what’s being described as a world-first, the federal government earlier this year launched the Age Assurance Technology Trial, which is evaluating age assurance technologies across more than 50 platforms, including heavyweights like Meta, TikTok and Snapchat.
The idea? To test whether it’s technically (and legally) viable, and proportionate, to verify a user’s age before letting them dive into algorithm-driven feeds, DMs, or digital chaos, especially on platforms known to attract kids and teens.
Now, as of mid-May, the trial is expanding, with school students in Perth and Canberra joining the test groups. It covers biometric screening (e.g. facial age estimation), document-based verification, and other machine-learning tools designed to assess age and flag users under 16 without necessarily collecting identifying information, in line with recommendations from the eSafety Commissioner and privacy reform proposals.
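To make the trade-off concrete, here is a minimal sketch of how a platform might layer these methods: try the least invasive check (a facial age estimate, which needn’t identify the user) first, and escalate to a document-based check only when the estimate is too close to the cut-off to trust. Every name, threshold and buffer below is an illustrative assumption, not the trial’s actual methodology.

```python
from dataclasses import dataclass
from typing import Optional

AGE_LIMIT = 16   # minimum age under the proposed restrictions
BUFFER = 2       # assumed uncertainty margin on the estimator, in years

@dataclass
class AgeDecision:
    allowed: bool
    method: str  # which check produced the decision

def check_age(estimated_age: float, verified_age: Optional[int] = None) -> AgeDecision:
    """Hypothetical 'waterfall' age check: biometric estimate first,
    document-based verification only if the estimate is inconclusive."""
    if estimated_age >= AGE_LIMIT + BUFFER:
        # Clearly over the limit even allowing for estimator error:
        # no identity data needs to be collected at all.
        return AgeDecision(True, "facial_estimate")
    if estimated_age < AGE_LIMIT - BUFFER:
        # Clearly under the limit: block without an ID check.
        return AgeDecision(False, "facial_estimate")
    # Borderline zone: fall back to the more invasive document check,
    # used only when the non-identifying estimate can't decide.
    if verified_age is not None:
        return AgeDecision(verified_age >= AGE_LIMIT, "document_check")
    return AgeDecision(False, "needs_verification")

print(check_age(21.3))      # clear pass on the estimate alone
print(check_age(12.0))      # clear block on the estimate alone
print(check_age(16.5, 17))  # borderline, resolved by document check
```

The buffer is the privacy lever: widen it and more users must hand over documents; narrow it and more borderline teenagers slip through on an imperfect estimate.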
Initial results are reportedly encouraging: some methods correctly identify under-16 users more than 90% of the time. But questions linger. How well do these tools work across diverse communities? How do they avoid discrimination? And perhaps most importantly: how do you balance age checks with user privacy?
But this isn’t just a tech exercise — it’s a law-and-policy warm-up. With the Children’s Online Privacy Code set to drop by 2026, and eSafety pushing hard for age-based restrictions, the real question is: can you implement age gates that are privacy-preserving, non-discriminatory, and not easily gamed by a teenager with a calculator and Photoshop?
It’s a tough balance. On one hand, there’s real concern about children’s exposure to online harms. On the other, age verification at scale risks blowing out privacy compliance, embedding surveillance tech, and excluding legitimate users who don’t fit biometric norms.
The final report lands in June 2025, and platforms should expect regulatory consequences soon after. If the trial proves age verification is accurate, scalable, and privacy-compatible, you can bet on mandatory age checks becoming law by the end of the year.
Bottom line? If your platform’s UX depends on open access and anonymity, start thinking now about how that survives an incoming legal obligation to know more about your users: not necessarily who they are, but at least how young they actually are (as opposed to how old they might claim to be).