YouTube’s Free Pass May Be Up: eSafety Pushes Back on Social Media Carve-Out
The Albanese Government’s plan to restrict under-16s from holding social media accounts is already proving contentious — and now, its one glaring exception has been officially called out. The eSafety Commissioner, Julie Inman Grant, has advised Communications Minister Anika Wells to scrap the carve-out that would exempt YouTube from the new age-gating regime set to kick in this December.
The proposal, which mandates that platforms like TikTok, Instagram, Snapchat, Reddit and X take “reasonable steps” to block account creation by under-16s, currently spares YouTube on the basis that it has a broader educational and health utility. But the Commissioner’s position is clear: if it walks like TikTok and Shorts like TikTok, it’s probably TikTok — and deserves to be regulated accordingly.
YouTube: Too Big to Ban?
Back in November, then-Minister Rowland argued YouTube played a “significant role in enabling young people to access education and health support”, and thus deserved its special treatment. But the eSafety Commissioner’s new advice — now in the hands of Minister Wells — says the data tells a different story.
YouTube isn’t just a fringe player. A recent eSafety survey found it’s used by 76% of 10- to 15-year-olds, making it the dominant platform for that age group. Among kids who encountered harmful content online, 37% said the worst of it happened on YouTube.
In other words, if the aim is to protect children from the harms of social media, YouTube is not just part of the problem — it’s the biggest piece of it.
Functional Similarity, Regulatory Inconsistency
The core of the Commissioner’s argument is that functionality, not branding, should drive regulation. YouTube Shorts mimics the addictive swipe-based short-form video experience of TikTok and Instagram Reels. Carving it out sends mixed messages about the purpose of the law — and creates loopholes large enough for a Shorts binge.
The advice also calls for more adaptable, risk-based rules that focus on a platform’s actual features and threat profile, not how it labels itself. Technology evolves too fast for static category-based exemptions.
But What’s the Threat, Really?
There may be many examples of nanny-state regulation these days – but this isn’t one of them.
In this author’s opinion, YouTube is an excellent platform, both genuinely useful and entertaining, and those benefits apply to adults and under-16s alike.
However, there are also significant dangers for under-16s that can’t be ignored.
In plain terms:
1. Exposure to Inappropriate Content
Even with YouTube Kids and restricted mode, children can still be exposed to:
- Pornographic or sexually suggestive content (sometimes slipped past filters).
- Violent or graphic videos (including real-life fights, injuries, or distressing footage).
- Content promoting self-harm, eating disorders, or suicide (often through seemingly innocuous videos or “coded” messaging).
- Misinformation or conspiracy theories (e.g., QAnon, anti-vax rhetoric).
These exposures are linked to real psychological harms, especially among younger teens still forming their identity and critical reasoning skills.
2. Contact Risks (Predators & Harassment)
YouTube allows comments, live chat during livestreams, and even community posts — all of which create:
- Opportunities for unsolicited contact from adults (including grooming behaviour).
- Exposure to cyberbullying or peer harassment, often via comments.
- Unfiltered interactions during livestreams — which are harder to moderate in real time.
The eSafety Commissioner sees this as part of a broader “contact harm” risk — it’s not just what kids see, but who can reach them and how they’re targeted.
3. Addictive Design (Shorts, Recommendations)
YouTube’s algorithmic design encourages:
- Binge-watching and excessive screen time through autoplay and recommendations.
- Engagement loops in YouTube Shorts (TikTok-style scrollable video snippets).
- Exposure to more extreme or sensational content the longer a child watches (known as algorithmic “radicalisation”).
This design can disrupt sleep, concentration, and mental wellbeing — particularly in adolescents.
4. Data Privacy & Profiling
YouTube collects vast amounts of user data — even from minors — to personalise recommendations and ads. While Google claims to limit this for users under 18:
- The eSafety Commissioner is concerned that data-driven profiling may still occur covertly or imperfectly.
- Kids may also be inadvertently tracked across platforms when logged into a YouTube or Google account.
5. False Sense of Safety
YouTube’s exemption from the new social media rules may give parents the impression it is “safe” or “educational” by default — when, in fact, it often contains the same risks as TikTok or Instagram.
The Commissioner specifically called out that there isn’t sufficient evidence YouTube “predominantly provides beneficial experiences” for under-16s. So the carve-out undermines the purpose of the rules.
In summary, the concern isn’t just about under-16s accessing YouTube, but about the total environment of:
- Risky content,
- Risky contact,
- Addictive design, and
- Inadequate protective controls.
Risk-Based Reform on the Horizon
The YouTube advice comes as the eSafety Commissioner readies a suite of industry-specific codes targeting harmful online content, including pornography and violent material. New obligations are expected for search engines, hosting services, and telcos — with five more codes in the pipeline. If voluntary industry codes fall short, the Commissioner has flagged she’ll impose mandatory standards before July’s end.
Penalties for breaching these codes could, like those under the new social media rules, reach $50 million for systemic non-compliance.
What’s Next?
The final decision on YouTube’s exemption sits with Minister Wells, who must table the rules in Parliament for scrutiny. But with pressure now coming from the very regulator tasked with enforcement, and mounting community concern over YouTube’s influence, the carve-out may not survive the next sitting.
The bigger question is whether Australia can strike the right balance between platform accountability, digital literacy, and youth agency — without blunting the tools that help kids learn and connect. In a digital world that resists easy categorisation, risk-based regulation may be the only way forward.