
Digital Law

June 25, 2025 by Scott Coulthart

YouTube’s Free Pass May Be Up: eSafety Pushes Back on Social Media Carve-Out

The Albanese Government’s plan to restrict under-16s from holding social media accounts is already proving contentious — and now, its one glaring exception has been officially called out. The eSafety Commissioner, Julie Inman Grant, has advised Communications Minister Anika Wells to scrap the carve-out that would exempt YouTube from the new age-gating regime set to kick in this December.

The proposal, which mandates that platforms like TikTok, Instagram, Snapchat, Reddit and X take “reasonable steps” to block account creation by under-16s, currently spares YouTube on the basis that it has a broader educational and health utility. But the Commissioner’s position is clear: if it walks like TikTok and Shorts like TikTok, it’s probably TikTok — and deserves to be regulated accordingly.

YouTube: Too Big to Ban?

Back in November, then-Minister Rowland argued YouTube played a “significant role in enabling young people to access education and health support”, and thus deserved its special treatment. But the eSafety Commissioner’s new advice — now in the hands of Minister Wells — says the data tells a different story.

YouTube isn’t just a fringe player. A recent eSafety survey found it’s used by 76% of 10- to 15-year-olds, making it the dominant platform for that age group. Among kids who encountered harmful content online, 37% said the worst of it happened on YouTube.

In other words, if the aim is to protect children from the harms of social media, YouTube is not just part of the problem — it’s the biggest piece of it.

Functional Similarity, Regulatory Inconsistency

The core of the Commissioner’s argument is that functionality, not branding, should drive regulation. YouTube Shorts mimics the addictive swipe-based short-form video experience of TikTok and Instagram Reels. Carving it out sends mixed messages about the purpose of the law — and creates loopholes large enough for a Shorts binge.

The advice also calls for more adaptable, risk-based rules that focus on a platform’s actual features and threat profile, not how it labels itself. Technology evolves too fast for static category-based exemptions.

But What’s the Threat, Really?

There may be many examples of nanny-state regulation these days – but this isn’t one of them.

In this author's opinion, YouTube is an excellent platform, extremely useful and entertaining at the same time, and those benefits apply to adults and under-16s alike.

However, there are also significant dangers for under-16s that can’t be ignored.

In plain terms:

1. Exposure to Inappropriate Content

Even with YouTube Kids and restricted mode, children can still be exposed to:

  • Pornographic or sexually suggestive content (sometimes slipped past filters).

  • Violent or graphic videos (including real-life fights, injuries, or distressing footage).

  • Content promoting self-harm, eating disorders, or suicide (often through seemingly innocuous videos or “coded” messaging).

  • Misinformation or conspiracy theories (e.g., QAnon, anti-vax rhetoric).

These exposures are linked to real psychological harms, especially among younger teens still forming their identity and critical reasoning skills.


2. Contact Risks (Predators & Harassment)

YouTube allows comments, live chat during livestreams, and even community posts — all of which create:

  • Opportunities for unsolicited contact from adults (including grooming behaviour).

  • Exposure to cyberbullying or peer harassment, often via comments.

  • Unfiltered interactions during livestreams — which are harder to moderate in real time.

The eSafety Commissioner sees this as part of a broader “contact harm” risk — it’s not just what kids see, but who can reach them and how they’re targeted.


3. Addictive Design (Shorts, Recommendations)

YouTube’s algorithmic design encourages:

  • Binge-watching and excessive screen time through autoplay and recommendations.

  • Engagement loops in YouTube Shorts (TikTok-style scrollable video snippets).

  • Exposure to more extreme or sensational content the longer a child watches (known as algorithmic “radicalisation”).

This design can disrupt sleep, concentration, and mental wellbeing — particularly in adolescents.
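
To make the mechanism concrete, here is a deliberately simplified sketch of the kind of engagement loop described above. It is purely illustrative (every function and field name is hypothetical, and this is certainly not YouTube's actual recommender), but it shows how ranking each next video by predicted watch time, with a bonus for similarity and "intensity", naturally produces bingeing and drift:

```python
import random

def predicted_watch_seconds(video, history):
    """Toy engagement score: favour videos similar to recent watches,
    plus a bonus for more 'intense' content. All fields are hypothetical."""
    similarity = sum(1 for v in history[-5:] if v["topic"] == video["topic"])
    return video["intensity"] * 10 + similarity * 20 + random.random()

def autoplay_session(catalogue, picks):
    """Autoplay loop: always queue whichever unwatched video scores highest.
    Each pick reinforces the last, so sessions drift toward similar,
    higher-intensity content (the 'engagement loop' critics describe)."""
    history = []
    for _ in range(picks):
        next_video = max(
            (v for v in catalogue if v not in history),
            key=lambda v: predicted_watch_seconds(v, history),
        )
        history.append(next_video)
    return history

catalogue = [
    {"topic": "fitness", "intensity": 2},
    {"topic": "fitness", "intensity": 7},
    {"topic": "news", "intensity": 9},
]
print(autoplay_session(catalogue, 2))
```

Nothing in that sketch is sophisticated, which is rather the point: even a crude optimiser for watch time will keep feeding a child more of whatever held their attention last.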


4. Data Privacy & Profiling

YouTube collects vast amounts of user data — even from minors — to personalise recommendations and ads. While Google claims to limit this for users under 18:

  • The eSafety Commissioner is concerned that data-driven profiling may still occur covertly or imperfectly.

  • Kids may also be inadvertently tracked across platforms when logged into a YouTube or Google account.


5. False Sense of Safety

YouTube’s exemption from the new social media rules may give parents the impression it is “safe” or “educational” by default — when, in fact, it often contains the same risks as TikTok or Instagram.

The Commissioner specifically noted that there isn’t sufficient evidence YouTube “predominantly provides beneficial experiences” for under-16s. So the carve-out undermines the purpose of the rules.


In summary, the concern isn’t just about under-16s accessing YouTube, but about the total environment of:

  • Risky content,

  • Risky contact,

  • Addictive design, and

  • Inadequate protective controls.

Risk-Based Reform on the Horizon

The YouTube advice comes as the eSafety Commissioner readies a suite of industry-specific codes targeting harmful online content, including pornography and violent material. New obligations are expected for search engines, hosting services, and telcos — with five more codes in the pipeline. If voluntary industry codes fall short, the Commissioner has flagged she’ll impose mandatory standards before July’s end.

Penalties for breach of these codes — like the new social media rules — could reach $50 million for systemic non-compliance.

What’s Next?

The final decision on YouTube’s exemption sits with Minister Wells, who must table the rules in Parliament for scrutiny. But with pressure now coming from the very regulator tasked with enforcement, and mounting community concern over YouTube’s influence, the carve-out may not survive the next sitting.

The bigger question is whether Australia can strike the right balance between platform accountability, digital literacy, and youth agency — without blunting the tools that help kids learn and connect. In a digital world that resists easy categorisation, risk-based regulation may be the only way forward.

Filed Under: Digital Law, Regulation, Technology Tagged With: Digital Law, Regulation, Technology

June 24, 2025 by Scott Coulthart

Fair Use or Free Ride? The Case for an AI Blanket Licence

What if AI companies had to pay for the content they train on? Welcome to the next frontier in copyright law — where inspiration meets ingestion.

When AI companies train their models — whether for music, image generation, writing or video — they don’t do it in a vacuum. They train on us. Or more precisely: on our songs, our blogs, our art, our tweets, our books, our interviews.

They harvest it at scale, often scraped from the open web, with or without permission — and certainly without compensation.

This has prompted an increasingly vocal question from creators and content owners:

Shouldn’t we get paid when machines learn from our work?

The proposed answer from some corners: a blanket licensing regime.

What’s a Blanket Licence?

Nothing to do with bedding – a blanket licence is a pre-agreed system for legal reuse. It doesn’t ask for permission each time. Instead, it says:

You can use a defined pool of material for a defined purpose — if you pay.

We already see this in:

  • Music royalties (e.g. APRA, ASCAP, BMI)

  • Broadcast and public performance rights

  • Compulsory licensing of cover songs in the US

Could the same apply to AI?

What the Law Says (or Doesn’t)

AI companies argue that training their models on public material is “fair use” (US) or doesn’t involve “substantial reproduction” (Australia), since no exact copy of the work appears in the output.

However, copies are made during scraping, and substantial parts are almost certainly reproduced during the training process or embedded in derivative outputs — either of which could pose problems under both US and Australian copyright law.

But courts are still catching up.

Pending or recent litigation:

  • The New York Times v OpenAI: scraping articles to train GPT

  • Sarah Silverman v Meta: use of copyrighted books

  • Getty Images v Stability AI: image training and watermark copying

None of these cases have yet resolved the underlying issue:

Is training AI on copyrighted works a use that requires permission — or payment?

What a Blanket Licence Would Do

Under a blanket licence system:

  • Training (and copying or development of derivatives for that purpose) would be lawful, as long as the AI provider paid into a fund

  • Creators and rights holders would receive royalty payments, either directly or via a collecting society

  • A legal baseline would be established, reducing lawsuits and uncertainty

This would mirror systems used in broadcasting and streaming, where revenue is pooled and distributed based on usage data.
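
As a rough illustration of the plumbing, here is a minimal sketch of how a pooled licence fee might be divided. It assumes, hypothetically, that each rights holder can be credited with a measurable share of training usage; as the next section explains, that attribution is the hard part. All figures and names are invented:

```python
def distribute_pool(pool, usage_weights):
    """Split a blanket-licence pool pro rata by usage weight, much as
    collecting societies split broadcast royalties by airplay data."""
    total = sum(usage_weights.values())
    return {holder: pool * w / total for holder, w in usage_weights.items()}

# Hypothetical figures: a $10m annual pool, with weights standing in for
# each holder's attributed share of the training corpus (e.g. token counts).
payouts = distribute_pool(10_000_000, {
    "news publisher": 4.2,
    "stock image library": 2.5,
    "independent blogs": 1.3,
})
# => $5.25m, $3.125m and $1.625m respectively
```

The formula is trivial; everything contentious lives in the weights, which is exactly where the educational statutory licences ended up fighting their battles too.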

Challenges Ahead

1. Who Gets Paid?

Not all data is traceable or attributed. Unlike Spotify, which tracks each song streamed, AI models ingest billions of unlabelled tokens.

How do you determine who owns what — and which parts — of material abstracted, fragmented, and stored somewhere in the cloud?

2. How Much?

Rates would need to reflect:

  • The extent of use

  • The importance of the material to the training corpus

  • The impact on the original market for the work

This is tricky when a model is trained once and then used forever.

3. Which Countries?

Copyright laws vary. A licence in Australia might mean nothing in the US.

A global licence would require multilateral cooperation — and likely WIPO involvement.

Legal Precedent: Australia’s Safe Harbour and Statutory Licensing Models

Australia’s own statutory licensing schemes (e.g. educational copying under Part VB of the Copyright Act) show that:

  • Lawmakers can mandate payment for certain uses,

  • Even if individual rights holders never negotiated the terms,

  • Provided it’s reasonable, transparent, and compensatory.

But those systems also brought:

  • Bureaucratic collection processes

  • Contentious allocation models

  • Endless legal wrangling over definitions (What is “reasonable portion”? What qualifies as “educational purpose”?)

Expect the same for AI.

Creators and Innovation: A Balancing Act

For creators:

  • A blanket licence offers recognition and payment

  • It helps avoid the current “scrape now, settle later” model

  • It could fund new creative work rather than hollowing out industries

For innovators:

  • It provides legal certainty

  • Encourages investment in AI tools

  • Reduces the risk of devastating retroactive litigation

But if set up poorly, it could:

  • Be exclusionary (if licensing fees are too high for small players)

  • Be ineffective (if rights aren’t properly enforced or distributed)

  • Or be too slow to match AI’s pace

What’s Next?

Australia’s Copyright Act doesn’t currently recognise training as a specific form of use. But policy reviews are under way in multiple countries, including by:

  • The UK IPO

  • The European Commission

  • The US Copyright Office

  • And here in Australia, the Attorney-General’s Department is conducting consultations through 2024–25 on how copyright law should respond to AI

Creators, platforms, and governments are all watching the courts. But if consensus forms around the need for structured compensation, a statutory blanket licence might just be the solution.


Bottom Line

We’ve built AI on the backs of human creativity. The question isn’t whether to stop AI — it’s how to make it fair.

A blanket licence won’t solve every problem. But it could be the start of a system where creators aren’t left behind — and where AI learns with permission, not just ambition.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

June 19, 2025 by Scott Coulthart

Maxim Forgets the Maxim, Chases Nuclear but Bombs

Maxim Media, publisher of the well-known men’s lifestyle magazine and brand MAXIM, had minimal success in Maxim Media Inc v Nuclear Enterprises Pty Ltd [2024] FCA 1443, in which it sought urgent Federal Court orders to shut down an Australian company allegedly riding on its name — through magazines, domain names, destination tours, and model management services.

Despite the explosive accusations, the Court delivered a much more subdued response.

Despite having delayed for some time in coming to court, Maxim applied for interlocutory relief, seeking immediate injunctions to restrain:

  • Use of the MAXIM name in any form in Australia;

  • Distribution of a competing Maxim Magazine;

  • Operation of maxim.com.au, destinationmaxim.com.au, and related social handles;

  • Any further unauthorised brand use.

The application relied on trade marks registered in 2020 and 2023 — and on allegations that the Australian respondents, including Nuclear Enterprises and Michael Downs, had no licence or authority to use the name.

Justice Rofe refused the injunction — not because the claim was doomed, but because:

  • Ownership and licensing rights hadn’t been clearly established yet;

  • There were substantial factual disputes that needed a full trial;

  • There was no persuasive case for irreparable harm that couldn’t be remedied later;

  • The balance of convenience didn’t justify urgent intervention — particularly given Maxim’s delay in seeking relief (ironically, Maxim had ignored the equitable maxim regarding laches).

The proceeding will now be allocated to a docket judge for a full hearing.

The main takeaways here are:

  • Interlocutory relief isn’t automatic, even with a registered trade mark — the applicant still needs clean title, urgency, and evidence of irreparable harm.

  • Delays hurt. The longer you wait to challenge a rival’s use of your mark, the harder it is to convince a court that urgent action is needed.

The case could still blow Maxim’s way at a final hearing — but for now, Nuclear gets to keep exploding – and the fallout will be huge.

Filed Under: Digital Law, IP, Trade Marks Tagged With: Digital Law, IP, Trade Marks

May 21, 2025 by Scott Coulthart

Age Check Please – Australia’s Social Media Age Trial Steps Up

If you thought “what’s your date of birth?” was just an annoying formality, think again. Australia is now deep into a world-first trial of age verification tech for social media — and the implications for platforms, privacy, and policy will be real.

It’s official: Australia is no longer just talking about age restrictions on social media — it’s testing them. In what’s being described as a world-first, the federal government earlier this year launched the Age Assurance Technology Trial, testing age assurance technologies across more than 50 platforms, including heavyweights like Meta, TikTok and Snapchat.

The idea? To test whether it’s technically (and legally) viable — and proportionate — to verify a user’s age before letting them dive into algorithm-driven feeds, DMs, or digital chaos, especially on platforms known to attract kids and teens.

Now, as of mid-May, the trial is expanding — with school students in Perth and Canberra joining the test groups. The trial includes biometric screening (e.g. facial age estimation), document-based verification, and other machine-learning tools designed to assess age and detect users under 16 without necessarily collecting identifying information, in line with recommendations from the eSafety Commissioner and privacy reform proposals.

Initial results are reportedly encouraging, showing strong accuracy for detecting under-16 users. Some methods are accurate 90%+ of the time — but questions linger. How well do these tools work across diverse communities? How do they avoid discrimination? And perhaps most importantly: how do you balance age checks with user privacy?
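
For a sense of what "accurate enough" would have to mean in practice, here is an illustrative sketch (hypothetical data and names, not the trial's actual methodology) of the two error rates that matter more than headline accuracy: adults wrongly blocked, and under-16s wrongly let through, broken down by demographic group to surface discrimination:

```python
def under16_error_rates(samples, cutoff=16.0):
    """samples: (estimated_age, true_age, demographic_group) tuples.
    Returns per-group false-block and false-pass rates."""
    groups = {}
    for est, true_age, group in samples:
        g = groups.setdefault(group, {"blocked": 0, "adults": 0,
                                      "passed": 0, "minors": 0})
        if true_age >= cutoff:
            g["adults"] += 1
            g["blocked"] += est < cutoff   # 16+ user wrongly gated
        else:
            g["minors"] += 1
            g["passed"] += est >= cutoff   # under-16 wrongly admitted
    return {
        group: {
            "false_block_rate": g["blocked"] / max(g["adults"], 1),
            "false_pass_rate": g["passed"] / max(g["minors"], 1),
        }
        for group, g in groups.items()
    }

# Hypothetical readings from a facial age estimator:
rates = under16_error_rates([
    (17.2, 18, "group A"),   # adult correctly passed
    (15.1, 17, "group A"),   # adult wrongly blocked
    (16.3, 14, "group B"),   # under-16 wrongly admitted
])
```

A tool can be "90%+ accurate" overall while still failing badly on one of these rates for one community, which is exactly the discrimination question the trial has to answer.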

But this isn’t just a tech exercise — it’s a law-and-policy warm-up. With the Children’s Online Privacy Code set to drop by 2026, and eSafety pushing hard for age-based restrictions, the real question is: can you implement age gates that are privacy-preserving, non-discriminatory, and not easily gamed by a teenager with a calculator and Photoshop?

It’s a tough balance. On one hand, there’s real concern about children’s exposure to online harms. On the other, age verification at scale risks blowing out privacy compliance, embedding surveillance tech, and excluding legitimate users who don’t fit biometric norms.

The final report lands in June 2025, and platforms should expect regulatory consequences soon after. If the trial proves age verification is accurate, scalable, and privacy-compatible, you can bet on mandatory age checks becoming law by the end of the year.

Bottom line? If your platform’s UX depends on open access and anonymity, start thinking now about how that survives an incoming legal obligation to know more about your users: if not necessarily who they are, then at least how young they actually are (as opposed to how old they might claim to be).

Filed Under: Digital Law, Technology Tagged With: Digital Law, Technology
