IPMojo


Digital Law

June 24, 2025 by Scott Coulthart

Fair Use or Free Ride? The Case for an AI Blanket Licence

What if AI companies had to pay for the content they train on? Welcome to the next frontier in copyright law — where inspiration meets ingestion.

When AI companies train their models — whether for music, image generation, writing or video — they don’t do it in a vacuum. They train on us. Or more precisely: on our songs, our blogs, our art, our tweets, our books, our interviews.

They harvest it at scale, often scraped from the open web, with or without permission — and certainly without compensation.

This has prompted an increasingly vocal question from creators and content owners:

Shouldn’t we get paid when machines learn from our work?

The proposed answer from some corners: a blanket licensing regime.

What’s a Blanket Licence?

Nothing to do with bedding – a blanket licence is a pre-agreed system for legal reuse. It doesn’t ask for permission each time. Instead, it says:

You can use a defined pool of material for a defined purpose — if you pay.

We already see this in:

  • Music royalties (e.g. APRA, ASCAP, BMI)

  • Broadcast and public performance rights

  • Compulsory licensing of cover songs in the US

Could the same apply to AI?

What the Law Says (or Doesn’t)

AI companies argue that training their models on public material is “fair use” (US) or doesn’t involve “substantial reproduction” (Australia), since no exact copy of the work appears in the output.

However, copies are made during scraping, and substantial parts are almost certainly reproduced during the training process or embedded in derivative outputs — either of which could pose problems under both US and Australian copyright law.

But courts are still catching up.

Pending or recent litigation:

  • The New York Times v OpenAI: scraping articles to train GPT

  • Sarah Silverman v Meta: use of copyrighted books

  • Getty Images v Stability AI: image training and watermark copying

None of these cases have yet resolved the underlying issue:

Is training AI on copyrighted works a use that requires permission — or payment?

What a Blanket Licence Would Do

Under a blanket licence system:

  • Training (and copying or development of derivatives for that purpose) would be lawful, as long as the AI provider paid into a fund

  • Creators and rights holders would receive royalty payments, either directly or via a collecting society

  • A legal baseline would be established, reducing lawsuits and uncertainty

This would mirror systems used in broadcasting and streaming, where revenue is pooled and distributed based on usage data.
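To make the pooling idea concrete, here is a purely illustrative sketch of how a collecting society might split a licence fund pro rata by usage share, loosely modelled on broadcast royalty distribution. All names, figures and the usage metric (tokens of each holder's work in a training corpus) are hypothetical — real schemes would involve far messier attribution, as discussed below.

```python
# Illustrative only: pro-rata distribution of a pooled AI licence fund.
# All rights holders, usage figures and dollar amounts are hypothetical.

def distribute_pool(pool: float, usage: dict[str, float]) -> dict[str, float]:
    """Split `pool` among rights holders in proportion to their usage share."""
    total = sum(usage.values())
    return {holder: pool * share / total for holder, share in usage.items()}

# Hypothetical usage data: how much of each holder's material the corpus used
usage = {
    "Publisher A": 5_000_000,
    "Author B": 1_000_000,
    "Label C": 4_000_000,
}
payouts = distribute_pool(100_000.0, usage)  # $100k in pooled licence fees
```

Here Publisher A, with half the measured usage, would receive half the fund ($50,000). The hard part, of course, is producing the `usage` figures in the first place.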

Challenges Ahead

1. Who Gets Paid?

Not all data is traceable or attributed. Unlike Spotify, which tracks each song streamed, AI models ingest billions of unlabeled tokens.

How do you determine who owns what — and which parts — of material abstracted, fragmented, and stored somewhere in the cloud?

2. How Much?

Rates would need to reflect:

  • The extent of use

  • The importance of the material to the training corpus

  • The impact on the original market for the work

This is tricky when a model is trained once and then used forever.

3. Which Countries?

Copyright laws vary. A licence in Australia might mean nothing in the US.

A global licence would require multilateral cooperation — and likely WIPO involvement.

Legal Precedent: Australia’s Safe Harbour and Statutory Licensing Models

Australia’s own statutory licensing schemes (e.g. educational copying under Part VB of the Copyright Act) show that:

  • Lawmakers can mandate payment for certain uses,

  • Even if individual rights holders never negotiated the terms,

  • Provided it’s reasonable, transparent, and compensatory.

But those systems also brought:

  • Bureaucratic collection processes

  • Contentious allocation models

  • Endless legal wrangling over definitions (What is “reasonable portion”? What qualifies as “educational purpose”?)

Expect the same for AI.

Creators and Innovation: A Balancing Act

For creators:

  • A blanket licence offers recognition and payment

  • It helps avoid the current “scrape now, settle later” model

  • It could fund new creative work rather than hollowing out industries

For innovators:

  • It provides legal certainty

  • Encourages investment in AI tools

  • Reduces the risk of devastating retroactive litigation

But if set up poorly, it could:

  • Be exclusionary (if licensing fees are too high for small players)

  • Be ineffective (if rights aren’t properly enforced or distributed)

  • Or be too slow to match AI’s pace

What’s Next?

Australia’s Copyright Act doesn’t currently recognise training as a specific form of use. But policy reviews are under way in multiple countries, including by:

  • The UK IPO

  • The European Commission

  • The US Copyright Office

  • And here in Australia, the Attorney-General’s Department is conducting consultations through 2024–25 on how copyright law should respond to AI

Creators, platforms, and governments are all watching the courts. But if consensus forms around the need for structured compensation, a statutory blanket licence might just be the solution.


Bottom Line

We’ve built AI on the backs of human creativity. The question isn’t whether to stop AI — it’s how to make it fair.

A blanket licence won’t solve every problem. But it could be the start of a system where creators aren’t left behind — and where AI learns with permission, not just ambition.


June 19, 2025 by Scott Coulthart

Maxim Forgets the Maxim, Chases Nuclear but Bombs

Maxim Media, publisher of the well-known men’s lifestyle magazine and brand MAXIM, had minimal success in Maxim Media Inc. v Nuclear Enterprises Pty Ltd [2024] FCA 1443, in which it sought urgent Federal Court orders to shut down an Australian company allegedly riding on its name — through magazines, domain names, destination tours, and model management services.

Despite the explosive accusations, the Court delivered a much more subdued response.

Despite having delayed for some time in coming to court, Maxim applied for interlocutory relief, seeking immediate injunctions to restrain:

  • Use of the MAXIM name in any form in Australia;

  • Distribution of a competing Maxim Magazine;

  • Operation of maxim.com.au, destinationmaxim.com.au, and related social handles;

  • Any further unauthorised brand use.

The application relied on trade marks registered in 2020 and 2023 — and on allegations that the Australian respondents, including Nuclear Enterprises and Michael Downs, had no licence or authority to use the name.

Justice Rofe refused the injunction — not because the claim was doomed, but because:

  • Ownership and licensing rights hadn’t been clearly established yet;

  • There were substantial factual disputes that needed a full trial;

  • There was no persuasive case for irreparable harm that couldn’t be remedied later;

  • The balance of convenience didn’t justify urgent intervention — particularly given Maxim’s delay in seeking relief (ironically, Maxim had ignored the equitable maxim regarding laches).

The proceeding will now be allocated to a docket judge for a full hearing.

The main takeaways here are:

  • Interlocutory relief isn’t automatic, even with a registered trade mark — the applicant still needs clean title, urgency, and evidence of irreparable harm.

  • Delays hurt. The longer you wait to challenge a rival’s use of your mark, the harder it is to convince a court that urgent action is needed.

The case could still blow Maxim’s way at a final hearing — but for now, Nuclear gets to keep exploding – and the fallout will be huge.


May 21, 2025 by Scott Coulthart

Age Check Please – Australia’s Social Media Age Trial Steps Up

If you thought “what’s your date of birth?” was just an annoying formality, think again. Australia is now deep into a world-first trial of age verification tech for social media — and the implications for platforms, privacy, and policy will be real.

It’s official: Australia is no longer just talking about age restrictions on social media — it’s testing them. In what’s being described as a world-first, the federal government earlier this year launched the Age Assurance Technology Trial, testing age assurance technologies across more than 50 platforms, including heavyweights like Meta, TikTok and Snapchat.

The idea? To test whether it’s technically (and legally) viable — and proportionate — to verify a user’s age before letting them dive into algorithm-driven feeds, DMs, or digital chaos, especially on platforms known to attract kids and teens.

Now, as of mid-May, the trial is expanding — with school students in Perth and Canberra joining the test groups. The trial includes biometric screening (e.g. facial age estimation), document-based verification, and other machine-learning tools and tech designed to assess age and detect users under 16 without necessarily collecting identifying information, in line with recommendations from the eSafety Commissioner and privacy reform proposals.

Initial results are reportedly encouraging, showing strong accuracy for detecting under-16 users. Some methods are accurate 90%+ of the time — but questions linger. How well do these tools work across diverse communities? How do they avoid discrimination? And perhaps most importantly: how do you balance age checks with user privacy?

But this isn’t just a tech exercise — it’s a law-and-policy warm-up. With the Children’s Online Privacy Code set to drop by 2026, and eSafety pushing hard for age-based restrictions, the real question is: can you implement age gates that are privacy-preserving, non-discriminatory, and not easily gamed by a teenager with a calculator and Photoshop?

It’s a tough balance. On one hand, there’s real concern about children’s exposure to online harms. On the other, age verification at scale risks blowing out privacy compliance, embedding surveillance tech, and excluding legitimate users who don’t fit biometric norms.

The final report lands in June 2025, and platforms should expect regulatory consequences soon after. If the trial proves age verification is accurate, scalable, and privacy-compatible, you can bet on mandatory age checks becoming law by the end of the year.

Bottom line? If your platform’s UX depends on open access and anonymity, start thinking now about how that survives an incoming legal obligation to know more about your users: if not necessarily who they are, then at least how young they actually are (as opposed to how old they claim to be).




© Scott Coulthart 2025