
IPMojo


Archives for June 2025

June 24, 2025 by Scott Coulthart

Fair Use or Free Ride? The Case for an AI Blanket Licence

What if AI companies had to pay for the content they train on? Welcome to the next frontier in copyright law — where inspiration meets ingestion.

When AI companies train their models — whether for music, image generation, writing or video — they don’t do it in a vacuum. They train on us. Or more precisely: on our songs, our blogs, our art, our tweets, our books, our interviews.

They harvest it at scale, often scraped from the open web, with or without permission — and certainly without compensation.

This has prompted an increasingly vocal question from creators and content owners:

Shouldn’t we get paid when machines learn from our work?

The proposed answer from some corners: a blanket licensing regime.

What’s a Blanket Licence?

Nothing to do with bedding – a blanket licence is a pre-agreed system for legal reuse. It doesn’t ask for permission each time. Instead, it says:

You can use a defined pool of material for a defined purpose — if you pay.

We already see this in:

  • Music royalties (e.g. APRA, ASCAP, BMI)

  • Broadcast and public performance rights

  • Compulsory licensing of cover songs in the US

Could the same apply to AI?

What the Law Says (or Doesn’t)

AI companies argue that training their models on public material is “fair use” (US) or doesn’t involve “substantial reproduction” (Australia), since no exact copy of the work appears in the output.

However, copies are made during scraping, and substantial parts are almost certainly reproduced during the training process or embedded in derivative outputs — either of which could pose problems under both US and Australian copyright law.

But courts are still catching up.

Pending or recent litigation:

  • The New York Times v OpenAI: scraping articles to train GPT

  • Sarah Silverman v Meta: use of copyrighted books

  • Getty Images v Stability AI: image training and watermark copying

None of these cases have yet resolved the underlying issue:

Is training AI on copyrighted works a use that requires permission — or payment?

What a Blanket Licence Would Do

Under a blanket licence system:

  • Training (and copying or development of derivatives for that purpose) would be lawful, as long as the AI provider paid into a fund

  • Creators and rights holders would receive royalty payments, either directly or via a collecting society

  • A legal baseline would be established, reducing lawsuits and uncertainty

This would mirror systems used in broadcasting and streaming, where revenue is pooled and distributed based on usage data.
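By way of illustration only — the scheme, names and figures below are invented, not drawn from any existing or proposed regime — a pooled blanket-licence fund might be distributed pro rata by each rights holder's estimated share of the training corpus:

```python
# Hypothetical sketch: distributing a pooled AI-training licence fund
# in proportion to each rights holder's estimated usage weight.
# All holder names, weights and dollar figures are invented for illustration.

def distribute_pool(fund: float, usage_weights: dict[str, float]) -> dict[str, float]:
    """Split a licence fund pro rata by estimated share of the training corpus."""
    total = sum(usage_weights.values())
    return {holder: fund * weight / total for holder, weight in usage_weights.items()}

# e.g. a $1,000,000 annual fund split across three fictional rights holders
payouts = distribute_pool(1_000_000, {
    "news_publisher": 0.5,    # estimated share of tokens ingested
    "image_library": 0.25,
    "music_catalogue": 0.25,
})
print(payouts)
# {'news_publisher': 500000.0, 'image_library': 250000.0, 'music_catalogue': 250000.0}
```

The hard part, of course, is not the arithmetic but the weights — which is precisely the attribution problem discussed below.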

Challenges Ahead

1. Who Gets Paid?

Not all data is traceable or attributed. Unlike Spotify, which tracks each song streamed, AI models ingest billions of unlabeled tokens.

How do you determine who owns what — and which parts — of material abstracted, fragmented, and stored somewhere in the cloud?

2. How Much?

Rates would need to reflect:

  • The extent of use

  • The importance of the material to the training corpus

  • The impact on the original market for the work

This is tricky when a model is trained once and then used forever.

3. Which Countries?

Copyright laws vary. A licence in Australia might mean nothing in the US.

A global licence would require multilateral cooperation — and likely WIPO involvement.

Legal Precedent: Australia’s Safe Harbour and Statutory Licensing Models

Australia’s own statutory licensing schemes (e.g. educational copying under Part VB of the Copyright Act) show that:

  • Lawmakers can mandate payment for certain uses,

  • Even if individual rights holders never negotiated the terms,

  • Provided it’s reasonable, transparent, and compensatory.

But those systems also brought:

  • Bureaucratic collection processes

  • Contentious allocation models

  • Endless legal wrangling over definitions (What is “reasonable portion”? What qualifies as “educational purpose”?)

Expect the same for AI.

Creators and Innovation: A Balancing Act

For creators:

  • A blanket licence offers recognition and payment

  • It helps avoid the current “scrape now, settle later” model

  • It could fund new creative work rather than hollowing out industries

For innovators:

  • It provides legal certainty

  • Encourages investment in AI tools

  • Reduces the risk of devastating retroactive litigation

But if set up poorly, it could:

  • Be exclusionary (if licensing fees are too high for small players)

  • Be ineffective (if rights aren’t properly enforced or distributed)

  • Or be too slow to match AI’s pace

What’s Next?

Australia’s Copyright Act doesn’t currently recognise training as a specific form of use. But policy reviews are under way in multiple countries, including by:

  • The UK IPO

  • The European Commission

  • The US Copyright Office

  • And here in Australia, the Attorney-General’s Department is conducting consultations through 2024–25 on how copyright law should respond to AI

Creators, platforms, and governments are all watching the courts. But if consensus forms around the need for structured compensation, a statutory blanket licence might just be the solution.


Bottom Line

We’ve built AI on the backs of human creativity. The question isn’t whether to stop AI — it’s how to make it fair.

A blanket licence won’t solve every problem. But it could be the start of a system where creators aren’t left behind — and where AI learns with permission, not just ambition.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

June 24, 2025 by Scott Coulthart

What Didn’t Happen (Yet): The Privacy Reforms Still Waiting in the Wings

You could be forgiven for thinking Australia’s privacy law just had its big moment — and it did. But don’t get too comfortable. What we’ve seen so far from the December 2024 amendments to the Privacy Act 1988 (Cth) is just Round 1.

Welcome to the final instalment of our 9-part Privacy 2.0 series.

There’s a long queue of proposed changes that didn’t make it into the latest legislation, many of them quietly simmering in government inboxes, consultation drafts and “agreed in principle” footnotes.

Some of these postponed reforms could reshape the privacy landscape even more profoundly than the current crop. If you’re trying to future-proof your compliance or understand where the law is going next, here’s what to watch.

1. The Small Business Exemption — Still Alive (for Now)

Right now, businesses with an annual turnover under $3 million are generally exempt from the Privacy Act. That’s tens of thousands of data-handling entities with zero formal privacy obligations. The reform process flagged this as outdated — and it’s clear the exemption will eventually go. When it does, thousands of SMEs will be pulled into the privacy net for the first time. It’s not a question of if. It’s when.

2. Controllers vs Processors — Coming Soon to a Framework Near You

Unlike the GDPR and its overseas analogues, Australia’s privacy law still doesn’t distinguish between data “controllers” (who decide the purpose and means of processing) and “processors” (who process data on someone else’s behalf). That distinction brings clarity and proportionality in many overseas regimes. Expect pressure to harmonise with global norms — especially from businesses operating across borders who are tired of legal whiplash.

3. The Right to Object, Delete, Port — Not Yet, But On Deck

Australia still lacks a formal, standalone right to object to certain uses of data, to demand deletion (the famed “right to be forgotten”), or to port your data from one provider to another. These rights — core pillars of the GDPR — have been agreed to in principle and are popular with the public; adopting them would bring us closer to GDPR standards (and make life very interesting for adtech, fintech, and platform businesses).

4. De-Identified Data? Still A Grey Zone

The reform process acknowledged that re-identification of supposedly anonymous data is a real risk — and that de-identified information still needs regulation. But the law hasn’t caught up yet. Watch for future reforms to APPs 8 and 11 that would bring de-identified data into scope and make re-identification attempts a regulatory red flag.

5. Privacy by Design & Mandatory PIAs — Still Optional (for Now)

There was also discussion of codifying “privacy by design” and making Privacy Impact Assessments mandatory for high-risk activities. The idea? Embed privacy into planning, not just cleanup. It didn’t land this time, but expect it to return — particularly as AI, biometric tech and behavioural profiling go mainstream.


Bottom line? This is just the intermission. The Privacy Act is evolving — slowly, but deliberately — toward a framework that looks more like the GDPR and less like its 1980s self. Businesses that treat the current reforms as the finish line are missing the point. The smart ones are already adapting to what’s next.

That’s a wrap on our Privacy 2.0 reform series. If you’ve made it this far, congratulations — you now know more about privacy law than most of Parliament.

Now, go fix your privacy policy — and maybe tell your AI to behave while you’re at it.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 9, Regulation

June 23, 2025 by Scott Coulthart

🟫 Cantarella Bros v Lavazza: The Espresso Shot Heard Around the IP World

There’s a certain irony in watching a decades-long trade mark fight over a word that literally means “gold” end up in ashes.

After a turbulent three-year legal grind, Cantarella Bros v Lavazza has finally run its course — with the Full Federal Court siding squarely with Lavazza and the High Court rejecting Cantarella’s special leave application in June 2025.

At stake? Ownership of the word ORO — Italian for “gold” — as a trade mark for coffee.

🧾 ORO, Take Two: The Sequel to Modena

You might recall Modena, the 2014 High Court showdown where Cantarella successfully defended ORO as being inherently adapted to distinguish Cantarella’s goods from those of others. That win secured their mark’s survival from a descriptiveness challenge under s 41 of the Trade Marks Act 1995.

But Modena never tested ownership. And that’s where things have now unravelled.

Enter Molinari — an Italian roaster whose Caffè Molinari Oro blends were apparently in Australia before Cantarella’s first use. Lavazza, whose own Qualità Oro has long glittered on shelves, used this to challenge Cantarella’s ownership under section 58 of the Act.

⚖️ The Trial Decision (2023): Ownership Is Everything

In October 2023, Justice Yates in the Federal Court found that Molinari used the mark ORO in a trade mark sense in Australia as early as 1995 — a full year before Cantarella. That meant Cantarella wasn’t the first user, and thus not the true owner of the ORO mark.

Even though Molinari hadn’t used it themselves for years, the court found no clear evidence of abandonment.

The result? The ORO registrations were invalidated. No valid mark, no infringement.

🧭 The Appeal (2025): Nice Try, But Still No Gold

Cantarella ran multiple grounds on appeal. They challenged the trial judge’s acceptance of evidence, the interpretation of what constituted trade mark use, and even suggested they had become an “honest concurrent user” (which might have had flow-on effects allowing them to keep it registered).

But the Full Federal Court wasn’t buying it. It affirmed the trial findings — particularly that:

  • Molinari’s use of ORO was use as a trade mark,

  • Molinari’s rights had not been abandoned,

  • and Cantarella’s own arguments about honest concurrent use were too little, too late (they did not raise that argument at trial, so could not raise it as a new ground on appeal; the result might have been different had they done so).

The Court also dismissed Lavazza’s own cross-appeal on costs and distinctiveness. No party walked away with an espresso shot of victory on that front.

🏛️ High Court: Application Denied

On 12 June 2025, the High Court rejected Cantarella’s special leave bid — making the Full Court’s decision final.

It’s the second time Cantarella’s ORO mark has come before the High Court. But this time, the door was firmly closed.

🥊 Why It Matters

This is the latest in a string of cases reminding IP owners that first use means first rights — even if you think you’ve been using a mark for decades.

Some takeaways:

  • Section 58 (ownership) is a potent weapon in cancellation proceedings.

  • Evidence of early use — even murky invoices and decades-old packaging — can carry surprising weight.

  • A prior foreign user who supplied products into Australia through distributors can claim ownership if the mark was used as a badge of origin here.

  • The High Court’s Modena decision still stands, but it doesn’t immunise a mark from being struck out on ownership grounds.

☕ Final Sip

Cantarella’s gold-standard run with ORO has come to an end. With the marks cancelled and infringement claims torpedoed, it’s back to the blend board.

Meanwhile, Lavazza walks away vindicated — perhaps with a slightly smug crema.

Filed Under: IP, Trade Marks Tagged With: IP, Trade Marks

June 23, 2025 by Scott Coulthart

Black Box, Meet Sunlight: Australia’s New Rules for Automated Decision-Making

Automated decision-making is everywhere now — in the background of your credit check, your insurance quote, your job application, even the price you see for a pair of shoes. For a while, this opaque machine logic operated in a legal blind spot: useful, profitable, and often inscrutable. But no longer.

Welcome to part 8 of our 9-part Privacy 2.0 series.

Australia’s latest privacy reforms are dragging automated decisions into the daylight. Starting 10 December 2026, organisations will be legally required to disclose in their privacy policies whether and how they use automated decision-making that significantly affects the rights of individuals. It’s the first real attempt under Australian law to impose some transparency obligations on algorithmic systems — not just AI, but any automation that crunches personal data and outputs a decision with real-world consequences.

So what do these changes demand? Two key things:

  1. Your privacy policy must (from 10 December 2026) clearly describe:

    • the types of personal information used in any substantially automated decision-making process, and

    • the kinds of decisions made using that information.

  2. It will apply wherever those decisions significantly affect an individual’s rights or interests — eligibility for credit, pricing, recruitment shortlists, fraud flags, algorithmic exclusions from essential services like housing or employment, and more. It’s not limited to full automation either. Even “mostly automated” systems — where human review is token or rubber-stamp — are caught.

The goal here is transparency, not prohibition. The law doesn’t say you can’t automate — but it does say you will have to own it, explain it, and flag it. That means no more hiding behind UX, generic privacy blurbs, or vague disclaimers. And if your systems are complex, decentralised, or involve third-party algorithms? No excuses — you’ll need to understand them anyway, and track them over time so your policy stays accurate.

In short, if your business relies on automated decisions in any meaningful way, you’ll need to:

  • Map those processes now (don’t wait until 2026),

  • Build a system for tracking how and when they change, and

  • Craft plain-language disclosures that are specific, truthful, and meaningful.
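As a purely illustrative sketch — the process names, fields and disclosure wording below are invented, not drawn from the Act or regulator guidance — the mapping exercise could be as simple as a structured register that doubles as the source for the privacy-policy disclosure:

```python
# Hypothetical sketch of an ADM register feeding a privacy-policy disclosure.
# Process names, fields and wording are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AutomatedDecisionProcess:
    name: str                       # internal label for the system
    personal_info_used: list[str]   # types of personal information ingested
    decisions_made: str             # kinds of decisions produced
    significant_effect: bool        # significantly affects rights or interests?

def draft_disclosure(register: list[AutomatedDecisionProcess]) -> str:
    """Generate plain-language disclosure lines for in-scope processes only."""
    lines = []
    for p in register:
        if p.significant_effect:  # only in-scope processes need disclosure
            lines.append(
                f"We use {', '.join(p.personal_info_used)} in a substantially "
                f"automated process to make decisions about {p.decisions_made}."
            )
    return "\n".join(lines)

register = [
    AutomatedDecisionProcess(
        name="credit_scoring",
        personal_info_used=["financial history", "employment details"],
        decisions_made="eligibility for credit",
        significant_effect=True,
    ),
    AutomatedDecisionProcess(
        name="newsletter_timing",
        personal_info_used=["email open times"],
        decisions_made="when to send marketing emails",
        significant_effect=False,  # no significant effect, so out of scope
    ),
]
print(draft_disclosure(register))
```

The point of keeping a live register like this is that when a third-party algorithm changes, you update one record and regenerate the disclosure — rather than discovering in 2027 that your privacy policy describes a system you retired.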

This isn’t just a ‘legal’ problem anymore — customers, regulators, and journalists are watching. No one wants to be the next brand caught auto-rejecting job applicants for having a gap year or charging loyal customers more than first-timers.

Tomorrow: we wrap our Privacy 2.0 series with what didn’t make it into the legislation (yet) — and where the next battle lines in Australian privacy reform are likely to be drawn.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 8, Regulation

June 20, 2025 by Scott Coulthart

XJS Falls Short: No Liquidated Damages Without a Completion Date

The NSW Court of Appeal has handed down a sharp reminder that contract clauses don’t enforce themselves — and that if you leave key blanks unfilled in a standard form agreement, you may be left with no recourse when things go sideways.

In XJS World Pty Ltd v Central West Civil Pty Ltd [2025] NSWCA 133, a property developer sued its former contractor over delays in civil works on a Bathurst land project, seeking liquidated and general damages. The contractor, Central West Civil, cross-claimed for unpaid variation invoices — and won.

XJS appealed. The Court of Appeal wasn’t persuaded.

🧱 In brief …

🔹 No Completion Date, No Liquidated Damages
XJS relied on a standard-form construction contract that allowed the parties to insert a “Date for Completion” in Part D. They didn’t. The Court held that where parties choose not to activate a key provision (by leaving it blank), they can’t later act as if it were operative.

“The parties had the option of setting a Date for Completion and they chose not to do so… that contractual purpose is not to be undermined by seeking to stretch inapposite words.”

🔹 Delays Not Proven to Be the Contractor’s Fault
XJS alleged delays were CWC’s doing — but failed to provide persuasive evidence to back that claim. The Court reinforced that in construction disputes, you bear the burden of proving not just delay, but culpable delay.

🔹 No Breach of Council Requirements
XJS argued that the contractor failed to meet certain council conditions. The Court found that any issues were technical and minor, and not sufficient to constitute a breach.

🔹 Termination Was Repudiatory
When XJS terminated the contract, it did so without legal entitlement — which made it the party in breach. The trial judge’s conclusion that this amounted to repudiation was upheld.

🧠 Key Takeaways

If you’re using standard form construction contracts:

  • Don’t leave blanks you plan to enforce later. If it’s not filled in, it may not apply.

  • Document delay causes precisely — especially when multiple contractors are involved.

  • Termination without cause is dangerous. Even in commercial stand-offs, you need a firm contractual footing to walk away.

In the end, XJS World discovered that skipping a few contract fields can cost a lot more than time — it can cost the whole case.

Filed Under: Commercial Law, Contracts Tagged With: Commercial Law, Contracts

June 20, 2025 by Scott Coulthart

Productivity or Pink Slips? The Rise of Agentic AI

Ok, enough with the scaremongering – let’s thrash it out.

Is AI going to replace us any time soon?

One perspective on the medium-term future:

    “Cancer is cured, the economy grows at 10% a year… and 20% of people don’t have jobs.”

So said Dario Amodei, CEO of Anthropic, in one of the most jarring soundbites to emerge from the AI sector this year. It’s not a dystopian movie pitch — it’s a plausible trajectory.

The Brisbane Times recently spotlighted how Telstra is leaning hard into this future, starting with the deployment of so-called Agentic AI – discrete AI tools able to carry out a range of tasks with minimal oversight. From automating customer service to writing code, the $54 billion telco is betting big that its next era won’t be driven just by fibre and frequency, but by “digital agents”: AI tools with autonomy to act, learn and optimise at scale.

While Telstra CEO Vicki Brady didn’t give hard numbers on expected job cuts, she did suggest the company’s workforce will likely be smaller by 2030. No bold claims — just quiet math. That’s the real face of the AI revolution: not mass firings, but jobs that never get hired in the first place.

Enter the Digital Employee

Nvidia’s Jensen Huang calls them “digital employees” — autonomous, specialised AI agents that handle roles from cybersecurity to network monitoring to legal summarisation. Unlike your flesh-and-blood team, they don’t sleep, unionise, or call in sick.

Tech giants like Microsoft, Canva, and Shopify are already eliminating roles that generative AI can perform faster, cheaper or more reliably. Shopify’s test for approving new hires? Prove the job can’t be done by AI.

Even highly paid software engineers and technical writers are now brushing up résumés — or joining unions. The shock isn’t just the job losses — it’s the redefinition of what work is.

The Illusion of Understanding

And yet — for all its prowess, there’s a lot that AI still doesn’t understand.

It doesn’t feel shame, pride, love, loyalty or regret. It doesn’t know the weight of a moral dilemma or the subtle ache of ambiguity. It doesn’t take responsibility. It hasn’t grown up anywhere. It’s very good at simulating humanity, but it hasn’t cracked what it means to be human.

Here are just a few areas where that matters:

• Moral Judgment & Empathy

AI doesn’t feel anything. It can mimic empathetic language, but it doesn’t understand suffering, joy, duty, shame, or dignity. That matters in:

  • law (e.g. sentencing, equitable remedies)

  • medicine (e.g. breaking bad news)

  • management (e.g. mentoring, handling conflict)

  • creative industries (e.g. stories that evoke genuine emotion)

• Contextual Wisdom and Ethical Trade-Offs

Humans weigh competing priorities in fluid, unquantifiable ways. A judge balancing public policy with individual hardship, or a parent navigating fairness between siblings — AI can model it, but not feel the stakes or bear the consequences.

• Lived Experience and Cultural Intuition

Even with perfect training data, AI lacks a body, a history, a community. It hasn’t known pain or formed personal relationships. It cannot speak authentically from or to a place of real cultural knowledge.

• Responsibility and Accountability

We trust humans with hard decisions because they can be held responsible. There’s no moral courage or ethical failure in the output of a large language model — only the illusion of one.

These aren’t just philosophical quibbles. They’re pressing questions for:

  • Law: Who bears blame when an AI agent misfires?

  • Healthcare: Who decides whether aggressive treatment is compassionate or cruel?

  • Leadership: Can you coach courage into someone via algorithm?

The Uncomfortable Part

AI already mimics a lot of that better than expected.  Consider:

• Empathy Simulation

GPT-4, Claude and others can write with stunning emotional acuity. They generate responses that feel empathetic, artistic or wise. It’s not authentic — but it’s increasingly indistinguishable, and often considered “good enough” by the humans receiving it.

• Decision-Making and Pattern Recognition at Scale

AI already outperforms humans at certain medical diagnoses, legal research, contract review and logistics. Its consistency and recall beat even expert practitioners — and that pushes decision-making downstream to human review of AI output.

• Creative Collaboration

AI is co-authoring books, scoring music, designing buildings. The raw ideas remain human-led (for now), but AI increasingly does the scaffolding. The assistant as co-creator is here.

• Agentic AI and Task Autonomy

Agentic AI can take a task, plan it, execute it, and evaluate the results. That’s edging close to synthetic intentionality. In limited domains, it already feels like independent judgment.

The Upshot

What AI can do — increasingly well — is mimic language, logic and even tone. It can co-author your policy doc, diagnose your anomaly, draft your contract (although still terribly at present – which, frankly, makes the contracts lawyer in me feel safe for now), and script your empathy.

But ask it to weigh competing values in an evolving ethical context — or even just draft a nuanced commercial agreement, conduct accurate scientific or legal research, or develop a strategy based on historical fact — and you quickly meet its limits.

Those edge cases still belong to humans in the loop.

So Who Owns the Output?

As businesses delegate more high-order tasks to autonomous agents, legal questions are multiplying:

  • Who owns the IP generated by a self-directed AI agent?
    → At this stage, probably no one — though ordinary IP rules apply to any human-developed improvements.

  • Can AI-created processes be patented or protected as trade secrets?
    → Not patented without significant human input — at least not under current Australian (or global) standards. Trade secrets? Only if the process was generated in confidential circumstances, and even then, likely only protected contractually — or by a very sympathetic equity judge with a soft spot for machines and a broad view of what counts as confidence.

  • Will the law begin to treat AI output as a kind of quasi-employee contribution?
    → Hard to say. But this author’s view: yes — we’re likely to see forms of legal recognition for things created wholly or partly by generative AI, especially as its use becomes ubiquitous.

Telstra’s ambition to shift from “network provider” to “bespoke experience platform” only deepens the stakes. If AI manages your venue’s mobile traffic to prioritise EFTPOS over selfies, who owns that logic? What’s the IP — and who gets paid?

We’re very likely to find out soon.

We May Not Be Replaced, But We Are Being Rerouted

What’s unfolding isn’t the erasure of human work — but its redistribution.

Jobs once seen as safe — legal drafting, coding, customer care — are being sliced up and reassembled into workflows where humans supervise, train or rubber-stamp what AI proposes.

We’re becoming fewer creators, more editors. Fewer builders, more overseers.

This is the heart of the AI transition: it’s not about making us obsolete.  It’s about making us team players — not to say optional — in a landscape of role transformation, driven by the pursuit of results.

That’s why this isn’t just an IP question. It’s a human one.

So yes — cancer might be cured. The economy might boom.  But as the digital employee clocks in, we’ll need more than productivity gains.

We’ll need new answers — about ownership, ethics, responsibility and value.  Not just in law, but in how we define a fair and meaningful future.

Filed Under: AI, IP, Technology Tagged With: AI, IP, Technology



© Scott Coulthart 2025