October 28, 2025 by Scott Coulthart

AI Training in Australia: Why a Mandatory Licence Could Be the Practical Middle Ground

Over the weekend the Australian Government finally drew a line in the sand: no special copyright carve-out to let AI developers freely train on Australians’ creative works. In rejecting a broad text-and-data-mining (TDM) exception, the Attorney-General signalled that any reform must protect creators first, and that “sensible and workable solutions” are the goal. Creators and peak bodies quickly welcomed the stance; the TDM exception floated by the Productivity Commission in August met fierce resistance from authors, publishers, music and media groups.

So where to from here? One pragmatic path is a mandatory licensing regime for AI training: no free use; transparent reporting; per-work remuneration; and money flowing to the rightsholders who opt in (or register) to be paid. Below I sketch how that could work in Australia, grounded in our existing statutory licensing DNA.


What just happened (and why it matters)

  • Government position (27 Oct 2025): The Commonwealth has ruled out a new TDM exception for AI training at this time and instead is exploring reforms that ensure fair compensation and stronger protections for Australian creatives. The Copyright and AI Reference Group (CAIRG) continues to advise, with transparency and compensation high on the agenda.

  • The alternative that was floated: In August, the Productivity Commission suggested consulting on a TDM exception to facilitate AI. That proposal drew a rapid backlash from creators, who argued it would amount to uncompensated mass copying.

  • The direction of travel: With an exception off the table, the policy energy now shifts to licensing — how to enable AI while paying creators and bringing sunlight to training data.


Australia already knows how to do “copy first, pay fairly”

We are not starting from scratch. Australia’s Copyright Act has long used compulsory (statutory) licences to reconcile mass, socially valuable uses with fair payment:

  • Education: Part VB/related schemes allow teachers to copy and share text and images for students, in return for licence fees distributed to rightsholders.

  • Broadcast content for education & government: Screenrights administers statutory licences for copying and communicating broadcast TV/radio content by educators and government agencies, with royalties paid out to rightsholders.

These schemes prove a simple point: when individual permissions are infeasible at scale, mandatory licensing with collective administration can align public interest and creator remuneration.


A mandatory licence for AI training: the core design

Scope

The scope of a mandatory licence regime would need to cover the reproduction and ingestion of copyright works for the purpose of training AI models (foundation and domain-specific).

To ensure it doesn’t go too far, it would need to exclude public distribution of training copies. Output uses would remain governed by ordinary copyright (no licence for output infringement, style-cloning or substitutional uses).

Ideally, the licence would cover all works protected under the Copyright Act 1968 (literary, artistic, musical, dramatic, films, sound recordings, broadcasts), whether online or offline, Australian or foreign (subject to reciprocity).

Mandatory

The licence would be mandatory for any developer (or deployer) who assembles or fine-tunes models using copies of protected works (including via third-party dataset providers).

Absent a specific free-to-use status (e.g. CC-BY with TDM permission or public domain), all AI training using covered works would require a licence and reporting.

Transparency/Reporting

Licensees would be required to maintain auditable logs identifying sources used (dataset manifests, crawling domains, repositories, catalogues).

They would also be required to provide regular transparency reports to the regulator and collecting society, with confidential treatment for genuinely sensitive items (trade secrets protected but not a shield for non-compliance). CAIRG has already identified copyright-related AI transparency as a live issue—this would operationalise it.
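
To make that concrete, here is a rough sketch (in Python, purely for illustration) of the kind of per-source manifest record a licensee might keep. The field names, identifier types and use of a hashed URL are my assumptions for the purpose of discussion, not anything prescribed by CAIRG or by the proposal above.

```python
# Illustrative sketch only: one possible shape for an auditable dataset-manifest
# record. Field names and identifier types are assumptions for discussion,
# not a prescribed schema.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ManifestRecord:
    source_url: str                 # crawled domain, repository or catalogue entry
    url_hash: str                   # hashed URL, so logs can be shared without exposing full crawl lists
    identifiers: dict = field(default_factory=dict)  # e.g. {"ISBN": "...", "DOI": "..."}
    licence_status: str = "unknown"                  # e.g. "covered", "CC-BY", "public-domain"
    ingested_at: str = ""
    work_count: int = 0


def make_record(url: str, identifiers: dict, licence_status: str, work_count: int) -> ManifestRecord:
    return ManifestRecord(
        source_url=url,
        url_hash=hashlib.sha256(url.encode("utf-8")).hexdigest(),
        identifiers=identifiers,
        licence_status=licence_status,
        ingested_at=datetime.now(timezone.utc).isoformat(),
        work_count=work_count,
    )


if __name__ == "__main__":
    record = make_record(
        "https://example.org/catalogue/novel-123",  # hypothetical source
        {"ISBN": "978-0-000-00000-0"},
        "covered",
        1,
    )
    print(json.dumps(asdict(record), indent=2))  # one line of an auditable manifest
```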

Register

A register of creators/rightsholders would be established with the designated collecting society (or societies) to receive distributions.

All unclaimed funds would be held and later distributed via usage-based allocation rules (with rolling claims windows), mirroring existing statutory practice in education/broadcast licences.

Rates

Setting rates and allocating royalties would be a little more complex. One way to do that would be to blend:

  1. Source-side weighting (how much of each catalogue was ingested, adjusted for “substantial part” analysis); and

  2. Impact-side proxies (e.g. similarity retrieval hits during training/validation; reference counts in tokenizer vocabularies; contribution metrics from dataset cards).

Rates could be set by Copyright Tribunal-style determination or by periodic ministerial instrument following public consultation.
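
By way of illustration only, the blend comes down to simple arithmetic: weight each catalogue's share of what was ingested against its share of the impact proxies, then divide the pool accordingly. The sketch below (Python, with invented figures and an assumed 60/40 weighting) shows that arithmetic; it is not a proposed tariff or methodology.

```python
# Illustrative arithmetic only: blending source-side ingestion shares with
# impact-side proxy shares to allocate a royalty pool. Weights and figures
# are invented for the example, not proposed rates.

def blended_shares(ingestion: dict, impact: dict, w_source: float = 0.6, w_impact: float = 0.4) -> dict:
    """Return each catalogue's share of the pool as a fraction (shares sum to 1)."""
    total_ingested = sum(ingestion.values())
    total_impact = sum(impact.values())
    shares = {}
    for catalogue in ingestion:
        source_share = ingestion[catalogue] / total_ingested
        impact_share = impact.get(catalogue, 0) / total_impact
        shares[catalogue] = w_source * source_share + w_impact * impact_share
    return shares


if __name__ == "__main__":
    ingestion = {"Publisher A": 4_000_000, "Label B": 1_000_000}  # e.g. works or tokens ingested
    impact = {"Publisher A": 300, "Label B": 700}                 # e.g. similarity-retrieval hits
    pool = 10_000_000  # hypothetical annual licence pool in dollars
    for catalogue, share in blended_shares(ingestion, impact).items():
        print(f"{catalogue}: {share:.2%} -> ${share * pool:,.0f}")
```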

Opt out/in

In this proposal, all works would be covered by default on a “copy first, pay fairly” basis – replicating the current education/broadcast models and avoiding a data black market.

Into that could be layered an opt-out right for rightsholders who object on principle (with enforceable dataset deletion duties).

An added twist could be the inclusion of opt-in premium tiers, where, for example, native-format corpora or pre-cleared archives would be priced above the baseline.

Small model & research safe harbours

A de minimis / research tier for non-commercial, low-scale research could be applied with strict size and access limits (registered institutions; no commercial deployment) to keep universities innovating without trampling rights.

Enforcement

Civil penalties could be imposed for unlicensed training, with aggravated penalties for concealment or falsified dataset reporting.

The regulator/collecting society could also be given audit powers, with privacy and trade-secret safeguards.


Governance: who would run it?

Australia already has experienced collecting societies and government infrastructure:

  • Text/image sector: Copyright Agency (education/government experience, distribution pipelines).

  • Screen & broadcast: Screenrights (large-scale repertoire matching, competing claims processes).

  • Music (for audio datasets): APRA AMCOS/PPCA (licensing, cue sheets, ISRC/ISWC metadata).

The Government could designate a lead collecting society per repertoire (text/image; audio; AV) under ministerial declaration, with a single one-stop portal to keep compliance simple.


Why this beats both extremes

Versus a TDM exception (now rejected):

  • Ensures real money flows to creators, not just “innovation” externalities.

  • Reduces litigation risk for AI companies by replacing guesswork about “fair dealing/fair use” with clear rules and receipts.

Versus a pure consent-only world:

  • Avoids impossible transaction costs of millions of one-off permissions.

  • Preserves competition by allowing local model builders to license at predictable rates instead of being locked out by big-tech private deals.


Practical details to get right (and how to solve them)

  1. Identifiability of works inside massive corpora

    • Require dataset manifests and hashed URL lists on ingestion; favour sources with reliable identifiers (ISBN/ISSN/DOI/ISRC/ISWC).

    • Permit statistical allocation where atom-level matching is infeasible, backed by audits.

  2. Outputs vs training copies

    • This licence covers training copies only. Output-side infringement, passing-off, and “style cloning” remain governed by ordinary law (and other reforms). Government focus on broader AI guardrails continues in parallel.

  3. Competition & concentration

    • Prevent “most favoured nation” clauses and ensure FRAND-like access to the scheme so smaller labs can participate.

  4. Privacy & sensitive data

    • Exclude personal information categories by default; align with privacy reforms and sectoral data controls.

  5. Cross-border reciprocity

    • Pay foreign rightsholders via society-to-society deals; receive for Australians used overseas, following established collecting society practice.


How this could be enacted fast

  • Amend the Copyright Act 1968 (Cth) to insert a new Part establishing an AI Training Statutory Licence, with regulation-making power for:

    • eligible uses;

    • reporting and audit;

    • tariff-setting criteria;

    • distribution rules and claims periods;

    • penalties and injunctions for non-compliance.

  • Designate collecting societies by legislative instrument.

  • Set up a portal with standard dataset disclosure templates and quarterly reporting.

  • Transitional window (e.g., 9–12 months) to allow existing models to come into compliance (including back-payment or corpus curation).


What this could mean for your organisation (now)

  • AI developers & adopters: Start curating dataset manifests and chain-of-licence documentation. If your vendors can’t or won’t identify sources, treat that as a red flag.

  • Publishers, labels, studios, creators: Register and prepare your repertoire metadata so you’re discoverable on day one. Your leverage improves if you can prove usage and ownership.

  • Boards & GCs: Build AI IP risk registers that assume licensing is coming, not that training will be exempt. The government’s latest signals align with this.


Bottom line

Australia has rejected the “train now, pay never” pathway. The cleanest middle ground is not a loophole, but a licence with teeth: mandatory participation for AI trainers, serious transparency, and fair money to the people whose works are powering the models.

We already run national-scale licences for education and broadcast. We can do it again for AI—faster than you think, and fairer than the alternatives.

Filed Under: AI, Copyright, IP, Regulation Tagged With: AI, Copyright, IP, Regulation

October 21, 2025 by Scott Coulthart

AI-Generated Works & Australian Copyright — What IP Owners Need to Know

Artificial intelligence isn’t just a tool anymore — it’s a collaborator, a co-author, a designer, a composer, and sometimes, a headache. As generative AI models keep reshaping creative industries, the question for lawyers, founders, and creators is simple: Who owns what when AI helps create it?

The Australian Position: Still (Mostly) Human

Under Australian copyright law, protection only arises for an “original work” that has a human author. Section 32 of the Copyright Act 1968 (Cth) still assumes that authorship is a human act — one involving independent intellectual effort and sufficient human skill and judgment.

That means:

  • If an AI system generates a work entirely on its own, it’s not a “work” under the Act.

  • If a human uses AI as a creative aid, and the human’s contribution involves real creative choice — not just typing a prompt — the resulting work may be protected.

  • But if the human’s input is minimal or mechanical, protection is shaky at best.

There have been plenty of official hints that legislative reform is coming, but for now, the position is clear: no human, no copyright.

Human + AI: Collaboration or Confusion?

In practice, most creative or technical outputs sit in a grey zone between human authorship and full automation.

Take these examples:

  • A marketing team uses Midjourney to create a logo based on multiple prompts and manual refinements.

  • A software developer uses GitHub Copilot to generate snippets, then curates and rewrites them.

  • A songwriter uses Suno or Udio to generate backing tracks, then layers vocals and structure.

In each case, the key question is: how much creative control did the human exert? Ownership (and enforceability) often depends less on the tool, and more on the human story behind the output.

Overseas Comparisons: Diverging Paths

  • United States: The U.S. Copyright Office has refused registration for purely AI-generated works (Thaler v Perlmutter), but allows copyright in human-authored parts of mixed works.

  • United Kingdom: Section 9(3) of the Copyright, Designs and Patents Act 1988 nominally attributes authorship to “the person by whom the arrangements necessary for the creation of the work are undertaken” — a possible (though untested) foothold for AI users.

  • Europe: The EU’s AI Act (Regulation (EU) 2024/1689) leans heavily toward transparency and data-source disclosure, rather than redefining authorship.

  • China and Japan: Both have started to recognise limited copyright protection for AI-assisted works where human creativity remains substantial.

Australia hasn’t chosen a path yet — but in my view, any eventual reform is likely to echo the UK or EU models rather than the U.S. approach.

What IP Owners Should Do Now

Until the law catches up, contractual clarity is your best protection.

  1. Define ownership up-front: Ensure contracts, employment agreements, and service terms specify who owns outputs “created with the assistance of AI tools.” Clauses that tie ownership to “human input” and “creative control” can avoid later disputes.

  2. Track human contribution: Keep records of prompts, edits, decisions, and drafts — proof of human creativity can become decisive evidence if ownership is challenged (a minimal record-keeping sketch follows this list).

  3. Check your inputs: Many AI systems are trained on datasets containing copyrighted material. Using their outputs commercially could expose you to infringement risk if the output is too close to the training data.

  4. Disclose AI use where relevant: For regulators (and some clients), transparency is now part of “reasonable steps” under APP 11 of the Privacy Act 1988 (Cth) and emerging AI-governance frameworks.

  5. Consider alternative protection: Where copyright may fail, consider trade marks, registered designs, or even confidential-information regimes for valuable AI-assisted outputs.
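
On point 2, record-keeping need not be elaborate. A minimal sketch, assuming nothing more than an append-only log file, might look like the following; the file name and fields are illustrative rather than any required format.

```python
# Minimal, illustrative sketch of one way to keep a contemporaneous record of
# prompts and human edits: an append-only JSON Lines file. The file name and
# fields are assumptions, not a required format.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_contribution_log.jsonl")  # hypothetical location


def log_step(tool: str, prompt: str, human_edits: str, decision_notes: str = "") -> None:
    """Append one time-stamped record of the human's creative input at this step."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "human_edits": human_edits,
        "decision_notes": decision_notes,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_step(
        tool="image generator",
        prompt="logo concept: stylised kookaburra, flat colours",
        human_edits="redrew beak, changed palette to brand colours, re-composed layout",
        decision_notes="rejected three earlier outputs as too generic",
    )
```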

The Next Frontier: Authorship by Prompt?

The next legal battleground may well be prompt authorship — whether the person crafting complex or structured prompts can claim copyright in the resulting output or in the prompt itself. Early commentary suggests yes, if the prompt reflects creative skill and judgment, but this remains untested in Australian courts.

Final Thoughts …

AI isn’t erasing copyright — it’s forcing it to evolve. For now, the safest position is that human-directed, human-shaped works remain protectable. Purely machine-generated ones don’t.

But the creative frontier is moving fast, and so is the legal one. If you’re commissioning, creating, or commercialising AI-generated content, assume that ownership must be earned through human input — and documented accordingly.

Australia’s copyright and privacy frameworks are both in flux. Expect further reform by mid-2026, when the government’s broader AI and IP Reform Roadmap is due.

Until then: contract it, track it, and own it.

Filed Under: AI, Copyright, Digital Law, IP Tagged With: AI, Copyright, Digital Law, IP

September 29, 2025 by Scott Coulthart

Deepfakes on Trial: First Civil Penalties Under the Online Safety Act

The Federal Court has handed down its first civil penalty judgment under the Online Safety Act 2021 (Cth), in eSafety Commissioner v Rotondo (No 4) [2025] FCA 1191.

Justice Longbottom ordered Anthony (aka Antonio) Rotondo to pay $343,500 in penalties for posting a series of non-consensual deepfake intimate images of six individuals, and for failing to comply with removal notices and remedial directions issued by the eSafety Commissioner.


Key Points

1. First penalties under the Online Safety Act

This is the first time civil penalties have been imposed under the Act, making it a landmark enforcement case.

The Commissioner sought both declarations and penalties, with the Court emphasising deterrence as its guiding principle.

2. Deepfakes squarely captured

The Court confirmed that non-consensual deepfake intimate images fall within the Act’s prohibition on posting “intimate images” without consent.

Importantly, it rejected Rotondo’s submission that only defamatory or “social media” posts should be captured.

3. Regulatory teeth and enforcement

Rotondo received notices under the Act but responded defiantly (“Get an arrest warrant if you think you are right”) before later being arrested by Queensland Police on related matters.

His lack of remorse and framing of deepfakes as “fun” aggravated the penalty.

4. Platform anonymity

Although the Commissioner did not object, the Court chose to anonymise the name of the website hosting the deepfakes — reflecting a policy judgment not to amplify harmful platforms.

That said, the various newspapers reporting on this story all revealed the website’s address, but noted it has now been taken down.

IP Mojo is choosing not to reveal that website.

5. Civil vs criminal overlap

Alongside the civil penalties, the Court noted criminal charges under Queensland’s Criminal Code.

This illustrates how civil, regulatory and criminal enforcement can run in parallel.


Why It Matters

  • For regulators: This case confirms the Act has teeth. Regulators can secure significant financial penalties even where offenders are self-represented.

  • For platforms: The Court’s approach signals that services hosting deepfakes are firmly in scope, even if located offshore.

  • For the public: The judgment highlights the law’s adaptability to AI-driven harms — and sends a clear deterrence message.

  • For practitioners: Expect more proceedings of this kind, particularly as the prevalence of AI-generated abuse grows.

Filed Under: AI, Digital Law, Privacy, Regulation, Technology Tagged With: AI, Digital Law, Privacy, Regulation, Technology

September 23, 2025 by Scott Coulthart

Australia’s courts are no longer sitting on the sidelines of the AI debate. Within just a few months of each other, the Supreme Courts of New South Wales, Victoria, and Queensland have each published their own rules or guidance on how litigants may (and may not) use generative AI.

The result? A patchwork of approaches — from strict prohibition to principles-based guidance to pragmatic policy.

NSW: Rules with Teeth

The NSW Supreme Court’s Practice Note SC Gen 23 is the most prescriptive of the three.

  • Affidavits & evidence: AI must not be used to generate affidavits, witness statements, or character references, and each document must disclose that AI was not used in its preparation.

  • Written submissions: AI assistance is permitted, but every citation and authority must be personally verified.

  • Confidential material: Suppression-protected or subpoenaed documents must not go near an AI tool unless strict safeguards are in place.

  • Experts: AI use in expert reports requires prior leave of the Court, with detailed disclosure obligations.

This is a black-letter approach: firm rules, mandatory disclosures, and penalties if ignored.

Victoria: Trust and Principles

The Victorian Supreme Court Guidelines (2025) are principles-based.

  • Disclosure of AI-use is encouraged: Especially for self-represented parties, transparency helps judges understand context.

  • Cautions are flagged: Generative AI can be inaccurate, out-of-date, incomplete, jurisdictionally inapplicable, or biased.

  • Responsibility is clear: Lawyers remain fully accountable for accuracy and proper basis. “The AI made me do it” is no defence.

  • Judicial use: Courts confirm AI is not to be used to prepare judgments or reasons.

It’s a trust-but-verify model, leaning on professional responsibility rather than outright bans.

Queensland: Pragmatic First Step

The Queensland Supreme Court AI Guidelines (2025) (which we covered in yesterday’s IP Mojo post) sit somewhere in the middle.

  • The tone is more pragmatic, focused on practicalities like accuracy, confidentiality, and proper verification.

  • The scope is less prescriptive than NSW, but more directive than Victoria.

  • The positioning signals that AI is already here, but stresses that obligations of candour and accuracy remain unchanged.

Qld’s approach reads more like a policy statement than binding rules — but it makes clear that AI use is under judicial scrutiny.

A Timeline of Moves

  • NSW: First issued SC Gen 23 in Nov 2024 (updated Jan 2025, effective Feb).

  • Victoria: Released guidelines in early 2025.

  • Queensland: Followed in Feb 2025 with its policy framework.

So while NSW and Victoria were the early movers, all three now have frameworks in play.

Three States, Three Philosophies

  • NSW – Prescriptive “hard law”: strict bans on AI for affidavits and witness evidence, leave required for experts, mandatory disclosure.

  • Victoria – Principles-based “soft law”: encourages disclosure, flags risks, trusts practitioner responsibility.

  • Queensland – Pragmatic policy: practical guidance, a verification and accuracy focus, less formal but watchful.

Why This Matters

For practitioners, this divergence isn’t academic:

  • Forum-specific compliance is now a reality — what’s permissible in Brisbane may be prohibited in Sydney.

  • Harmonisation vs patchwork: Will the states converge over time, or continue down separate paths?

  • Strategic implications: Could litigants engage in forum shopping if one jurisdiction feels more AI-friendly?

One thing is clear: Australian courts are acting fast, and the rules of the litigation game are being rewritten — jurisdiction by jurisdiction.

Filed Under: AI, Digital Law, Regulation Tagged With: AI, Digital Law, Regulation

September 22, 2025 by Scott Coulthart

From ChatGPT hallucinations to deepfakes in affidavits, Queensland’s courts have drawn a line in the sand.

Two new guidelines, released on 15 September 2025, map out how judges and court users should (and shouldn’t) use AI in litigation.

Two Audiences, One Big Message

Queensland is the latest Australian jurisdiction to publish formal, court-wide rules for generative AI – and it hasn’t stopped at one audience.

  • Judicial Officers: The first guideline is aimed at judges and tribunal members. It stresses confidentiality, accuracy, and ethical responsibility, and makes clear that AI must never be used to prepare or decide judgments.

  • Non-Lawyers: The second is written in plain English for self-represented litigants, McKenzie friends, lay advocates and employment advocates. It’s blunt: AI is not a substitute for a qualified lawyer.

Together, they show the courts know AI isn’t a future problem — it’s already walking into the courtroom (and it’s not hiding under the desk).

What the Courts Are Worried About

The guidelines read like a checklist of every AI-related nightmare scenario:

  • Hallucinations: Fabricated cases, fake citations, and quotes that don’t exist.

  • Confidentiality breaches: Entering suppressed or private information into a chatbot could make it “public to all the world”.

  • Copyright and plagiarism: Summarising textbooks or IP materials via AI may breach copyright.

  • Misleading affidavits: Self-reps relying on AI risk filing persuasive-looking documents that contain substantive errors.

  • Deepfakes: Courts warn of AI-generated forgeries in text, images and video.

The judicial guideline even suggests judges may need to ask outright if AI has been used when dodgy submissions appear — especially if the citations “don’t sound familiar”.

Consequences for Misuse

The courts aren’t treating this as academic theory. Practical consequences are built in:

  • Costs orders: Non-lawyers who waste court time by filing AI-generated fakes could be hit with costs.

  • Judicial oversight: Judges may require lawyers to confirm that AI-assisted research has been independently verified.

  • Expert reports: Experts may be asked to disclose the precise way they used AI in forming an opinion.

That’s real accountability — not just “guidance”.

Why IP Lawyers Should Care

For IP practitioners, one section stands out: the copyright and plagiarism warnings. Both sets of guidelines caution that using AI to re-package copyrighted works can infringe rights if the summary or reformulation substitutes for the original.

This matters for more than pleadings. It cuts across creative industries, publishing, and expert evidence. Expect to see copyright creeping into arguments about how AI-assisted evidence is prepared and presented.

The Bigger Picture

Queensland now joins Victoria and NSW in setting out a formal Australian approach to the use of AI in the courtroom. Courts in the US, UK and Canada have already started issuing AI guidance, and Australia is now part of the global conversation on how to balance innovation, access to justice, and the integrity of the judicial process.

For lawyers, the message is simple: use AI carefully, verify everything, and never outsource your professional responsibility. For litigants in person, the message is even simpler: AI is not your lawyer.

IP Mojo Takeaway

Queensland’s twin AI guidelines are a watershed moment. They bring generative AI out of the shadows and into the courtroom spotlight.

And whether you’re a judge, a barrister, or a self-rep with a smartphone, the new rules are clear: if you use AI in court, you own the risks.

At the time of publishing this post, you can find the guidelines here and here.

Filed Under: AI, Digital Law, Regulation Tagged With: AI, Digital Law, Regulation

August 7, 2025 by Scott Coulthart

Your Data, My Model? Why AI Ambitions Demand a Contract Check-Up

As AI capabilities become standard fare in SaaS platforms, software providers are racing to retrofit intelligence into their offerings. But if your platform dreams of becoming the next ChatXYZ, you may need to look not to your engineering team, but to your legal one.

The Problem with “Your Data”

Most software providers already have mountains of processed, transformed and inferred data—data shaped by customer inputs and platform logic. That data could supercharge AI development, from powering smarter dashboards to training predictive algorithms.

But here’s the rub: just because the data isn’t raw customer input doesn’t mean you can freely use it.

You may assume your standard software licence or SaaS agreement gives you all the rights you need. It probably doesn’t.

What Does the Contract Say?

Take a typical clause like this:

“The Customer grants the Provider a non-exclusive, irrevocable licence to use Customer Data to the extent reasonably required to provide the Services and for use in the Provider’s business generally.”

Even a broad “use in our business generally” clause won’t necessarily cover:

  • Using processed or aggregated data from multiple customers

  • Training an AI model whose outputs are shared with others

  • Commercialising new AI-powered features not contemplated in the original deal

And if the data is derived from inputs that were themselves confidential or personal, you’ve got even more legal landmines—Privacy Law, confidentiality obligations, and IP ownership issues if the customer contributed meaningful structure to the dataset.

Is Deidentification Enough (or even Allowed)?

A common fallback is: “We’ll just deidentify the data.” But that’s not a bulletproof strategy.

Under most privacy regimes, data is only considered deidentified if re-identification is not reasonably possible—a high bar, especially in small or specialised datasets. Even deidentified data may still be contractually protected if it originates from information the customer expects to be confidential.

More fundamentally, your contract might not give you the right to deidentify the data at all, unless required to do so by law.

Most software licences and SaaS agreements treat customer data as confidential information. Unless the contract expressly permits you to transform, aggregate or deidentify that data for secondary use (like AI training), doing so could itself amount to a breach. Moreover, if the data includes personal information, you’ll need to navigate privacy laws that impose their own limits—regardless of your contractual rights.

So before you start feeding your LLM, make sure you’re not breaching your SLA.

What to Look For (or Add)

If you’re a provider:

  • Check whether your agreement expressly allows you to create, collate, and use aggregated and deidentified customer data for AI training and product development.

  • Ensure the licence to use data extends beyond service delivery and includes improvements, analytics, and R&D.

  • Include language around data governance, privacy compliance, and ownership of AI outputs.

If you’re a customer:

  • Scrutinise clauses that allow use of data for “business purposes” or “analytics”—these may reach further than you think.

  • Consider negotiating limits, notice obligations, or opt-out rights when your data could be used to build broadly deployed AI systems—unless, of course, that can be turned to your advantage.

In the Age of AI, Contracts Are Training Data Too

Training AI on customer data can unlock immense value—but only if your agreements keep up. Your model is only as smart as your data. And your data rights are only as strong as your contract.

Filed Under: AI, Commercial Law, Contracts, Digital Law, Technology Tagged With: AI, Commercial Law, Contracts, Digital Law, Technology

