
IPMojo


October 28, 2025 by Scott Coulthart

AI Training in Australia: Why a Mandatory Licence Could Be the Practical Middle Ground

Over the weekend the Australian Government finally drew a line in the sand: no special copyright carve-out to let AI developers freely train on Australians’ creative works. In rejecting a broad text-and-data-mining (TDM) exception, the Attorney-General signalled that any reform must protect creators first, and that “sensible and workable solutions” are the goal. Creators and peak bodies quickly welcomed the stance; the TDM exception floated by the Productivity Commission in August met fierce resistance from authors, publishers, music and media groups.

So where to from here? One pragmatic path is a mandatory licensing regime for AI training: no free use; transparent reporting; per-work remuneration; and money flowing to the rightsholders who opt in (or register) to be paid. Below I sketch how that could work in Australia, grounded in our existing statutory licensing DNA.


What just happened (and why it matters)

  • Government position (27 Oct 2025): The Commonwealth has ruled out a new TDM exception for AI training at this time and instead is exploring reforms that ensure fair compensation and stronger protections for Australian creatives. The Copyright and AI Reference Group (CAIRG) continues to advise, with transparency and compensation high on the agenda.

  • The alternative that was floated: In August, the Productivity Commission suggested consulting on a TDM exception to facilitate AI. That proposal drew a rapid backlash from creators, who argued it would amount to uncompensated mass copying.

  • The direction of travel: With an exception off the table, the policy energy now shifts to licensing — how to enable AI while paying creators and bringing sunlight to training data.


Australia already knows how to do “copy first, pay fairly”

We are not starting from scratch. Australia’s Copyright Act has long used compulsory (statutory) licences to reconcile mass, socially valuable uses with fair payment:

  • Education: Part VB/related schemes allow teachers to copy and share text and images for students, in return for licence fees distributed to rightsholders.

  • Broadcast content for education & government: Screenrights administers statutory licences for copying and communicating broadcast TV/radio content by educators and government agencies, with royalties paid out to rightsholders.

These schemes prove a simple point: when individual permissions are infeasible at scale, mandatory licensing with collective administration can align public interest and creator remuneration.


A mandatory licence for AI training: the core design

Scope

The regime would need to cover the reproduction and ingestion of copyright works for the purpose of training AI models (both foundation and domain-specific).

To ensure the licence doesn’t go too far, it would need to exclude public distribution of training copies. Output uses would remain governed by ordinary copyright (no licence for output infringement, style-cloning or substitutional uses).

Ideally, the licence would cover all works protected under the Copyright Act 1968 (literary, artistic, musical, dramatic, films, sound recordings, broadcasts), whether online or offline, Australian or foreign (subject to reciprocity).

Mandatory

The licence would be mandatory for any developer (or deployer) who assembles or fine-tunes models using copies of protected works (including via third-party dataset providers).

Absent a specific free-to-use status (e.g. CC-BY with TDM permission or public domain), all AI training using covered works would require a licence and reporting.

Transparency/Reporting

Licensees would be required to maintain auditable logs identifying sources used (dataset manifests, crawling domains, repositories, catalogues).

They would also be required to provide regular transparency reports to the regulator and collecting society, with confidential treatment for genuinely sensitive items (trade secrets protected but not a shield for non-compliance). CAIRG has already identified copyright-related AI transparency as a live issue—this would operationalise it.

Register

A register of creators/rightsholders would be established with the designated collecting society (or societies) to receive distributions.

All unclaimed funds would be held and later distributed via usage-based allocation rules (with rolling claims windows), mirroring existing statutory practice in education/broadcast licences.

Rates

Setting rates and allocating royalties would be a little more complex. One way to do that would be to blend (a toy sketch follows the list):

  1. Source-side weighting (how much of each catalogue was ingested, adjusted for “substantial part” analysis); and

  2. Impact-side proxies (e.g. similarity retrieval hits during training/validation; reference counts in tokenizer vocabularies; contribution metrics from dataset cards).
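To make the blend concrete, here is a purely illustrative toy calculation in Python. Every name and number in it (the 60/40 split, the catalogue figures, the pool size) is an assumption made for the sketch, not a proposed tariff or anything flagged by CAIRG or the Government.

# Illustrative only: a toy blend of source-side weighting and impact-side
# proxies. All weights, figures and names below are assumptions for the
# sketch, not a proposed tariff.

def blended_shares(catalogues, source_weight=0.6, impact_weight=0.4):
    """Return each rightsholder's share of the annual royalty pool."""
    total_ingested = sum(c["ingested"] for c in catalogues.values())
    total_impact = sum(c["impact"] for c in catalogues.values())
    return {
        rid: source_weight * (c["ingested"] / total_ingested)
        + impact_weight * (c["impact"] / total_impact)
        for rid, c in catalogues.items()
    }

pool = 10_000_000  # hypothetical annual licence pool ($)
catalogues = {
    # "ingested" = works ingested, discounted for "substantial part" analysis
    # "impact" = composite proxy (retrieval hits, dataset-card metrics, etc.)
    "publisher_a": {"ingested": 120_000, "impact": 0.52},
    "label_b": {"ingested": 30_000, "impact": 0.31},
    "archive_c": {"ingested": 50_000, "impact": 0.17},
}
for rid, share in blended_shares(catalogues).items():
    print(f"{rid}: ${pool * share:,.0f}")

The point of blending is that pure volume rewards whatever was easiest to scrape, while pure impact rewards the works a model actually leans on; mixing the two hedges both distortions.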

Rates could be set by Copyright Tribunal-style determination or by periodic ministerial instrument following public consultation.

Opt out/in

In this proposal, all works would be covered by default on a “copy first, pay fairly” basis – replicating the current education/broadcast models and avoiding a data black market.

Into that could be layered an opt-out right for rightsholders who object on principle (with enforceable dataset deletion duties).

An added twist could be the inclusion of opt-in premium tiers, where, for example, native-format corpora or pre-cleared archives would be priced above the baseline.

Small model & research safe harbours

A de minimis / research tier for non-commercial, low-scale research could be applied with strict size and access limits (registered institutions; no commercial deployment) to keep universities innovating without trampling rights.

Enforcement

Civil penalties could be issued for unlicensed training; aggravated penalties for concealment or falsified dataset reporting.

The regulator/collecting society could also be given audit powers, with privacy and trade-secret safeguards.


Governance: who would run it?

Australia already has experienced collecting societies and government infrastructure:

  • Text/image sector: Copyright Agency (education/government experience, distribution pipelines).

  • Screen & broadcast: Screenrights (large-scale repertoire matching, competing claims processes).

  • Music (for audio datasets): APRA AMCOS/PPCA (licensing, cue sheets, ISRC/ISWC metadata).

The Government could designate a lead collecting society per repertoire (text/image; audio; AV) under ministerial declaration, with a single one-stop portal to keep compliance simple.


Why this beats both extremes

Versus a TDM exception (now rejected):

  • Ensures real money to creators, not just “innovation” externalities.

  • Reduces litigation risk for AI companies by replacing guesswork about “fair dealing/fair use” with clear rules and receipts.

Versus a pure consent-only world:

  • Avoids impossible transaction costs of millions of one-off permissions.

  • Preserves competition by allowing local model builders to license at predictable rates instead of being locked out by big-tech private deals.


Practical details to get right (and how to solve them)

  1. Identifiability of works inside massive corpora

    • Require dataset manifests and hashed URL lists on ingestion; favour sources with reliable identifiers (ISBN/ISSN/DOI/ISRC/ISWC). A toy manifest sketch follows this list.

    • Permit statistical allocation where atom-level matching is infeasible, backed by audits.

  2. Outputs vs training copies

    • This licence covers training copies only. Output-side infringement, passing-off, and “style cloning” remain governed by ordinary law (and other reforms). Government focus on broader AI guardrails continues in parallel.

  3. Competition & concentration

    • Prevent “most favoured nation” clauses and ensure FRAND-like access to the scheme so smaller labs can participate.

  4. Privacy & sensitive data

    • Exclude personal information categories by default; align with privacy reforms and sectoral data controls.

  5. Cross-border reciprocity

    • Pay foreign rightsholders via society-to-society deals; receive for Australians used overseas, following established collecting society practice.
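Returning to point 1 above, here is a minimal sketch of what a single auditable manifest record might look like, assuming a scheme that pairs hashed URLs with content hashes and any reliable identifiers. The field names and structure are invented for the example; nothing here is a prescribed schema.

# Illustrative only: one possible shape for an auditable manifest record.
# Field names and structure are assumptions, not a prescribed schema.

import hashlib
import json
from datetime import datetime, timezone

def manifest_entry(url, content, identifiers=None):
    """Record a hashed URL and content hash for audit, plus any reliable
    identifiers (ISBN/ISSN/DOI/ISRC/ISWC) where the source supplies them."""
    return {
        "url_sha256": hashlib.sha256(url.encode()).hexdigest(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "identifiers": identifiers or {},
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

entry = manifest_entry(
    "https://example.com/novel.txt",  # hypothetical source URL
    b"full text of the ingested work",  # placeholder content
    identifiers={"ISBN": "978-0-000-00000-0"},  # placeholder identifier
)
print(json.dumps(entry, indent=2))

Hashing the URL and content lets an auditor verify whether a claimed source was in the corpus without the licensee having to republish the underlying work, which sits comfortably with the confidentiality carve-outs discussed above.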


How this could be enacted fast

  • Amend the Copyright Act 1968 (Cth) to insert a new Part establishing an AI Training Statutory Licence, with regulation-making power for:

    • eligible uses;

    • reporting and audit;

    • tariff-setting criteria;

    • distribution rules and claims periods;

    • penalties and injunctions for non-compliance.

  • Designate collecting societies by legislative instrument.

  • Set up a portal with standard dataset disclosure templates and quarterly reporting.

  • Transitional window (e.g., 9–12 months) to allow existing models to come into compliance (including back-payment or corpus curation).


What this could mean for your organisation (now)

  • AI developers & adopters: Start curating dataset manifests and chain-of-licence documentation. If your vendors can’t or won’t identify sources, treat that as a red flag.

  • Publishers, labels, studios, creators: Register and prepare your repertoire metadata so you’re discoverable on day one. Your leverage improves if you can prove usage and ownership.

  • Boards & GCs: Build AI IP risk registers that assume licensing is coming, not that training will be exempt. The government’s latest signals align with this.


Bottom line

Australia has rejected the “train now, pay never” pathway. The cleanest middle ground is not a loophole, but a licence with teeth: mandatory participation for AI trainers, serious transparency, and fair money to the people whose works are powering the models.

We already run national-scale licences for education and broadcast. We can do it again for AI—faster than you think, and fairer than the alternatives.

Filed Under: AI, Copyright, IP, Regulation Tagged With: AI, Copyright, IP, Regulation

October 9, 2025 by Scott Coulthart

Privacy’s First Big Hit: Australian Clinical Labs Fined $5.8 Million for Data Breach Failures

When 86 gigabytes of patient data — including health, financial and identity information — hit the dark web after a ransomware attack, the fallout was always going to be brutal.

Now, in Australian Information Commissioner v Australian Clinical Labs Limited (No 2) [2025] FCA 1224, the Federal Court has handed down a $5.8 million penalty — marking the first civil penalty judgment under the Privacy Act.

And it’s a warning shot for every business holding personal information in Australia.


⚖️ The Case in a Nutshell

Australian Clinical Labs (ACL) — one of the country’s largest private pathology providers — bought Medlab Pathology in late 2021.

What it didn’t buy (or even check properly) were Medlab’s crumbling IT systems: unsupported Windows servers, weak authentication, no encryption, and logs that deleted themselves every hour.

In February 2022, the inevitable happened — a ransomware group calling itself “Quantum” infiltrated Medlab’s servers, exfiltrated 86GB of data, and dumped it online.

ACL’s response was painfully slow. Despite early signs of exfiltration, it:

  • Relied almost entirely on an external consultant’s limited review;

  • Concluded (wrongly) that no data had been stolen;

  • Ignored early warnings from the Australian Cyber Security Centre; and

  • Waited over three months before notifying the OAIC.


🧩 The Breaches

Justice Halley found ACL had seriously interfered with the privacy of 223,000 individuals through three major contraventions of the Privacy Act 1988 (Cth):

  1. Breach of APP 11.1 — Failure to take reasonable steps to protect personal information from unauthorised access or disclosure.

    • The Medlab systems were riddled with vulnerabilities.

    • ACL failed to identify or patch them after acquisition.

    • Overreliance on third-party providers compounded the problem.

  2. Breach of s 26WH(2) — Failure to carry out a reasonable and expeditious assessment of whether the incident was an eligible data breach.

    • ACL’s “assessment” was based on incomplete data and unsupported assumptions.

    • The Court called it unreasonable and inadequate.

  3. Breach of s 26WK(2) — Failure to notify the Commissioner as soon as practicable after forming the belief that an eligible data breach had occurred.

    • ACL delayed nearly a month after confirmation that personal and financial information was on the dark web.

Each breach amounted to a “serious interference with privacy” under s 13G, attracting civil penalties.


💰 The Penalty Breakdown

ACL agreed to pay a total of $5.8 million:

Contravention | Section | Penalty
Breach of APP 11.1 (223,000 contraventions, treated as one course of conduct) | s 13G(a) | $4.2 million
Failure to assess breach | s 26WH(2) | $800,000
Failure to notify OAIC | s 26WK(2) | $800,000
Total | | $5.8 million

ACL also agreed to pay $400,000 in costs.

While the theoretical maximum exceeded $495 billion (223,000 contraventions at the then-applicable maximum of $2.22 million each), the Court accepted the agreed penalty as being within the permissible range — particularly given ACL’s cooperation, remorse, and post-breach reforms.


⚙️ “Reasonable Steps” — The New Legal Standard

This judgment finally gives judicial colour to APP 11.1’s “reasonable steps” requirement.
Justice Halley said reasonableness must be assessed objectively, considering:

  • the sensitivity of the information;

  • the potential harm from unauthorised disclosure;

  • the size and sophistication of the entity;

  • the cyber risk landscape; and

  • any prior threats or attacks.

Critically, “reasonable steps” cannot be outsourced — delegation to an IT vendor does not discharge responsibility. ACL’s overreliance on StickmanCyber was no defence.


🚨 Why It Matters

This decision rewrites the playbook for privacy compliance in Australia:

  • Civil penalties are real — the OAIC now has judicial precedent for enforcement.

  • Each affected individual counts — the Court held that each person’s privacy breach is a separate contravention.

  • “Serious” breaches will be taken seriously — health and financial data, inadequate security, and systemic failures will all tip the scales.

  • M&A due diligence must cover cybersecurity — buying a business means inheriting its data liabilities.

  • Notification delays will cost you — the OAIC expects “as soon as practicable,” not weeks or months.


💡 IP Mojo Take

Privacy can no longer be treated as a mere paperwork exercise — it’s a governance test you can fail in the Federal Court.

This case cements privacy law as a compliance discipline with teeth.

The OAIC now has a roadmap for future actions — and the Court has made clear that “reasonable steps” means measurable, auditable, and proactive security governance.

For corporate Australia, this is ASIC v RI Advice for the health sector — but under the Privacy Act instead of the Corporations Act.

Expect to see:

  • Increased OAIC enforcement in healthcare, finance, and tech sectors;

  • Board-level scrutiny of data protection measures; and

  • Class actions waiting in the wings, armed with a judicial finding of “serious interference with privacy.”

The privacy bar has just been raised — permanently.

Filed Under: Digital Law, Privacy, Regulation Tagged With: Digital Law, Privacy, Regulation

September 29, 2025 by Scott Coulthart

Deepfakes on Trial: First Civil Penalties Under the Online Safety Act

The Federal Court has handed down its first civil penalty judgment under the Online Safety Act 2021 (Cth), in eSafety Commissioner v Rotondo (No 4) [2025] FCA 1191.

Justice Longbottom ordered Anthony (aka Antonio) Rotondo to pay $343,500 in penalties for posting a series of non-consensual deepfake intimate images of six individuals, and for failing to comply with removal notices and remedial directions issued by the eSafety Commissioner.


Key Points

1. First penalties under the Online Safety Act

This is the first time civil penalties have been imposed under the Act, making it a landmark enforcement case.

The Commissioner sought both declarations and penalties, with the Court emphasising deterrence as its guiding principle.

2. Deepfakes squarely captured

The Court confirmed that non-consensual deepfake intimate images fall within the Act’s prohibition on posting “intimate images” without consent.

Importantly, it rejected Rotondo’s submission that only defamatory or “social media” posts should be captured.

3. Regulatory teeth and enforcement

Rotondo received notices under the Act but responded defiantly (“Get an arrest warrant if you think you are right”) before later being arrested by Queensland Police on related matters.

His lack of remorse and framing of deepfakes as “fun” aggravated the penalty.

4. Platform anonymity

Although the Commissioner did not object, the Court chose to anonymise the name of the website hosting the deepfakes — reflecting a policy judgment not to amplify harmful platforms.

That said, the various newspapers reporting on this story revealed the website’s address, while noting it has since been taken down.

IP Mojo is choosing not to reveal that website.

5. Civil vs criminal overlap

Alongside the civil penalties, the Court noted criminal charges under Queensland’s Criminal Code.

This illustrates how civil, regulatory and criminal enforcement can run in parallel.


Why It Matters

  • For regulators: This case confirms the Act has teeth. Regulators can secure significant financial penalties even where offenders are self-represented.

  • For platforms: The Court’s approach signals that services hosting deepfakes are firmly in scope, even if located offshore.

  • For the public: The judgment highlights the law’s adaptability to AI-driven harms — and sends a clear deterrence message.

  • For practitioners: Expect more proceedings of this kind, particularly as the prevalence of AI-generated abuse grows.

Filed Under: AI, Digital Law, Privacy, Regulation, Technology Tagged With: AI, Digital Law, Privacy, Regulation, Technology

September 23, 2025 by Scott Coulthart

Australia’s courts are no longer sitting on the sidelines of the AI debate. Within just a few months of each other, the Supreme Courts of New South Wales, Victoria, and Queensland have each published their own rules or guidance on how litigants may (and may not) use generative AI.

The result? A patchwork of approaches — from strict prohibition to principles-based guidance to pragmatic policy.

NSW: Rules with Teeth

The NSW Supreme Court’s Practice Note SC Gen 23 is the most prescriptive of the three.

  • Affidavits & evidence: AI must not be used to generate affidavits, witness statements, or character references, and each document must disclose that AI was not used in its preparation.

  • Written submissions: AI assistance is permitted, but every citation and authority must be personally verified.

  • Confidential material: Suppression-protected or subpoenaed docs must not go near an AI tool unless strict safeguards exist.

  • Experts: AI use in expert reports requires prior leave of the Court, with detailed disclosure obligations.

This is a black-letter approach: firm rules, mandatory disclosures, and penalties if ignored.

Victoria: Trust and Principles

The Victorian Supreme Court Guidelines (2025) are principles-based.

  • Disclosure of AI-use is encouraged: Especially for self-represented parties, transparency helps judges understand context.

  • Cautions are flagged: Generative AI can be inaccurate, out-of-date, incomplete, jurisdictionally inapplicable, or biased.

  • Responsibility is clear: Lawyers remain fully accountable for accuracy and proper basis. “The AI made me do it” is no defence.

  • Judicial use: Courts confirm AI is not to be used to prepare judgments or reasons.

It’s a trust-but-verify model, leaning on professional responsibility rather than outright bans.

Queensland: Pragmatic First Step

The Queensland Supreme Court AI Guidelines (2025) (which we covered in yesterday’s IP Mojo post) sit somewhere in the middle.

  • The tone is more pragmatic, focused on practicalities like accuracy, confidentiality, and proper verification.

  • The scope is less prescriptive than NSW, but more directive than Victoria.

  • The positioning signals that AI is already here, but stresses that obligations of candour and accuracy remain unchanged.

Qld’s approach reads more like a policy statement than binding rules — but it makes clear that AI use is under judicial scrutiny.

A Timeline of Moves

  • NSW: First issued SC Gen 23 in Nov 2024 (updated Jan 2025, effective Feb).

  • Victoria: Released guidelines in early 2025.

  • Queensland: Followed in Feb 2025 with its policy framework.

So while NSW and Victoria were the early movers, all three now have frameworks in play.

Three States, Three Philosophies

State | Approach | Key Features
NSW | Prescriptive “hard law” | Strict bans on affidavits/witnesses, leave required for experts, mandatory disclosure
Victoria | Principles-based “soft law” | Encourages disclosure, flags risks, trusts practitioner responsibility
Queensland | Pragmatic policy | Practical guidance, verification and accuracy focus, less formal but watchful

Why This Matters

For practitioners, this divergence isn’t academic:

  • Forum-specific compliance is now a reality — what’s permissible in Brisbane may be prohibited in Sydney.

  • Harmonisation vs patchwork: Will the states converge over time, or continue down separate paths?

  • Strategic implications: Could litigants engage in forum shopping if one jurisdiction feels more AI-friendly?

One thing is clear: Australian courts are acting fast, and the rules of the litigation game are being rewritten — jurisdiction by jurisdiction.

Filed Under: AI, Digital Law, Regulation Tagged With: AI, Digital Law, Regulation

September 22, 2025 by Scott Coulthart

From ChatGPT hallucinations to deepfakes in affidavits, Queensland’s courts have drawn a line in the sand.

Two new guidelines, released on 15 September 2025, map out how judges and court users should (and shouldn’t) use AI in litigation.

Two Audiences, One Big Message

Queensland is the latest Australian jurisdiction to publish formal, court-wide rules for generative AI – and it hasn’t stopped at one audience.

  • Judicial Officers: The first guideline is aimed at judges and tribunal members. It stresses confidentiality, accuracy, and ethical responsibility, and makes clear that AI must never be used to prepare or decide judgments.

  • Non-Lawyers: The second is written in plain English for self-represented litigants, McKenzie friends, lay advocates and employment advocates. It’s blunt: AI is not a substitute for a qualified lawyer.

Together, they show the courts know AI isn’t a future problem — it’s already walking into the courtroom (and it’s not hiding under the desk).

What the Courts Are Worried About

The guidelines read like a checklist of every AI-related nightmare scenario:

  • Hallucinations: Fabricated cases, fake citations, and quotes that don’t exist.

  • Confidentiality breaches: Entering suppressed or private information into a chatbot could make it “public to all the world”.

  • Copyright and plagiarism: Summarising textbooks or IP materials via AI may breach copyright.

  • Misleading affidavits: Self-reps relying on AI risk filing persuasive-looking documents that contain substantive errors.

  • Deepfakes: Courts warn of AI-generated forgeries in text, images and video.

The judicial guideline even suggests judges may need to ask outright if AI has been used when dodgy submissions appear — especially if the citations “don’t sound familiar”.

Consequences for Misuse

The courts aren’t treating this as academic theory. Practical consequences are built in:

  • Costs orders: Non-lawyers who waste court time by filing AI-generated fakes could be hit with costs.

  • Judicial oversight: Judges may require lawyers to confirm that AI-assisted research has been independently verified.

  • Expert reports: Experts may be asked to disclose the precise way they used AI in forming an opinion.

That’s real accountability — not just “guidance”.

Why IP Lawyers Should Care

For IP practitioners, one section stands out: the copyright and plagiarism warnings. Both sets of guidelines caution that using AI to re-package copyrighted works can infringe rights if the summary or reformulation substitutes for the original.

This matters for more than pleadings. It cuts across creative industries, publishing, and expert evidence. Expect to see copyright creeping into arguments about how AI-assisted evidence is prepared and presented.

The Bigger Picture

Queensland now joins Victoria and NSW in formalising Australia’s approach to the use of AI in the courtroom. Courts in the US, UK and Canada have already begun issuing AI guidance, and Australia now enters the global conversation on how to balance innovation, access to justice, and the integrity of the judicial process.

For lawyers, the message is simple: use AI carefully, verify everything, and never outsource your professional responsibility. For litigants in person, the message is even simpler: AI is not your lawyer.

IP Mojo Takeaway

Queensland’s twin AI guidelines are a watershed moment. They bring generative AI out of the shadows and into the courtroom spotlight.

And whether you’re a judge, a barrister, or a self-rep with a smartphone, the new rules are clear: if you use AI in court, you own the risks.

At the time of publishing this post, you can find the guidelines here and here.

Filed Under: AI, Digital Law, Regulation Tagged With: AI, Digital Law, Regulation

June 25, 2025 by Scott Coulthart

Ready, Set, Comply: Queensland’s IPOLA Reforms Launch 1 July 2025

This July marks a pivotal moment for Queensland public sector entities, agencies, and their contractors. The Information Privacy and Other Legislation Amendment (IPOLA) Act 2023 comes into full effect from 1 July 2025, ushering in sweeping updates to Queensland’s Information Privacy Act 2009, Right to Information Act 2009, and the rules governing data-breach notifications.

Let’s break it down.

1. Unified Access Rights & RTI Overhaul

What’s Changing:

  • As of 1 July, Queensland merges personal and non-personal document access into a single, unified right under the RTI Act.

  • Expect streamlined procedural rules: revised timeframes, adjusted decision-maker roles, and consolidated fees.

  • New requirements for disclosure logs and proactive release of information also come into force.

Why It Matters:

  • RTI applicants apply once—and agencies can’t dodge questions by splitting personal and non-personal requests.

  • Agencies must refresh policies, train staff, and implement systems that can handle integrated workflows.

  • Transparency expectations heighten. Agencies will be judged not just on compliance, but also disclosure culture.

2. Queensland Privacy Principles (QPPs) & Binding Codes

What’s Changing:

  • A fresh suite of 12 Queensland Privacy Principles takes effect—covering collection, disclosure, accuracy, retention, security, and more.

  • Binding QPP Codes can be issued by the Information Commissioner.

  • Importantly: contractual obligations with service providers (e.g., cloud, IT, data analytics) must now include binding QPP compliance clauses.

Why It Matters:

  • IT contracts across private and public sectors need rewriting to mandate QPP compliance.

  • Outsourced services—especially those involving personal data—must adhere to QPP requirements in practice, not just in documentation.

3. Mandatory Notification of Data Breach (MNDB) Scheme

Note: While the broader IPOLA reforms kick in on 1 July 2025, the MNDB requirement for local governments is delayed until July 2026.

What’s Happening Now:

  • State government agencies adopt MNDB notifications from July 2025.

  • Local governments have an additional year to prepare.

Why It Matters:

  • MNDB templates, policies, and flowcharts from OIC are now live and ready.

  • All entities need clear internal breach response tech and training—or risk non-compliance.

  • Local councils have a 12-month window to align with the Scheme before 2026 rollout.

4. Training & Resources at the OIC

The Office of the Information Commissioner (OIC) has curated an extensive IPOLA onboarding program:

  • Stage 1 Awareness sessions (Aug–Sep 2024), attended by 1,000+ staff across 19 venues.

  • Stage 2 Build‑Knowledge workshops (Oct 2024–Mar 2025), reaching 3,000+ participants over modules covering MNDB, QPPs, and RTI.

  • Stage 3 Topic‑based training commenced in May 2025—delving into MNDB and RTI templates, including a Local‑Government‑specific workshop on 11 June 2025.

Why It Matters:

  • Poly‑themed, modular, and scenario‑driven sessions (including Q&A panels) are freely available and compressed into SCORM packages—but note: the SCORM kit is only available until 30 June 2025.

  • Agencies should download the kit before then and integrate it into their internal LMS if they haven’t already—no extensions.

5. Practical Tools & Templates

To smooth your compliance journey, the OIC offers (at its website, oic.qld.gov.au):

  • Checklists: “Prepare for IPOLA” workbook, Access & Amendment Application checklist.

  • Policy templates: breach policy, eligible data‑breach registers, response plans.

  • Privacy Impact Assessment (PIA) tools: threshold forms, risk registers.

  • Contractor & collection‑notice guides: for binding providers and updating public info notices.

🚨 What You Should Do Before 1 July 2025

For Agencies & Departments:

  1. Download & embed SCORM training content by 30 June 2025.

  2. Deploy team training using Stage 2/3 modules or in-house adaptations.

  3. Revise internal systems for unified access rights, disclosure logs, and fee handling.

  4. Update contracts with QPP compliance clauses for all service providers.

  5. Implement MNDB policies and breach-response tech for July rollout.

For Contractors & Vendors:

  1. Review contracts—you’ll likely be legally required to comply with QPPs by July.

  2. Audit your data systems: implement encryption, retention, and access protocols matching QPPs.

  3. Train staff on breach detection, logging, and your obligations to notify.

For Local Government Entities:

  • Use 2025–26 as a setup year for MNDB readiness. Download checklists, test templates, and tap into OIC’s LG-specific training.

Final Word: Compliance Is Non-Negotiable

Come 1 July 2025, Queensland’s public-facing privacy and information regime becomes holistic:

  • Single RTI access request = one-stop for all documents.

  • QPPs apply across the lifecycle of personal data—including handling by contracted parties.

  • MNDB enforcement begins for state bodies (councils get a 12‑month grace period).

  • Training content won’t be available post 30 June.

The concrete tools, training, and structure are all out now—so aim to have your systems fully aligned before end of June. Delay is not an option.

Filed Under: Government Law, Privacy, Regulation Tagged With: Government Law, Privacy, Regulation

