
IPMojo


Regulation

June 3, 2025 by Scott Coulthart

Reasonable Steps Just Got Real: What APP 11 Now Demands

For years, Australian Privacy Principle 11 has required businesses to take “reasonable steps” to protect personal information from misuse, interference, or loss. Sounds fair — but also vague. What exactly is “reasonable”? A locked filing cabinet? Two-factor authentication? Asking nicely?

In this 4th part of IP Mojo’s exclusive Privacy 2.0 blog series, we discuss how the latest privacy law amendments haven’t rewritten APP 11 — they’ve sharpened it. Specifically, they’ve clarified that “reasonable steps” include both technical and organisational measures. It’s a simple sentence, but it changes the conversation. Because now, the standard isn’t just what you thought was reasonable. It’s what you can prove you’ve done to make security part of your systems, your structure, and your staff’s day-to-day behaviour.

Let’s break it down. Technical measures? Think encryption, firewalls, intrusion detection systems, and strong password protocols. Organisational measures? Employee training, incident response plans, documented data handling procedures, and privacy-by-design baked into new systems and tools. It’s not just about buying tech — it’s about building a culture.
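As a minimal sketch of one "technical measure" from that list, here is salted password hashing using only Python's standard library. The iteration count and storage layout are illustrative assumptions, not anything APP 11 prescribes; tune them to current security guidance for your environment.

```python
import hashlib
import hmac
import secrets

# Illustrative only: salted PBKDF2 password hashing (Python stdlib).
# The iteration count below is an assumption; check current guidance.
ITERATIONS = 200_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key) for storage instead of the raw password."""
    salt = secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Re-derive the key and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)
```

The organisational half of the equation is what surrounds code like this: documented procedures for who sets the parameters, who reviews them, and how staff are trained to use them.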

Of course, “reasonable” still depends on context: the nature of your business, the sensitivity of the data, the volume you handle. But this update sends a signal: the era of set-and-forget privacy compliance is over. If your team’s still using outdated software or storing customer records on someone’s laptop, that’s not going to cut it.

Here’s the kicker: while the amendment itself is modest — just a new clause (11.3) — the implications are not. It gives regulators clearer footing. It gives courts a stronger hook. And it gives businesses a chance to get ahead — by documenting what you’re doing, auditing what you’re not, and showing your privacy policies aren’t just legalese, but lived practice.

Tune in tomorrow for: a look at the new data breach response powers, and how the government can now legally share your customers’ personal information — yes, really — in a post-hack crisis.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 4, Regulation

June 2, 2025 by Scott Coulthart

Back in our Privacy 2.0 series, we unpacked the upcoming Children’s Online Privacy Code — a new legal framework aimed at improving how kids’ personal information is handled online. Now, we’re hearing more from the people it’s meant to protect.

Children themselves have made it clear: they don’t want to be tracked, profiled, or buried in unreadable consent forms. And for companies whose business depends on that data, the reforms aren’t just a policy shift — they’re a threat to the model.

What Children Say

Our earlier blog post focused on the what: a binding industry code to strengthen children’s data protections under the Privacy Act 1988 (Cth).

Now we’re seeing more of the why — and it’s coming straight from the kids.

According to findings from consultations with children conducted by Reset Tech Australia, the message is loud and clear: children aren’t just passive subjects of data collection. They have opinions — strong ones.

Among the most consistent themes:

  • Nearly 90% of children surveyed want default privacy settings set to high, and geolocation turned off by default.

  • Many want simpler, age-appropriate explanations of how their data is used.

  • Crucially, they want the ability to delete their data — a right currently absent from Australia’s privacy framework.

As Privacy Commissioner Carly Kind put it (in her recent discussions with The Australian newspaper):

“Kids aren’t going to read 50 pages of terms and conditions when they sign up to an app… How do we give them actual choices, and not just the ability to click ‘I consent’ when they haven’t even read something, and it’s not a genuine form of consent?”

That question goes to the heart of what the Children’s Privacy Code — and broader privacy reform — is trying to fix.

Not Just Social Media — The Code’s Expanding Reach

Commissioner Kind also confirmed that the Children’s Privacy Code will work in parallel with the upcoming ban on under-16s using social media — but its scope is much broader. The Code will apply to:

  • Websites and online services accessed by children,

  • Wearable devices and fitness trackers, and

  • Education technology and apps, including those used in schools.

In other words, the Code is not just about excluding children from certain online spaces. It’s about protecting them wherever they are — especially in digital environments they’re required to engage with for school or social connection.

This dual approach — platform bans on one side, enforceable data safeguards on the other — reflects a recognition that meaningful participation in digital life shouldn’t come at the expense of privacy.

Who’s Worried — and Why

Stronger children’s privacy rules are good policy — but they’re also bad news for some very profitable business models. Behind the push for transparency and consent reform lies a quieter question: who stands to lose when kids gain more control over their data?

Let’s follow the data trail.

Adtech platforms are the obvious players at risk. Targeting, profiling, and retargeting of under-18s — even if indirect — fuels everything from engagement strategies to dynamic pricing models. If default privacy settings go “high” by law, or profiling becomes opt-in (or outright banned), that revenue stream starts to dry up.

Social platforms, even those ostensibly closed to under-16s, have powerful incentives to retain youth users — both for ad revenue and for maintaining long-term brand stickiness. The idea that kids might have a right to delete their data, opt out of tracking, or receive age-appropriate disclosures cuts into their legal risk model.

Then there’s edtech — the often-overlooked battleground. Many school-deployed tools gather extensive user-level data but offer limited controls to students (or even schools). Vendors that haven’t built privacy-by-design tooling may soon be scrambling to comply.

And finally, consumer IoT and smart toys — products that rely on voice input, biometric sensors, or location tracking — may find their compliance and legal risk profile radically changed if the Code’s protections become enforceable.

Most won’t publicly oppose child protection. But you can expect to see:

  • Lobbying for “flexible” implementation timelines,

  • Calls for “self-regulation”,

  • Quiet legal arguments around the scope of “reasonable access by children”, and

  • Industry pushback on making data deletion or privacy impact assessments mandatory.

What This Means for Industry

If your platform, app, device, or service is used by children, or may reasonably be accessed by children, you’ll need to start preparing for compliance now. That means:

  • Reviewing your default settings for privacy, location, and profiling;

  • Translating privacy policies into plain, age-appropriate language;

  • Building functionality for data deletion — even if not (yet) mandatory; and

  • Moving beyond consent as your only compliance crutch — especially if that consent comes from a user too young to legally or meaningfully provide it.
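One way to operationalise the first item on that list is to make privacy-protective settings the default in code, so any relaxation is an explicit, recorded opt-in. This is a hypothetical sketch; the field names are illustrative and not drawn from the Code.

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: privacy-protective defaults for a child-facing
# service. Field names are illustrative, not taken from the Code.
@dataclass(frozen=True)
class PrivacySettings:
    profile_visibility: str = "private"   # default set to high
    geolocation_enabled: bool = False     # off unless explicitly opted in
    ad_profiling_enabled: bool = False    # opt-in only
    analytics_tracking: bool = False

def opt_in(settings: PrivacySettings, **changes) -> PrivacySettings:
    """Relaxing a default returns a new object, leaving the original intact,
    so the opt-in is an explicit, auditable step."""
    return replace(settings, **changes)
```

Because the dataclass is frozen, there is no way to drift away from the defaults silently; every change passes through `opt_in`, which is a natural place to attach logging or age-appropriate disclosure.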

The consultations have made it clear: children want transparency, choice, and respect for their privacy — and regulators are listening.

Filed Under: Privacy, Regulation Tagged With: Privacy, Regulation

May 29, 2025 by Scott Coulthart

For years, when an Australian company suffered a data breach, the script was pretty simple: notify the OAIC, maybe tell your customers, and brace for PR blowback. But in a landscape of ransomware gangs, deepfake scams, and real-world harm flowing from leaked personal info, that old approach started to feel… inadequate. The new privacy law amendments (to the Privacy Act 1988 (Cth)) try to fix that.

In this 5th instalment of our Privacy 2.0 series, we look at the new regime of eligible data breach (EDB) declarations and emergency declarations — legal tools that give the government power to coordinate how personal data is shared during and after a crisis. If that sounds like overreach, it’s not. It’s actually quite surgical. These powers are about enabling targeted, temporary, lawful information sharing when the goal is harm minimisation — not surveillance.

Under the new Division 5 of Part IIIC of the Act (the EDB declaration provisions, which commenced on 11 December 2024), the Minister can authorise specific entities to collect, use, or disclose personal information otherwise restricted by the APPs — but only for clearly defined, time-limited purposes like fraud prevention, identity verification, or cyber response, and only where there has been an eligible data breach that ticks certain statutory boxes.

Banks, credit bureaus, and government agencies may be brought into the loop — but not media outlets. There are safeguards: transparency requirements, consultation with the OAIC, and criminal offences for going rogue with the info.

Then there are emergency declarations — a reboot of existing powers to deal with natural disasters, pandemics, and national emergencies. These let the Prime Minister or a designated Minister approve personal data handling across public and private sectors for things like locating missing persons or coordinating aid.

Again, it’s tightly scoped: the declarations can’t be used for general surveillance, expire after 12 months unless renewed, and once again exclude media outlets entirely – so there are no free-for-alls, no “Minister for Metadata” moment.

The lesson for businesses? Don’t assume your data handling obligations end at “notify the OAIC.” In breach or emergency scenarios, you may now be authorised — or even expected — to share personal information, as long as it aligns with a declaration. Legal and compliance teams should track these developments — your incident response plan may need a serious update.

In short: breach response is no longer just about damage control. It’s about lawful coordination. And if you’re caught flat-footed without internal protocols for handling this new regime, you’re behind the curve.

Next week in the Privacy 2.0 series: how the law now reimagines overseas data transfers — and whether your Singapore-based SaaS platform still cuts it.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 5, Regulation

May 27, 2025 by Scott Coulthart

Kids, Code & Clicks: Australia’s New Children’s Privacy Push

Privacy laws used to treat kids like a rounding error — cute, inconvenient, and mostly left to the “parental supervision” fine print. Not anymore.

In this 3rd part of IP Mojo’s exclusive Privacy 2.0 blog series, we see how Australia’s privacy regime is finally catching up to a reality every parent knows: children are online, they’re being tracked, and they deserve more than vague guidance from a dusty regulator’s website.

Enter the Children’s Online Privacy Code, a centrepiece of the latest privacy law reforms. For the first time, Australia is baking enforceable obligations into law when it comes to how children’s personal information is collected, used and shared. Not just “you should be careful” — but “you must comply.”

So what’s changing? The new law requires the Information Commissioner to develop a binding code that sits within the framework of the Australian Privacy Principles (APPs). The Code must be finalised within two years and will apply to organisations likely to interact with children — particularly social media services, games, streaming platforms, and apps that know full well they’ve got under-18s clicking through. It won’t apply to health services, but that’s about the only carve-out. In essence: if your platform’s got kids on it, this is now your problem.

The Code will apply to social media services, games, streaming platforms, and other digital services as defined in the Online Safety Act 2021 (Cth) — including so-called “relevant electronic services” and “designated internet services.” If you already know those definitions, you’ve probably had dealings with the eSafety Commissioner. Now, add the Privacy Commissioner to your contact list. These reforms don’t replace online safety obligations — they layer on top of them. That means double compliance, and potentially double trouble if you get it wrong.

Here’s the takeaway: if you run an app, platform, game or service that might appeal to kids — even if you didn’t intend it to — it’s time to review your privacy practices. Don’t wait for the Code to land in 2026. The direction of travel is clear: children’s data is no longer fair game. It’s protected space. And if you’re not designing with that in mind, your business model may need a rethink — or your lawyers may need a bigger budget.

Tune in next week for: a look at the revamped APP 11, where “reasonable steps” for data protection just got a lot more real.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 3, Regulation

May 20, 2025 by Scott Coulthart

I Tort I Saw a Privacy Breach: Australia’s New Right to Sue

In just a few weeks’ time, for the first time in Australian legal history, individuals will be able to sue for a serious invasion of privacy — with the new statutory tort coming into force on 10 June 2025.

It’s a landmark moment. While the Privacy Act has long offered regulatory protections (mainly enforced by the OAIC), this new law gives individuals a direct, personal legal remedy in court. If someone invades your privacy — by spying on you, hacking your data, or misusing personal information — you can now bring a tort claim for compensation, injunctions, apologies, or other relief.

But it’s not a free-for-all. Let’s unpack how it works.

What Exactly Do You Have to Prove?

To win a case under the new law, a plaintiff must establish five elements, all of which are based on recommendations of the Australian Law Reform Commission (ALRC):

  1. An invasion of privacy — either by intrusion upon seclusion (e.g. surveillance, unlawful entry, voyeurism) or misuse of private information (e.g. disclosing or using someone’s personal details without permission).

  2. A reasonable expectation of privacy — determined in context. This takes into account factors like place (e.g. home vs public), the sensitivity of the information, age or cultural background of the plaintiff, and whether they invited publicity.

  3. Intentional or reckless conduct — negligence isn’t enough. The defendant must have acted deliberately or with reckless disregard.

  4. A serious invasion — not just annoying or embarrassing. The harm must be objectively significant (e.g. likely to cause offence, distress or harm to dignity to a person of ordinary sensibilities).

  5. Public interest balancing — the court must be satisfied that the plaintiff’s right to privacy outweighs any public interest raised by the defendant (such as free expression, national security, or open justice).

You don’t need to prove economic loss or damage — it’s actionable without it. However, the nature and impact of the harm (e.g. emotional distress, reputational damage, or humiliation) will affect the seriousness of the invasion and any damages awarded.

Not Retrospective — and Watch the Clock

The new tort is not retrospective. That means you can’t sue for conduct that occurred before 10 June 2025, no matter how bad it was. The law only applies to invasions of privacy on or after the commencement date.

And there are strict time limits:

  • You must start proceedings within one year of becoming aware of the invasion, and in any event within three years of when it happened — whichever is earlier.

  • If the plaintiff was under 18 at the time, they can sue any time up until their 21st birthday.

  • In exceptional circumstances, the court can extend the period up to six years after the event — for instance, where trauma or lack of awareness delayed action.
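The interplay of those time limits can be sketched as simple date arithmetic. This is an illustrative calculation only, not legal advice; it ignores the court's discretionary six-year extension, and the helper names are my own.

```python
from datetime import date
from typing import Optional

def _add_years(d: date, n: int) -> date:
    """Shift a date by n years, clamping 29 February to 28 February."""
    try:
        return d.replace(year=d.year + n)
    except ValueError:
        return d.replace(year=d.year + n, day=28)

def limitation_deadline(invasion: date, became_aware: date,
                        date_of_birth: Optional[date] = None) -> date:
    """Earlier of: one year from awareness, or three years from the
    invasion. A plaintiff under 18 at the time may sue up to their
    21st birthday. The discretionary extension is not modelled."""
    deadline = min(_add_years(became_aware, 1), _add_years(invasion, 3))
    if date_of_birth is not None and _add_years(date_of_birth, 18) > invasion:
        deadline = max(deadline, _add_years(date_of_birth, 21))
    return deadline
```

So a plaintiff who only becomes aware of an invasion late in the three-year window still runs up against the three-year cap, while a child plaintiff keeps the longer runway to age 21.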

What About Defences?

It’s not a one-sided affair. There’s a structured list of statutory defences, including where:

  • The conduct was required or authorised by law

  • The plaintiff consented

  • It was necessary to prevent serious harm

  • The conduct was part of a lawful defence of person or property

There are also defamation-style defences for things like absolute privilege, publication of public documents, and fair reporting of public proceedings — and journalists enjoy an exemption when acting in a professional capacity under a recognised code of conduct.

Law enforcement and intelligence agencies are also exempt when acting within their lawful functions.

What Remedies Are Available?

The court can award compensation for emotional distress, injury to dignity, and reputational harm — capped at the same maximum as defamation damages (currently $478,550, indexed).

It can also award exemplary damages in egregious cases (like malicious distribution of private images), and make orders for apologies, corrections, injunctions, and destruction of material.

Importantly, apologies won’t count as admissions of guilt — so defendants can say sorry without conceding liability (though it might reduce damages).

What Now?

This is a big deal for Australian privacy law. The new statutory tort fills a long-standing gap between regulation and personal rights — and will likely open the door to more litigation, especially in areas like:

  • image-based abuse

  • unauthorised publication of intimate content

  • intrusive surveillance

  • data misuse or unethical tech deployment

For businesses, publishers, digital platforms and public institutions, now is the time to review policies, train staff, and sanity-check any borderline practices. Reckless handling of sensitive information — even without publication — could now be very costly.

Tune in next week for: a deep dive into the new Children’s Online Privacy Code. Because in 2025, even kids’ data isn’t child’s play.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 2, Regulation

May 19, 2025 by Scott Coulthart

Privacy 2.0: Why the Law Had to Change

It’s not every day a 1980s law gets a 2020s reboot — but that’s exactly what’s happening with Australia’s privacy regime.

After years of community anxiety, OAIC submissions, and a few too many headlines about mega-breaches, the Privacy Act 1988 (Cth) is finally stepping out of its shoulder-padded past and into the digital present.

The latest round of reform — passed at the end of 2024 and now live — marks the biggest shake-up to Australia’s privacy framework in over a decade … and while the updates aren’t a total rewrite, they’re a bold start. From new breach response tools to sharper enforcement powers, and from kids’ data codes to the long-awaited statutory tort of privacy invasion, this is no longer just a compliance issue for GCs. It’s a reputational and risk issue for boards — and a tech/design challenge for operational teams.

So why now? Because the old law just wasn’t cutting it anymore. The last major reform (the 13 APPs) happened in 2014. That was before TikTok existed. Before mass data scraping, AI-driven insurance risk profiling, and customer loyalty schemes that know your breakfast habits better than your spouse. Fast forward a decade, and we’re living in an environment where personal data isn’t just a risk category — it’s a currency, and one that criminals, governments, and companies alike are eager to trade.

Add to that the global pressure. Australia has fallen behind the GDPR club, and even the US (the privacy laggard) is now rolling out state-level data laws with real teeth. If we want to be taken seriously in trade deals, tech partnerships, or cross-border enforcement, our domestic rules have to look credible. That means: Transparency. Accountability. Teeth.

This 9-part twice-per-week Privacy 2.0 blog series will unpack the key changes — what’s landed, what’s coming, and what businesses need to do now (not in 2026, when the AI rules kick in). We’ll also ask the hard questions: is this regulation or reaction? Is it about protecting individuals — or just managing headlines? And what does it mean for those of us navigating the line between innovation and intrusion?

Tune in tomorrow for: an in-depth look at Australia’s new statutory tort of serious invasion of privacy, commencing on 10 June 2025.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 1, Regulation




© Scott Coulthart 2025