
IPMojo


Archives for June 2025

June 3, 2025 by Scott Coulthart

Reasonable Steps Just Got Real: What APP 11 Now Demands

For years, Australian Privacy Principle 11 has required businesses to take “reasonable steps” to protect personal information from misuse, interference and loss, and from unauthorised access, modification or disclosure. Sounds fair — but also vague. What exactly is “reasonable”? A locked filing cabinet? Two-factor authentication? Asking nicely?

In this fourth part of IP Mojo’s exclusive Privacy 2.0 blog series, we discuss how the latest privacy law amendments haven’t rewritten APP 11 — they’ve sharpened it. Specifically, they’ve clarified that “reasonable steps” include both technical and organisational measures. It’s a simple sentence, but it changes the conversation. Because now, the standard isn’t just what you thought was reasonable. It’s what you can prove you’ve done to make security part of your systems, your structure, and your staff’s day-to-day behaviour.

Let’s break it down. Technical measures? Think encryption, firewalls, intrusion detection systems, and strong password protocols. Organisational measures? Employee training, incident response plans, documented data handling procedures, and privacy-by-design baked into new systems and tools. It’s not just about buying tech — it’s about building a culture.
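To make the technical limb concrete, here’s a minimal sketch in Python of one such measure: field-level encryption of personal information at rest, using the widely deployed cryptography package. It’s an illustration of the kind of control you’d want to be able to point to, not a compliance recipe — in particular, the key handling below is deliberately simplified.

    # Illustrative only: encrypting personal information before it is stored,
    # one example of a "technical measure" in the APP 11.3 sense.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    import os
    from cryptography.fernet import Fernet

    # Simplification for the sketch: a real system would keep this key in a
    # managed secrets store with access controls and rotation, not an env var.
    key = os.environ.get("PI_ENCRYPTION_KEY") or Fernet.generate_key()
    cipher = Fernet(key)

    def store_record(personal_info: str) -> bytes:
        """Encrypt a field of personal information before persisting it."""
        return cipher.encrypt(personal_info.encode("utf-8"))

    def read_record(token: bytes) -> str:
        """Decrypt on read; whoever controls the key controls access."""
        return cipher.decrypt(token).decode("utf-8")

    token = store_record("jane@example.com")
    assert read_record(token) == "jane@example.com"

The organisational half is the harder part to show in code: it lives in training records, incident response runbooks, and documented data handling procedures.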

Of course, “reasonable” still depends on context: the nature of your business, the sensitivity of the data, the volume you handle. But this update sends a signal: the era of set-and-forget privacy compliance is over. If your team’s still using outdated software or storing customer records on someone’s laptop, that’s not going to cut it.

Here’s the kicker: while the amendment itself is modest — just a new clause (11.3) — the implications are not. It gives regulators clearer footing. It gives courts a stronger hook. And it gives businesses a chance to get ahead — by documenting what you’re doing, auditing what you’re not, and showing your privacy policies aren’t just legalese, but lived practice.

Tune in tomorrow for a look at the new data breach response powers, and how the government can now legally share your customers’ personal information — yes, really — in a post-hack crisis.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 4, Regulation

June 2, 2025 by Scott Coulthart

Whose Work Is It Anyway? The Remix War, AI, Coffee Plungers and Swimsuits

From Elton John to anonymous meme-makers, a battle is raging over what it means to be “creative” — and whether it starts with permission.

Two stories made waves in copyright circles last week:

  • In the UK, Sir Elton John, Sir Paul McCartney and other musical heavyweights called for stronger rules to stop AI from “scraping” their songs without a licence.

  • In India, news agency ANI drew criticism for aggressively issuing YouTube copyright claims — even for sub-10-second clips — triggering takedown threats against creators.

At first glance, these might seem worlds apart. But they highlight the same question:

At what point does using someone else’s work become exploitation, not inspiration?

And who decides?

Creators vs Reusers: Two Sides of the Copyright Culture Clash

On one side: Creators — musicians, writers, filmmakers, photographers — frustrated by tech platforms and algorithms ingesting their work without permission. Whether it’s AI training data or news footage embedded in political commentary, their message is the same:
“You’re building on our backs. Pay up.”

On the other side: Remixers, meme-makers, educators, and critics argue that strict copyright regimes chill creativity. “How can we critique culture,” they ask, “if we’re not allowed to reference it?”

This isn’t new — hip hop, collage art, satire, and even pop music are full of samples and nods. But AI has industrialised the scale of reuse. It doesn’t borrow one beat or a single shot. It eats the entire catalogue — then spits out something “new.”

So what counts as originality anymore?

Australian Lens: Seafolly, Bodum, and the Meaning of “Original”

Seafolly v Madden [2012] FCA 1346

In this high-profile swimwear spat, designer Leah Madden accused Seafolly of copying her designs. She posted comparison images on social media implying that Seafolly had engaged in plagiarism. Seafolly sued for misleading and deceptive conduct under ss 52 and 53 of the Trade Practices Act 1974 (the predecessors to s 18 of the Australian Consumer Law; the ACL had commenced by the time of the proceedings, but the conduct complained of predated it).

The Federal Court found that Madden’s claims were both unsubstantiated and misleading, because the design similarities were not the result of actual copying. The case reinforced that:

  • Independent creation is a valid defence, even if the resulting works are similar

  • Superficial resemblance isn’t enough — there must be a causal connection

It’s a reminder that derivation must be substantial and material, not speculative or assumed.

Bodum v DKSH [2011] FCAFC 98

This case involved Bodum’s iconic French press coffee plunger — the Chambord — and whether a rival product sold by DKSH under the “Euroline” brand misled consumers or passed off Bodum’s get-up as its own.

Bodum alleged misleading or deceptive conduct and passing off, based not on name or logo, but on the visual appearance of the product: a clear glass beaker, metal band, and distinctive handles, which had come to be strongly associated with Bodum.

At trial, the Federal Court rejected Bodum’s claims. But on appeal, the Full Federal Court reversed that decision, holding that:

  • Bodum had a substantial reputation in the get-up alone;

  • The Euroline plunger was highly similar in appearance; and

  • DKSH’s failure to adequately differentiate its product through branding or design gave rise to a misleading impression.

Both passing off and misleading or deceptive conduct (again under the old s 52) were found. The Court emphasised that reputation in shape and design can be enough — and that differentiation must be meaningful, not tokenistic.

The AI Angle: Who Trains Whom?

AI tools like ChatGPT, Midjourney, and Suno don’t just copy works. They learn patterns from thousands of inputs. But in doing so, they arguably absorb creative expression — chord progressions, phrasing, brushstroke styles — and then make new outputs in that same vein.
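To see that “learn patterns, then generate” point in miniature, here’s a toy Python sketch: a character-level Markov chain, orders of magnitude simpler than any real generative model, but structurally similar in the one respect that matters to the legal argument — nothing below stores a verbatim copy of its training text, yet every output is shaped entirely by it. (The training strings are just stand-ins.)

    # Toy illustration of "absorbing patterns" rather than copying outright.
    # Real models like ChatGPT or Suno are vastly more complex; the point is
    # only that no verbatim copy is kept, yet outputs derive from the inputs.
    import random
    from collections import defaultdict

    def train(corpus: list[str], order: int = 2) -> dict[str, list[str]]:
        """Record which character tends to follow each short context."""
        model: dict[str, list[str]] = defaultdict(list)
        for text in corpus:
            for i in range(len(text) - order):
                model[text[i:i + order]].append(text[i + order])
        return model

    def generate(model: dict[str, list[str]], seed: str,
                 order: int = 2, length: int = 40) -> str:
        """Emit 'new' text by sampling from the absorbed patterns."""
        out = seed
        for _ in range(length):
            followers = model.get(out[-order:])
            if not followers:
                break
            out += random.choice(followers)
        return out

    lyrics = ["goodbye yellow brick road", "rocket man burning out his fuse"]
    print(generate(train(lyrics), seed="ro"))

Scaled up by many orders of magnitude, that is the mechanism at the centre of the dispute.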

AI developers claim this is fair use or transformative. Artists argue it’s a form of invisible appropriation — no different from copying and tweaking a painting, but with zero attribution or compensation.

It’s the Seafolly and Bodum problem, scaled up: if AI’s “original” work was trained on 10,000 human ones, is it really original? Or just a remix with plausible deniability?

The Bottom Line

Copyright law is meant to balance:

  • Encouraging creativity

  • Rewarding labour

  • Allowing critique and cultural dialogue

But that balance is breaking under the weight of machine learning models and automated copyright bots. As Seafolly and Bodum show, the law still values intention, process, and context — not just resemblance.

Yet in a world of remix and AI, intention is opaque, and process is synthetic.

So where do we draw the line?

Filed Under: AI, Copyright, Entertainment, IP Tagged With: AI, Copyright, Entertainment, IP

June 2, 2025 by Scott Coulthart

Back in our Privacy 2.0 series, we unpacked the upcoming Children’s Online Privacy Code — a new legal framework aimed at improving how kids’ personal information is handled online. Now, we’re hearing more from the people it’s meant to protect.

Children themselves have made it clear: they don’t want to be tracked, profiled, or buried in unreadable consent forms. And for companies whose business depends on that data, the reforms aren’t just a policy shift — they’re a threat to the model.

What Children Say

Our earlier blog post focused on the what: a binding industry code to strengthen children’s data protections under the Privacy Act 1988 (Cth).

Now we’re seeing more of the why — and it’s coming straight from the kids.

According to findings from consultations with children conducted by Reset Tech Australia, the message is loud and clear: children aren’t just passive subjects of data collection. They have opinions — strong ones.

Among the most consistent themes:

  • Nearly 90% of children surveyed want default privacy settings set to high, and geolocation turned off by default.

  • Many want simpler, age-appropriate explanations of how their data is used.

  • Crucially, they want the ability to delete their data — a right currently absent from Australia’s privacy framework.

As Privacy Commissioner Carly Kind put it (in her recent discussions with The Australian newspaper):

“Kids aren’t going to read 50 pages of terms and conditions when they sign up to an app… How do we give them actual choices, and not just the ability to click ‘I consent’ when they haven’t even read something, and it’s not a genuine form of consent?”

That question goes to the heart of what the Children’s Privacy Code — and broader privacy reform — is trying to fix.

Not Just Social Media — The Code’s Expanding Reach

Commissioner Kind also confirmed that the Children’s Privacy Code will work in parallel with the upcoming ban on under-16s using social media — but its scope is much broader. The Code will apply to:

  • Websites and online services accessed by children,

  • Wearable devices and fitness trackers, and

  • Education technology and apps, including those used in schools.

In other words, the Code is not just about excluding children from certain online spaces. It’s about protecting them wherever they are — especially in digital environments they’re required to engage with for school or social connection.

This dual approach — platform bans on one side, enforceable data safeguards on the other — reflects a recognition that meaningful participation in digital life shouldn’t come at the expense of privacy.

Who’s Worried — and Why

Stronger children’s privacy rules are good policy — but they’re also bad news for some very profitable business models. Behind the push for transparency and consent reform lies a quieter question: who stands to lose when kids gain more control over their data?

Let’s follow the data trail.

Adtech platforms are the obvious players at risk. Targeting, profiling, and retargeting of under-18s — even if indirect — fuels everything from engagement strategies to dynamic pricing models. If default privacy settings go “high” by law, or profiling becomes opt-in (or outright banned), that revenue stream starts to dry up.

Social platforms, even those ostensibly closed to under-16s, have powerful incentives to retain youth users — both for ad revenue and for maintaining long-term brand stickiness. The idea that kids might have a right to delete their data, opt out of tracking, or receive age-appropriate disclosures cuts into both that model and their legal risk position.

Then there’s edtech — the often-overlooked battleground. Many school-deployed tools gather extensive user-level data but offer limited controls to students (or even schools). Vendors that haven’t built privacy-by-design tooling may soon be scrambling to comply.

And finally, consumer IoT and smart toys — products that rely on voice input, biometric sensors, or location tracking — may find their compliance and legal risk profile radically changed if the Code’s protections become enforceable.

Most won’t publicly oppose child protection. But you can expect to see:

  • Lobbying for “flexible” implementation timelines,

  • Calls for “self-regulation”,

  • Quiet legal arguments around the scope of “reasonable access by children”, and

  • Industry pushback on making data deletion or privacy impact assessments mandatory.

What This Means for Industry

If your platform, app, device, or service is used by children, or may reasonably be accessed by children, you’ll need to start preparing for compliance now. That means:

  • Reviewing your default settings for privacy, location, and profiling;

  • Translating privacy policies into plain, age-appropriate language;

  • Building functionality for data deletion — even if not (yet) mandatory (see the sketch after this list); and

  • Moving beyond consent as your only compliance crutch — especially if that consent comes from a user too young to legally or meaningfully provide it.
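To make the first and third of those items tangible, here’s a minimal Python sketch of privacy-by-default settings and a deletion routine. All names are hypothetical — ours, not anything drawn from the Code, which is yet to be finalised.

    # Illustrative sketch only: "privacy by default" settings plus a data
    # deletion routine. Class and field names are hypothetical, not taken
    # from the Code or any real platform.
    from dataclasses import dataclass, field

    @dataclass
    class PrivacySettings:
        # Defaults reflect what the consultations asked for: high privacy,
        # geolocation off, and profiling opt-in unless actively changed.
        profile_visibility: str = "private"
        geolocation_enabled: bool = False
        ad_profiling_opt_in: bool = False

    @dataclass
    class UserAccount:
        user_id: str
        settings: PrivacySettings = field(default_factory=PrivacySettings)
        records: list[str] = field(default_factory=list)

        def delete_my_data(self) -> None:
            """Honour a deletion request: remove stored records outright,
            rather than merely flagging the account as inactive."""
            self.records.clear()
            self.settings = PrivacySettings()

    account = UserAccount(user_id="u-123")
    assert account.settings.geolocation_enabled is False  # off by default

The design point is that high privacy is the zero-configuration state: a user who never touches a setting gets the protective defaults the consultations asked for.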

The consultations have made it clear: children want transparency, choice, and respect for their privacy — and regulators are listening.

Filed Under: Privacy, Regulation Tagged With: Privacy, Regulation

© Scott Coulthart 2025