IPMojo

June 3, 2025 by Scott Coulthart

Reasonable Steps Just Got Real: What APP 11 Now Demands

For years, Australian Privacy Principle 11 has required businesses to take “reasonable steps” to protect personal information from misuse, interference, or loss. Sounds fair — but also vague. What exactly is “reasonable”? A locked filing cabinet? Two-factor authentication? Asking nicely?

In this 4th part of IP Mojo’s exclusive Privacy 2.0 blog series, we discuss how the latest privacy law amendments haven’t rewritten APP 11 — they’ve sharpened it. Specifically, they’ve clarified that “reasonable steps” include both technical and organisational measures. It’s a simple sentence, but it changes the conversation. Because now, the standard isn’t just what you thought was reasonable. It’s what you can prove you’ve done to make security part of your systems, your structure, and your staff’s day-to-day behaviour.

Let’s break it down. Technical measures? Think encryption, firewalls, intrusion detection systems, and strong password protocols. Organisational measures? Employee training, incident response plans, documented data handling procedures, and privacy-by-design baked into new systems and tools. It’s not just about buying tech — it’s about building a culture.

Of course, “reasonable” still depends on context: the nature of your business, the sensitivity of the data, the volume you handle. But this update sends a signal: the era of set-and-forget privacy compliance is over. If your team’s still using outdated software or storing customer records on someone’s laptop, that’s not going to cut it.

Here’s the kicker: while the amendment itself is modest — just a new clause (11.3) — the implications are not. It gives regulators clearer footing. It gives courts a stronger hook. And it gives businesses a chance to get ahead — by documenting what you’re doing, auditing what you’re not, and showing your privacy policies aren’t just legalese, but lived practice.

Tune in tomorrow for a look at the new data breach response powers, and how the government can now legally share your customers’ personal information — yes, really — in a post-hack crisis.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 4, Regulation

June 2, 2025 by Scott Coulthart

Whose Work Is It Anyway? The Remix War, AI, Coffee Plungers and Swimsuits

From Elton John to anonymous meme-makers, a battle is raging over what it means to be “creative” — and whether it starts with permission.

Two stories made waves in copyright circles last week:

  • In the UK, Sir Elton John, Sir Paul McCartney and other musical heavyweights called for stronger rules to stop AI from “scraping” their songs without a licence.

  • In India, news agency ANI drew criticism for aggressively issuing YouTube copyright claims — even for clips under 10 seconds — triggering takedown threats against creators.

At first glance, these might seem worlds apart. But they highlight the same question:

At what point does using someone else’s work become exploitation, not inspiration?

And who decides?

Creators vs Reusers: Two Sides of the Copyright Culture Clash

On one side: Creators — musicians, writers, filmmakers, photographers — frustrated by tech platforms and algorithms ingesting their work without permission. Whether it’s AI training data or news footage embedded in political commentary, their message is the same:
“You’re building on our backs. Pay up.”

On the other side: Remixers, meme-makers, educators, and critics argue that strict copyright regimes chill creativity. “How can we critique culture,” they ask, “if we’re not allowed to reference it?”

This isn’t new — hip hop, collage art, satire, and even pop music are full of samples and nods. But AI has industrialised the scale of reuse. It doesn’t borrow one beat or a single shot. It eats the entire catalogue — then spits out something “new.”

So what counts as originality anymore?

Australian Lens: Seafolly, Bodum, and the Meaning of “Original”

Seafolly v Madden [2012] FCA 1346

In this high-profile swimwear spat, designer Leah Madden accused Seafolly of copying her designs, posting comparison images on social media that implied Seafolly had engaged in plagiarism. Seafolly sued for misleading and deceptive conduct under ss 52 and 53 of the Trade Practices Act 1974 (the predecessors to s 18 of the Australian Consumer Law, which had commenced by the time of the proceedings but postdated the conduct complained of).

The Federal Court found that Madden’s claims were not only misleading but also unsubstantiated, because the design similarities were not the result of actual copying. The case reinforced that:

  • Independent creation is a valid defence, even if the resulting works are similar

  • Superficial resemblance isn’t enough — there must be a causal connection

It’s a reminder that derivation must be substantial and material, not speculative or assumed.

Bodum v DKSH [2011] FCAFC 98

This case involved Bodum’s iconic French press coffee plunger — the Chambord — and whether a rival product sold by DKSH under the “Euroline” brand misled consumers or passed off Bodum’s get-up as its own.

Bodum alleged misleading or deceptive conduct and passing off, based not on name or logo, but on the visual appearance of the product: a clear glass beaker, metal band, and distinctive handles, which had come to be strongly associated with Bodum.

At trial, the Federal Court rejected Bodum’s claims. But on appeal, the Full Federal Court reversed that decision, holding that:

  • Bodum had a substantial reputation in the get-up alone;

  • The Euroline plunger was highly similar in appearance; and

  • DKSH’s failure to adequately differentiate its product through branding or design gave rise to a misleading impression.

Both passing off and misleading or deceptive conduct (again under the old s 52) were made out. The Court emphasised that reputation in shape and design can be enough — and that differentiation must be meaningful, not tokenistic.

The AI Angle: Who Trains Whom?

AI tools like ChatGPT, Midjourney, and Suno don’t just copy works. They learn patterns from thousands of inputs. But in doing so, they arguably absorb creative expression — chord progressions, phrasing, brushstroke styles — and then make new outputs in that same vein.

AI developers claim this is fair use or transformative. Artists argue it’s a form of invisible appropriation — no different from copying and tweaking a painting, but with zero attribution or compensation.

It’s the Seafolly and Bodum problem, scaled up: if AI’s “original” work was trained on 10,000 human ones, is it really original? Or just a remix with plausible deniability?

The Bottom Line

Copyright law is meant to balance:

  • Encouraging creativity

  • Rewarding labour

  • Allowing critique and cultural dialogue

But that balance is breaking under the weight of machine learning models and automated copyright bots. As Seafolly and Bodum show, the law still values intention, process, and context — not just resemblance.

Yet in a world of remix and AI, intention is opaque, and process is synthetic.

So where do we draw the line?

Filed Under: AI, Copyright, Entertainment, IP Tagged With: AI, Copyright, Entertainment, IP

June 2, 2025 by Scott Coulthart

Back in our Privacy 2.0 series, we unpacked the upcoming Children’s Online Privacy Code — a new legal framework aimed at improving how kids’ personal information is handled online. Now, we’re hearing more from the people it’s meant to protect.

Children themselves have made it clear: they don’t want to be tracked, profiled, or buried in unreadable consent forms. And for companies whose business depends on that data, the reforms aren’t just a policy shift — they’re a threat to the model.

What Children Say

Our earlier blog post focused on the what: a binding industry code to strengthen children’s data protections under the Privacy Act 1988 (Cth).

Now we’re seeing more of the why — and it’s coming straight from the kids.

According to findings from consultations with children conducted by Reset Tech Australia, the message is loud and clear: children aren’t just passive subjects of data collection. They have opinions — strong ones.

Among the most consistent themes:

  • Nearly 90% of children surveyed want default privacy settings set to high, and geolocation turned off by default.

  • Many want simpler, age-appropriate explanations of how their data is used.

  • Crucially, they want the ability to delete their data — a right currently absent from Australia’s privacy framework.

As Privacy Commissioner Carly Kind put it (in her recent discussions with The Australian newspaper):

“Kids aren’t going to read 50 pages of terms and conditions when they sign up to an app… How do we give them actual choices, and not just the ability to click ‘I consent’ when they haven’t even read something, and it’s not a genuine form of consent?”

That question goes to the heart of what the Children’s Privacy Code — and broader privacy reform — is trying to fix.

Not Just Social Media — The Code’s Expanding Reach

Commissioner Kind also confirmed that the Children’s Privacy Code will work in parallel with the upcoming ban on under-16s using social media — but its scope is much broader. The Code will apply to:

  • Websites and online services accessed by children,

  • Wearable devices and fitness trackers, and

  • Education technology and apps, including those used in schools.

In other words, the Code is not just about excluding children from certain online spaces. It’s about protecting them wherever they are — especially in digital environments they’re required to engage with for school or social connection.

This dual approach — platform bans on one side, enforceable data safeguards on the other — reflects a recognition that meaningful participation in digital life shouldn’t come at the expense of privacy.

Who’s Worried — and Why

Stronger children’s privacy rules are good policy — but they’re also bad news for some very profitable business models. Behind the push for transparency and consent reform lies a quieter question: who stands to lose when kids gain more control over their data?

Let’s follow the data trail.

Adtech platforms are the obvious players at risk. Targeting, profiling, and retargeting of under-18s — even if indirect — fuels everything from engagement strategies to dynamic pricing models. If default privacy settings go “high” by law, or profiling becomes opt-in (or outright banned), that revenue stream starts to dry up.

Social platforms, even those ostensibly closed to under-16s, have powerful incentives to retain youth users — both for ad revenue and for maintaining long-term brand stickiness. The idea that kids might have a right to delete their data, opt out of tracking, or receive age-appropriate disclosures cuts into their legal risk model.

Then there’s edtech — the often-overlooked battleground. Many school-deployed tools gather extensive user-level data but offer limited controls to students (or even schools). Vendors that haven’t built privacy-by-design tooling may soon be scrambling to comply.

And finally, consumer IoT and smart toys — products that rely on voice input, biometric sensors, or location tracking — may find their compliance and legal risk profile radically changed if the Code’s protections become enforceable.

Most won’t publicly oppose child protection. But you can expect to see:

  • Lobbying for “flexible” implementation timelines,

  • Calls for “self-regulation”,

  • Quiet legal arguments around the scope of “reasonable access by children”, and

  • Industry pushback on making data deletion or privacy impact assessments mandatory.

What This Means for Industry

If your platform, app, device, or service is used by children, or may reasonably be accessed by children, you’ll need to start preparing for compliance now. That means:

  • Reviewing your default settings for privacy, location, and profiling;

  • Translating privacy policies into plain, age-appropriate language;

  • Building functionality for data deletion — even if not (yet) mandatory; and

  • Moving beyond consent as your only compliance crutch — especially if that consent comes from a user too young to legally or meaningfully provide it.

The consultations have made it clear: children want transparency, choice, and respect for their privacy — and regulators are listening.

Filed Under: Privacy, Regulation Tagged With: Privacy, Regulation

May 30, 2025 by Scott Coulthart

Dealing with Daily Weakly

The High Court will shortly decide how long a client has to sue their lawyer over an earlier stuff-up.

In October 2024, the Federal Circuit and Family Court of Australia handed down a decision that every contracts lawyer — including those in the IP and tech law trenches — should pay attention to. Daily & Daily (No 4) [2024] FedCFamC1A 185 isn’t just a family law dispute — it’s a cautionary tale about negligence, professional responsibility, and the limits of liability when legal drafting goes sideways.

The facts were messy, as most long-running family property cases are. But at the heart of it lay this: a Binding Financial Agreement (BFA) that was meant to protect a husband’s property interests in case of divorce turned out to be void — thanks in part to inadequate legal advice when the agreement was first inked. The husband sued his former lawyers. They argued the claim was out of time and that their advice wasn’t negligent. The Court disagreed on both counts, upholding the finding of negligence and ruling that the claim was not out of time (although the damages award was to be reassessed at a new hearing).

Most significantly, the High Court has now granted special leave to appeal that part of the decision — signalling this issue is far from settled.

The current judgment underscores the point that lawyers — especially those drafting complex commercial or technology agreements — cannot rely on generic advice or boilerplate disclaimers.  In Daily & Daily, the solicitor didn’t draft the agreement originally, but did advise on and amend it.  The Court found that the advice given was cursory, failing to warn of the risk the agreement might be void for uncertainty or vulnerable under s 90K of the Family Law Act 1975 if the couple later had children.

Think of how many SaaS agreements, licensing terms, or IP assignments rely on template structures — or gloss over jurisdiction-specific requirements for enforceability. This case is a reminder of at least two things: First, when a client pays for certainty, delivering ambiguity is actionable. Second, but just as importantly, if you are asked to advise on and touch up an agreement you didn’t draft, you are taking responsibility for all of it, not just the bits you tweak.

One of the most interesting parts of the appeal will be about timing. The lawyers tried to argue that any negligence claim was statute-barred — that is, out of time. But the Court said no: in cases involving contingency-based loss (like a BFA only becoming relevant on separation), damage doesn’t “crystallise” until the adverse event happens. That’s a powerful precedent for all kinds of delayed-impact contract failures — including option agreements, royalties, or licensing deals that collapse years later.

The appeal will now head to the High Court, which could reshape how limitation periods apply to negligent drafting in complex personal and commercial transactions.

Specifically, when does damage occur after negligent drafting? At the time the agreement is entered into, or at the time there are financial consequences down the track?

The High Court’s answer may redefine professional liability timelines — and not just in family law.

Filed Under: Uncategorized

May 29, 2025 by Scott Coulthart

For years, when an Australian company suffered a data breach, the script was pretty simple: notify the OAIC, maybe tell your customers, and brace for PR blowback. But in a landscape of ransomware gangs, deepfake scams, and real-world harm flowing from leaked personal info, that old approach started to feel… inadequate. The new privacy law amendments (to the Privacy Act 1988 (Cth)) try to fix that.

In this 5th instalment of our Privacy 2.0 series, we look at the new regime of EDB declarations and emergency declarations — new legal tools that give the government power to coordinate how personal data is shared during and after a crisis. If that sounds like overreach, it’s not. It’s actually quite surgical. These powers are about enabling targeted, temporary, lawful information sharing when the goal is harm minimisation — not surveillance.

Under the new Division 5 of Part IIIC of the Act — which deals with EDB declarations and commenced on 11 December 2024 — the Minister can authorise specific entities to collect, use, or disclose personal information otherwise restricted by the APPs, but only for clearly defined, time-limited purposes like fraud prevention, identity verification, or cyber response, and only where there has been an eligible data breach that ticks certain boxes.

Banks, credit bureaus, and government agencies may be brought into the loop — but not media outlets. There are safeguards: transparency requirements, consultation with the OAIC, and criminal offences for going rogue with the info.

Then there are emergency declarations — a reboot of existing powers to deal with natural disasters, pandemics, and national emergencies. These let the Prime Minister or a designated Minister approve personal data handling across public and private sectors for things like locating missing persons or coordinating aid.

Again, it’s tightly scoped: the declarations can’t be used for general surveillance, expire after 12 months unless renewed, and once again exclude media outlets entirely – so there are no free-for-alls, no “Minister for Metadata” moment.

The lesson for businesses? Don’t assume your data handling obligations end at “notify the OAIC.” In breach or emergency scenarios, you may now be authorised — or even expected — to share personal information, as long as it aligns with a declaration. Legal and compliance teams should track these developments — your incident response plan may need a serious update.

In short: breach response is no longer just about damage control. It’s about lawful coordination. And if you’re caught flat-footed without internal protocols for handling this new regime, you’re behind the curve.

Next week in the Privacy 2.0 series: how the law now reimagines overseas data transfers — and whether your Singapore-based SaaS platform still cuts it.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 5, Regulation

May 29, 2025 by Scott Coulthart

IP Australia Knocks Canva to the Canvas

How many patents could a patent combatant patent if a patent combatant could patent patents?  It turns out possibly none, if they’re IT-based patents …

It’s not often that a legal decision about slide deck formatting gets a 230-page appendix and a judicial tone verging on exasperation. But that’s exactly what happened in April when IP Australia handed down the Delegate’s ruling on two Canva patent applications. And for tech lawyers, it’s a masterclass in where the edges of software patentability still lie in Australia.

What Canva Sought to Patent

Canva — Australia’s SaaS design darling — had sought patent protection for two computer-implemented inventions. One described how to take content from a document and reflow it into a deck format automatically. The other focused on the maths behind mapping design “fills” into limited layout “frames.”

The applications were meticulous. Detailed. Full of flowcharts, hierarchy data, bounding boxes, fills, and pagination logic. They read like a design engineer’s epic poem – a love letter to structured templates.

But in the eyes of IP Australia, they weren’t inventions. Not in the legal sense, anyway.

The Legal Lesson: “Manner of Manufacture” Still Matters

Australia’s test for whether a computer-implemented invention is patentable remains the “manner of manufacture” test — essentially asking whether the claimed invention involves more than just abstract ideas, business rules, or well-known computer functions.

And here, the Delegate of the Commissioner of Patents said: nope.

Despite Canva’s argument that their invention transformed how users generate designs and templates, IP Australia saw it differently: the claims described a process for applying “rules” to content in order to lay it out aesthetically — something a human designer could do, and something that didn’t, in substance, solve a technical problem or enhance the functioning of a computer. It was more scheme than science.

More Than Just a Canva Problem

Why should the rest of us care?

Because this is yet another signal — after decisions like Research Affiliates and Rokt — that Australia continues to draw a relatively narrow line on software patents. If you’re advising a client on patent strategy in the digital design, AI, or UX tooling space, the key takeaway is this: just because something’s hard to code doesn’t mean it’s patentable.

The software has to do more than automate — it must yield a technical effect or improvement that isn’t just the automation itself.

So, What’s Next?

For Canva, it’s back to the drawing board — or perhaps, back to their formidable brand and copyright moat. And they still have six months to reframe the claims in a way that may survive scrutiny.

For the rest of us, it’s another sharp reminder that patenting in the tech sector remains as much an art as a science. Don’t just ask “Is it clever?” Ask, “Is it a manner of manufacture?”

Because in Australian patent law, not all clicks are created equal.

Filed Under: IP, Patents Tagged With: IP, Patents


© Scott Coulthart 2025