

August 1, 2025 by Scott Coulthart

Copy Paste App? The Pleasures and Pitfalls of Screenshot-to-Code Tools

Imagine this: you take a screenshot of your favourite SaaS dashboard, upload it to a no-code AI tool, and minutes later you have a functioning version of the same interface — layout, buttons, styling, maybe even a working backend prototype. Magic? Almost.

Welcome to the world of screenshot-to-code generators — tools that use AI and no-code logic to replicate functional software from images. These platforms (like Galileo AI, Builder.io, and Uizard) promise rapid prototyping, faster MVP launches, and a lower barrier to entry for founders, designers, and product teams alike.

But while the tech is impressive, the legal waters are murkier. Here’s the pleasure and the pitfall.


🚀 The Pleasure: Design to Prototype at Lightspeed

The promise is seductive:

  • Rapid prototyping: What used to take weeks of front-end dev can now take hours — sometimes minutes.

  • Visual to functional: AI converts static designs (or even screenshots of existing apps) into working interfaces with mock data or basic logic.

  • Lower costs: Startups or solo devs can build more for less — less code, less labour, and less time.

Tools like Galileo AI and Uizard are being used to generate mock admin panels, mobile UI concepts, and even pitch-ready MVPs. They’re ideal for internal dashboards, client demos, or iterating fast before investing in full-stack builds.

But many users go further — taking screengrabs from existing platforms (think Notion, Salesforce, Figma, Xero) and asking the AI to “make me one of these.”

And that’s where the problems begin.


⚠️ The Pitfall: Copyright, Clones, and Clean Hands

Just because a tool can replicate an interface doesn’t mean you should — especially if your starting point is a screenshot of someone else’s software.

Here are the big legal traps to watch out for:

1. Copyright in the Interface

While copyright doesn’t protect ideas, it does protect expressions — including graphic design, layout, icons, fonts, and even the “look and feel” of certain interfaces. If your cloned UI copies the visual design of another product too closely, you may be infringing copyright (or at least inviting a legal headache).

Australia’s Desktop Marketing Systems v Telstra [2002] FCAFC 112 reminds us that copyright can exist in compilations of data or structure — not just in pretty pictures.

2. Trade Dress and Reputation

Even if your app doesn’t copy the code, a lookalike interface could fall foul of passing off or misleading conduct laws under the Australian Consumer Law if it creates confusion with an established brand. That risk increases if you’re operating in a similar space or targeting the same user base.

The global tech giants have deep pockets — and they’ve sued for less.

3. Terms of Use Breaches

Many platforms prohibit copying or reverse engineering their interfaces. Uploading screenshots of their product to an AI builder might violate their terms of service — even if your clone is only for internal use.

This isn’t just theory: platforms like OpenAI and Figma already use automated tools to detect and act on terms breaches — especially those that risk commercial leakage or brand dilution.

4. No Excuse Just Because the Tool Did It

You can’t hide behind the AI. If your clone infringes IP rights, you’re liable — not the platform that helped you build it. The tool is just that: a tool.

In legal terms, there’s no “my AI made me do it” defence.


🤔 So What Can You Do?

  • ✅ Use these tools for original designs: Sketch your own wireframes, then let the AI flesh them out.

  • ✅ Take inspiration, not duplication: You can draw ideas from good UI — but avoid replicating them pixel-for-pixel.

  • ✅ Use public design systems: Many platforms release UI kits and components under open licences (e.g., Material UI, Bootstrap). Start there – see the sketch after this list.

  • ✅ Keep it internal: If you must replicate an existing interface to test functionality, don’t deploy it publicly — and definitely don’t commercialise it.

  • ✅ Get advice: If you’re close to the line (or don’t know where the line is), speak to an IP lawyer early. Your clone may have been cheap to build; a courtroom won’t be.
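
By way of illustration only, here is a minimal sketch of the design-system approach: a small dashboard assembled from openly licensed Material UI (MUI v5) components rather than cloned from anyone's screenshot. The component names and props are standard @mui/material API; the layout, labels and data are invented for this example.

```tsx
import * as React from "react";
import { AppBar, Toolbar, Typography, Grid, Card, CardContent, Button } from "@mui/material";

// Invented sample data for the illustration only
const metrics = [
  { label: "Active users", value: "1,204" },
  { label: "Open tickets", value: "37" },
];

export default function Dashboard() {
  return (
    <>
      {/* Your own branding and layout, built from open-licence components */}
      <AppBar position="static">
        <Toolbar>
          <Typography variant="h6">My Own Dashboard</Typography>
        </Toolbar>
      </AppBar>

      <Grid container spacing={2} sx={{ p: 2 }}>
        {metrics.map((m) => (
          <Grid item xs={12} sm={6} key={m.label}>
            <Card>
              <CardContent>
                <Typography color="text.secondary">{m.label}</Typography>
                <Typography variant="h4">{m.value}</Typography>
              </CardContent>
            </Card>
          </Grid>
        ))}
      </Grid>

      <Button variant="contained" sx={{ m: 2 }}>
        Refresh
      </Button>
    </>
  );
}
```

Starting from a kit like this, an AI tool can flesh out your own wireframe without ever ingesting a competitor's interface.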


🧠 Final Thought: Just Because You Can…

…doesn’t mean you should.

AI is rapidly transforming the way software is built — but it’s also tempting users to cut corners on IP. Using these tools responsibly means treating screenshots not just as pixels, but as possibly protected property.

Build fast — but build clean.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

July 11, 2025 by Scott Coulthart

Bogus Brands, Fake Flyers, and Deepfake Danger: The Law Behind the Nathan Cleary Image Scandal

Rugby league star Nathan Cleary is the latest Australian celebrity to have his image hijacked for commercial gain without consent — a reminder that in the AI age, the unauthorised use of someone’s likeness isn’t just a reputational risk. It’s often unlawful, and sometimes even criminal.

Just hours after this year’s Origin decider, fans returned to their cars at Sydney Olympic Park to find a flyer featuring a doctored image of Cleary, seemingly endorsing novelty car accessories. The image was fake. The quote was fake. The endorsement never happened. But the legal implications are very real.

So what’s actually being breached when someone misuses your face — and what can you do about it?

📸 No Statutory “Right of Publicity” — But You Still Have Legal Options

Australia doesn’t have a US-style statutory “right of publicity” or standalone image right. But celebrities aren’t powerless.

Legal remedies typically come from three key areas:


⚖️ 1. Misleading and Deceptive Conduct (Australian Consumer Law)

Section 18 of the Australian Consumer Law prohibits conduct that is misleading or deceptive, or likely to mislead or deceive.

Using a person’s likeness — especially a high-profile figure like Nathan Cleary — in a way that suggests endorsement or association, when no such endorsement exists, will almost always be misleading.

This can apply even if no goods are sold, so long as the impression of association is strong enough to influence consumer behaviour.

Penalties can include injunctions, corrective advertising, damages, and fines for corporations and individuals.


🧠 2. Passing Off

A common law cause of action, passing off protects the goodwill a person or brand has built up. To succeed, you must show:

  • Reputation in the market

  • Misrepresentation (by the other party)

  • Damage to your goodwill or reputation

It’s often used by celebrities to stop unauthorised commercial use of their name or likeness. For sportspeople like Cleary — who command lucrative brand partnerships — unauthorised endorsements can undercut carefully curated sponsorship relationships.


🎨 3. Copyright and Doctored Images

If the image used was a reproduction or adaptation of a copyright-protected photo — for instance, one originally taken by a professional photographer — the flyer could infringe copyright as well.

Even digitally manipulated images (such as AI-generated or Photoshopped versions) may still reproduce a substantial part of the original.


🔒 Criminal Deception?

Cleary’s legal team has suggested this may also amount to obtaining a benefit by deception — a criminal offence under various state and territory laws.

That’s especially relevant where:

  • The misrepresentation is intended to induce consumers to buy something

  • The product may be part of a scam or fraudulent site

  • Consumers are financially harmed

This isn’t just civil IP — it’s potentially identity-based fraud.


🤖 AI Makes This Easier — and Worse

The ability to fake an endorsement has never been more accessible. AI image generators and editing tools now allow anyone to quickly create plausible likenesses of celebrities, insert fake quotes, or digitally recreate products.

What used to require a designer and Photoshop now takes 10 seconds and a prompt.

Without robust protections or swift enforcement, athletes and entertainers risk becoming unwilling frontmen for scammy brands or shady products — with little control over how or where their likeness appears.


🛑 So What Can Be Done?

For talent: Quick legal action is key. That includes cease-and-desist letters, takedown requests, and (where needed) court proceedings. Keep records of your brand deals — including exclusivity — and monitor the use of your name and likeness online.

For businesses: Don’t use a person’s image, name, voice, or persona to promote goods or services unless you’ve secured clear written consent. Even “harmless” nods or jokes can land you in hot water if the impression is that they’ve endorsed your product.

For regulators and sporting bodies: There’s a strong case for greater protection — not just for economic harm, but for consumer trust and brand integrity. Fans deserve to know when a product is genuinely endorsed, and when it’s just a digital fake.


Final Whistle

This isn’t just about one player and one flyer.

It’s a wake-up call about how easily digital tools can blur the line between real and fake — and why the law must be ready to blow the whistle when someone takes the mickey with a public figure’s face.

Filed Under: AI, IP Tagged With: AI, IP

June 24, 2025 by Scott Coulthart

Fair Use or Free Ride? The Case for an AI Blanket Licence

What if AI companies had to pay for the content they train on? Welcome to the next frontier in copyright law — where inspiration meets ingestion.

When AI companies train their models — whether for music, image generation, writing or video — they don’t do it in a vacuum. They train on us. Or more precisely: on our songs, our blogs, our art, our tweets, our books, our interviews.

They harvest it at scale, often scraped from the open web, with or without permission — and certainly without compensation.

This has prompted an increasingly vocal question from creators and content owners:

Shouldn’t we get paid when machines learn from our work?

The proposed answer from some corners: a blanket licensing regime.

What’s a Blanket Licence?

Nothing to do with bedding – a blanket licence is a pre-agreed system for legal reuse. It doesn’t ask for permission each time. Instead, it says:

You can use a defined pool of material for a defined purpose — if you pay.

We already see this in:

  • Music royalties (e.g. APRA, ASCAP, BMI)

  • Broadcast and public performance rights

  • Compulsory licensing of cover songs in the US

Could the same apply to AI?

What the Law Says (or Doesn’t)

AI companies argue that training their models on public material is “fair use” (US) or doesn’t involve “substantial reproduction” (Australia), since no exact copy of the work appears in the output.

However, copies are made during scraping, and substantial parts are almost certainly reproduced during the training process or embedded in derivative outputs — either of which could pose problems under both US and Australian copyright law.

But courts are still catching up.

Pending or recent litigation:

  • The New York Times v OpenAI: scraping articles to train GPT

  • Sarah Silverman v Meta: use of copyrighted books

  • Getty Images v Stability AI: image training and watermark copying

None of these cases have yet resolved the underlying issue:

Is training AI on copyrighted works a use that requires permission — or payment?

What a Blanket Licence Would Do

Under a blanket licence system:

  • Training (and copying or development of derivatives for that purpose) would be lawful, as long as the AI provider paid into a fund

  • Creators and rights holders would receive royalty payments, either directly or via a collecting society

  • A legal baseline would be established, reducing lawsuits and uncertainty

This would mirror systems used in broadcasting and streaming, where revenue is pooled and distributed based on usage data.
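
To make the pooling idea concrete, here is a minimal sketch (in TypeScript, with entirely hypothetical names and figures, not any proposed scheme) of the pro-rata arithmetic: each rights holder receives the pool multiplied by their share of attributed usage.

```typescript
interface UsageRecord {
  rightsHolder: string;
  usageShare: number; // e.g. tokens, plays or attributed appearances in the training corpus
}

// Split a pooled licence fund pro rata by attributed usage
function distributePool(poolAmount: number, usage: UsageRecord[]): Map<string, number> {
  const totalUsage = usage.reduce((sum, u) => sum + u.usageShare, 0);
  const payouts = new Map<string, number>();
  for (const u of usage) {
    // Each holder receives: pool * (their attributed usage / total attributed usage)
    payouts.set(u.rightsHolder, totalUsage === 0 ? 0 : poolAmount * (u.usageShare / totalUsage));
  }
  return payouts;
}

// Example: a $1,000,000 pool split between three hypothetical rights holders
const payouts = distributePool(1_000_000, [
  { rightsHolder: "News publisher", usageShare: 600 },
  { rightsHolder: "Music catalogue", usageShare: 300 },
  { rightsHolder: "Independent blogger", usageShare: 100 },
]);
console.log(payouts); // 600,000 / 300,000 / 100,000 respectively
```

The arithmetic is the easy part; as the next section shows, the hard part is knowing whose work sits behind each "usage share" in the first place.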

Challenges Ahead

1. Who Gets Paid?

Not all data is traceable or attributed. Unlike Spotify, which tracks each song streamed, AI models ingest billions of unlabelled tokens.

How do you determine who owns what — and which parts — of material abstracted, fragmented, and stored somewhere in the cloud?

2. How Much?

Rates would need to reflect:

  • The extent of use

  • The importance of the material to the training corpus

  • The impact on the original market for the work

This is tricky when a model is trained once and then used forever.

3. Which Countries?

Copyright laws vary. A licence in Australia might mean nothing in the US.

A global licence would require multilateral cooperation — and likely WIPO involvement.

Legal Precedent: Australia’s Safe Harbour and Statutory Licensing Models

Australia’s own statutory licensing schemes (e.g. educational copying under Part VB of the Copyright Act) show that:

  • Lawmakers can mandate payment for certain uses,

  • Even if individual rights holders never negotiated the terms,

  • Provided it’s reasonable, transparent, and compensatory.

But those systems also brought:

  • Bureaucratic collection processes

  • Contentious allocation models

  • Endless legal wrangling over definitions (What is “reasonable portion”? What qualifies as “educational purpose”?)

Expect the same for AI.

Creators and Innovation: A Balancing Act

For creators:

  • A blanket licence offers recognition and payment

  • It helps avoid the current “scrape now, settle later” model

  • It could fund new creative work rather than hollowing out industries

For innovators:

  • It provides legal certainty

  • Encourages investment in AI tools

  • Reduces the risk of devastating retroactive litigation

But if set up poorly, it could:

  • Be exclusionary (if licensing fees are too high for small players)

  • Be ineffective (if rights aren’t properly enforced or distributed)

  • Or be too slow to match AI’s pace

What’s Next?

Australia’s Copyright Act doesn’t currently recognise training as a specific form of use. But policy reviews are under way in multiple countries, including by:

  • The UK IPO

  • The European Commission

  • The US Copyright Office

  • And here in Australia, the Attorney-General’s Department is conducting consultations through 2024–25 on how copyright law should respond to AI

Creators, platforms, and governments are all watching the courts. But if consensus forms around the need for structured compensation, a statutory blanket licence might just be the solution.


Bottom Line

We’ve built AI on the backs of human creativity. The question isn’t whether to stop AI — it’s how to make it fair.

A blanket licence won’t solve every problem. But it could be the start of a system where creators aren’t left behind — and where AI learns with permission, not just ambition.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

June 20, 2025 by Scott Coulthart

Productivity or Pink Slips? The Rise of Agentic AI

Ok, enough with the scaremongering – let’s thrash it out.

Is AI going to replace us any time soon?

One perspective on the medium-term future:

    “Cancer is cured, the economy grows at 10% a year… and 20% of people don’t have jobs.”

So said Dario Amodei, CEO of Anthropic, in one of the most jarring soundbites to emerge from the AI sector this year. It’s not a dystopian movie pitch — it’s a plausible trajectory.

The Brisbane Times recently spotlighted how Telstra is leaning hard into this future, and it starts with deploying so-called Agentic AI – discrete AI tools able to do a bunch of things with minimal oversight. From automating customer service to writing code, the $54 billion telco is betting big that its next era won’t be driven just by fibre and frequency, but by “digital agents”: AI tools with autonomy to act, learn and optimise at scale.

While Telstra CEO Vicki Brady didn’t give hard numbers on expected job cuts, she did suggest the company’s workforce will likely be smaller by 2030. No bold claims — just quiet math. That’s the real face of the AI revolution: not mass firings, but jobs that never get hired in the first place.

Enter the Digital Employee

Nvidia’s Jensen Huang calls them “digital employees” — autonomous, specialised AI agents that handle roles from cybersecurity to network monitoring to legal summarisation. Unlike your flesh-and-blood team, they don’t sleep, unionise, or call in sick.

Tech giants like Microsoft, Canva, and Shopify are already eliminating roles that generative AI can perform faster, cheaper or more reliably. Shopify’s test for approving new hires? Prove the job can’t be done by AI.

Even highly paid software engineers and technical writers are now brushing up résumés — or joining unions. The shock isn’t just the job losses — it’s the redefinition of what work is.

The Illusion of Understanding

And yet — for all its prowess, there’s a lot that AI still doesn’t understand.

It doesn’t feel shame, pride, love, loyalty or regret. It doesn’t know the weight of a moral dilemma or the subtle ache of ambiguity. It doesn’t take responsibility. It hasn’t grown up anywhere. It’s very good at simulating humanity, but it hasn’t cracked what it means to be human.

Here are just a few areas where that matters:

• Moral Judgment & Empathy

AI doesn’t feel anything. It can mimic empathetic language, but it doesn’t understand suffering, joy, duty, shame, or dignity. That matters in:

  • law (e.g. sentencing, equitable remedies)

  • medicine (e.g. breaking bad news)

  • management (e.g. mentoring, handling conflict)

  • creative industries (e.g. stories that evoke genuine emotion)

• Contextual Wisdom and Ethical Trade-Offs

Humans weigh competing priorities in fluid, unquantifiable ways. A judge balancing public policy with individual hardship, or a parent navigating fairness between siblings — AI can model it, but not feel the stakes or bear the consequences.

• Lived Experience and Cultural Intuition

Even with perfect training data, AI lacks a body, a history, a community. It hasn’t known pain or formed personal relationships. It cannot speak authentically from or to a place of real cultural knowledge.

• Responsibility and Accountability

We trust humans with hard decisions because they can be held responsible. There’s no moral courage or ethical failure in the output of a large language model — only the illusion of one.

These aren’t just philosophical quibbles. They’re pressing questions for:

  • Law: Who bears blame when an AI agent misfires?

  • Healthcare: Who decides whether aggressive treatment is compassionate or cruel?

  • Leadership: Can you coach courage into someone via algorithm?

The Uncomfortable Part

AI already mimics a lot of that better than expected.  Consider:

• Empathy Simulation

GPT-4, Claude and others can write with stunning emotional acuity. They generate responses that feel empathetic, artistic or wise. It’s not authentic — but it’s increasingly indistinguishable, and often considered “good enough” by the humans receiving it.

• Decision-Making and Pattern Recognition at Scale

AI already outperforms humans at certain medical diagnoses, legal research, contract review and logistics. Its consistency and recall beat even expert practitioners — and that pushes decision-making downstream to human review of AI output.

• Creative Collaboration

AI is co-authoring books, scoring music, designing buildings. The raw ideas remain human-led (for now), but AI increasingly does the scaffolding. The assistant as co-creator is here.

• Agentic AI and Task Autonomy

Agentic AI can take a task, plan it, execute it, and evaluate the results. That’s edging close to synthetic intentionality. In limited domains, it already feels like independent judgment.

The Upshot

What AI can do — increasingly well — is mimic language, logic and even tone. It can co-author your policy doc, diagnose your anomaly, draft your contract (although it still does that terribly at present – which, frankly, makes the contracts lawyer in me feel safe for now), and script your empathy.

But ask it to weigh competing values in an evolving ethical context — or even just draft a nuanced commercial agreement, conduct accurate scientific or legal research, or develop a strategy based on historical fact — and you quickly meet its limits.

Those edge cases still belong to humans in the loop.

So Who Owns the Output?

As businesses delegate more high-order tasks to autonomous agents, legal questions are multiplying:

  • Who owns the IP generated by a self-directed AI agent?
    → At this stage, probably no one — though ordinary IP rules apply to any human-developed improvements.

  • Can AI-created processes be patented or protected as trade secrets?
    → Not patented without significant human input — at least not under current Australian (or global) standards. Trade secrets? Only if the process was generated in confidential circumstances, and even then, likely only protected contractually — or by a very sympathetic equity judge with a soft spot for machines and a broad view of what counts as confidence.

  • Will the law begin to treat AI output as a kind of quasi-employee contribution?
    → Hard to say. But this author’s view: yes — we’re likely to see forms of legal recognition for things created wholly or partly by generative AI, especially as its use becomes ubiquitous.

Telstra’s ambition to shift from “network provider” to “bespoke experience platform” only deepens the stakes. If AI manages your venue’s mobile traffic to prioritise EFTPOS over selfies, who owns that logic? What’s the IP — and who gets paid?

We’re very likely to find out soon.

We May Not Be Replaced, But We Are Being Rerouted

What’s unfolding isn’t the erasure of human work — but its redistribution.

Jobs once seen as safe — legal drafting, coding, customer care — are being sliced up and reassembled into workflows where humans supervise, train or rubber-stamp what AI proposes.

We’re becoming fewer creators, more editors. Fewer builders, more overseers.

This is the heart of the AI transition: it’s not about making us obsolete.  It’s about making us team players — not to say optional — in a landscape of role transformation, driven by the pursuit of results.

That’s why this isn’t just an IP question. It’s a human one.

So yes — cancer might be cured. The economy might boom.  But as the digital employee clocks in, we’ll need more than productivity gains.

We’ll need new answers — about ownership, ethics, responsibility and value.  Not just in law, but in how we define a fair and meaningful future.

Filed Under: AI, IP, Technology Tagged With: AI, IP, Technology

June 18, 2025 by Scott Coulthart

Paul Bender’s music has been sampled by Beyoncé and Kendrick. His band, Hiatus Kaiyote, has three Grammy nominations. His side project, The Sweet Enoughs, racks up millions of streams. So it came as a shock when fans started hearing tracks on his Spotify profile that he didn’t recognise — or approve.

Tracks that sounded like they’d been composed by an AI trapped in an elevator.

“It was probably the worst attempt at music I’ve ever heard,” Bender told Brisbane Times. “Just absolutely cooked.” His reaction soon gave way to a grim realisation: someone was uploading fake music — apparently AI-generated — directly to his artist profile. And it wasn’t just Spotify. Apple Music, Tidal, YouTube Music and Deezer all carried the same fakes.

No passwords were stolen. No logins compromised. Just a ticking time bomb in the music distribution supply chain.

The Loophole That Became a Business Model

The scam works like this: a grifter uploads garbage tracks via a digital music distributor, assigns them to a known artist name, and — voilà — the platform “maps” them to the artist’s official profile. Instant legitimacy, with algorithmic discovery to match.

No ID check. No consent. No authentication.

This isn’t just a quirk of one platform’s back end. It’s systemic. And it’s being exploited on an industrial scale. One vlogger, TankTheTech, showed how anyone can assign AI music to an artist profile in under ten minutes.

And the numbers are staggering:

  • Deezer reports that 18% of its daily uploads in 2025 are AI-generated.

  • Mubert, an AI music tool, claims over 100 million tracks were made on its platform in just the first half of 2023.

  • The Music Fights Fraud Alliance estimates 10% of all global music streams are fraudulent, with some distributors seeing fraud rates as high as 50%.

That’s not fringe — it’s a revenue model. And it’s bleeding real artists.

Legal Implications: Between Passing Off and Platform Apathy

Let’s be clear: uploading fake music under someone else’s name looks a lot like impersonation, if not passing off, especially where artist reputation and income are at stake. There may also be:

  • Copyright infringement if elements of an artist’s work were used in training or replication.

  • Moral rights violations under the Copyright Act 1968 (Cth), in particular the right against false attribution of authorship (and potentially the right of integrity) where a fake work is passed off as the artist’s own.

  • Misleading or deceptive conduct under section 18 of the Australian Consumer Law.

Yet despite the legal exposure, platforms and distributors are playing hot potato with responsibility. Spotify calls it a “mapping issue.” Artists call it what it is: a scam that platforms are structurally enabling.

Why This Matters — Beyond Music

This isn’t just a niche concern for indie musicians. It’s a case study in what happens when:

  • AI-generated content floods creative ecosystems,

  • platforms prioritise volume over verification,

  • and IP rights become an afterthought to scale.

In short, it’s the algorithm’s world — and creatives are just living in it.

But not quietly. Artists like Bender and Michael League (of Snarky Puppy) are now speaking out and pushing for industry action. With growing numbers of testimonials and escalating complaints, the music world may be the canary in the coal mine for a broader wave of AI impersonation and platform indifference.

Until then, don’t be surprised if the next time you hit play on a favourite artist’s profile… what comes out is 100% algorithm, 0% soul.

Here’s a thought: two-factor verification of the artist before an upload can be mapped to their profile? Verify before you amplify!
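
As a rough illustration of that idea (every name, field and check below is hypothetical; no platform exposes this exact API), a distributor upload claiming an existing artist profile could simply be held until the verified artist has approved the release:

```typescript
interface UploadRequest {
  distributorId: string;
  claimedArtistId: string;
  releaseId: string;
  artistApprovalToken?: string; // e.g. issued only after a 2FA-confirmed approval in the artist's own dashboard
}

function isApprovalTokenValid(token: string, artistId: string, releaseId: string): boolean {
  // Placeholder check: in practice this would verify a signed, expiring token
  // issued only after the verified artist account approved this specific release.
  return token.startsWith(`${artistId}:${releaseId}:`);
}

function canMapToArtistProfile(req: UploadRequest): boolean {
  if (!req.artistApprovalToken) {
    return false; // no approval on record: hold the release rather than auto-mapping it
  }
  return isApprovalTokenValid(req.artistApprovalToken, req.claimedArtistId, req.releaseId);
}

// Example: an unapproved upload claiming an established artist gets rejected
console.log(
  canMapToArtistProfile({
    distributorId: "dist-42",
    claimedArtistId: "hiatus-kaiyote",
    releaseId: "rel-777",
  })
); // false
```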

Filed Under: AI, Entertainment, IP Tagged With: AI, Entertainment, IP

June 2, 2025 by Scott Coulthart

Whose Work Is It Anyway? The Remix War, AI, Coffee Plungers and Swimsuits

From Elton John to anonymous meme-makers, a battle is raging over what it means to be “creative” — and whether it starts with permission.

Two stories made waves in copyright circles last week:

  • In the UK, Sir Elton John, Sir Paul McCartney and other musical heavyweights called for stronger rules to stop AI from “scraping” their songs without a licence.

  • In India, news agency ANI drew criticism for aggressively issuing YouTube copyright claims — even for sub-10 second clips — triggering takedown threats against creators.

At first glance, these might seem worlds apart. But they highlight the same question:

At what point does using someone else’s work become exploitation, not inspiration?

And who decides?

Creators vs Reusers: Two Sides of the Copyright Culture Clash

On one side: Creators — musicians, writers, filmmakers, photographers — frustrated by tech platforms and algorithms ingesting their work without permission. Whether it’s AI training data or news footage embedded in political commentary, their message is the same:
“You’re building on our backs. Pay up.”

On the other side: Remixers, meme-makers, educators, and critics argue that strict copyright regimes chill creativity. “How can we critique culture,” they ask, “if we’re not allowed to reference it?”

This isn’t new — hip hop, collage art, satire, and even pop music are full of samples and nods. But AI has industrialised the scale of reuse. It doesn’t borrow one beat or a single shot. It eats the entire catalogue — then spits out something “new.”

So what counts as originality anymore?

Australian Lens: Seafolly, Bodum, and the Meaning of “Original”

Seafolly v Madden [2012] FCA 1346

In this high-profile swimwear spat, designer Leah Madden accused Seafolly of copying her designs. She posted comparison images on social media implying that Seafolly had engaged in plagiarism. Seafolly sued for misleading and deceptive conduct under ss 52 and 53 of the Trade Practices Act 1974 (the predecessors to s 18 of the Australian Consumer Law – the ACL had commenced by the time of the proceedings, but the conduct complained of took place before it did).

The Federal Court found that Madden’s claims were not only misleading but also unsubstantiated, because the design similarities were not the result of actual copying. The case reinforced that:

  • Independent creation is a valid defence, even if the resulting works are similar

  • Superficial resemblance isn’t enough — there must be a causal connection

It’s a reminder that derivation must be substantial and material, not speculative or assumed.

Bodum v DKSH [2011] FCAFC 98

This case involved Bodum’s iconic French press coffee plunger — the Chambord — and whether a rival product sold by DKSH under the “Euroline” brand misled consumers or passed off Bodum’s get-up as its own.

Bodum alleged misleading or deceptive conduct and passing off, based not on name or logo, but on the visual appearance of the product: a clear glass beaker, metal band, and distinctive handles, which had come to be strongly associated with Bodum.

At trial, the Federal Court rejected Bodum’s claims. But on appeal, the Full Federal Court reversed that decision, holding that:

  • Bodum had a substantial reputation in the get-up alone;

  • The Euroline plunger was highly similar in appearance; and

  • DKSH’s failure to adequately differentiate its product through branding or design gave rise to a misleading impression.

Both passing off and misleading/deceptive conduct (also under the old s52) were found. The Court emphasised that reputation in shape and design can be enough — and differentiation must be meaningful, not tokenistic.

The AI Angle: Who Trains Whom?

AI tools like ChatGPT, Midjourney, and Suno don’t just copy works. They learn patterns from thousands of inputs. But in doing so, they arguably absorb creative expression — chord progressions, phrasing, brushstroke styles — and then make new outputs in that same vein.

AI developers claim this is fair use or transformative. Artists argue it’s a form of invisible appropriation — no different from copying and tweaking a painting, but with zero attribution or compensation.

It’s the Seafolly and Bodum problem, scaled up: if AI’s “original” work was trained on 10,000 human ones, is it really original? Or just a remix with plausible deniability?

The Bottom Line

Copyright law is meant to balance:

  • Encouraging creativity

  • Rewarding labour

  • Allowing critique and cultural dialogue

But that balance is breaking under the weight of machine learning models and automated copyright bots. As Seafolly and Bodum show, the law still values intention, process, and context — not just resemblance.

Yet in a world of remix and AI, intention is opaque, and process is synthetic.

So where do we draw the line?

Filed Under: AI, Copyright, Entertainment, IP Tagged With: AI, Copyright, Entertainment, IP
