
August 1, 2025 by Scott Coulthart

Copy Paste App? The Pleasures and Pitfalls of Screenshot-to-Code Tools

Imagine this: you take a screenshot of your favourite SaaS dashboard, upload it to a no-code AI tool, and minutes later you have a functioning version of the same interface — layout, buttons, styling, maybe even a working backend prototype. Magic? Almost.

Welcome to the world of screenshot-to-code generators — tools that use AI and no-code logic to replicate functional software from images. These platforms (like Galileo AI, Builder.io, and Uizard) promise rapid prototyping, faster MVP launches, and a lower barrier to entry for founders, designers, and product teams alike.

But while the tech is impressive, the legal waters are murkier. Here’s the pleasure and the pitfall.


🚀 The Pleasure: Design to Prototype at Lightspeed

The promise is seductive:

  • Rapid prototyping: What used to take weeks of front-end dev can now take hours — sometimes minutes.

  • Visual to functional: AI converts static designs (or even screenshots of existing apps) into working interfaces with mock data or basic logic.

  • Lower costs: Startups or solo devs can build more for less — less code, less labour, and less time.

Tools like Galileo AI and Uizard are being used to generate mock admin panels, mobile UI concepts, and even pitch-ready MVPs. They’re ideal for internal dashboards, client demos, or iterating fast before investing in full-stack builds.

But many users go further — taking screengrabs from existing platforms (think Notion, Salesforce, Figma, Xero) and asking the AI to “make me one of these.”

And that’s where the problems begin.


⚠️ The Pitfall: Copyright, Clones, and Clean Hands

Just because a tool can replicate an interface doesn’t mean you should — especially if your starting point is a screenshot of someone else’s software.

Here are the big legal traps to watch out for:

1. Copyright in the Interface

While copyright doesn’t protect ideas, it does protect expressions — including graphic design, layout, icons, fonts, and even the “look and feel” of certain interfaces. If your cloned UI copies the visual design of another product too closely, you may be infringing copyright (or at least inviting a legal headache).

Australia’s Desktop Marketing Systems v Telstra [2002] FCAFC 112 reminds us that copyright can exist in compilations of data or structure — not just in pretty pictures.

2. Trade Dress and Reputation

Even if your app doesn’t copy the code, a lookalike interface could fall foul of passing off or misleading conduct laws under the Australian Consumer Law if it creates confusion with an established brand. That risk increases if you’re operating in a similar space or targeting the same user base.

The global tech giants have deep pockets — and they’ve sued for less.

3. Terms of Use Breaches

Many platforms prohibit copying or reverse engineering their interfaces. Uploading screenshots of their product to an AI builder might violate their terms of service — even if your clone is only for internal use.

This isn’t just theory: platforms like OpenAI and Figma already use automated tools to detect and act on terms breaches — especially those that risk commercial leakage or brand dilution.

4. No Excuse Just Because the Tool Did It

You can’t hide behind the AI. If your clone infringes IP rights, you’re liable — not the platform that helped you build it. The tool is just that: a tool.

In legal terms, there’s no “my AI made me do it” defence.


🤔 So What Can You Do?

  • ✅ Use these tools for original designs: Sketch your own wireframes, then let the AI flesh them out.

  • ✅ Take inspiration, not duplication: You can draw ideas from good UI — but avoid replicating them pixel-for-pixel.

  • ✅ Use public design systems: Many platforms release UI kits and components under open licences (e.g., Material UI, Bootstrap). Start there (see the short sketch after this list).

  • ✅ Keep it internal: If you must replicate an existing interface to test functionality, don’t deploy it publicly — and definitely don’t commercialise it.

  • ✅ Get advice: If you’re close to the line (or don’t know where the line is), speak to an IP lawyer early. Advice is cheaper than court.
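
If it helps to picture what “start there” looks like in practice, below is a minimal, purely illustrative sketch. It assumes a React and TypeScript project with the MIT-licensed @mui/material package installed, and builds a simple dashboard header from open-licensed components rather than from a screenshot of someone else’s product.

    // Purely illustrative: a dashboard header assembled from the open-licensed
    // Material UI (MIT) component library, not cloned from someone else's app.
    // Assumes a React + TypeScript project with @mui/material installed.
    import * as React from "react";
    import { AppBar, Toolbar, Typography, Button } from "@mui/material";

    export function DashboardHeader() {
      return (
        <AppBar position="static">
          <Toolbar>
            {/* Your own branding and layout choices, not a pixel-for-pixel copy */}
            <Typography variant="h6" sx={{ flexGrow: 1 }}>
              My Dashboard
            </Typography>
            <Button color="inherit">Sign in</Button>
          </Toolbar>
        </AppBar>
      );
    }

The legal comfort here comes from the licence attached to the components, not from the tool that generated the layout.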


🧠 Final Thought: Just Because You Can…

…doesn’t mean you should.

AI is rapidly transforming the way software is built — but it’s also tempting users to cut corners on IP. Using these tools responsibly means treating screenshots not just as pixels, but as possibly protected property.

Build fast — but build clean.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

June 25, 2025 by Scott Coulthart

YouTube’s Free Pass May Be Up: eSafety Pushes Back on Social Media Carve-Out

The Albanese Government’s plan to restrict under-16s from holding social media accounts is already proving contentious — and now, its one glaring exception has been officially called out. The eSafety Commissioner, Julie Inman Grant, has advised Communications Minister Anika Wells to scrap the carve-out that would exempt YouTube from the new age-gating regime set to kick in this December.

The proposal, which mandates that platforms like TikTok, Instagram, Snapchat, Reddit and X take “reasonable steps” to block account creation by under-16s, currently spares YouTube on the basis that it has a broader educational and health utility. But the Commissioner’s position is clear: if it walks like TikTok and Shorts like TikTok, it’s probably TikTok — and deserves to be regulated accordingly.

YouTube: Too Big to Ban?

Back in November, then-Minister Rowland argued YouTube played a “significant role in enabling young people to access education and health support”, and thus deserved its special treatment. But the eSafety Commissioner’s new advice — now in the hands of Minister Wells — says the data tells a different story.

YouTube isn’t just a fringe player. A recent eSafety survey found it’s used by 76% of 10- to 15-year-olds, making it the dominant platform for that age group. Among kids who encountered harmful content online, 37% said the worst of it happened on YouTube.

In other words, if the aim is to protect children from the harms of social media, YouTube is not just part of the problem — it’s the biggest piece of it.

Functional Similarity, Regulatory Inconsistency

The core of the Commissioner’s argument is that functionality, not branding, should drive regulation. YouTube Shorts mimics the addictive swipe-based short-form video experience of TikTok and Instagram Reels. Carving it out sends mixed messages about the purpose of the law — and creates loopholes large enough for a Shorts binge.

The advice also calls for more adaptable, risk-based rules that focus on a platform’s actual features and threat profile, not how it labels itself. Technology evolves too fast for static category-based exemptions.

But What’s the Threat, Really?

There may be many examples of nanny-state regulation these days – but this isn’t one of them.

In this author’s opinion, YouTube is an excellent platform: extremely useful and entertaining at the same time, and those benefits extend to adults and under-18s/under-16s alike.

However, there are also significant dangers for under-16s that can’t be ignored.

In plain terms:

1. Exposure to Inappropriate Content

Even with YouTube Kids and restricted mode, children can still be exposed to:

  • Pornographic or sexually suggestive content (sometimes slipped past filters).

  • Violent or graphic videos (including real-life fights, injuries, or distressing footage).

  • Content promoting self-harm, eating disorders, or suicide (often through seemingly innocuous videos or “coded” messaging).

  • Misinformation or conspiracy theories (e.g., QAnon, anti-vax rhetoric).

These exposures are linked to real psychological harms, especially among younger teens still forming their identity and critical reasoning skills.


2. Contact Risks (Predators & Harassment)

YouTube allows comments, live chat during livestreams, and even community posts — all of which create:

  • Opportunities for unsolicited contact from adults (including grooming behaviour).

  • Exposure to cyberbullying or peer harassment, often via comments.

  • Unfiltered interactions during livestreams — which are harder to moderate in real time.

The eSafety Commissioner sees this as part of a broader “contact harm” risk — it’s not just what kids see, but who can reach them and how they’re targeted.


3. Addictive Design (Shorts, Recommendations)

YouTube’s algorithmic design encourages:

  • Binge-watching and excessive screen time through autoplay and recommendations.

  • Engagement loops in YouTube Shorts (TikTok-style scrollable video snippets).

  • Exposure to more extreme or sensational content the longer a child watches (known as algorithmic “radicalisation”).

This design can disrupt sleep, concentration, and mental wellbeing — particularly in adolescents.


4. Data Privacy & Profiling

YouTube collects vast amounts of user data — even from minors — to personalise recommendations and ads. While Google claims to limit this for users under 18:

  • The eSafety Commissioner is concerned that data-driven profiling may still occur covertly or imperfectly.

  • Kids may also be inadvertently tracked across platforms when logged into a YouTube or Google account.


5. False Sense of Safety

YouTube’s exemption from the new social media rules may give parents the impression it is “safe” or “educational” by default — when, in fact, it often contains the same risks as TikTok or Instagram.

The Commissioner specifically called out that there isn’t sufficient evidence YouTube “predominantly provides beneficial experiences” for under-16s. So the carve-out undermines the purpose of the rules.


In summary, the concern isn’t just about under-16s accessing YouTube, but about the total environment of:

  • Risky content,

  • Risky contact,

  • Addictive design, and

  • Inadequate protective controls.

Risk-Based Reform on the Horizon

The YouTube advice comes as the eSafety Commissioner readies a suite of industry-specific codes targeting harmful online content, including pornography and violent material. New obligations are expected for search engines, hosting services, and telcos — with five more codes in the pipeline. If voluntary industry codes fall short, the Commissioner has flagged she’ll impose mandatory standards before July’s end.

Penalties for breach of these codes — like the new social media rules — could reach $50 million for systemic non-compliance.

What’s Next?

The final decision on YouTube’s exemption sits with Minister Wells, who must table the rules in Parliament for scrutiny. But with pressure now coming from the very regulator tasked with enforcement, and mounting community concern over YouTube’s influence, the carve-out may not survive the next sitting.

The bigger question is whether Australia can strike the right balance between platform accountability, digital literacy, and youth agency — without blunting the tools that help kids learn and connect. In a digital world that resists easy categorisation, risk-based regulation may be the only way forward.

Filed Under: Digital Law, Regulation, Technology Tagged With: Digital Law, Regulation, Technology

June 24, 2025 by Scott Coulthart

Fair Use or Free Ride? The Case for an AI Blanket Licence

What if AI companies had to pay for the content they train on? Welcome to the next frontier in copyright law — where inspiration meets ingestion.

When AI companies train their models — whether for music, image generation, writing or video — they don’t do it in a vacuum. They train on us. Or more precisely: on our songs, our blogs, our art, our tweets, our books, our interviews.

They harvest it at scale, often scraped from the open web, with or without permission — and certainly without compensation.

This has prompted an increasingly vocal question from creators and content owners:

Shouldn’t we get paid when machines learn from our work?

The proposed answer from some corners: a blanket licensing regime.

What’s a Blanket Licence?

Nothing to do with bedding – a blanket licence is a pre-agreed system for legal reuse. It doesn’t ask for permission each time. Instead, it says:

You can use a defined pool of material for a defined purpose — if you pay.

We already see this in:

  • Music royalties (e.g. APRA, ASCAP, BMI)

  • Broadcast and public performance rights

  • Compulsory licensing of cover songs in the US

Could the same apply to AI?

What the Law Says (or Doesn’t)

AI companies argue that training their models on public material is “fair use” (US) or doesn’t involve “substantial reproduction” (Australia), since no exact copy of the work appears in the output.

However, copies are made during scraping, and substantial parts are almost certainly reproduced during the training process or embedded in derivative outputs — either of which could pose problems under both US and Australian copyright law.

But courts are still catching up.

Pending or recent litigation:

  • The New York Times v OpenAI: scraping articles to train GPT

  • Sarah Silverman v Meta: use of copyrighted books

  • Getty Images v Stability AI: image training and watermark copying

None of these cases have yet resolved the underlying issue:

Is training AI on copyrighted works a use that requires permission — or payment?

What a Blanket Licence Would Do

Under a blanket licence system:

  • Training (and copying or development of derivatives for that purpose) would be lawful, as long as the AI provider paid into a fund

  • Creators and rights holders would receive royalty payments, either directly or via a collecting society

  • A legal baseline would be established, reducing lawsuits and uncertainty

This would mirror systems used in broadcasting and streaming, where revenue is pooled and distributed based on usage data.
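
To make the pooling idea concrete, here is a minimal, purely illustrative sketch in TypeScript of a pro-rata split of a pooled licence fund based on recorded usage. The rights holders, usage counts and fund size are invented for the example; a real scheme would layer on weighting, minimum payments, collecting-society fees and audit rights.

    // Illustrative only: split a pooled blanket-licence fund pro rata on usage data.
    // All names and figures below are invented for the example.
    type Usage = { rightsHolder: string; worksUsed: number };

    function distributeFund(fundTotal: number, usage: Usage[]): Map<string, number> {
      const totalUses = usage.reduce((sum, u) => sum + u.worksUsed, 0);
      const payments = new Map<string, number>();
      for (const u of usage) {
        // Each rights holder's share is proportional to their recorded usage.
        payments.set(u.rightsHolder, fundTotal * (u.worksUsed / totalUses));
      }
      return payments;
    }

    // Example: a $1,000,000 fund split between three hypothetical rights holders.
    const payments = distributeFund(1_000_000, [
      { rightsHolder: "News publisher", worksUsed: 400_000 },
      { rightsHolder: "Music catalogue", worksUsed: 250_000 },
      { rightsHolder: "Stock image library", worksUsed: 350_000 },
    ]);
    console.log(payments); // News publisher: 400,000; Music catalogue: 250,000; etc.

Even this toy version exposes the hard part: the usage counts have to come from somewhere, which is exactly the attribution problem discussed below.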

Challenges Ahead

1. Who Gets Paid?

Not all data is traceable or attributed. Unlike Spotify, which tracks each song streamed, AI models ingest billions of unlabeled tokens.

How do you determine who owns what — and which parts — of material abstracted, fragmented, and stored somewhere in the cloud?

2. How Much?

Rates would need to reflect:

  • The extent of use

  • The importance of the material to the training corpus

  • The impact on the original market for the work

This is tricky when a model is trained once and then used forever.

3. Which Countries?

Copyright laws vary. A licence in Australia might mean nothing in the US.

A global licence would require multilateral cooperation — and likely WIPO involvement.

Legal Precedent: Australia’s Safe Harbour and Statutory Licensing Models

Australia’s own statutory licensing schemes (e.g. educational copying under Part VB of the Copyright Act) show that:

  • Lawmakers can mandate payment for certain uses,

  • Even if individual rights holders never negotiated the terms,

  • Provided it’s reasonable, transparent, and compensatory.

But those systems also brought:

  • Bureaucratic collection processes

  • Contentious allocation models

  • Endless legal wrangling over definitions (What is “reasonable portion”? What qualifies as “educational purpose”?)

Expect the same for AI.

Creators and Innovation: A Balancing Act

For creators:

  • A blanket licence offers recognition and payment

  • It helps avoid the current “scrape now, settle later” model

  • It could fund new creative work rather than hollowing out industries

For innovators:

  • It provides legal certainty

  • Encourages investment in AI tools

  • Reduces the risk of devastating retroactive litigation

But if set up poorly, it could:

  • Be exclusionary (if licensing fees are too high for small players)

  • Be ineffective (if rights aren’t properly enforced or distributed)

  • Or be too slow to match AI’s pace

What’s Next?

Australia’s Copyright Act doesn’t currently recognise training as a specific form of use. But policy reviews are under way in multiple countries, including by:

  • The UK IPO

  • The European Commission

  • The US Copyright Office

  • And here in Australia, the Attorney-General’s Department is conducting consultations through 2024–25 on how copyright law should respond to AI

Creators, platforms, and governments are all watching the courts. But if consensus forms around the need for structured compensation, a statutory blanket licence might just be the solution.


Bottom Line

We’ve built AI on the backs of human creativity. The question isn’t whether to stop AI — it’s how to make it fair.

A blanket licence won’t solve every problem. But it could be the start of a system where creators aren’t left behind — and where AI learns with permission, not just ambition.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

June 20, 2025 by Scott Coulthart

Productivity or Pink Slips? The Rise of Agentic AI

Ok, enough with the scaremongering – let’s thrash it out.

Is AI going to replace us any time soon?

One perspective’s look at the medium term future:

    “Cancer is cured, the economy grows at 10% a year… and 20% of people don’t have jobs.”

So said Dario Amodei, CEO of Anthropic, in one of the most jarring soundbites to emerge from the AI sector this year. It’s not a dystopian movie pitch — it’s a plausible trajectory.

The Brisbane Times recently spotlighted how Telstra is leaning hard into this future, and it starts with deploying so-called Agentic AI – discrete AI tools able to do a bunch of things with minimal oversight.  From automating customer service to writing code, the $54 billion telco is betting big that its next era won’t be driven just by fibre and frequency, but by “digital agents”: AI tools with autonomy to act, learn and optimise at scale.

While Telstra CEO Vicki Brady didn’t give hard numbers on expected job cuts, she did suggest the company’s workforce will likely be smaller by 2030. No bold claims — just quiet math. That’s the real face of the AI revolution: not mass firings, but jobs that never get hired in the first place.

Enter the Digital Employee

Nvidia’s Jensen Huang calls them “digital employees” — autonomous, specialised AI agents that handle roles from cybersecurity to network monitoring to legal summarisation. Unlike your flesh-and-blood team, they don’t sleep, unionise, or call in sick.

Tech giants like Microsoft, Canva, and Shopify are already eliminating roles that generative AI can perform faster, cheaper or more reliably. Shopify’s test for approving new hires? Prove the job can’t be done by AI.

Even highly paid software engineers and technical writers are now brushing up résumés — or joining unions. The shock isn’t just the job losses — it’s the redefinition of what work is.

The Illusion of Understanding

And yet — for all its prowess, there’s a lot that AI still doesn’t understand.

It doesn’t feel shame, pride, love, loyalty or regret. It doesn’t know the weight of a moral dilemma or the subtle ache of ambiguity. It doesn’t take responsibility. It hasn’t grown up anywhere. It’s very good at simulating humanity, but it hasn’t cracked what it means to be human.

Here are just a few areas where that matters:

• Moral Judgment & Empathy

AI doesn’t feel anything. It can mimic empathetic language, but it doesn’t understand suffering, joy, duty, shame, or dignity. That matters in:

  • law (e.g. sentencing, equitable remedies)

  • medicine (e.g. breaking bad news)

  • management (e.g. mentoring, handling conflict)

  • creative industries (e.g. stories that evoke genuine emotion)

• Contextual Wisdom and Ethical Trade-Offs

Humans weigh competing priorities in fluid, unquantifiable ways. A judge balancing public policy with individual hardship, or a parent navigating fairness between siblings — AI can model it, but not feel the stakes or bear the consequences.

• Lived Experience and Cultural Intuition

Even with perfect training data, AI lacks a body, a history, a community. It hasn’t known pain or formed personal relationships. It cannot speak authentically from or to a place of real cultural knowledge.

• Responsibility and Accountability

We trust humans with hard decisions because they can be held responsible. There’s no moral courage or ethical failure in the output of a large language model — only the illusion of one.

These aren’t just philosophical quibbles. They’re pressing questions for:

  • Law: Who bears blame when an AI agent misfires?

  • Healthcare: Who decides whether aggressive treatment is compassionate or cruel?

  • Leadership: Can you coach courage into someone via algorithm?

The Uncomfortable Part

AI already mimics a lot of that better than expected.  Consider:

• Empathy Simulation

GPT-4, Claude and others can write with stunning emotional acuity. They generate responses that feel empathetic, artistic or wise. It’s not authentic — but it’s increasingly indistinguishable, and often considered “good enough” by the humans receiving it.

• Decision-Making and Pattern Recognition at Scale

AI already outperforms humans at certain medical diagnoses, legal research, contract review and logistics. Its consistency and recall beat even expert practitioners — and that pushes decision-making downstream to human review of AI output.

• Creative Collaboration

AI is co-authoring books, scoring music, designing buildings. The raw ideas remain human-led (for now), but AI increasingly does the scaffolding. The assistant as co-creator is here.

• Agentic AI and Task Autonomy

Agentic AI can take a task, plan it, execute it, and evaluate the results. That’s edging close to synthetic intentionality. In limited domains, it already feels like independent judgment.

The Upshot

What AI can do — increasingly well — is mimic language, logic and even tone. It can co-author your policy doc, diagnose your anomaly, draft your contract (although it still does that terribly at present – which, frankly, makes the contracts lawyer in me feel safe for now), and script your empathy.

But ask it to weigh competing values in an evolving ethical context — or even just draft a nuanced commercial agreement, conduct accurate scientific or legal research, or develop a strategy based on historical fact — and you quickly meet its limits.

Those edge cases still belong to humans in the loop.

So Who Owns the Output?

As businesses delegate more high-order tasks to autonomous agents, legal questions are multiplying:

  • Who owns the IP generated by a self-directed AI agent?
    → At this stage, probably no one — though ordinary IP rules apply to any human-developed improvements.

  • Can AI-created processes be patented or protected as trade secrets?
    → Not patented without significant human input — at least not under current Australian (or global) standards. Trade secrets? Only if the process was generated in confidential circumstances, and even then, likely only protected contractually — or by a very sympathetic equity judge with a soft spot for machines and a broad view of what counts as confidence.

  • Will the law begin to treat AI output as a kind of quasi-employee contribution?
    → Hard to say. But this author’s view: yes — we’re likely to see forms of legal recognition for things created wholly or partly by generative AI, especially as its use becomes ubiquitous.

Telstra’s ambition to shift from “network provider” to “bespoke experience platform” only deepens the stakes. If AI manages your venue’s mobile traffic to prioritise EFTPOS over selfies, who owns that logic? What’s the IP — and who gets paid?

We’re very likely to find out soon.

We May Not Be Replaced, But We Are Being Rerouted

What’s unfolding isn’t the erasure of human work — but its redistribution.

Jobs once seen as safe — legal drafting, coding, customer care — are being sliced up and reassembled into workflows where humans supervise, train or rubber-stamp what AI proposes.

We’re becoming fewer creators, more editors. Fewer builders, more overseers.

This is the heart of the AI transition: it’s not about making us obsolete.  It’s about making us team players — not to say optional — in a landscape of role transformation, driven by the pursuit of results.

That’s why this isn’t just an IP question. It’s a human one.

So yes — cancer might be cured. The economy might boom.  But as the digital employee clocks in, we’ll need more than productivity gains.

We’ll need new answers — about ownership, ethics, responsibility and value.  Not just in law, but in how we define a fair and meaningful future.

Filed Under: AI, IP, Technology Tagged With: AI, IP, Technology

May 21, 2025 by Scott Coulthart

Age Check Please – Australia’s Social Media Age Trial Steps Up

If you thought “what’s your date of birth?” was just an annoying formality, think again. Australia is now deep into a world-first trial of age verification tech for social media — and the implications for platforms, privacy, and policy will be real.

It’s official: Australia is no longer just talking about age restrictions on social media — it’s testing them. In what’s being described as a world-first, the federal government earlier this year launched the Age Assurance Technology Trial, a trial of age assurance technologies across more than 50 platforms, including heavyweights like Meta, TikTok and Snapchat.

The idea? To test whether it’s technically (and legally) viable to verify a user’s age before they gain access to certain online services, especially platforms known to attract kids and teens.

The goal is to find out whether it’s possible — and proportionate — to verify a user’s age before letting them dive into algorithm-driven feeds, DMs, or digital chaos.

Now, as of mid-May, the trial is expanding — with school students in Perth and Canberra joining the test groups. The trial includes biometric screening (e.g. facial age estimation), document-based verification, and other machine-learning tools and tech designed to assess age and detect users under 16 without necessarily collecting identifying information, in line with recommendations from the eSafety Commissioner and privacy reform proposals.

Initial results are reportedly encouraging, showing strong accuracy for detecting under-16 users. Some methods are accurate 90%+ of the time — but questions linger. How well do these tools work across diverse communities? How do they avoid discrimination? And perhaps most importantly: how do you balance age checks with user privacy?

But this isn’t just a tech exercise — it’s a law-and-policy warm-up. With the Children’s Online Privacy Code set to drop by 2026, and eSafety pushing hard for age-based restrictions, the real question is: can you implement age gates that are privacy-preserving, non-discriminatory, and not easily gamed by a teenager with a calculator and Photoshop?

It’s a tough balance. On one hand, there’s real concern about children’s exposure to online harms. On the other, age verification at scale risks blowing out privacy compliance, embedding surveillance tech, and excluding legitimate users who don’t fit biometric norms.

The final report lands in June 2025, and platforms should expect regulatory consequences soon after. If the trial proves age verification is accurate, scalable, and privacy-compatible, you can bet on mandatory age checks becoming law by the end of the year.

Bottom line? If your platform’s UX depends on open access and anonymity, start thinking now about how that survives an incoming legal obligation to know more about your users: if not necessarily who they are, then at least how young they actually are (as opposed to how old they might claim to be).

Filed Under: Digital Law, Technology Tagged With: Digital Law, Technology

April 28, 2025 by Scott Coulthart

Robot Rumble: Motorola and Hytera Throw Down Over Code

It wasn’t exactly Rock ’Em Sock ’Em Robots, but the recent battle between Motorola Solutions and Hytera Communications was certainly a copyright and tech-world punch-up — with real ramifications for copyright law relating to software.

Motorola accused Hytera of pinching thousands of confidential documents and source code files — the digital DNA of Motorola’s radio communications tech — and using them to turbocharge Hytera’s own products.

The claim was that three Motorola engineers downloaded a treasure trove of materials from their work accounts before hopping over to Hytera in 2008. Motorola was not amused, and sued in both the US and Australia, claiming copyright infringement, trade secret theft, and that it’s just not cricket.

Here’s how it unfolded:

  • 2007–2008: Three Motorola engineers leave, allegedly taking source code and confidential documents.
  • 2010s: Hytera launches eerily similar digital mobile radios (DMRs) into the market.
  • 2017: Motorola sues Hytera in the US (for trade secret theft and copyright infringement).
  • 2020: A US jury awards Motorola nearly US$765 million.
  • 2024: The US Seventh Circuit Court of Appeals cuts down the damages, ruling copyright law doesn’t stretch to overseas sales.
  • 2022–2024: In Australian proceedings, Motorola wins a Federal Court case finding substantial copying of its software and some patent infringements.

In Australia (Hytera Communications Corporation Ltd v Motorola Solutions Inc [2024] FCAFC 168), the Federal Court was having none of Hytera’s various arguments. It found that Hytera’s software infringed Motorola’s copyrights in six major works and awarded Motorola remedies accordingly. One of Motorola’s patents was also found infringed — though another was knocked out for invalidity.

So, why does this matter for copyright and software?

First up: software source code is absolutely protected by copyright.  Nothing has changed there.

It doesn’t matter if the copying happened sneakily, through engineers quietly/brazenly exporting files out the back door. If your new product looks suspiciously like the old employer’s, and the code similarities are undeniable, you’re probably in trouble.

It’s an infringement of copyright to copy all or a substantial part of a copyright work without permission.  What this case does is clarify what “substantial part” means when it comes to software.

Generally, in copyright, the test for whether the copied bit was a “substantial part” of the copyright work is a “qualitative test” – that is, it’s about quality, not quantity.

In a music context, cases such as the famous “Men At Work” case (EMI Songs v Larrikin Music) taught us that it’s how important the copied musical passage was to the whole work and not how lengthy it was (that quirky flute part in “Land Down Under” was held to be a reproduction of an important, but short, part of the melody in “Kookaburra Sits in the Old Gum Tree”).

The Court in that case said substantiality depends mainly on the importance or distinctiveness of the part copied in relation to the original work — even a small musical phrase, if distinctive, could be substantial.

In Hytera, though, the Court made an important distinction.  When it comes to software, it’s not about how functionally important the copied code is. You could copy a piece of code that barely moves the dial commercially — and still infringe. The real test is whether the copied part contains the original intellectual effort and creative expression of the coder.

It’s a subtle shift from cases like Men at Work, where the Court focused on whether the snippet (a melody, in that case) was distinctive or significant to the work overall.

In software land, copyright doesn’t care how useful the copied bit is — it only cares if it was original.

The Court in Hytera, though, was careful when discussing “intellectual effort” not to revert entirely back to the old “sweat of the brow” arguments.

In the old Desktop Marketing Systems v Telstra case (early Federal Court levels) it was held that just collecting and compiling basic data was enough for copyright to apply, because it involved effort to compile it all together – that is, it involved “sweat of the brow.”

But then IceTV v Nine (2009) came along, in which the High Court said No — effort alone is not enough.  It was held that copyright protects original expression — meaning something resulting from independent intellectual effort and some creative choice, not just labour.

In Hytera, the Court was very careful not to slip all the way back into pure “sweat of the brow” thinking – the Court didn’t say that simply writing code or working hard gave rise to copyright protection.  Instead, they said:

  • The copied source code was protected because it reflected original intellectual effort and creative choices — not just functional output.

  • You still need some degree of creative expression — but in software, that creativity can sit in how the code is written, how problems are solved, and the structure of the program — not necessarily in the “importance” of the function itself.

In other words:

  • Not just “I worked hard, therefore copyright.”

  • But “I made creative and original choices, therefore copyright — even if the code serves a functional purpose.”

Copyright Lessons from Hytera

  • Don’t steal code. Seriously. Just don’t – it leaves digital fingerprints everywhere.

  • Copyright protects software structure and content, not just fancy graphics or user interfaces.

  • Judges don’t love “but we changed it a bit” arguments when the starting point was a pile of stolen files.

High Court Appeal?

In early April, Hytera applied to the High Court for special leave to appeal, but in an extra blow to Hytera’s already deflated spirits, the High Court refused to grant leave as their appeal “does not enjoy sufficient prospects of success to warrant the grant of special leave”.  Ouch …

Filed Under: Copyright, IP, Technology Tagged With: Copyright, IP, Technology
