

September 17, 2025 by Scott Coulthart

Aristocrat’s Jackpot: Full Court Revives Gaming Machine Patents

When does a slot machine cross the line from an abstract idea to a patentable invention?

After years of litigation, remittals, and even a 3–3 deadlock in the High Court, the Full Federal Court has finally tipped the balance in Aristocrat’s favour.

🎰 The Long Spin

Aristocrat has been fighting since 2018 to keep its patents over electronic gaming machines (EGMs) with “configurable symbols” — feature games that change play dynamics and prize allocation. The Commissioner argued these were just abstract rules of a game dressed up in software. Aristocrat said they were genuine machines of a particular construction that yielded a new and useful result.

The case bounced through:

  • Delegate (2018): patents revoked.

  • Burley J (2020): Aristocrat wins.

  • Full Court (2021): Aristocrat loses (majority invents “advance in computer technology” test).

  • High Court (2022): split 3–3, affirming the Full Court’s result by default under s 23(2)(a) of the Judiciary Act 1903 (Cth).

  • Remittal (2024): Burley J reluctantly applies Full Court reasoning against Aristocrat.

Cue the latest appeal.

⚖️ The Precedent Puzzle

The Full Court (Beach, Rofe & Jackman JJ) confronted a thorny problem: should it stick to its own 2021 reasoning when the High Court had unanimously rejected that reasoning, even though no majority emerged?

The answer: No.

  • Only majority or unanimous High Court views are binding.

  • But the High Court’s unanimous criticism provided a “compelling reason” to abandon the earlier Full Court approach.

  • The Court found “constructive error” — not blaming Burley J, but recognising the law had to move on.

🖥️ Rethinking “Manner of Manufacture”

The Court reframed the test for computer-implemented inventions:

  • Not patentable: an abstract idea manipulated on a computer.

  • Patentable: an abstract idea implemented on a computer in a way that creates an artificial state of affairs and useful result.

Applying this, Aristocrat’s claim 1 was patentable — and by extension, so were the dependent claims across its four patents. The EGMs weren’t just abstract gaming rules. They were machines, purpose-built to operate in a particular way.

💡 Why It Matters

  • For patentees: This revives hope for computer-implemented inventions beyond “pure software” where technical implementation creates a new device or process.

  • For examiners: IP Australia may need to recalibrate examination practice on software-related patents — the “advance in computer technology” yardstick is gone.

  • For practitioners: This is a case study in how precedent, process, and patents collide. The High Court’s split didn’t end the story — it forced the Full Court to resolve it.

🚀 Takeaway

The Full Court has effectively reset the slot reels. Aristocrat’s EGMs are back in play, and the scope of patentable computer-implemented inventions in Australia looks a little brighter.

Sometimes the house doesn’t win.

Filed Under: Digital Law, Gaming Law, IP, Patents, Technology Tagged With: Digital Law, Gaming Law, IP, Patents, Technology

September 16, 2025 by Scott Coulthart

Epic Won the Battle. Now Developers Want Their Refunds.

When Epic Games went head-to-head with Apple, the Federal Court found that Apple misused its market power by locking iOS developers into the App Store and its payment system. That was big. But the Anthony v Apple class action takes it a step further: what if Apple has been overcharging Australian developers and consumers for years?

From liability to dollars

In Epic v Apple [2025] FCA 900, Justice Beach held that Apple’s restrictions substantially lessened competition in two markets:

  1. iOS app distribution; and

  2. iOS in-app payment solutions.

That case was about liability — whether Apple broke the law.

In Anthony v Apple Inc [2025] FCA 902, the Court applied those findings in the context of a class action by developers and users. This time, the question wasn’t just “did Apple misuse its power?” but “what should Apple have charged if competition had been allowed?”

The counterfactual commission

Apple famously takes up to 30% of in-app revenue. The class action alleges that this cut was inflated by Apple’s anti-competitive restrictions.

Justice Beach accepted that the key issue was whether commissions exceeded the “counterfactual” level — i.e. the rate that would have prevailed in a competitive market.

That’s not just a legal puzzle. It’s an economic modelling exercise: estimating what rival app stores and payment processors would have charged, and how Apple’s fees distorted prices across the app ecosystem.
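To make the exercise concrete, here is a minimal sketch of the overcharge arithmetic at the heart of the claim. The 30% headline rate comes from the case itself; the revenue figure and counterfactual rates below are invented placeholders, because estimating the real counterfactual rate is precisely what the economic modelling will fight over.

```python
# Toy illustration of the "counterfactual commission" overcharge claim.
# All numbers are hypothetical placeholders: the real dispute is an
# econometric one about what rate a competitive market would have produced.

def overcharge(gross_revenue: float, actual_rate: float,
               counterfactual_rate: float) -> float:
    """Overcharge = commission actually paid, less the commission that
    would have been paid at the competitive (counterfactual) rate."""
    return gross_revenue * (actual_rate - counterfactual_rate)

revenue = 2_000_000  # hypothetical developer's in-app revenue (AUD)
for cf_rate in (0.12, 0.15, 0.20):  # sensitivity across assumed counterfactuals
    print(f"counterfactual {cf_rate:.0%}: "
          f"overcharge ${overcharge(revenue, 0.30, cf_rate):,.0f}")
```

Multiply that across every Australian developer and every year since the restrictions took effect, and the stakes of the modelling become obvious.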

Why this matters

  • Developers: If successful, they may recover damages for inflated commissions they’ve paid over years. That could mean real money back into the hands of Australian app makers.

  • Consumers: If commissions were inflated, those costs were often passed on through higher app and in-app purchase prices. Compensation claims could extend to end users.

  • Apple (and Google): The damages bill could be eye-watering. Liability findings are one thing; being ordered to pay back billions is another.

A coordinated strategy

Justice Beach emphasised that his reasons in Anthony v Apple should be read together with Epic v Apple and Epic v Google. This isn’t three random cases — it’s a coordinated litigation front against the app store model.

First, establish liability (Epic).
Then, pursue compensation (Anthony).
Finally, broaden the net (Epic v Google).

The bigger picture

Globally, regulators and courts are converging on the same theme: Apple and Google can’t use security or convenience as a shield for overcharging.

Australia’s twist? Class actions have a way of turning abstract competition law into concrete refunds.

⚖️ The takeaway

Epic v Apple broke open the wall. Anthony v Apple asks whether Apple should hand back the gold it’s been collecting inside.

This isn’t just another round in the same fight — it’s the damages phase of the app store wars. And it could hit closer to home for Australian developers and users than anything Epic ever fought for.

Filed Under: Competition Law, Digital Law, Remedies, Technology Tagged With: Competition Law, Digital Law, Remedies, Technology

September 12, 2025 by Scott Coulthart

Epic Down Under: How Australia Took a Bite Out of Apple’s Walled Garden

When Epic Games took on Apple in the US and Europe, the headlines practically wrote themselves – it was billed as a David-and-Goliath showdown between the Fortnite maker and the Cupertino colossus. Now, the same fight has reached Australian shores — and the Federal Court has bitten into Apple’s walled garden.

When Epic Games v Apple erupted in the US in 2020, Fortnite, the global gaming juggernaut, had been punted from the App Store after Epic tried to sneak in its own cheaper payment system. Cue a legal battle royale over whether Apple’s “walled garden” was innovation, exploitation, or both.

Fast forward to August 2025, and the fight has gone local. In Epic Games, Inc v Apple Inc [2025] FCA 900, Justice Beach of the Federal Court handed down a sprawling 6,347-paragraph judgment — and while not everything went Epic’s way, the headline is clear: Apple misused its market power under s 46 of the Competition and Consumer Act 2010 (Cth).

Australia has officially joined the global chorus questioning how far Big Tech’s gatekeeping power can go.

The legal frame: two markets, one gatekeeper

Epic’s case hinged on market definition — the first battlefield of any competition law fight. Epic said there were two relevant markets:

  1. The iOS app distribution market – how apps get onto iPhones and iPads.

  2. The iOS in-app payment solutions market – how digital content is paid for inside apps.

Apple argued for something broader: a market for “app transactions”, with plenty of alternatives. Justice Beach wasn’t buying it. He sided with Epic’s narrower framing, recognising that Apple was the sole gatekeeper in both distribution and in-app digital payments.

From there, the logic snowballed: Apple’s rules preventing sideloading (direct downloads) and banning alternative payment systems substantially lessened competition. That, in turn, triggered contraventions of s 46 (misuse of market power) and s 47 (exclusive dealing).

Epic didn’t get everything it wanted — some claims under s 21 of the ACL (unconscionable conduct) fell flat, and Apple’s ban on rival app stores inside the App Store was upheld. But the central wins are seismic.

Apple’s defence: “But security!”

Apple leaned heavily on security as a justification. Its argument: a centralised, curated App Store keeps users safe from malware, fraud, and scams.

Justice Beach accepted there were genuine security benefits — Apple’s model really does provide higher baseline quality and safety compared to the Wild West of sideloaded apps.

However, crucially, he ruled that security doesn’t trump competition law. A legitimate purpose (protecting users) doesn’t erase the anti-competitive effects (locking out rivals) – or as the Court put it: “The existence of a security purpose says little about the effect or likely effect of Apple’s restrictive conduct in terms of competition questions.”

What this means for…

Developers

Epic signalled it would jump into iOS distribution if given the chance.

Others will follow. Think Spotify, payment processors like Stripe, or even local players offering niche app stores.

The judgment cracks open a door that’s been locked since 2008.

Consumers

If remedies flow, users could see lower prices and more choice.

Developers paying Apple’s 30% cut have long argued they’re forced to inflate in-app purchase prices.

Alternatives could push those costs down — though don’t expect Apple to give up without a fight.

Regulators

The decision aligns with the ACCC’s Digital Platform Services Inquiry, which has repeatedly flagged Apple’s and Google’s control over app ecosystems.

Australia may now move closer to Europe’s Digital Markets Act, which mandates interoperability and alternative app stores.

Apple

Even if remedies are still pending, the finding of liability alone is a reputational hit.

Apple’s “we know best” stance has always traded on consumer trust. Now it must reckon with courts telling it that choice matters too.

The global context

This ruling doesn’t exist in a vacuum. It follows:

  • The US: where Epic’s case against Apple produced a mixed bag — Apple largely won at trial, but Epic clawed back some ground on appeal.

  • The EU: where the Digital Markets Act forced Apple to allow rival app stores and alternative payment methods in early 2024.

  • South Korea and Japan: already experimenting with app store regulation.

Australia is now firmly in the mix. What started as a Fortnite scuffle is becoming a global test of whether digital gatekeepers can keep locking the gates.

Where to next?

Justice Beach has reserved questions of relief for another hearing.

That means the really juicy part — what remedies Apple will face in Australia — is still to come. Options range from structural orders (sideloading must be allowed) to behavioural remedies (Apple must permit rival payment providers).

Whatever the outcome, one thing is clear: the walled garden isn’t as impregnable as Apple thought – and Epic may have just turned its legal battle royale into a global trend of regulatory respawn.

⚖️ The takeaway

For digital lawyers, regulators, and anyone building on someone else’s platform: this case is a reminder that market power is never just a tech problem — it’s a legal one.

When innovation, competition, and consumer choice collide, courts are willing to get out the pickaxe.

Filed Under: Competition Law, Digital Law, Technology Tagged With: Competition Law, Digital Law, Technology

August 7, 2025 by Scott Coulthart

Your Data, My Model? Why AI Ambitions Demand a Contract Check-Up

As AI capabilities become standard fare in SaaS platforms, software providers are racing to retrofit intelligence into their offerings. But if your platform dreams of becoming the next ChatXYZ, you may need to look not to your engineering team, but to your legal one.

The Problem with “Your Data”

Most software providers already have mountains of processed, transformed and inferred data—data shaped by customer inputs and platform logic. That data could supercharge AI development, from powering smarter dashboards to training predictive algorithms.

But here’s the rub: just because the data isn’t raw customer input doesn’t mean you can freely use it.

You may assume your standard software licence or SaaS agreement gives you all the rights you need. It probably doesn’t.

What Does the Contract Say?

Take a typical clause like this:

“The Customer grants the Provider a non-exclusive, irrevocable licence to use Customer Data to the extent reasonably required to provide the Services and for use in the Provider’s business generally.”

Even a broad “use in our business generally” clause won’t necessarily cover:

  • Using processed or aggregated data from multiple customers

  • Training an AI model whose outputs are shared with others

  • Commercialising new AI-powered features not contemplated in the original deal

And if the data is derived from inputs that were themselves confidential or personal, you’ve got even more legal landmines—privacy law, confidentiality obligations, and IP ownership issues if the customer contributed meaningful structure to the dataset.

Is Deidentification Enough (or even Allowed)?

A common fallback is: “We’ll just deidentify the data.” But that’s not a bulletproof strategy.

Under most privacy regimes, data is only considered deidentified if re-identification is not reasonably possible—a high bar, especially in small or specialised datasets. Even deidentified data may still be contractually protected if it originates from information the customer expects to be confidential.
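To see why small datasets struggle to clear that bar, consider k-anonymity, one common (though not legally conclusive) heuristic: every combination of quasi-identifiers must be shared by at least k records. A minimal sketch with made-up data shows how easily a specialised dataset fails it:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values is shared by
    at least k records -- a common (not legally conclusive) heuristic."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(n >= k for n in combos.values())

# Hypothetical "deidentified" usage data: names stripped, but in a small,
# specialised dataset the remaining fields can still single a customer out.
records = [
    {"postcode": "4000", "industry": "mining",  "monthly_spend": 91_000},
    {"postcode": "4000", "industry": "mining",  "monthly_spend": 87_500},
    {"postcode": "2600", "industry": "defence", "monthly_spend": 240_000},
]
print(is_k_anonymous(records, ["postcode", "industry"], k=2))  # False: one record is unique
```

One unique combination is all it takes, and the smaller or more specialised the customer base, the more such combinations there are.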

More fundamentally, your contract might not give you the right to deidentify the data at all, unless required to do so by law.

Most software licences and SaaS agreements treat customer data as confidential information. Unless the contract expressly permits you to transform, aggregate or deidentify that data for secondary use (like AI training), doing so could itself amount to a breach. Moreover, if the data includes personal information, you’ll need to navigate privacy laws that impose their own limits—regardless of your contractual rights.

So before you start feeding your LLM, make sure you’re not breaching your SLA.

What to Look For (or Add)

If you’re a provider:

  • Check whether your agreement expressly allows you to create, collate, and use aggregated and deidentified customer data for AI training and product development.

  • Ensure the licence to use data extends beyond service delivery and includes improvements, analytics, and R&D.

  • Include language around data governance, privacy compliance, and ownership of AI outputs.

If you’re a customer:

  • Scrutinise clauses that allow use of data for “business purposes” or “analytics”—these may reach further than you think.

  • Consider negotiating limits, notice obligations, or opt-out rights when your data could be used to build broadly deployed AI systems—unless, of course, that can be turned to your advantage.

In the Age of AI, Contracts Are Training Data Too

Training AI on customer data can unlock immense value—but only if your agreements keep up. Your model is only as smart as your data. And your data rights are only as strong as your contract.

Filed Under: AI, Commercial Law, Contracts, Digital Law, Technology Tagged With: AI, Commercial Law, Contracts, Digital Law, Technology

August 1, 2025 by Scott Coulthart

Copy Paste App? The Pleasures and Pitfalls of Screenshot-to-Code Tools

Imagine this: you take a screenshot of your favourite SaaS dashboard, upload it to a no-code AI tool, and minutes later you have a functioning version of the same interface — layout, buttons, styling, maybe even a working backend prototype. Magic? Almost.

Welcome to the world of screenshot-to-code generators — tools that use AI and no-code logic to replicate functional software from images. These platforms (like Galileo AI, Builder.io, and Uizard) promise rapid prototyping, faster MVP launches, and a lower barrier to entry for founders, designers, and product teams alike.

But while the tech is impressive, the legal waters are murkier. Here’s the pleasure and the pitfall.


🚀 The Pleasure: Design to Prototype at Lightspeed

The promise is seductive:

  • Rapid prototyping: What used to take weeks of front-end dev can now take hours — sometimes minutes.

  • Visual to functional: AI converts static designs (or even screenshots of existing apps) into working interfaces with mock data or basic logic.

  • Lower costs: Startups or solo devs can build more for less — less code, less labour, and less time.

Tools like Galileo AI and Uizard are being used to generate mock admin panels, mobile UI concepts, and even pitch-ready MVPs. They’re ideal for internal dashboards, client demos, or iterating fast before investing in full-stack builds.

But many users go further — taking screengrabs from existing platforms (think Notion, Salesforce, Figma, Xero) and asking the AI to “make me one of these.”

And that’s where the problems begin.


⚠️ The Pitfall: Copyright, Clones, and Clean Hands

Just because a tool can replicate an interface doesn’t mean you should — especially if your starting point is a screenshot of someone else’s software.

Here are the big legal traps to watch out for:

1. Copyright in the Interface

While copyright doesn’t protect ideas, it does protect expressions — including graphic design, layout, icons, fonts, and even the “look and feel” of certain interfaces. If your cloned UI copies the visual design of another product too closely, you may be infringing copyright (or at least inviting a legal headache).

Australia’s Desktop Marketing Systems v Telstra [2002] FCAFC 112 reminds us that copyright can exist in compilations of data or structure — not just in pretty pictures.

2. Trade Dress and Reputation

Even if your app doesn’t copy the code, a lookalike interface could fall foul of passing off or misleading conduct laws under the Australian Consumer Law if it creates confusion with an established brand. That risk increases if you’re operating in a similar space or targeting the same user base.

The global tech giants have deep pockets — and they’ve sued for less.

3. Terms of Use Breaches

Many platforms prohibit copying or reverse engineering their interfaces. Uploading screenshots of their product to an AI builder might violate their terms of service — even if your clone is only for internal use.

This isn’t just theory: platforms like OpenAI and Figma already use automated tools to detect and act on terms breaches — especially those that risk commercial leakage or brand dilution.

4. No Excuse Just Because the Tool Did It

You can’t hide behind the AI. If your clone infringes IP rights, you’re liable — not the platform that helped you build it. The tool is just that: a tool.

In legal terms, there’s no “my AI made me do it” defence.


🤔 So What Can You Do?

  • ✅ Use these tools for original designs: Sketch your own wireframes, then let the AI flesh them out.

  • ✅ Take inspiration, not duplication: You can draw ideas from good UI — but avoid replicating them pixel-for-pixel.

  • ✅ Use public design systems: Many platforms release UI kits and components under open licences (e.g., Material UI, Bootstrap). Start there.

  • ✅ Keep it internal: If you must replicate an existing interface to test functionality, don’t deploy it publicly — and definitely don’t commercialise it.

  • ✅ Get advice: If you’re close to the line (or don’t know where the line is), speak to an IP lawyer early. Clones are cheaper than court.


🧠 Final Thought: Just Because You Can…

…doesn’t mean you should.

AI is rapidly transforming the way software is built — but it’s also tempting users to cut corners on IP. Using these tools responsibly means treating screenshots not just as pixels, but as possibly protected property.

Build fast — but build clean.

Filed Under: AI, Copyright, Digital Law, IP, Technology Tagged With: AI, Copyright, Digital Law, IP, Technology

June 25, 2025 by Scott Coulthart

YouTube’s Free Pass May Be Up: eSafety Pushes Back on Social Media Carve-Out

The Albanese Government’s plan to restrict under-16s from holding social media accounts is already proving contentious — and now, its one glaring exception has been officially called out. The eSafety Commissioner, Julie Inman Grant, has advised Communications Minister Anika Wells to scrap the carve-out that would exempt YouTube from the new age-gating regime set to kick in this December.

The proposal, which mandates that platforms like TikTok, Instagram, Snapchat, Reddit and X take “reasonable steps” to block account creation by under-16s, currently spares YouTube on the basis that it has a broader educational and health utility. But the Commissioner’s position is clear: if it walks like TikTok and Shorts like TikTok, it’s probably TikTok — and deserves to be regulated accordingly.

YouTube: Too Big to Ban?

Back in November, then-Minister Rowland argued YouTube played a “significant role in enabling young people to access education and health support”, and thus deserved its special treatment. But the eSafety Commissioner’s new advice — now in the hands of Minister Wells — says the data tells a different story.

YouTube isn’t just a fringe player. A recent eSafety survey found it’s used by 76% of 10- to 15-year-olds, making it the dominant platform for that age group. Among kids who encountered harmful content online, 37% said the worst of it happened on YouTube.

In other words, if the aim is to protect children from the harms of social media, YouTube is not just part of the problem — it’s the biggest piece of it.

Functional Similarity, Regulatory Inconsistency

The core of the Commissioner’s argument is that functionality, not branding, should drive regulation. YouTube Shorts mimics the addictive swipe-based short-form video experience of TikTok and Instagram Reels. Carving it out sends mixed messages about the purpose of the law — and creates loopholes large enough for a Shorts binge.

The advice also calls for more adaptable, risk-based rules that focus on a platform’s actual features and threat profile, not how it labels itself. Technology evolves too fast for static category-based exemptions.

But What’s the Threat, Really?

There may be many examples of nanny-state regulation these days – but this isn’t one of them.

In this author’s opinion, YouTube is an excellent platform, both extremely useful and entertaining, and those benefits apply to adults and under-16s alike.

However, there are also significant dangers for under-16s that can’t be ignored.

In plain terms:

1. Exposure to Inappropriate Content

Even with YouTube Kids and restricted mode, children can still be exposed to:

  • Pornographic or sexually suggestive content (sometimes slipped past filters).

  • Violent or graphic videos (including real-life fights, injuries, or distressing footage).

  • Content promoting self-harm, eating disorders, or suicide (often through seemingly innocuous videos or “coded” messaging).

  • Misinformation or conspiracy theories (e.g., QAnon, anti-vax rhetoric).

These exposures are linked to real psychological harms, especially among younger teens still forming their identity and critical reasoning skills.


2. Contact Risks (Predators & Harassment)

YouTube allows comments, live chat during livestreams, and even community posts — all of which create:

  • Opportunities for unsolicited contact from adults (including grooming behaviour).

  • Exposure to cyberbullying or peer harassment, often via comments.

  • Unfiltered interactions during livestreams — which are harder to moderate in real time.

The eSafety Commissioner sees this as part of a broader “contact harm” risk — it’s not just what kids see, but who can reach them and how they’re targeted.


3. Addictive Design (Shorts, Recommendations)

YouTube’s algorithmic design encourages:

  • Binge-watching and excessive screen time through autoplay and recommendations.

  • Engagement loops in YouTube Shorts (TikTok-style scrollable video snippets).

  • Exposure to more extreme or sensational content the longer a child watches (known as algorithmic “radicalisation”).

This design can disrupt sleep, concentration, and mental wellbeing — particularly in adolescents.
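As a purely illustrative toy model (emphatically not YouTube’s actual algorithm), the feedback dynamic looks something like this: a recommender that greedily serves content just above a viewer’s current tolerance, while the viewer habituates to what they watch, drifts steadily toward more extreme material.

```python
import random

# Toy model of an engagement feedback loop. This illustrates the dynamic
# the Commissioner worries about; it is not any real recommender system.
# Each video has an "intensity" in [0, 1]; we assume viewers watch longest
# just above their current tolerance, and habituate to what they watch.
random.seed(1)
catalogue = [random.random() for _ in range(500)]  # video intensity scores

tolerance = 0.2  # a young viewer's starting point
for step in range(10):
    target = min(1.0, tolerance * 1.3)                # "a bit more extreme"
    pick = min(catalogue, key=lambda v: abs(v - target))
    tolerance = 0.5 * tolerance + 0.5 * pick          # habituation
    print(f"step {step}: recommended intensity {pick:.2f}")
```

Run it and the recommended intensity climbs step by step, which is the essence of what critics label algorithmic “radicalisation”.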


4. Data Privacy & Profiling

YouTube collects vast amounts of user data — even from minors — to personalise recommendations and ads. While Google claims to limit this for users under 18:

  • The eSafety Commissioner is concerned that data-driven profiling may still occur covertly or imperfectly.

  • Kids may also be inadvertently tracked across platforms when logged into a YouTube or Google account.


5. False Sense of Safety

YouTube’s exemption from the new social media rules may give parents the impression it is “safe” or “educational” by default — when, in fact, it often contains the same risks as TikTok or Instagram.

The Commissioner specifically noted that there isn’t sufficient evidence that YouTube “predominantly provides beneficial experiences” for under-16s, so the carve-out undermines the purpose of the rules.


In summary, the concern isn’t just about under-16s accessing YouTube, but about the total environment of:

  • Risky content,

  • Risky contact,

  • Addictive design, and

  • Inadequate protective controls.

Risk-Based Reform on the Horizon

The YouTube advice comes as the eSafety Commissioner readies a suite of industry-specific codes targeting harmful online content, including pornography and violent material. New obligations are expected for search engines, hosting services, and telcos — with five more codes in the pipeline. If voluntary industry codes fall short, the Commissioner has flagged she’ll impose mandatory standards before July’s end.

Penalties for breach of these codes — like the new social media rules — could reach $50 million for systemic non-compliance.

What’s Next?

The final decision on YouTube’s exemption sits with Minister Wells, who must table the rules in Parliament for scrutiny. But with pressure now coming from the very regulator tasked with enforcement, and mounting community concern over YouTube’s influence, the carve-out may not survive the next sitting.

The bigger question is whether Australia can strike the right balance between platform accountability, digital literacy, and youth agency — without blunting the tools that help kids learn and connect. In a digital world that resists easy categorisation, risk-based regulation may be the only way forward.

Filed Under: Digital Law, Regulation, Technology Tagged With: Digital Law, Regulation, Technology
