Deepfakes on Trial: First Civil Penalties Under the Online Safety Act
The Federal Court has handed down its first civil penalty judgment under the Online Safety Act 2021 (Cth), in eSafety Commissioner v Rotondo (No 4) [2025] FCA 1191.
Justice Longbottom ordered Anthony (aka Antonio) Rotondo to pay $343,500 in penalties for posting a series of non-consensual deepfake intimate images of six individuals, and for failing to comply with removal notices and remedial directions issued by the eSafety Commissioner.
Key Points
1. First penalties under the Online Safety Act
This is the first time civil penalties have been imposed under the Act, making it a landmark enforcement case.
The Commissioner sought both declarations and penalties, with the Court emphasising deterrence as its guiding principle.
2. Deepfakes squarely captured
The Court confirmed that non-consensual deepfake intimate images fall within the Act’s prohibition on posting “intimate images” without consent.
Importantly, it rejected Rotondo’s submission that only defamatory or “social media” posts should be captured.
3. Regulatory teeth and enforcement
Rotondo received notices under the Act but responded defiantly (“Get an arrest warrant if you think you are right”) before later being arrested by Queensland Police on related matters.
His lack of remorse, and his characterisation of deepfakes as "fun", were treated as aggravating factors in assessing the penalty.
4. Platform anonymity
Although the Commissioner did not object, the Court chose to anonymise the name of the website hosting the deepfakes — reflecting a policy judgment not to amplify harmful platforms.
That said, several newspapers reporting on the case did publish the website's address, while noting that the site has since been taken down.
IP Mojo is choosing not to reveal that website.
5. Civil vs criminal overlap
Alongside the civil penalties, the Court noted criminal charges under Queensland’s Criminal Code.
This illustrates how civil, regulatory and criminal enforcement can run in parallel.
Why It Matters
- For regulators: This case confirms the Act has teeth. Regulators can secure significant financial penalties even where offenders are self-represented.
- For platforms: The Court’s approach signals that services hosting deepfakes are firmly in scope, even if located offshore.
- For the public: The judgment highlights the law’s adaptability to AI-driven harms — and sends a clear deterrence message.
- For practitioners: Expect more proceedings of this kind, particularly as the prevalence of AI-generated abuse grows.