Regulation

June 25, 2025 by Scott Coulthart

YouTube’s Free Pass May Be Up: eSafety Pushes Back on Social Media Carve-Out

The Albanese Government’s plan to restrict under-16s from holding social media accounts is already proving contentious — and now, its one glaring exception has been officially called out. The eSafety Commissioner, Julie Inman Grant, has advised Communications Minister Anika Wells to scrap the carve-out that would exempt YouTube from the new age-gating regime set to kick in this December.

The proposal, which mandates that platforms like TikTok, Instagram, Snapchat, Reddit and X take “reasonable steps” to block account creation by under-16s, currently spares YouTube on the basis that it has a broader educational and health utility. But the Commissioner’s position is clear: if it walks like TikTok and Shorts like TikTok, it’s probably TikTok — and deserves to be regulated accordingly.

YouTube: Too Big to Ban?

Back in November, then-Minister Rowland argued YouTube played a “significant role in enabling young people to access education and health support”, and thus deserved its special treatment. But the eSafety Commissioner’s new advice — now in the hands of Minister Wells — says the data tells a different story.

YouTube isn’t just a fringe player. A recent eSafety survey found it’s used by 76% of 10- to 15-year-olds, making it the dominant platform for that age group. Among kids who encountered harmful content online, 37% said the worst of it happened on YouTube.

In other words, if the aim is to protect children from the harms of social media, YouTube is not just part of the problem — it’s the biggest piece of it.

Functional Similarity, Regulatory Inconsistency

The core of the Commissioner’s argument is that functionality, not branding, should drive regulation. YouTube Shorts mimics the addictive swipe-based short-form video experience of TikTok and Instagram Reels. Carving it out sends mixed messages about the purpose of the law — and creates loopholes large enough for a Shorts binge.

The advice also calls for more adaptable, risk-based rules that focus on a platform’s actual features and threat profile, not how it labels itself. Technology evolves too fast for static category-based exemptions.

But What’s the Threat, Really?

There may be many examples of nanny-state regulation these days – but this isn’t one of them.

In this author’s opinion, YouTube is an excellent platform: extremely useful and entertaining at the same time, with benefits for adults and under-18s/under-16s alike.

However, there are also significant dangers for under-16s that can’t be ignored.

In plain terms:

1. Exposure to Inappropriate Content

Even with YouTube Kids and restricted mode, children can still be exposed to:

  • Pornographic or sexually suggestive content (sometimes slipped past filters).

  • Violent or graphic videos (including real-life fights, injuries, or distressing footage).

  • Content promoting self-harm, eating disorders, or suicide (often through seemingly innocuous videos or “coded” messaging).

  • Misinformation or conspiracy theories (e.g., QAnon, anti-vax rhetoric).

These exposures are linked to real psychological harms, especially among younger teens still forming their identity and critical reasoning skills.


2. Contact Risks (Predators & Harassment)

YouTube allows comments, live chat during livestreams, and even community posts — all of which create:

  • Opportunities for unsolicited contact from adults (including grooming behaviour).

  • Exposure to cyberbullying or peer harassment, often via comments.

  • Unfiltered interactions during livestreams — which are harder to moderate in real time.

The eSafety Commissioner sees this as part of a broader “contact harm” risk — it’s not just what kids see, but who can reach them and how they’re targeted.


3. Addictive Design (Shorts, Recommendations)

YouTube’s algorithmic design encourages:

  • Binge-watching and excessive screen time through autoplay and recommendations.

  • Engagement loops in YouTube Shorts (TikTok-style scrollable video snippets).

  • Exposure to more extreme or sensational content the longer a child watches (known as algorithmic “radicalisation”).

This design can disrupt sleep, concentration, and mental wellbeing — particularly in adolescents.


4. Data Privacy & Profiling

YouTube collects vast amounts of user data — even from minors — to personalise recommendations and ads. While Google claims to limit this for users under 18:

  • The eSafety Commissioner is concerned that data-driven profiling may still occur covertly or imperfectly.

  • Kids may also be inadvertently tracked across platforms when logged into a YouTube or Google account.


5. False Sense of Safety

YouTube’s exemption from the new social media rules may give parents the impression it is “safe” or “educational” by default — when, in fact, it often contains the same risks as TikTok or Instagram.

The Commissioner specifically called out that there isn’t sufficient evidence YouTube “predominantly provides beneficial experiences” for under-16s. So the carve-out undermines the purpose of the rules.


In summary, the concern isn’t just about under-16s accessing YouTube, but about the total environment of:

  • Risky content,

  • Risky contact,

  • Addictive design, and

  • Inadequate protective controls.

Risk-Based Reform on the Horizon

The YouTube advice comes as the eSafety Commissioner readies a suite of industry-specific codes targeting harmful online content, including pornography and violent material. New obligations are expected for search engines, hosting services, and telcos — with five more codes in the pipeline. If voluntary industry codes fall short, the Commissioner has flagged she’ll impose mandatory standards before July’s end.

Penalties for breach of these codes — like the new social media rules — could reach $50 million for systemic non-compliance.

What’s Next?

The final decision on YouTube’s exemption sits with Minister Wells, who must table the rules in Parliament for scrutiny. But with pressure now coming from the very regulator tasked with enforcement, and mounting community concern over YouTube’s influence, the carve-out may not survive the next sitting.

The bigger question is whether Australia can strike the right balance between platform accountability, digital literacy, and youth agency — without blunting the tools that help kids learn and connect. In a digital world that resists easy categorisation, risk-based regulation may be the only way forward.

Filed Under: Digital Law, Regulation, Technology Tagged With: Digital Law, Regulation, Technology

June 24, 2025 by Scott Coulthart

What Didn’t Happen (Yet): The Privacy Reforms Still Waiting in the Wings

You could be forgiven for thinking Australia’s privacy law just had its big moment — and it did. But don’t get too comfortable. What we’ve seen so far from the December 2024 amendments to the Privacy Act 1988 (Cth) is just Round 1.

Welcome to the final instalment of our 9-part Privacy 2.0 series.

There’s a long queue of proposed changes that didn’t make it into the latest legislation, many of them quietly simmering in government inboxes, consultation drafts and “agreed in principle” footnotes.

Some of these postponed reforms could reshape the privacy landscape even more profoundly than the current crop. If you’re trying to future-proof your compliance or understand where the law is going next, here’s what to watch.

1. The Small Business Exemption — Still Alive (for Now)

Right now, businesses with an annual turnover under $3 million are generally exempt from the Privacy Act. That’s tens of thousands of data-handling entities with zero formal privacy obligations. The reform process flagged this as outdated — and it’s clear the exemption will eventually go. When it does, thousands of SMEs will be pulled into the privacy net for the first time. It’s not a question of if. It’s when.

2. Controllers vs Processors — Coming Soon to a Framework Near You

Unlike the GDPR(s), Australia’s privacy law still doesn’t distinguish between data “controllers” (who decide the purpose and means of processing) and “processors” (who process data on someone else’s behalf). That distinction brings clarity and proportionality in many overseas regimes. Expect pressure to harmonise with global norms — especially from businesses operating across borders who are tired of legal whiplash.

3. The Right to Object, Delete, Port — Not Yet, But On Deck

Australia still lacks a formal, standalone right to object to certain uses of data, to demand deletion (the famed “right to be forgotten”), or to port your data from one provider to another. These rights — core pillars of the GDPR(s) — have been agreed to in principle and are popular with the public; adopting them would bring Australia into closer alignment with overseas regimes (and make life very interesting for adtech, fintech, and platform businesses).

4. De-Identified Data? Still A Grey Zone

The reform process acknowledged that re-identification of supposedly anonymous data is a real risk — and that de-identified information still needs regulation. But the law hasn’t caught up yet. Watch for future reforms to APPs 8 and 11 that would bring de-identified data into scope and make re-identification attempts a regulatory red flag.

5. Privacy by Design & Mandatory PIAs — Still Optional (for Now)

There was also discussion of codifying “privacy by design” and making Privacy Impact Assessments mandatory for high-risk activities. The idea? Embed privacy into planning, not just cleanup. It didn’t land this time, but expect it to return — particularly as AI, biometric tech and behavioural profiling go mainstream.


Bottom line? This is just the intermission. The Privacy Act is evolving — slowly, but deliberately — toward a framework that looks more like the GDPR(s) and less like its 1980s self. Businesses that treat the current reforms as the finish line are missing the point. The smart ones are already adapting to what’s next.

That’s a wrap on our Privacy 2.0 reform series. If you’ve made it this far, congratulations — you now know more about privacy law than most of Parliament.

Now, go fix your privacy policy — and maybe tell your AI to behave while you’re at it.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 9, Regulation

June 23, 2025 by Scott Coulthart

Black Box, Meet Sunlight: Australia’s New Rules for Automated Decision-Making

Automated decision-making is everywhere now — in the background of your credit check, your insurance quote, your job application, even the price you see for a pair of shoes. For a while, this opaque machine logic operated in a legal blind spot: useful, profitable, and often inscrutable. But no longer.

Welcome to part 8 of our 9-part Privacy 2.0 series.

Australia’s latest privacy reforms are dragging automated decisions into the daylight. Starting 10 December 2026, organisations will be legally required to disclose in their privacy policies whether and how they use automated decision-making that significantly affects the rights of individuals. It’s the first real attempt under Australian law to impose some transparency obligations on algorithmic systems — not just AI, but any automation that crunches personal data and outputs a decision with real-world consequences.

So what do these changes demand? Two key things:

  1. Your privacy policy must (from 10 December 2026) clearly describe:

    • the types of personal information used in any substantially automated decision-making process, and

    • the kinds of decisions made using that information.

  2. The obligation will apply wherever those decisions significantly affect an individual’s rights or interests — eligibility for credit, pricing, recruitment shortlists, fraud flags, algorithmic exclusions from essential services like housing or employment, and more. It’s not limited to full automation either. Even “mostly automated” systems — where human review is token or rubber-stamp — are caught.

The goal here is transparency, not prohibition. The law doesn’t say you can’t automate — but it does say you will have to own it, explain it, and flag it. That means no more hiding behind UX, generic privacy blurbs, or vague disclaimers. And if your systems are complex, decentralised, or involve third-party algorithms? No excuses — you’ll need to understand them anyway, and track them over time so your policy stays accurate.

In short, if your business relies on automated decisions in any meaningful way, you’ll need to:

  • Map those processes now (don’t wait until 2026),

  • Build a system for tracking how and when they change, and

  • Craft plain-language disclosures that are specific, truthful, and meaningful.

This isn’t just a ‘legal’ problem anymore — customers, regulators, and journalists are watching. No one wants to be the next brand caught auto-rejecting job applicants for having a gap year or charging loyal customers more than first-timers.

Tomorrow: we wrap our Privacy 2.0 series with what didn’t make it into the legislation (yet) — and where the next battle lines in Australian privacy reform are likely to be drawn.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 8, Regulation

June 18, 2025 by Scott Coulthart

For many years, privacy enforcement in Australia was a bit… polite. The OAIC could nudge, issue determinations, and make a bit of noise, but it often lacked the real teeth needed to drive compliance in the boardroom. That era is over.

11 December 2024 saw the commencement of amendments to the Privacy Act 1988 (Cth) which overhaul Australia’s enforcement toolkit — with bigger fines, broader court powers, faster penalties, and forensic-level investigative authority. It’s not quite the GDPR, but it’s getting close enough to make a lot of GCs uncomfortable.

In this 7th part of our Privacy 2.0 series, let’s start with the money. The maximum fine for a serious or repeated privacy breach by a company is now $50 million, or three times the benefit obtained, or 30% of adjusted turnover — whichever is greater. That’s serious deterrent territory, not just a regulatory slap. Even mid-tier breaches now carry $3.3 million maximums for corporates. Individuals? You’re looking at up to $2.5 million if you seriously mess it up. There’s a new hierarchy of penalties too — with lower thresholds and infringement notices for technical breaches like bad privacy policies or sloppy notifications.
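To make the “whichever is greater” arithmetic concrete, here is a minimal sketch in Python. The figures, the function name and the flat treatment of “adjusted turnover” are illustrative assumptions only, not the statutory formula:

    # Illustrative sketch only: the three penalty bases described above, with
    # "adjusted turnover" treated as a single number (the statutory definition
    # is more involved).
    def max_corporate_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
        """Return the greatest of the three caps for a serious or repeated breach."""
        return max(
            50_000_000,                # flat $50 million cap
            3 * benefit_obtained,      # three times the benefit obtained
            0.30 * adjusted_turnover,  # 30% of adjusted turnover
        )

    # Hypothetical example: a $10m benefit and $400m adjusted turnover
    print(max_corporate_penalty(10_000_000, 400_000_000))  # 120000000.0

In that hypothetical, the 30%-of-turnover limb sets the ceiling, which is exactly the point: for large businesses, the flat $50 million figure is now the floor of the worst-case exposure, not the ceiling.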

But it’s not just about fines. The OAIC can now issue infringement notices, bypassing court for certain minor but clear-cut breaches. Think of it like a privacy speeding ticket — faster, cheaper, but still stings. And yes, you can fight it in court if you want. Just hope your documentation holds up.

Then there are the new powers of investigation and monitoring. The OAIC is now plugged into the Regulatory Powers (Standard Provisions) Act 2014 (Cth), meaning it can get warrants, enter premises, seize devices, and even apply reasonable force — all while preserving privilege. This puts the Privacy Commissioner on more equal footing with ASIC and the ACCC, especially when it comes to serious or systemic non-compliance. If your data handling is shady, half-baked or undocumented — now’s the time to clean it up.

And finally, court powers have been expanded. The Federal Court and the Federal Circuit and Family Court can now order not just fines, but anything else appropriate — including remediation, compensation, and public declarations. This opens the door for privacy class actions to get seriously strategic – not just possible, but powerful.

Here’s the bottom line: privacy compliance can no longer sit in the “legal” corner or be outsourced to the IT team. It’s now a cross-functional risk category — and it’s time businesses treated it that way. If you’re not audit-ready, breach-ready, or regulator-ready… you’re not ready.

Next week in our Privacy 2.0 series: how the law tackles automated decision-making — and why your pricing algorithm, hiring bot, or fraud engine might need to show its work.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 7, Regulation

June 11, 2025 by Scott Coulthart

Before the amendments to the Privacy Act 1988 (Cth) on 11 December 2024, if your Australian business wanted to send personal data overseas — say, to a CRM hosted in the US or a support centre in Manila — you had to jump through a slightly vague hoop. Under APP 8.1, you were supposed to take “reasonable steps” to ensure the recipient wouldn’t do anything that would breach the Australian Privacy Principles. And if they did? Thanks to section 16C, you were still on the hook.

There were a couple of workarounds, one of which was found in APP 8.2(a) – this let you off the liability hook if you “reasonably believed” the recipient country had a law or binding scheme that was “substantially similar” to the APPs — and had real enforcement mechanisms. But what does “reasonable belief” mean in that context? And how similar is “substantially similar”? The vagueness of the whole thing often amounted to little more than a false sense of security.

The December amendments bring structure and at least some clarity. We now have APP 8.2(aa) and 8.3, which allow for the creation of a whitelist: a formal, government-endorsed list of countries and binding schemes deemed to have privacy protections and enforcement powers equivalent to ours. If your recipient is on the list, you don’t have to prove a thing — just document that the transfer aligns with the rules and you’re good to go.

This is huge. It streamlines compliance and brings us closer to the way other jurisdictions, like the UK and the EU under their respective GDPRs, handle cross-border data flows via “adequacy” decisions. It also gives businesses clarity about who’s in the safe zone, who’s not, and what conditions might apply. For instance, a country might only make the list for health data, or only for financial services entities. The flexibility is there — but so is the scrutiny.

One catch? At the time of publishing this post, the list doesn’t exist yet. It’ll be created via regulation, which means the real-world usefulness of this reform hinges on how quickly and smartly that list gets built. Until then, businesses still have to do the old assessment under APP 8.2(a), with all the murkiness that comes with it.

So if your infrastructure, vendors, or data processors are offshore, now’s the time to:

  • map your transfers,

  • review your contracts,

  • and prepare to align with the new safe-harbour system when it drops.

Because in the new privacy era, “we didn’t realise the US server was logging that” won’t fly anymore.

Next week in our Privacy 2.0 series: the enforcement overhaul — where civil penalties, infringement notices, and OAIC superpowers come roaring into view.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 6, Regulation

June 3, 2025 by Scott Coulthart

Reasonable Steps Just Got Real: What APP 11 Now Demands

For years, Australian Privacy Principle 11 has required businesses to take “reasonable steps” to protect personal information from misuse, interference, or loss. Sounds fair — but also vague. What exactly is “reasonable”? A locked filing cabinet? Two-factor authentication? Asking nicely?

In this 4th part of IP Mojo’s exclusive Privacy 2.0 blog series, we discuss how the latest privacy law amendments haven’t rewritten APP 11 — they’ve sharpened it. Specifically, they’ve clarified that “reasonable steps” include both technical and organisational measures. It’s a simple sentence, but it changes the conversation. Because now, the standard isn’t just what you thought was reasonable. It’s what you can prove you’ve done to make security part of your systems, your structure, and your staff’s day-to-day behaviour.

Let’s break it down. Technical measures? Think encryption, firewalls, intrusion detection systems, and strong password protocols. Organisational measures? Employee training, incident response plans, documented data handling procedures, and privacy-by-design baked into new systems and tools. It’s not just about buying tech — it’s about building a culture.

Of course, “reasonable” still depends on context: the nature of your business, the sensitivity of the data, the volume you handle. But this update sends a signal: the era of set-and-forget privacy compliance is over. If your team’s still using outdated software or storing customer records on someone’s laptop, that’s not going to cut it.

Here’s the kicker: while the amendment itself is modest — just a new clause (11.3) — the implications are not. It gives regulators clearer footing. It gives courts a stronger hook. And it gives businesses a chance to get ahead — by documenting what you’re doing, auditing what you’re not, and showing your privacy policies aren’t just legalese, but lived practice.

Tune in tomorrow for a look at the new data breach response powers, and how the government can now legally share your customers’ personal information — yes, really — in a post-hack crisis.

Filed Under: Privacy, Privacy 2.0, Regulation Tagged With: Privacy, Privacy 2.0, Privacy 2.0 Part 4, Regulation
