Black Box, Meet Sunlight: Australia’s New Rules for Automated Decision-Making
Automated decision-making is everywhere now — in the background of your credit check, your insurance quote, your job application, even the price you see for a pair of shoes. For a while, this opaque machine logic operated in a legal blind spot: useful, profitable, and often inscrutable. But no longer.
Welcome to part 8 of our 9-part Privacy 2.0 series.
Australia’s latest privacy reforms are dragging automated decisions into the daylight. Starting 10 December 2026, organisations will be legally required to disclose in their privacy policies whether and how they use automated decision-making that significantly affects the rights of individuals. It’s the first real attempt under Australian law to impose some transparency obligations on algorithmic systems — not just AI, but any automation that crunches personal data and outputs a decision with real-world consequences.
So what do these changes demand? Two key things:
- Your privacy policy must (from 10 December 2026) clearly describe:
  - the types of personal information used in any substantially automated decision-making process, and
  - the kinds of decisions made using that information.
The obligation will apply wherever those decisions significantly affect an individual’s rights or interests — eligibility for credit, pricing, recruitment shortlists, fraud flags, algorithmic exclusions from essential services like housing or employment, and more. Nor is it limited to full automation: even “mostly automated” systems, where human review is token or rubber-stamp, are caught.
The goal here is transparency, not prohibition. The law doesn’t say you can’t automate — but it does say you will have to own it, explain it, and flag it. That means no more hiding behind UX, generic privacy blurbs, or vague disclaimers. And if your systems are complex, decentralised, or involve third-party algorithms? No excuses — you’ll need to understand them anyway, and track them over time so your policy stays accurate.
In short, if your business relies on automated decisions in any meaningful way, you’ll need to:
- Map those processes now (don’t wait until 2026),
- Build a system for tracking how and when they change, and
- Craft plain-language disclosures that are specific, truthful, and meaningful.
This isn’t just a ‘legal’ problem anymore — customers, regulators, and journalists are watching. No one wants to be the next brand caught auto-rejecting job applicants for having a gap year or charging loyal customers more than first-timers.
Tomorrow: we wrap our Privacy 2.0 series with what didn’t make it into the legislation (yet) — and where the next battle lines in Australian privacy reform are likely to be drawn.