Productivity or Pink Slips? The Rise of Agentic AI
Ok, enough with the scaremongering – let’s thrash it out.
Is AI going to replace us any time soon?
One perspective on the medium-term future:
“Cancer is cured, the economy grows at 10% a year… and 20% of people don’t have jobs.”
So said Dario Amodei, CEO of Anthropic, in one of the most jarring soundbites to emerge from the AI sector this year. It’s not a dystopian movie pitch — it’s a plausible trajectory.
The Brisbane Times recently spotlighted how Telstra is leaning hard into this future, and it starts with deploying so-called Agentic AI – discrete AI tools able to carry out multi-step tasks with minimal oversight. From automating customer service to writing code, the $54 billion telco is betting big that its next era won’t be driven just by fibre and frequency, but by “digital agents”: AI tools with autonomy to act, learn and optimise at scale.
While Telstra CEO Vicki Brady didn’t give hard numbers on expected job cuts, she did suggest the company’s workforce will likely be smaller by 2030. No bold claims — just quiet math. That’s the real face of the AI revolution: not mass firings, but jobs that never get hired in the first place.
Enter the Digital Employee
Nvidia’s Jensen Huang calls them “digital employees” — autonomous, specialised AI agents that handle roles from cybersecurity to network monitoring to legal summarisation. Unlike your flesh-and-blood team, they don’t sleep, unionise, or call in sick.
Tech giants like Microsoft, Canva, and Shopify are already eliminating roles that generative AI can perform faster, cheaper or more reliably. Shopify’s test for approving new hires? Prove the job can’t be done by AI.
Even highly paid software engineers and technical writers are now brushing up résumés — or joining unions. The shock isn’t just the job losses — it’s the redefinition of what work is.
The Illusion of Understanding
And yet — for all its prowess, there’s a lot that AI still doesn’t understand.
It doesn’t feel shame, pride, love, loyalty or regret. It doesn’t know the weight of a moral dilemma or the subtle ache of ambiguity. It doesn’t take responsibility. It hasn’t grown up anywhere. It’s very good at simulating humanity, but it hasn’t cracked what it means to be human.
Here are just a few areas where that matters:
• Moral Judgment & Empathy
AI doesn’t feel anything. It can mimic empathetic language, but it doesn’t understand suffering, joy, duty, shame, or dignity. That matters in:
- law (e.g. sentencing, equitable remedies)
- medicine (e.g. breaking bad news)
- management (e.g. mentoring, handling conflict)
- creative industries (e.g. stories that evoke genuine emotion)
• Contextual Wisdom and Ethical Trade-Offs
Humans weigh competing priorities in fluid, unquantifiable ways. A judge balancing public policy with individual hardship, or a parent navigating fairness between siblings — AI can model it, but not feel the stakes or bear the consequences.
• Lived Experience and Cultural Intuition
Even with perfect training data, AI lacks a body, a history, a community. It hasn’t known pain or formed personal relationships. It cannot speak authentically from or to a place of real cultural knowledge.
• Responsibility and Accountability
We trust humans with hard decisions because they can be held responsible. There’s no moral courage or ethical failure in the output of a large language model — only the illusion of one.
These aren’t just philosophical quibbles. They’re pressing questions for:
- Law: Who bears blame when an AI agent misfires?
- Healthcare: Who decides whether aggressive treatment is compassionate or cruel?
- Leadership: Can you coach courage into someone via algorithm?
The Uncomfortable Part
AI already mimics a lot of that better than expected. Consider:
• Empathy Simulation
GPT-4, Claude and others can write with stunning emotional acuity. They generate responses that feel empathetic, artistic or wise. It’s not authentic — but it’s increasingly indistinguishable, and often considered “good enough” by the humans receiving it.
• Decision-Making and Pattern Recognition at Scale
AI already outperforms humans at certain medical diagnoses, legal research, contract review and logistics. Its consistency and recall beat even expert practitioners — and that pushes decision-making downstream to human review of AI output.
• Creative Collaboration
AI is co-authoring books, scoring music, designing buildings. The raw ideas remain human-led (for now), but AI increasingly does the scaffolding. The assistant as co-creator is here.
• Agentic AI and Task Autonomy
Agentic AI can take a task, plan it, execute it, and evaluate the results. That’s edging close to synthetic intentionality. In limited domains, it already feels like independent judgment.
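For readers who want a concrete picture of that plan–execute–evaluate loop, here is a minimal, purely illustrative Python sketch. Every name in it (Task, plan, execute, evaluate, run_agent) is hypothetical and not drawn from any vendor’s API; a real agent framework would wire a language model and external tools into each of these steps.

```python
# Hypothetical sketch of an agentic plan-execute-evaluate loop.
# All names here are illustrative assumptions, not a real framework's API.

from dataclasses import dataclass, field


@dataclass
class Task:
    goal: str
    steps: list[str] = field(default_factory=list)
    results: list[str] = field(default_factory=list)


def plan(task: Task) -> list[str]:
    # A real agent would call an LLM here to decompose the goal into steps.
    return [f"step 1 towards: {task.goal}", f"step 2 towards: {task.goal}"]


def execute(step: str) -> str:
    # A real agent would invoke a tool (search, code, API call) and capture its output.
    return f"completed: {step}"


def evaluate(task: Task) -> bool:
    # A real agent would ask the model whether the results actually satisfy the goal.
    return len(task.results) == len(task.steps) and len(task.steps) > 0


def run_agent(goal: str, max_rounds: int = 3) -> Task:
    task = Task(goal=goal)
    for _ in range(max_rounds):
        task.steps = plan(task)                      # plan
        task.results = [execute(s) for s in task.steps]  # execute
        if evaluate(task):                           # evaluate
            break
    return task


if __name__ == "__main__":
    print(run_agent("summarise last quarter's network incidents"))
```

Even this toy version shows where the “synthetic intentionality” impression comes from: the loop sets its own sub-goals, acts on them, and judges its own success before a human ever sees the output.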
The Upshot
What AI can do — increasingly well — is mimic language, logic and even tone. It can co-author your policy doc, diagnose your anomaly, draft your contract (though it still does this terribly at present – which, frankly, makes the contracts lawyer in me feel safe for now), and script your empathy.
But ask it to weigh competing values in an evolving ethical context — or even just draft a nuanced commercial agreement, conduct accurate scientific or legal research, or develop a strategy based on historical fact — and you quickly meet its limits.
Those edge cases still belong to humans in the loop.
So Who Owns the Output?
As businesses delegate more high-order tasks to autonomous agents, legal questions are multiplying:
- Who owns the IP generated by a self-directed AI agent?
→ At this stage, probably no one — though ordinary IP rules apply to any human-developed improvements.
- Can AI-created processes be patented or protected as trade secrets?
→ Not patented without significant human input — at least not under current Australian (or global) standards. Trade secrets? Only if the process was generated in confidential circumstances, and even then, likely only protected contractually — or by a very sympathetic equity judge with a soft spot for machines and a broad view of what counts as confidence.
- Will the law begin to treat AI output as a kind of quasi-employee contribution?
→ Hard to say. But this author’s view: yes — we’re likely to see forms of legal recognition for things created wholly or partly by generative AI, especially as its use becomes ubiquitous.
Telstra’s ambition to shift from “network provider” to “bespoke experience platform” only deepens the stakes. If AI manages your venue’s mobile traffic to prioritise EFTPOS over selfies, who owns that logic? What’s the IP — and who gets paid?
We’re very likely to find out soon.
We May Not Be Replaced, But We Are Being Rerouted
What’s unfolding isn’t the erasure of human work — but its redistribution.
Jobs once seen as safe — legal drafting, coding, customer care — are being sliced up and reassembled into workflows where humans supervise, train or rubber-stamp what AI proposes.
We’re becoming fewer creators, more editors. Fewer builders, more overseers.
This is the heart of the AI transition: it’s not about making us obsolete. It’s about making us team players — if not optional ones — in a landscape of role transformation driven by the pursuit of results.
That’s why this isn’t just an IP question. It’s a human one.
So yes — cancer might be cured. The economy might boom. But as the digital employee clocks in, we’ll need more than productivity gains.
We’ll need new answers — about ownership, ethics, responsibility and value. Not just in law, but in how we define a fair and meaningful future.