The All-Singing, All-Dancing Central Team
There’s a persistent belief in the public sector that if the existing machinery is slow, risk-averse, and fragmented, the solution is to build a new, smaller machine next to it. A "Centre of Excellence," a "Transformation Office," an "Innovation Hub."
It’s the allure of the crack squad. The idea that if we just assemble the right group of "high-performing" individuals and free them from the shackles of the mothership, they will slice through the Gordian Knot of bureaucracy.
I’ve seen this movie before. I’ve even been cast in it a few times.
One of the public servants who founded the AI CoLab - a genuine space for cross-sector collaboration that I’ve happily participated in - pinged me about my critique of the APS AI Plan. He pointed to the creation of a new "AI Delivery and Enablement" (AIDE) team, which has six months to establish the longer-term function (dependent on funding and MYEFO), and asked what I make of it, particularly its mandate for monitoring and evaluation.
He knows, as I do, that the gravitational pull of risk aversion in the APS is immense. So, the question is: Is AIDE the escape velocity the APS needs?
The Noah’s Ark of Capability
Let’s look at the job advertisement for the team tasked with establishing this function.
It promises an “agile fashion,” a “fast-moving approach,” and demands that team members be “comfortable with ambiguity and uncertainty.” This is the standard liturgy of the modern transformation unit. It signals intent without addressing the systemic barriers that make the rest of the organisation profoundly uncomfortable with uncertainty.
The skills required are telling. They are seeking a “multi-disciplinary expert team.” The list is extensive: designers, analysts, compliance experts, systems thinkers, data analysts, and - my personal favourite - "Venture architects."
Venture architects? In the Department of Finance? This is what happens when the public sector tries to cosplay as a Silicon Valley startup. It’s a linguistic signal that we value the aesthetics of innovation over the substance of public service.
The intention is to assemble a Noah’s Ark of capability - two of every kind of expert - and set them loose on the wicked problems of AI adoption. But what happens when you put all these specialists together without changing the system they operate within?
They spend all their time coordinating, negotiating, defining their roles, and explaining their existence to the rest of the organisation. They are forced to "manage stakeholders" and endlessly refine slide decks, rather than actually delivering change.
Navigating or Changing?
The most revealing line in the skills required is this: “Experts in navigating public service processes and compliance.”
And there it is.
If the goal is genuine transformation - to overcome the risk aversion and the glacial pace of change - then you don't need people who are experts at navigating the existing compliance regime. You need people empowered to change it.
The AIDE team is being set up not to challenge the status quo, but to become exceptionally good at working within it. They are tasked with “helping tackle common adoption barriers” (p19 of the Plan), but what happens when the barrier is a mandated security protocol, a rigid funding cycle, or the entrenched operating model of a major service delivery agency?
A small team sitting in the Department of Finance does not have the leverage to force Services Australia or the ATO to re-engineer their core processes. They will advise, they will cajole, they will produce beautiful "lessons learned" reports. But they will not accelerate systemic change.
The M&E Question: Turtles All the Way Down
I was specifically asked about the Monitoring and Evaluation (M&E) component, which is (as regular readers know) my favourite hobby horse. The lack of follow-up is the Achilles' heel of public sector reform.
The job ad describes the 'Monitoring' stream as:
“tracking the application and implications of generative AI to ensure the ongoing responsiveness and agility of the APS AI Plan, understand emerging issues, and establish relevant metrics to support the Plan.”
Read that carefully. They are establishing metrics to support the Plan. They are tracking implications to ensure the agility of the Plan.
This is not outcome-focused evaluation. This is monitoring the implementation of the strategy itself. It’s turtles all the way down.
The crucial question is not, "Is the Plan agile?" The question is, "Did the adoption of AI actually deliver the $19 billion in annual value promised on page 7?" Did service delivery times decrease? Did the quality of policy advice improve in a measurable way?
When M&E is focused inward on the strategy rather than outward on the public impact, it becomes a self-referential exercise. It allows the administration to report success without ever having to confront the messy reality of whether the underlying initiative actually worked.
The Limits of Enablement
I appreciate the intent behind AIDE. There is a need for coordination and shared learning. But the biggest barriers are systemic: legacy IT, Byzantine procurement rules, and the deeply ingrained culture of risk aversion.
Establishing another high-performing, agile-ish, venture-architecting central team is the easy part. It is visible action. The hard part is reforming the underlying system.
Without the authority to change the fundamental incentives that drive the public service machinery, AIDE risks becoming just another layer of well-intentioned effort, producing beautiful artifacts of busy-work while the underlying machinery grinds on, unchanged.

Strategy vs. The Big, Glossy Void
(I wanted to go back, with the benefit of Good Strategy/Bad Strategy in mind, and re-examine the Plan.)
I once watched a senior leadership team spend three days in an expensive off-site retreat to develop their new "strategy." They emerged exhausted but triumphant, clutching a document filled with Venn diagrams, pillars, and commitments to “leverage synergies” and “unlock value.” When I asked one of them what the actual strategy was - what they were going to stop doing, or how they were fundamentally changing their approach - he looked confused. "It's all in the document," he said, pointing to the glossy.
It wasn't. It was a list of aspirations held aloft by the hot air of corporate speak.
This tendency to confuse ambition with planning is endemic in the public sector. It allows us to avoid the hard work of diagnosis and the pain of making real choices.
Richard Rumelt, in his seminal work Good Strategy/Bad Strategy, provides the perfect acid test for this phenomenon. He argues that good strategy is rare because it requires focus, hard choices, and a ruthless assessment of reality. It must contain a "kernel": a clear Diagnosis of the core challenge, a Guiding Policy to address it, and Coherent Actions focused on high-leverage points.
Bad strategy, conversely, is characterised by fluff, a failure to face the actual challenge, and the cardinal sin: mistaking goals for strategy.
With this lens, let's look (again) at the AI Plan for the Australian Public Service 2025. It is, I am sorry to report, a masterclass in Bad Strategy.
1. Mistaking Goals for Strategy
The APS AI Plan is brimming with desirable outcomes. The vision is to “improve government service delivery, policy outcomes, efficiency, and productivity” (p5). It promises "$19 billion in annual value" to the public sector by 2030 (p7).
These are wonderful goals. But they are not a strategy.
Rumelt warns against presenting a desired end state as if it were a plan to achieve it. The APS plan falls directly into this trap. It eloquently describes what it wants (more AI use, more productivity) but completely leapfrogs the how.
The document is laden with the requisite fluff: We will be “future-ready,” “drive systemic adoption,” and ensure initiatives are “people centred” (p6). This language is designed to soothe, not to direct. It creates the illusion of substance while avoiding the difficult fundamentals of how complex systems actually change.
2. The Superficial Diagnosis
A good strategy starts with a brutal, honest diagnosis: What is the critical bottleneck preventing progress?
The AI Plan’s diagnosis is superficial at best. It identifies that AI adoption is “inconsistent” (p11) and that there are “barriers to adoption relating to culture, trust, and resourcing” (p5).
This is not a diagnosis; it's a description of symptoms. It’s the equivalent of a doctor telling a patient, "You have a fever." Yes, we know. But why?
The Plan utterly fails to diagnose the actual sickness plaguing the APS. It doesn’t confront the brittle legacy IT infrastructure that can barely handle current workloads, let alone advanced AI. It doesn’t address the Byzantine procurement processes that stifle innovation. And it certainly doesn’t touch the political incentives that drive extreme risk aversion among senior leaders.
By failing to diagnose the root causes of the dysfunction it seeks to fix, the strategy cannot possibly hope to address them.
3. The Absence of a Guiding Policy
If the diagnosis is weak, the guiding policy is non-existent. A guiding policy is the overall approach for dealing with the obstacles identified. It requires choice and focus - deciding what you will do and, crucially, what you will not do.
The AI Plan offers no such focus. Instead, it presents three "mutually reinforcing pillars": Trust, People, and Tools (p8).
These are not guiding policies. They are categories - the basic building blocks of any organisational initiative. They provide no direction, no leverage, no strategic advantage.
Where is the hard choice? Is the APS going to prioritise AI for high-volume transactions to free up staff? Or will it focus on augmenting expert decision-making in high-risk policy areas?
The Plan tries to do everything, everywhere, all at once. It mandates universal action: every public servant gets training; every agency gets a Chief AI Officer; everyone gets access to tools.
This "spray and pray" approach is the antithesis of strategy. It disperses effort rather than concentrating it.
4. The Activity Trap (Incoherent Action)
Without a clear diagnosis or guiding policy, "coherent action" is impossible. What we get instead is an activity trap: a long list of initiatives that look busy but lack coordination and leverage.
The Plan defaults to the standard managerial reflex: create new roles and new committees.
The centrepiece is the mandate for "Chief AI Officers" (p19) in every agency to “drive adoption.” This is the classic fallacy that the barrier to change is a lack of senior oversight. It adds another layer to the managerial lasagna, creating more coordination overhead rather than addressing the core challenge.
Simultaneously, it creates a central AIDE team in the Department of Finance to "accelerate the uptake." This is not coherence; it is a recipe for confusion and turf wars.
The actions are fragmented. Providing "GovAI Chat" (p23) to all staff while the underlying data governance is immature is not strategic; it's performative. Mandating eLearning modules (p18) without changing the processes people use daily achieves nothing.
These are not mutually reinforcing steps designed to overcome a diagnosed challenge. They are a grab-bag of initiatives designed to signal that the government is "doing something" about AI.
The Strategy of Hope
The APS AI Plan 2025 is not a strategy. It is a glossy brochure of aspirations. It flourishes, as Rumelt says, because it “floats above analysis, logic and choice, held aloft by the hot hope” that the difficult fundamentals can be avoided.
It is easier to launch a plan full of buzzwords, appoint new officers, and roll out new tools than it is to do the hard, slow work of genuine systemic reform.
It is performative change, designed to signal action rather than achieve results. And in a few years, when the productivity gains fail to materialize, we will, no doubt, have another plan.