The Automated Managerialist's Dream

I remember sitting in a Gartner Conference workshop around 2013, somewhere in Sydney. We were talking about "Big Data" and the Australian Public Service.

The consultancy partner leading the session, all suit and enthusiasm, declared that data was the "new oil" and that if organisations didn’t "leverage their data assets" they would be obsolete within five years.

Most people looked bewildered. The prevailing feeling was that the agencies' fragmented, legacy IT systems, which required staff to manually copy and paste information between three different screens just to process a simple application, were already "big" enough. The gap between the consultant's vision and the operational reality was a chasm.

More than ten years later, the oil has changed, but the sermon remains the same.

Today’s salvation comes in the form of the Australian Government’s new AI Plan for the Australian Public Service 2025. It’s a glossy document, full of illustrations of people connected by glowing digital lines. And, as expected, it promises a revolution.

The Sound and the Fury

The vision, we are told, is to "improve government service delivery, policy outcomes, efficiency, and productivity, through substantially increasing the use of AI in government."

Stop me if you’ve heard this one before.

The document is a veritable feast of the kind of language that makes me want to rub my eyes and ask: how? We are going to be “future-ready,” we will “drive systemic adoption,” and ensure technologies are “fit-for-purpose.” It’s a cascade of commitments that sound decisive while committing to nothing concrete.

The Foreword assures us this plan ensures the APS “keeps pace with community expectations” and that the introduction of AI will be “transparent, inclusive, and well-managed.”

This is the standard boilerplate of modern managerialism: statements of intent so broad and virtuous that you can't reasonably disagree with them. Who doesn't want an inclusive, well-managed public service? But when the language drifts this heavily into relentless corporate positivity, I forget I'm reading government policy and could be convinced I've accidentally opened the annual report of a major bank.

It’s sound and fury, signifying the desperate need to appear in control of a rapidly moving technological wave.

The Chief AI Officer: A New Layer on the Cake

If there’s one thing the public sector loves more than a new strategy, it’s a new senior executive role to oversee it.

The plan mandates that “Each agency and department [appoint] a senior executive as Chief AI Officer” (p9). These officers, we are told, will “drive adoption and advocate for strategic change” and will have the “mission and authority to effect change” (p19).

This is the managerial reflex in its purest form. The assumption is that the barrier to effective change is a lack of senior oversight, a lack of driving. It reinforces the ideology that change is something done to the organisation by those at the top, rather than something grown from within.

Do we really believe that the complex, nuanced work of integrating AI into, say, sensitive social services or environmental policy will be solved by appointing another SES Band 2 to attend more meetings?

The document tries to differentiate this new role from the existing 'AI Accountable Official' (who handles governance). The Chief AI Officer is there for strategy and innovation. It's a classic separation of concerns, but it rarely survives contact with reality. It just creates another silo, another reporting line, another round of negotiation. Another layer on the managerial lasagna.

The Black Box of Productivity

The central promise, the justification for this entire enterprise, is productivity. The plan cites the Productivity Commission, suggesting massive GDP growth and claiming that by 2030, AI adoption could deliver "$19 billion in annual value" to the public sector (p7).

This is where we find the black box. The "Yada-Yada-Yada" of the plan.

The logic presented is this:

  • AI is powerful and revolutionary.
  • We will give every public servant access to AI tools (like "GovAI Chat").
  • We will mandate foundational training (an eLearning module, no doubt).
  • Yada Yada Yada.
  • Massive productivity gains and improved services.

What is missing is the crucial mechanism of how the work actually changes.

Public service work is not assembling widgets. It is complex, nuanced, and relational. How exactly does access to a secure chatbot translate into better policy advice on entrenched social issues? How does it speed up the glacial pace of inter-agency coordination?

The plan assumes that the tools themselves are the solution. It ignores the ingrained habits, the workarounds, the funding logistics, and the competing interests that define the public sector's reality.

If your core IT systems don’t talk to each other, if your processes are built on decades of legislative accretion, AI won’t fix it. It will just automate the dysfunction.

It risks becoming a very expensive way to generate mediocre first drafts of ministerial briefs.

Performative Engagement

The plan makes all the right noises about people and consultation. Under the ‘People’ pillar, there is much talk of “ongoing and genuine consultation” (p18) and ensuring employees “have a voice” (p19).

But look closer at the mechanisms. The APSC will issue a Circular setting out standards for consultation, aligning with “existing obligations in APS Enterprise Agreements.”

This is not co-design. This is not genuine engagement with the expertise of the practitioners who actually understand how the work gets done. This is formal consultation - the legal requirement to inform staff of changes and "genuinely consider" their feedback before proceeding with the pre-determined plan.

We know how this plays out. It’s Post-it participation. Staff are invited to workshops, asked for their "pain points," and then the management team disappears behind closed doors to synthesize the feedback into the solution they already designed. When the final plan emerges, bearing little resemblance to the staff input, it breeds cynicism, not buy-in.

The plan treats staff as objects to be "upskilled" and "managed through ongoing change," rather than experts whose insights are critical to avoiding disaster.

The Missing Link: Accountability

If we are truly embarking on this transformation, how will we know if it worked?

The plan focuses on input metrics: number of staff trained, availability of tools, appointment of officers. But where is the commitment to measuring the outcomes? Where is the plan to evaluate if service delivery times actually improved, or if policy advice became more robust?

We see the establishment of an “AI Review Committee” (p15) to provide oversight on high-risk use cases. But its recommendations are explicitly “non-binding.” It’s a toothless tiger, a committee designed to provide the illusion of oversight without the inconvenience of actual authority.

This lack of meaningful accountability is the hallmark of performative change. We launch the initiative with fanfare, but we rarely stick around to see whether the benefits were realised. We are masters of worded intent and paupers of follow-through and measurement.

The AI Plan 2025 is an ambitious document. But it is built on the same flawed foundations that have plagued public sector reforms for decades: a belief in technology as a silver bullet, a reliance on top-down managerial directives, and a profound misunderstanding of how complex systems actually change.

If we want genuine transformation, we need less managerialist dreaming and more engagement with the messy reality of public service work. Otherwise, we are just automating the merry-go-round.