The Strategy-Shaped Object

I've just had a look at the Australian Government’s "National AI Plan 2025" and I'm not surprised. It certainly looks like a strategy. It has vision statements, goals, and nine distinct actions. But look closer, and the familiar emptiness yawns back at you.

This is what Richard Rumelt calls "Bad Strategy." It floats above analysis, logic, and choice, held aloft by the hot hope that the difficult fundamentals can be avoided. It is the sound and fury of managerialism, signifying very little.

The Fluff and the Fury

Let’s start with the language. It’s a veritable feast of the vague and the virtuous.

The plan is anchored in three goals: Capturing the opportunity, Spreading the benefits, and Keeping Australians safe.

As I’ve said before, when you encounter principles that "you can't reasonably disagree with," they have no strategic value. Who is arguing for less safety? Who wants to contain the benefits? These aren't strategic choices; they are the bare minimum expectations of a functioning government.

This vapidity cascades through the document. The government will "ensure settings are fit for purpose" (p. 6) to attract investment. They will "Provide national leadership" (p. 7). By 2030, our workplaces will be "fully AI capable" and our workers "empowered" (p. 9).

Stop right there. "Empowered." "Fit for purpose." What does this actually mean? How will you do it? This is the language of the passive bureaucrat disguised as the active manager. It’s a linguistic performance designed to create the illusion of substance while studiously avoiding the difficult, messy reality of implementation.

The Missing Kernel

A good strategy requires Rumelt’s "kernel": a sharp Diagnosis, a clear Guiding Policy, and Coherent Action. This plan is missing the first two entirely.

Where is the Diagnosis? A diagnosis identifies the critical challenge - the core obstacle that, if overcome, unlocks the most value.

The "National AI Plan" fundamentally fails this test. The introduction (p. 6) reads less like a diagnosis and more like a victory lap. Australia is "consistently punching above our weight," attracted $10 billion in data centre investment, and the AI industry is "thriving."

If everything is going so well, what is the critical obstacle this plan is designed to overcome?

The plan vaguely gestures towards risks and a regional digital divide, but it never identifies the central bottleneck. Is it access to compute? A lack of specialized talent? A failure of specific industries to adopt? The dominance of US/China tech giants?

Where is the Guiding Policy? Because the diagnosis is absent, there can be no focused guiding policy. A guiding policy requires choice. It means saying "we will do this at the expense of that."

There is no focus here. The plan attempts to be all things to all people: build infrastructure, back local capability, attract investment, scale adoption, train everyone, improve public services, mitigate harms, promote ethics, AND partner globally. It’s a catalogue of initiatives, not a strategy.

The Black Box (The blah-blah-blah)

Without a guiding policy, the Actions become a grab-bag of initiatives connected to the goals only by magical thinking. This is the "Black Box" - the gap where the mechanism of change is supposed to live, but where we usually find only hand-waving.

Let’s look at Action 1: Build Smart Infrastructure. The focus is on attracting massive investment in data centers. Simultaneously, the plan stresses sustainability. The document notes data centre electricity demand is expected to triple by 2030 (p. 11).

How does the plan resolve this critical conflict?

It doesn't. The connection is a proposed set of "national data centre principles" (p. 11) to "clarify" what alignment with national interests looks like.

So, the strategy to manage the complex trade-offs of infrastructure is... to develop principles? This is the Blah-Blah-Blah. We will attract massive energy-consuming infrastructure, AND we will be sustainable, and the mechanism connecting these two is a document that hasn't been written yet.

Let’s scrutinize Action 2: Back Australian AI capability. The ambition is for Australia to lead, not just consume. To achieve this, the government points to "$460 million in existing funding already available or committed" (p. 13).

$460 million is nothing in the global AI landscape. Microsoft alone announced a $5 billion investment in Australian infrastructure (p. 12). The idea that a few hundred million in research grants and an "AI Accelerator" will create sovereign capability that can compete with the sheer gravitational pull of global tech giants is delusional. It assumes that funding research automatically equals sustainable, sovereign commercial success.

Finally, look at Action 4: Scale AI adoption. The plan rightly identifies a significant gap: only 29% of regional organisations are adopting AI compared to 40% in metro areas (p. 18).

What is the solution? The National AI Centre (NAIC) providing "tailored guidance" and "editable AI policy templates" (p. 19).

Do we genuinely believe that the barrier for a small business in remote Queensland is the lack of an editable policy template? This is classic performative action. It offers a superficial fix that fundamentally misunderstands the operational reality on the ground.

The Accountability Vacuum

Finally, we must ask the most crucial public management question: How will we know if this worked?

The section on "Implementing the plan and measuring success" (p. 8) is perhaps the most telling. It promises a "flexible, evidence-based approach," factoring in the "evolving nature of AI."

This is managerial-speak for: "We have no concrete metrics, no targets, and no baseline."

They will draw on existing data sources like the ABS. There is no commitment to a rigorous evaluation of whether these specific initiatives delivered their intended benefits. It is the classic "fire-and-forget" approach. By the time 2030 rolls around, the ministers who launched this will be long gone, and a new government will launch a new plan to fix the failures of the old one.

Turtles All The Way Down

This document is a symptom of a system that incentivizes the performance of strategy over the hard work of actual management. It avoids the difficult tradeoffs inherent in real strategy. It doesn't choose a specialization; it doesn't concentrate resources on a critical leverage point.

It promises leadership without making decisions. It promises empowerment without shifting power. It promises transformation while reinforcing the status quo. It’s turtles all the way down - a stack of goals and actions with no solid ground beneath them.

A note to the reader

The post above was entirely written by AI. I (Felix Barbalet) provided the context, reference material, and light drafting input, but the analysis and critiques are all AI generated. I have fact-checked the assertions, read the National AI Plan myself, and endorse the analysis above.

There is a deliberate irony in using an AI to critique a national plan that lacks a "kernel" because it highlights exactly what AI is and isn't. AI could not have written this post without my diagnosis and guiding context. It did not "have" these opinions; it synthesized them based on my inputs and intent.

This highlights the role of AI in human creative expression: it is not a replacement for the spark of an idea or the weight of a decision. It is an instrument - a synthesizer - that allows us to scale our critical thinking and sharpen our articulation. It amplifies the human signal; it does not replace the source.
