Most AI Strategies Fail Before They Start
Organisations are spending millions on AI transformation programmes built on a foundation of assumptions. The problem is rarely the technology. It is the strategy - or what passes for one.
Simon Elisha
Founder & CEO
There is a pattern I see repeatedly in organisations pursuing AI transformation. It goes something like this: a board member reads an article, a CEO attends a conference, and within weeks there is a mandate to “develop an AI strategy.” A leader is appointed. A consulting firm is engaged. A roadmap is produced. Tools are procured. And twelve months later, the organisation has spent significant time and capital on capabilities it did not need, solving problems it did not have.
This is not an implementation failure. It is a strategy failure. And it happens before anyone writes a line of code, or activates an agent.
The Cargo Cult Problem
The phrase “cargo cult” comes from post-war Pacific islands, where communities built replica airstrips hoping to attract the planes that had brought supplies during the conflict. The planes never came.
Many AI strategies follow the same logic. Organisations observe companies successfully using AI, so they too must adopt AI. They replicate the visible artefacts - the executive leader, the tools, the teams, the well-named programmes - without understanding the underlying conditions that made those investments sensible in the first place.
The result is strategy by mimicry - it feels productive, but it delivers very little. In fact, it is a net loss, as it wastes time, energy and focus.
Capability vs. Need
The fundamental question most AI strategies fail to ask is deceptively simple: what problem are we solving?
I have seen this question go unasked, or inadequately answered, through every technology transformation.
Not “what could AI do for us” - that question has infinite answers, most of them irrelevant. The critical question is narrower and harder: where does our organisation currently make decisions poorly, slowly, or expensively, and would better information or faster pattern recognition materially change the outcome?
This reframing matters because it shifts the conversation from capability to need. AI vendors will happily demonstrate what their tools can do - they are cool and fun to build with! What they cannot tell you is whether your organisation needs those capabilities, whether you have the data infrastructure to support them, or whether the problem you are trying to solve is even a technology problem at all.
I have seen organisations invest in machine learning platforms when their actual bottleneck was a spreadsheet maintained by one person in finance. I have seen predictive analytics programmes launched when the real issue was that nobody trusted the data in the source systems. The technology was impressive. The strategy was a dud.
“AI Strategy” Often Means “Slideware Strategy”
Here is an uncomfortable truth: many AI strategies are written by people who have not used AI “in anger” or on a regular basis. The consulting firm that produced your AI roadmap often also has partnerships with platform vendors. They review the slides and the websites, synthesise a “use this and you will be golden” recommendation, and call it a strategy.
This is not necessarily malicious, but it is structural. When your advisor’s revenue depends on you buying technology, the advice will tend toward buying technology. The question “do you actually need this?” is rarely profitable to ask - but it is a multi-million-dollar question!
An effective AI strategy starts with the organisation, not the technology. It maps existing workflows, identifies genuine decision points, and asks where - specifically - better automation, prediction, or pattern recognition would change outcomes. In many cases, the honest answer is: not yet, or not here.
These are not exciting answers. They do not fill slide decks with transformation timelines. But they are accurate, and accuracy makes for good strategy.
What Good Looks Like
A sound AI strategy has several key characteristics:
It starts with mapping, not shopping. Before evaluating any tool, it maps where the organisation creates and destroys value, where decisions are made, and where information flows break down. This is closer to an operational analysis than technology assessment.
It distinguishes between automation and intelligence. Many tasks that organisations want to “AI” are actually automation problems - they just need better workflows. Misclassifying these wastes resources and creates unnecessary complexity.
It accounts for organisational reality. The best AI initiative in the world fails if the people who need to adopt it do not trust it, understand it, or want it. Strategy that ignores culture, incentives, and capability is not strategy - it is a forlorn hope (and hope is not a strategy!).
It has a credible “do nothing” option. Every good strategy includes an honest assessment of what happens if you do not act. Sometimes the answer is “It will be OK” - and that is useful to know before committing millions and years to a transformation.
The Hard Part
The reason organisations skip this work is understandable. It is not fun, or cool, or fuel for your next TEDx Talk. It can be slow, uncomfortable, and reveal hard truths. Telling a board that the organisation is not ready for AI - or that it does not need AI in the way it thinks it does - requires a level of candour that most internal teams and external advisors find difficult.
But the alternative is worse. The alternative is a strategy that looks like a strategy, costs like a strategy, and delivers like a cargo cult. A monumental waste of money and effort.
If your organisation is developing an AI strategy, the first question to ask is not “which AI tools should we buy?” It is “what specific problems are we solving, and how do we know they are the right ones?”. Closely followed by “who is advising us, and what are their incentives?”.
Start there and you are ahead of 80% of other organisations.
Ready to discuss your challenge?
Vantage Fulcrum works with a diverse range of clients. If you're facing a consequential technology decision, let's talk.