Most FP&A teams say they are "exploring AI." Almost none have actually assessed where they stand. There is a difference between curiosity and readiness, and the teams that confuse the two are about to waste a lot of money.
I have seen this pattern dozens of times. A finance leader buys a tool before fixing the basics, the pilot looks impressive for two weeks, and then everyone realizes the real bottleneck was never the model. It was the operating discipline underneath it.
AI readiness matters because AI amplifies whatever environment you put it into. If your data is messy, AI scales messy. If your planning process lives in five versions of the same spreadsheet, AI makes that confusion faster. If nobody on the team can tell a good output from a polished hallucination, you are not automating work. You are automating risk.
That is why I would assess readiness before I would buy anything. The wrong order is common: demo first, procurement second, operating model third. The right order is more boring and much more effective: assess the environment, tighten what is weak, then decide where AI can create leverage.
The 4 Dimensions Of AI Readiness

Score each dimension below from 1 to 5:

1 = weak and mostly reactive
3 = workable but inconsistent
5 = repeatable, trusted, and ready to scale
| Dimension | What a 1 looks like | What a 5 looks like |
|---|---|---|
| Data Quality | Conflicting files, manual exports, no trusted source of truth | Clean definitions, governed access, reliable refreshes |
| Process Clarity | Key workflows live in people's heads | Core FP&A cycles are documented and repeatable |
| Team Skills | Curiosity but no practical AI fluency | Team can test prompts, validate outputs, and work with data partners |
| Leadership Alignment | AI framed as cost cutting or side experimentation | CFO treats AI as a strategic capability with clear guardrails |
1. Data Quality
This is where most teams fail. Not because they are careless, but because finance organizations have usually grown through urgency, not architecture. A quick export here, a manually adjusted file there, one critical mapping owned by one analyst. That works until you ask AI to sit on top of it.
Ask your team:
- Can we get the core numbers for revenue, headcount, opex, and cash from trusted sources without manual file hunting?
- Are the key definitions documented, especially for metrics that drive executive reporting?
- When numbers change, can we trace why they changed without starting a Slack archaeology project?
Readiness score: 1 | 2 | 3 | 4 | 5
If you score this dimension low, do not buy an AI workflow tool and expect it to rescue you. It will not. It will just give you faster answers built on unstable inputs.
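The traceability question above lends itself to a lightweight check you can run today. As an illustrative sketch (the department names, figures, and tolerance are hypothetical, not a prescribed tool), you could diff two snapshots of the same metric and flag exactly where the number moved:

```python
# Illustrative sketch: compare two snapshots of the same metric and flag
# discrepancies, so "why did this number change?" has a starting point
# instead of a Slack archaeology project. All values are hypothetical.

def diff_totals(old: dict[str, float], new: dict[str, float],
                tolerance: float = 0.01) -> list[str]:
    """Return a human-readable list of line items whose totals moved
    by more than the tolerance between the two snapshots."""
    changes = []
    for item in sorted(set(old) | set(new)):
        before, after = old.get(item, 0.0), new.get(item, 0.0)
        if abs(after - before) > tolerance:
            changes.append(f"{item}: {before:,.2f} -> {after:,.2f}")
    return changes

# Usage: two revenue snapshots, e.g. last close vs. this close.
last_close = {"Sales": 1_200_000.00, "Services": 300_000.00}
this_close = {"Sales": 1_250_000.00, "Services": 300_000.00}
print(diff_totals(last_close, this_close))
# -> ['Sales: 1,200,000.00 -> 1,250,000.00']
```

The point is not the code; it is that a team with trusted sources can answer "what changed?" mechanically, while a team living in manual exports cannot.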
2. Process Clarity
AI cannot automate what you have never defined. That sounds obvious, but it is amazing how many FP&A teams cannot clearly describe their own monthly forecast cycle, variance review flow, or board deck build process in writing. They know how to do it. They just have never externalized it.
Ask your team:
- Could a new analyst read a one-page document and understand how your forecast or variance process works?
- Are handoffs, approvals, and decision points clear, or do they depend on tribal knowledge?
- Do you know which steps are true analysis and which steps are just administrative drag?
Readiness score: 1 | 2 | 3 | 4 | 5
I have seen teams try to automate a process that three different people describe three different ways. That is not an AI problem. That is a management problem.
3. Team Skills
You do not need every FP&A professional to become a data engineer. You do need at least a few people on the team who understand how to work with AI seriously. That means knowing how to frame a prompt, how to test outputs, how to spot failure modes, and when to pull in data or engineering partners.
Ask your team:
- Does anyone on the team know how to structure prompts for analysis instead of treating AI like a search box?
- Can someone validate whether an output is directionally useful, numerically sound, and traceable?
- Does the team understand the basics of where data comes from, how it moves, and where it breaks?
Readiness score: 1 | 2 | 3 | 4 | 5
This dimension matters because AI adoption dies when the team either overtrusts the model or refuses to use it at all. Both are skill problems.
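One concrete skill from the checklist above is testing whether a generated summary is numerically sound before anyone forwards it. A minimal sketch, assuming hypothetical figures and a made-up tolerance: pull the dollar amounts out of the model's text and tie each one back to the source data.

```python
# Illustrative sketch: extract dollar figures from an AI-written summary
# and verify each one ties to a source number. The summary text, source
# values, and tolerance are hypothetical examples.

import re

def extract_figures(text: str) -> list[float]:
    """Pull $-prefixed figures (with optional K/M suffix) out of prose."""
    multipliers = {"K": 1e3, "M": 1e6, "": 1.0}
    figures = []
    for amount, suffix in re.findall(r"\$([\d,]+(?:\.\d+)?)([KM]?)", text):
        figures.append(float(amount.replace(",", "")) * multipliers[suffix])
    return figures

def ties_to_source(text: str, source_values: list[float],
                   tolerance: float = 0.005) -> bool:
    """True only if every figure cited in the text matches some source
    value within a relative tolerance."""
    return all(
        any(abs(fig - src) <= tolerance * max(abs(src), 1.0)
            for src in source_values)
        for fig in extract_figures(text)
    )

summary = "Opex came in at $1.2M, driven by a $150K increase in cloud spend."
source = [1_200_000.0, 150_000.0, 300_000.0]
print(ties_to_source(summary, source))  # every cited figure ties out
```

A check like this does not judge whether the narrative is right. It only catches the cheapest failure mode: a polished paragraph citing numbers that exist nowhere in your data.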
4. Leadership Alignment
This one gets ignored too often. If the CFO sees AI as a cheap labor story, the team will treat it defensively. If leadership treats AI as a strategic capability, the conversation changes. You get better pilots, better sponsorship, and better cross-functional support.
Ask your leadership team:
- Does the CFO have a clear point of view on where AI should improve finance execution or decision quality?
- Are risk, governance, and review expectations explicit, or is the message just "go experiment"?
- Is success defined as headcount reduction, or as faster cycles, better insight, and stronger business partnership?
Readiness score: 1 | 2 | 3 | 4 | 5
The best AI finance programs I have seen were not driven by cost-cutting theater. They were driven by leaders who wanted the team spending less time collecting numbers and more time shaping decisions.
How To Read Your Total

Add your four dimension scores together (the maximum is 20):

Below 8: Start here. You do not need a bigger tool budget yet. You need operating discipline.
8 to 14: Accelerate. You have enough foundation to pilot targeted AI use cases, but you still have weak spots that will slow scale.
15 and above: You are ahead of most peers. Not finished, but genuinely ready to move faster than the market.
The interpretation matters. A high score does not mean you should automate everything. It means you have earned the right to be selective and aggressive. A low score is not bad news either. It is clarity. And clarity is a much better starting point than another AI vendor deck.
Tool Spotlight: Start A Master Data Dictionary In Notion
If your biggest weakness is data readiness, start with something embarrassingly simple: a master data dictionary in Notion.
List your critical metrics. Define each one. Name the system of record. Note the owner. Document how often it refreshes. Flag the common failure points. That single habit will expose more operational mess than most software evaluations ever will.
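The same fields can live anywhere, Notion included. As a toy sketch of what one entry might capture (the metric, system of record, owner, and cadence below are hypothetical examples), with a check that no required field was left blank:

```python
# Illustrative sketch of one master data dictionary entry. The metric,
# system of record, owner, and refresh cadence are hypothetical.

from dataclasses import dataclass, asdict

@dataclass
class MetricDefinition:
    name: str               # metric as it appears in exec reporting
    definition: str         # one-sentence business definition
    system_of_record: str   # the single trusted source
    owner: str              # who answers questions about it
    refresh_cadence: str    # how often the number updates
    failure_points: str     # where this metric commonly breaks

def missing_fields(metric: MetricDefinition) -> list[str]:
    """Return the names of any fields left blank."""
    return [k for k, v in asdict(metric).items() if not str(v).strip()]

arr = MetricDefinition(
    name="ARR",
    definition="Annualized value of active recurring contracts.",
    system_of_record="Billing system, not the sales CRM",
    owner="Revenue operations analyst",
    refresh_cadence="Daily, after the overnight billing sync",
    failure_points="Mid-month contract amendments land a day late",
)
print(missing_fields(arr))  # [] means the entry is complete
```

The format is irrelevant. What matters is that every metric has an owner, a source, and a definition someone was forced to write down.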
I like this because it is cheap, fast, and hard to fake. Before you spend on transformation, prove that your team can agree on what the numbers actually mean.
Action Item
Run this checklist in a 30-minute team sync this week. Score each dimension together, compare where the team disagrees, and send the result to your CFO with one sentence: "Here is where we are actually ready for AI, and here is where we are not."
That conversation alone will put you ahead of the teams still calling vendor demos "strategy."
Closing
My honest take: most FP&A teams are not at a 15 yet. Most are living in the middle, usually dragged down by data quality and process ambiguity. That is not a criticism. It is just where the market is. I have built teams from scratch, I am working with Navan right now, and the pattern is consistent: the teams that move fastest with AI already trust their numbers and know how their work really gets done.
Issue #4 will build on this one: how to run an AI pilot inside FP&A without turning it into governance theater, analyst anxiety, or another stalled innovation project.