Most AI strategy documents are filled with buzzwords that sound impressive and mean nothing. "Leverage synergistic AI capabilities to drive digital transformation across the enterprise." What does that actually mean? What should you do on Monday morning?
Here's a framework we use with every client. It cuts through the noise and helps you figure out where AI can actually help your business — and where it can't.
The Four-Question Filter
Before investing a dollar in AI, run every potential use case through these four questions:
1. Is There a Repeatable Decision?
AI is good at making the same type of decision many times with consistency. It's bad at novel, one-off judgment calls.
Good fit: "Should we approve this loan application?" (thousands of decisions, consistent criteria)
Bad fit: "Should we enter the Japanese market?" (one-time strategic decision, requires context AI doesn't have)
If humans in your organization are making the same type of decision hundreds or thousands of times, that's a signal AI can help.
2. Do You Have the Data?
AI needs examples to learn from. Not theoretical data — actual historical data that shows the relationship between inputs and the decision you want to make.
Ask yourself:
- Do you have at least 6-12 months of historical decisions?
- Are the outcomes recorded? (Not just the decision, but whether it was right)
- Is the data accessible, or is it locked in PDFs, emails, and people's heads?
If the data doesn't exist yet, the first step isn't AI — it's instrumentation. Start recording the decisions and outcomes now, build AI later.
3. What's the Cost of Being Wrong?
Every AI system will make mistakes. The question is: what happens when it does?
Low cost of error: Product recommendations (wrong recommendation = customer scrolls past it)
High cost of error: Medical diagnosis (wrong diagnosis = patient harm)
For high-stakes decisions, AI should augment humans, not replace them. The system recommends, the human decides. For low-stakes, high-volume decisions, full automation makes sense.
This isn't about avoiding AI in high-stakes areas — our healthcare triage system works in one of the highest-stakes environments there is. It's about designing the right level of human oversight.
4. Can You Measure Success?
Before you build anything, define what "working" looks like in numbers:
- What metric will improve?
- By how much?
- Over what time period?
- How will you measure it?
"Better customer experience" isn't measurable. "15% reduction in support ticket resolution time within 90 days" is.
If you can't define success in concrete terms, you're not ready for AI — you're ready for more research.
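The four questions above amount to a screening procedure, and it can help to make them explicit. Here is a minimal sketch of that filter in Python; the field names, the six-month data threshold, and the pass/fail logic are our illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """A candidate AI use case, with illustrative screening fields."""
    name: str
    repeatable_decision: bool   # Q1: same decision made at high volume?
    months_of_history: int      # Q2: months of recorded historical decisions
    outcomes_recorded: bool     # Q2: were the results of those decisions captured?
    error_cost: str             # Q3: "low" or "high" (shapes oversight, not pass/fail)
    success_metric: str         # Q4: e.g. "15% faster ticket resolution in 90 days"

def passes_filter(uc: UseCase) -> tuple[bool, list[str]]:
    """Run the four-question filter; return pass/fail plus reasons for any failure."""
    reasons = []
    if not uc.repeatable_decision:
        reasons.append("Q1: not a repeatable decision")
    if uc.months_of_history < 6 or not uc.outcomes_recorded:
        reasons.append("Q2: insufficient recorded data -- instrument first")
    if uc.error_cost not in ("low", "high"):
        reasons.append("Q3: classify the cost of error")
    if not uc.success_metric:
        reasons.append("Q4: no concrete success metric")
    return (not reasons, reasons)
```

Running the loan-approval and market-entry examples from earlier through this filter, the first passes and the second fails on Q1 (and, typically, Q2 and Q4 as well).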
Prioritizing AI Opportunities
Once you've filtered your ideas through the four questions, you'll have a shortlist of viable use cases. Prioritize them by plotting two dimensions:
Impact: How much business value will this create? (Revenue increase, cost reduction, risk mitigation)
Feasibility: How hard is it to build? Consider data availability, integration complexity, and organizational readiness.
Start with the high-impact, high-feasibility quadrant. These are your quick wins — they prove the value of AI to your organization and build momentum for bigger initiatives.
Avoid the temptation to start with the most technically impressive project. The flashiest AI isn't always the most valuable.
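The quadrant exercise is just a ranking, and a few lines of code make the ordering concrete. In this sketch we assume the team has rated each use case on a 1–5 scale for both dimensions, and we treat 4+ on both as the quick-win quadrant; the scale and the cutoff are illustrative choices, not part of the framework:

```python
def prioritize(use_cases: list[dict]) -> list[dict]:
    """Rank use cases: quick wins (impact >= 4 and feasibility >= 4) first,
    then everything else, each group ordered by combined rating."""
    quick_wins = [u for u in use_cases
                  if u["impact"] >= 4 and u["feasibility"] >= 4]
    rest = [u for u in use_cases if u not in quick_wins]
    by_score = lambda u: (u["impact"], u["feasibility"])
    return (sorted(quick_wins, key=by_score, reverse=True)
            + sorted(rest, key=by_score, reverse=True))
```

Note what this ordering does to a technically impressive but hard-to-ship project: a 5-impact, 2-feasibility idea sorts below a modest 4/4 quick win, which is exactly the point.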
The One-Pager Test
For each priority use case, write a one-page brief:
- The decision being automated or augmented
- The data available to train the model
- The integration with existing workflows
- The success metric and target
- The timeline to first measurable value
If you can't fill one page with concrete details, the use case needs more definition before it needs AI.
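The one-pager test can be enforced mechanically: every brief must answer all five points before it moves forward. A minimal completeness check, where the field names are our own illustrative mapping of the five bullets above:

```python
# The five required sections of a one-page brief (illustrative field names).
REQUIRED_FIELDS = [
    "decision",               # what is being automated or augmented
    "training_data",          # data available to train the model
    "workflow_integration",   # how it fits existing workflows
    "success_metric",         # the metric and target
    "timeline_to_value",      # time to first measurable value
]

def missing_sections(brief: dict) -> list[str]:
    """Return the required sections that are absent or empty in a brief."""
    return [f for f in REQUIRED_FIELDS if not brief.get(f, "").strip()]
```

An empty return list means the brief is complete enough to discuss; anything else tells you exactly which part of the use case still needs definition.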
Common Mistakes
Starting with the technology: "We should use GPT-4 / RAG / computer vision" — these are solutions looking for a problem. Start with the problem.
Boiling the ocean: Trying to build an "AI platform" that does everything. Build one thing that works, prove value, expand.
Ignoring integration: The model is 20% of the work. The other 80% is getting it into your workflow where people can actually use it. Budget accordingly.
No feedback loop: If the AI makes a mistake and nobody records it, the model never improves. Build correction mechanisms from day one.
What This Looks Like in Practice
When ShopSense came to us, they didn't ask for "an AI strategy." They asked: "Why do we keep running out of our best sellers while warehouses are full of stuff that doesn't sell?"
That question — specific, painful, measurable — led to a demand forecasting system that hit 92% accuracy and saved $1.8M in the first year.
The best AI strategies don't start with AI. They start with questions like that.