Methodology

Triangulation: why three data sources are better than one (especially for AI readiness)

The AI Readiness Diagnostic borrows a principle from research methodology: triangulation. Here's why it finds what self-assessments and vendor audits miss.

23 March 2026 · 5 min read

Every founder has a theory about their AI readiness. "We're using AI already." "We need better tools." "We need to hire a data person." The theory is usually based on one data source — their own observation from inside the system.

That's the weakest possible diagnostic. Not because founders are wrong, but because a single vantage point has systematic blind spots. You see what you're looking for. You miss what you're not.

This matters especially for AI readiness, because the gap between perceived AI capability and actual AI readiness is usually enormous — and the tools companies use to assess themselves (vendor scorecards, self-evaluations, team surveys) all share the same blind spots.

What triangulation means

Triangulation is a research methodology principle: you don't trust any single source of data. You cross-reference at least three independent sources and look for convergence.

It's the same principle an auditor uses (financial records vs. bank statements vs. physical inventory), a journalist uses (source A vs. source B vs. documents), and a doctor uses (symptoms vs. blood work vs. imaging).

The logic: if three independent data sources point to the same conclusion, you can be confident the finding is real. If they diverge, you've found something worth investigating — a gap between perception and reality.
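
To make that rule concrete, here's a minimal sketch in Python. It's illustrative only: the source names and verdict labels are invented for the example, and the real diagnostic is a human process, not a script.

```python
from collections import Counter

def triangulate(findings: dict) -> str:
    """Cross-reference independent sources' verdicts on one question.

    `findings` maps a source name ("interviews", "documents",
    "external_patterns") to that source's verdict. Hypothetical shape.
    """
    if len(findings) < 3:
        return "insufficient: triangulation needs at least three sources"
    verdicts = Counter(findings.values())
    if len(verdicts) == 1:
        # All sources converge: treat the finding as validated.
        return f"validated: {next(iter(verdicts))}"
    # Sources diverge: the gap itself is the finding to investigate.
    return f"investigate: sources diverge -> {dict(findings)}"

print(triangulate({
    "interviews": "ready",
    "documents": "not ready",
    "external_patterns": "not ready",
}))
# investigate: sources diverge -> {'interviews': 'ready', ...}
```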

Why founders don't triangulate AI readiness

Most companies rely on one — maybe two — data sources for understanding their AI readiness:

| Common approach | Data source | Blind spot |
|-----------------|-------------|------------|
| Vendor assessments | Vendor's readiness scorecard | Conflict of interest. The assessor profits from you being "almost ready." |
| Founder self-assessment | Personal observation | Dunning-Kruger. The less you know about AI, the more ready you think you are. |
| Team surveys | Internal conversations | Social desirability. "Are we AI-ready?" gets the answer people think you want. |
| Tool usage metrics | Quantitative outputs | Activity ≠ readiness. Using ChatGPT daily ≠ measurable business impact. |
| Board/advisor input | External observation | Distance bias. Advisors see the pitch, not the operational reality. |

Each of these is valuable. None is sufficient alone. And most companies never systematically cross-reference them.

What triangulation looks like for AI readiness

The AI Readiness Diagnostic uses a structured triangulation approach. Three independent data streams, cross-referenced against each other:

Stream 1: Stakeholder interviews

One-on-one conversations with founders, leadership, and key team members about their AI knowledge, current usage, evaluation processes, and strategic commitment. Not surveys — interviews. Open-ended questions designed to surface what people actually think, not what they'd write in a form.

Key questions:

  • "What does AI actually do well for companies at our stage?" (tests knowledge reliability)
  • "Which of our AI tools can you tie to a specific business metric improvement?" (tests application capability)
  • "Who evaluated our last AI tool purchase, and what was their process?" (tests R&D ownership)
  • "By what date must we have measurable AI capability, and who's accountable?" (tests strategic non-negotiability)

Stream 2: Document and data review

The company's own artifacts: AI tool subscriptions and costs, usage analytics, any AI strategy documents, budget allocations, team structure, meeting notes. These tell you what the organisation does with AI, not what it says it does.

The gap between stated AI strategy and actual AI resource allocation is often the most revealing finding. A company that says "AI is a top priority" but has no AI budget line, no dedicated R&D time, and no measurement framework has a strategic non-negotiability gap — regardless of what the founder believes.
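
As a rough illustration of that check, here's a hedged sketch in Python. The field names and zero thresholds are assumptions made up for this example; a real review reads the equivalents out of budget documents, calendars, and subscription records.

```python
def non_negotiability_gaps(company: dict) -> list:
    """Flag mismatches between stated AI priority and actual allocation.

    Every key here is hypothetical, standing in for evidence pulled
    from the company's own artifacts.
    """
    gaps = []
    if not company.get("says_ai_is_priority"):
        return gaps  # no stated priority, so no stated-vs-actual gap
    if company.get("ai_budget_line", 0) == 0:
        gaps.append("no AI budget line")
    if company.get("dedicated_rd_hours_per_week", 0) == 0:
        gaps.append("no dedicated R&D time")
    if not company.get("measurement_framework"):
        gaps.append("no measurement framework")
    return gaps

print(non_negotiability_gaps({
    "says_ai_is_priority": True,
    "ai_budget_line": 0,
    "dedicated_rd_hours_per_week": 0,
    "measurement_framework": None,
}))
# ['no AI budget line', 'no dedicated R&D time', 'no measurement framework']
```

If the list comes back non-empty for a "top priority" company, that's the strategic non-negotiability gap in miniature.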

Stream 3: External pattern matching

Comparison against known failure modes: premature AI adoption patterns, common vendor-dependency traps, the Startup Genome premature scaling indicators applied to AI investment, and MSME transition zone patterns. Does this company's AI readiness profile match a known failure pattern?

This is where accumulated diagnostic experience becomes an asset. Every new diagnostic is compared against a growing body of real cases — what worked, what failed, and why.

Where the magic happens: convergence and divergence

The diagnostic value isn't in any single stream. It's in the relationships between them.

Convergence = all three streams point to the same finding. When the founder says "we've been careful about AI adoption," the data shows only two AI tools (each tied to a specific metric), and the team interviews confirm structured evaluation processes, you have a validated finding: this company has genuine application capability.

Divergence = streams disagree. The founder says "we're AI-ready," but the data shows 6 AI subscriptions with no tracked metrics, and the team interviews reveal that nobody can explain what the AI strategy is or who owns it.

The divergence is where AI readiness diagnosis lives. It surfaces the gap between what the founder believes about readiness and what's actually true about the organisation's four dimensions.
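
In code terms, the cross-reference might look like the sketch below: walk the four dimensions, compare each stream's verdict, and flag divergence. The dimension names come from this article; the data shape and verdict strings are assumptions for illustration.

```python
DIMENSIONS = [
    "knowledge_reliability",
    "application_capability",
    "rd_ownership",
    "strategic_non_negotiability",
]

def cross_reference(streams: dict) -> dict:
    """Report convergence or divergence per dimension across streams.

    `streams` maps a stream name to {dimension: verdict}; this shape
    is hypothetical, not the diagnostic's actual data model.
    """
    report = {}
    for dim in DIMENSIONS:
        verdicts = {name: data.get(dim, "unknown") for name, data in streams.items()}
        if len(set(verdicts.values())) == 1:
            report[dim] = f"convergent: {next(iter(verdicts.values()))}"
        else:
            # A divergent dimension is the first place to investigate.
            report[dim] = f"divergent: {verdicts}"
    return report

report = cross_reference({
    "interviews":       {"application_capability": "strong"},
    "document_review":  {"application_capability": "weak"},
    "pattern_matching": {"application_capability": "weak"},
})
print(report["application_capability"])
# divergent: {'interviews': 'strong', 'document_review': 'weak', 'pattern_matching': 'weak'}
```

Convergent dimensions become validated findings; divergent ones set the agenda for the next round of questions.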

Why this matters more for AI readiness than anything else

AI readiness is the domain where self-assessment is most unreliable. Here's why:

The Dunning-Kruger problem. The less a company understands about AI (knowledge reliability gap), the more likely it is to overestimate its readiness. You can't assess what you don't understand. This is why vendor assessments and self-evaluations consistently overstate readiness.

The activity-readiness confusion. Using AI tools feels like AI readiness. But tool usage without measurement, evaluation, and strategic commitment is just activity. Triangulation distinguishes activity from readiness by cross-referencing what people say they're doing with what the data shows is actually happening.

The vendor conflict. Most AI readiness assessments are designed by companies that sell AI services. They have a structural incentive to find you "close to ready" and recommend their product as the gap-filler. Triangulated diagnostics have no such incentive — the finding is the finding, regardless of what the recommended intervention looks like.

Try it yourself

You don't need a formal diagnostic to start triangulating your AI readiness. Pick one dimension and ask:

  1. What does the founder believe? About knowledge, application, R&D, or urgency — what's the narrative?
  2. What does the data say? Not vibes — actual metrics. AI tool costs, tracked business outcomes, R&D hours spent, deadlines set and met.
  3. What do three different people in the company say? Talk to them separately. Don't lead them.

If all three agree, you probably have your answer for that dimension. If they don't — you've just found the most important gap to investigate.


The Business Health Score is a 3-minute self-diagnostic that surfaces which dimension is your biggest gap. Free, instant results.

Is your business leaking revenue?

Take the free Business Health Score — 3 minutes, 4 dimensions scored.