Framework FAQ

Common questions about the Analytical Readiness Framework.

Does this depend on which LLM I use (ChatGPT, Claude, Gemini, or Copilot)?

No. ARF is model‑agnostic. It focuses on data model semantics and context stability, which affect answers regardless of the LLM.

Is this a data quality issue?

Not primarily. ARF is about semantic clarity and context determinism. Even clean data can produce unreliable AI answers if definitions or filters are ambiguous.

Do I need to rebuild my model?

Usually no. Most improvements are additive: clarify definitions, reduce ambiguity, and document context. Rebuild only when the model structure itself is the root issue.

How do I start small?

Pick one high‑impact KPI and apply the framework: define a canonical measure, document context, add driver measures, and test repeatability.
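The repeatability step can be sketched as a small check that asks the same question several times and verifies the answer never changes. This is a hypothetical illustration: `compute_kpi` stands in for however you evaluate the canonical measure (a DAX query, a semantic-model endpoint, or an AI answer service), and the data is invented.

```python
# Hypothetical stand-in for evaluating one canonical KPI.
def compute_kpi(filters: dict) -> float:
    # Deterministic sample data: revenue by (region, period).
    data = {("EMEA", "2024-Q1"): 1_250_000.0}
    return data[(filters["region"], filters["period"])]

def is_repeatable(filters: dict, runs: int = 5) -> bool:
    """True if the KPI yields an identical answer on every run."""
    answers = {compute_kpi(filters) for _ in range(runs)}
    return len(answers) == 1

print(is_repeatable({"region": "EMEA", "period": "2024-Q1"}))  # True
```

In practice you would point `compute_kpi` at your real query path and run the check as part of a scheduled test.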

What if I have multiple definitions of the same metric?

Choose a canonical definition, document it, and map variants to it. Make variants explicit in naming and descriptions.
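One lightweight way to make the canonical/variant mapping explicit is a small registry. The metric names and definitions below are invented for illustration; the point is that every variant resolves to exactly one canonical metric.

```python
# Hypothetical registry: variants mapped to one canonical definition.
CANONICAL_METRICS = {
    "net_revenue": {
        "definition": "Gross revenue minus returns and discounts",
        "variants": {
            "revenue_finance": "excludes intercompany sales",
            "revenue_sales": "includes pending orders",
        },
    },
}

def resolve_metric(name: str) -> str:
    """Map a variant name back to its canonical metric, if registered."""
    for canonical, spec in CANONICAL_METRICS.items():
        if name == canonical or name in spec["variants"]:
            return canonical
    raise KeyError(f"Unregistered metric: {name}")

print(resolve_metric("revenue_sales"))  # net_revenue
```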

Why do answers change between runs?

Usually due to unstable filter context or ambiguous relationships. The same data can yield different results when context paths are unclear.

How does Power BI filter context relate to AI?

AI answers are computed in the same filter context as measures. If context is ambiguous or unstable, AI answers will vary even if the data is unchanged.
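The effect of ambiguous context can be shown with a toy example (invented data): the same question, "average order value", produces different numbers depending on whether returned orders are in scope. ARF asks you to make that choice explicit in the measure definition rather than leaving it to the question-asker or the model.

```python
# Hypothetical orders; the returns filter is the ambiguous context.
orders = [
    {"amount": 100.0, "returned": False},
    {"amount": 250.0, "returned": False},
    {"amount": 400.0, "returned": True},
]

def avg_order_value(include_returns: bool) -> float:
    rows = orders if include_returns else [o for o in orders if not o["returned"]]
    return sum(o["amount"] for o in rows) / len(rows)

print(avg_order_value(include_returns=True))   # 250.0
print(avg_order_value(include_returns=False))  # 175.0
```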

What is “semantic drift”?

Semantic drift is the gradual change of a metric’s meaning over time, often caused by untracked logic changes or shifting business rules.

What does “AI‑readable” mean?

AI‑readable means the model is unambiguous and well‑documented: clear names, explicit relationships, and rich metadata.
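"Clear names and rich metadata" can be enforced mechanically. Below is a hypothetical lint pass that flags model objects with cryptic names or missing descriptions; the naming rules are illustrative, not part of ARF.

```python
# Hypothetical AI-readability lint for a model object.
def lint_object(name: str, description: str) -> list[str]:
    issues = []
    if not description.strip():
        issues.append("missing description")
    # Illustrative rule: very short or underscore-heavy names read poorly.
    if len(name) <= 3 or "_" in name:
        issues.append("unclear name")
    return issues

print(lint_object("amt", ""))  # ['missing description', 'unclear name']
```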

How do I measure progress?

Track improvements in metadata coverage (share of objects with descriptions), context‑stability test pass rates, and explainability (share of KPIs with documented driver measures). ARF recommends a simple monthly scorecard.
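A monthly scorecard can be as simple as three ratios over the model's measures. The field names and sample data here are hypothetical; substitute whatever inventory your model tooling exports.

```python
# Hypothetical monthly ARF scorecard over a list of measures.
def scorecard(measures: list[dict]) -> dict:
    n = len(measures)
    return {
        "metadata_coverage": sum(m["has_description"] for m in measures) / n,
        "context_stability": sum(m["repeatable"] for m in measures) / n,
        "explainability": sum(m["has_drivers"] for m in measures) / n,
    }

measures = [
    {"has_description": True, "repeatable": True, "has_drivers": False},
    {"has_description": True, "repeatable": False, "has_drivers": True},
]
print(scorecard(measures))
# {'metadata_coverage': 1.0, 'context_stability': 0.5, 'explainability': 0.5}
```

Watching these three numbers month over month gives a concrete, low-effort progress signal.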