Semantic Integrity
Canonical Metrics: One Definition, Many Views
Canonical metrics standardize meaning while allowing flexible reporting views.
TL;DR
- Define each metric once and reuse it everywhere.
- Views can vary, but the definition must not.
The problem (layman)
- Different teams define the same metric differently.
- AI cannot infer which definition is the “right” one.
Why it matters
- Canonical metrics ensure consistency across dashboards and AI answers.
- They reduce time spent reconciling numbers.
Symptoms
- The same KPI shows different values in different reports.
- Stakeholders debate definitions instead of insights.
Root causes
- Metric definitions stored in docs but not embedded in the model.
- Local optimization leads to local definitions.
What good looks like
- One authoritative measure per KPI with clear scope and unit.
- Variant measures explicitly reference the canonical base.
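The “one base, explicit variants” pattern can be sketched in code. This is a minimal illustration, not tied to any particular BI tool; the `Metric` class, metric names, and SQL fragments are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Metric:
    """A metric definition with explicit scope and unit."""
    name: str
    definition: str          # canonical logic, e.g. a SQL expression
    unit: str
    scope: str
    base: Optional["Metric"] = None  # variants must point at a canonical base

# Canonical measure: defined once, with clear scope and unit.
net_revenue = Metric(
    name="net_revenue",
    definition="SUM(order_amount) - SUM(refund_amount)",
    unit="USD",
    scope="completed orders",
)

# Variant: a narrower reporting view that explicitly references the base.
net_revenue_eu = Metric(
    name="net_revenue_eu",
    definition=net_revenue.definition + " FILTERED TO region = 'EU'",
    unit=net_revenue.unit,            # inherited, not redefined
    scope="completed orders, EU only",
    base=net_revenue,
)
```

Because the variant carries a reference to its base, tooling can always answer “which canonical definition does this number come from?”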
How to fix (steps)
- Create a metric catalog with owners and definitions.
- Build canonical measures and update reports to use them.
- Allow variants only when they explicitly reference the base.
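The steps above can be sketched as a small catalog that enforces the rules at registration time. This is a hypothetical sketch, not an existing library; `MetricCatalog` and its method names are invented for illustration.

```python
class MetricCatalog:
    """One owner and one definition per metric; variants are accepted
    only when they explicitly reference an already-registered base."""

    def __init__(self):
        self._metrics = {}

    def register(self, name, owner, definition, base=None):
        if name in self._metrics:
            raise ValueError(f"{name!r} is already defined; reuse it instead of redefining")
        if base is not None and base not in self._metrics:
            raise ValueError(f"variant {name!r} must reference a registered canonical base")
        self._metrics[name] = {"owner": owner, "definition": definition, "base": base}

    def get(self, name):
        return self._metrics[name]

# Canonical measure first, then a variant that names its base.
catalog = MetricCatalog()
catalog.register("net_revenue", owner="finance",
                 definition="SUM(order_amount) - SUM(refund_amount)")
catalog.register("net_revenue_eu", owner="finance",
                 definition="net_revenue restricted to region = 'EU'",
                 base="net_revenue")
```

Rejecting redefinitions and orphan variants at registration time turns the policy into a hard constraint rather than a convention buried in docs.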
Pitfalls
- Allowing “temporary” local measures to linger.
- Hiding canonical definitions in external docs only.
Checklist
- Canonical metric list published and owned.
- All KPIs map to a canonical measure.
- Variants explicitly documented.
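The second checklist item can be partially automated. The sketch below assumes the catalog is a plain dict of metric entries; the function name and KPI names are illustrative.

```python
def unmapped_kpis(kpis, catalog):
    """Return KPIs that do not resolve to a canonical metric.

    A KPI passes if it names a catalog entry that is either
    canonical (no base) or a variant whose base is in the catalog.
    """
    def resolves(name):
        entry = catalog.get(name)
        if entry is None:
            return False
        base = entry.get("base")
        return base is None or base in catalog
    return [k for k in kpis if not resolves(k)]

# Illustrative catalog with one canonical metric, one valid variant,
# and one orphan variant whose base was never registered.
catalog = {
    "net_revenue":    {"base": None},
    "net_revenue_eu": {"base": "net_revenue"},
    "orphan_metric":  {"base": "gross_revenue"},
}
```

Running such a check in CI keeps the “all KPIs map to a canonical measure” item true continuously rather than at audit time.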