ARF Layer

Semantic Integrity

Ensures every metric and term has one clear meaning, so AI and humans arrive at the same numbers.

Layman explanation

Semantic Integrity is about definitions: what a metric means, how it is calculated, and where its boundaries are. When the same business concept has multiple measures, or when naming and units are inconsistent, AI answers diverge even if the data is correct. This layer focuses on reducing ambiguity and making semantics explicit in the model.

Semantic Integrity Flow
A flow from business question to canonical metric, clear definition, consistent calculation, and stable AI answers.
Semantic integrity links business questions to a single, well‑defined metric.

What breaks when this layer is weak

  • AI returns different values for the same metric depending on the question phrasing.
  • Executives see conflicting numbers across reports and natural‑language answers.
  • Automated insights highlight the wrong drivers because the metric definition is unclear.

Symptoms you can observe

  • Multiple measures for “Revenue,” “Bookings,” or “Active Customers.”
  • Measures with unclear units (percent vs basis points).
  • Hidden filters embedded in DAX that change meaning without being visible.
  • Measure names that don’t match the business definition.
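To see why a hidden filter is dangerous, consider a minimal sketch (hypothetical data and measure logic, not any specific model's DAX): two measures both presented as "Revenue" disagree because one silently excludes returns.

```python
# Hypothetical order data: two rows of sales plus one return.
orders = [
    {"amount": 100.0, "is_return": False},
    {"amount": 250.0, "is_return": False},
    {"amount": -40.0, "is_return": True},
]

def revenue_all(rows):
    """'Revenue' as the raw sum of order amounts."""
    return sum(r["amount"] for r in rows)

def revenue_hidden_filter(rows):
    """'Revenue' with an embedded, undocumented filter that drops returns."""
    return sum(r["amount"] for r in rows if not r["is_return"])

print(revenue_all(orders))            # 310.0
print(revenue_hidden_filter(orders))  # 350.0
```

Both functions are plausibly "Revenue" to a human reader, but an AI answering "what was revenue last month?" can pick either one, so the same question yields two different numbers.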

Root causes

  • Metric definitions are stored in documents instead of the model.
  • Teams create local measures to solve immediate needs without reusing canonical ones.
  • Naming conventions are inconsistent across datasets and reports.
  • No owner for key metrics or definition changes.

What good looks like

  • One canonical measure per business metric, with documented meaning and unit.
  • Names reflect business definitions and calculation boundaries.
  • Default aggregations are explicit and correct for each field.
  • Semantic changes are reviewed and communicated.

Remediation checklist

  • Create a canonical metrics list with owners.
  • Deprecate duplicates and map reports to canonical measures.
  • Add descriptions for measures, columns, and tables.
  • Document units, time windows, and exclusions in the model metadata.
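The documentation steps in the checklist can be captured in a small structure; this is a sketch only, with field names and values invented for illustration rather than a standard schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """One entry in a canonical metrics list (illustrative fields)."""
    name: str
    owner: str               # accountable team or person
    description: str         # business meaning, not just the formula
    unit: str                # e.g. "USD", "%", "count"
    time_window: str         # e.g. "calendar month"
    exclusions: tuple        # filters baked into the calculation, made visible
    canonical_measure: str   # the single measure reports should reference

NET_REVENUE = MetricDefinition(
    name="Net Revenue",
    owner="finance-analytics",
    description="Invoiced amount net of returns and discounts.",
    unit="USD",
    time_window="calendar month",
    exclusions=("returns", "internal test accounts"),
    canonical_measure="[Net Revenue]",
)
```

Storing entries like this next to the model (rather than in a separate document) keeps definitions discoverable where measures are actually built.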

Metrics to track

  • % of measures with descriptions
  • # of duplicate metrics per business concept
  • % of measures using a canonical base
  • % of measure names conforming to naming standards
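The first two metrics above are straightforward to compute from exported measure metadata; here is a sketch assuming a hypothetical catalog export where each measure carries a name, a business concept tag, and a description.

```python
from collections import Counter

# Hypothetical measure metadata exported from a model catalog.
measures = [
    {"name": "Net Revenue", "concept": "revenue",
     "description": "Invoiced amount net of returns."},
    {"name": "Revenue v2", "concept": "revenue", "description": ""},
    {"name": "Active Customers", "concept": "active_customers",
     "description": "Distinct buyers in the last 30 days."},
]

def pct_with_descriptions(ms):
    """% of measures with a non-empty description."""
    return 100.0 * sum(1 for m in ms if m["description"].strip()) / len(ms)

def duplicates_per_concept(ms):
    """Extra measures per business concept beyond the canonical one."""
    counts = Counter(m["concept"] for m in ms)
    return {c: n - 1 for c, n in counts.items() if n > 1}

print(round(pct_with_descriptions(measures), 1))  # 66.7
print(duplicates_per_concept(measures))           # {'revenue': 1}
```

Tracking these numbers over time shows whether remediation is actually reducing ambiguity rather than just relabeling it.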

Foundational articles

Why Multiple Measures for the Same Metric Break AI Answers

Multiple measures for the same metric create conflicting answers and undermine trust.

Canonical Metrics: One Definition, Many Views

Canonical metrics standardize meaning while allowing flexible reporting views.

Semantic Drift: How Definitions Quietly Change Over Time

Semantic drift happens when metric meaning changes without clear communication.

Naming Measures So Humans and AI Agree

Consistent naming helps AI select the right measure and reduces ambiguity.

Measure Singularity: Reducing Metric Sprawl Without Losing Flexibility

Measure singularity keeps one true metric while allowing controlled variants.

Units, Currency, and Time: The Hidden Semantics That Cause Bad Answers

Units, currency, and time basis are often implicit, but AI needs them explicit.

Default Aggregation: When SUM Is the Wrong Assumption

Default aggregations can distort results when a sum is not meaningful.

Business Definitions vs Calculation Logic

A metric is not just a formula; it is a business definition with boundaries.

Dimensional Grain: Preventing Apples-to-Oranges Comparisons

Grain defines the level of detail; without it, AI compares incompatible data.

Measure Branching Done Right: Reuse Without Confusion

Branching keeps complex measures readable and consistent when done carefully.

Calculation Groups Without Chaos

Calculation groups can simplify models, but they need clear rules and naming.

A Lightweight Metric Dictionary That Actually Gets Used

A simple metric dictionary helps teams align without heavy governance overhead.