The dbt Metrics Layer Is Dead. Here’s What Actually Replaced It



In 2022, dbt Labs introduced dbt Metrics with a bold promise:

Define business metrics once in code.

Query them everywhere.

Never argue about revenue again.

The pitch was compelling: version-controlled metrics, reusable across BI tools, notebooks, and APIs. A single source of truth for the entire company.

By late 2023, the project was quietly deprecated.

Officially, it was “evolving into the Semantic Layer.”

Unofficially, the original approach didn’t work.

So what went wrong? And what replaced it?

After analyzing implementation attempts across dozens of teams, a clear pattern emerges:

The idea was correct.

The architecture wasn’t ready.

What dbt Metrics Promised

The original problem was real.

Marketing calculated revenue one way.

Finance calculated it differently.

Product had its own definition.

Three dashboards.

Three numbers.

Endless reconciliation meetings.

dbt Metrics proposed a solution:

➜ Define the metric once (in YAML)

➜ Centralize logic in Git

➜ Generate SQL dynamically

➜ Use the same definition across tools
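For context, a definition in the original spec looked roughly like this (a sketch based on the dbt v1.x metrics format; the model, column, and dimension names are illustrative):

```yaml
# models/marts/metrics.yml — illustrative dbt Metrics (v1.x) definition
metrics:
  - name: revenue
    label: Revenue
    model: ref('orders')
    description: "Sum of completed order amounts"
    calculation_method: sum
    expression: amount
    timestamp: order_date
    time_grains: [day, week, month, quarter, year]
    dimensions: [region, plan_tier]
    filters:
      - field: status
        operator: '='
        value: "'completed'"
```

From there, any tool that spoke the metrics API could request `revenue` by `region` at `month` grain, and dbt would generate the SQL on the fly.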

The benefits sounded obvious:

➜ One canonical definition

➜ Version control

➜ Reusability

➜ Cross-tool consistency

➜ Governance through code

It felt like a breakthrough moment for analytics engineering.

But the model had hidden flaws.

Why dbt Metrics Failed

Query-Time Computation Was Too Expensive

dbt Metrics generated SQL and executed it at query time in the warehouse.

For simple aggregates, that worked fine.

For real-world metrics such as churn-adjusted MRR, cohort retention, and lifetime value, it became extremely expensive.

Every dashboard load triggered full scans and complex calculations.

High-cardinality dimensions made things worse.

➜ Warehouse bills spiked

➜ Dashboards slowed down

➜ Users experienced 30–60 second queries

Users expected sub-second results.

Instead, they got expensive analytics queries.

The industry learned something important:

➜ Metrics layers must optimize read performance

➜ Pre-aggregation is fundamental

Dimension Explosion Broke the Model

Metrics allowed arbitrary slicing across many dimensions.

In theory: flexible.

In practice: computational chaos.

When you combine:

➜ Fine-grained time grains

➜ Multiple categorical dimensions

➜ High-cardinality attributes

You create millions, or even billions, of grouping combinations.

For example, roughly 1,100 days of daily grain × 50 countries × 20 plan tiers × 10,000 customers is over 10 billion potential result cells.

Query-time computation does not scale under that load.

Semantic layers that survived solved this with:

➜ Pre-aggregated rollups

➜ Intelligent query routing

➜ Caching layers

dbt Metrics had none of these.

“Integration” Wasn’t Real Integration

The promise was simple: define once, use anywhere.

But the reality was different.

BI tools could technically query metrics, but the experience was poor.

➜ Exploration was limited

➜ Drill-downs were awkward

➜ Caching was inconsistent

➜ Custom filtering felt unnatural

BI platforms are not just SQL wrappers.

They are interactive exploration systems.

Looker, for example, provides:

➜ Dynamic dimension joins

➜ Aggregate awareness

➜ Drill paths

➜ Validation

➜ Scheduling

➜ Access controls

dbt Metrics generated SQL.

That is not the same thing as a full semantic layer.

YAML Abstraction Added Friction

Simple metrics were elegant.

Complex metrics became difficult to maintain.

Nested derived expressions, cross-metric dependencies, and multi-layer definitions, all written in YAML, created maintenance headaches.

For sophisticated calculations, writing SQL directly was often clearer.
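To make that concrete, here is roughly what a derived metric looked like in the old format (a sketch; the metric names are hypothetical):

```yaml
# Illustrative derived metrics in the dbt Metrics (v1.x) format
metrics:
  - name: net_revenue
    label: Net Revenue
    calculation_method: derived
    expression: "{{ metric('revenue') }} - {{ metric('refunds') }}"
    time_grains: [day, week, month]

  - name: net_revenue_per_customer
    label: Net Revenue per Customer
    calculation_method: derived
    expression: "{{ metric('net_revenue') }} / {{ metric('active_customers') }}"
    time_grains: [month]
```

The equivalent SQL is a one-line expression in a model; the YAML indirection added a layer without removing any of the underlying complexity.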

The abstraction layer didn’t deliver enough value compared to the complexity it introduced.

And without a strong multiplier effect, abstractions fail.

Governance Is Not Just Git

The assumption was simple:

Put metrics in version control and governance will follow.

But real governance requires:

➜ Clear ownership

➜ Approval workflows

➜ Impact analysis

➜ Deprecation processes

➜ Stakeholder alignment

Git tracks changes.

It does not align departments.

Many teams ended up with:

➜ revenue

➜ revenue_v2

➜ gross_revenue

➜ adjusted_revenue

All technically version-controlled.

None organizationally resolved.

Governance turned out to be more social than technical.

It Wasn’t a Full Semantic Layer

A real semantic layer includes:

➜ Metric definitions

➜ Data modeling

➜ Access control

➜ Query optimization

➜ Caching

➜ APIs

➜ Exploration support

dbt Metrics handled metric definitions.

Everything else relied on the warehouse.

Users expected a full semantic solution.

They received a metric definition format.

That gap proved fatal.

What Replaced dbt Metrics?

The failure didn’t kill the vision.

It clarified what was required.

dbt Semantic Layer (MetricFlow)

dbt rebuilt the concept with a new architecture.

➜ Server-side query engine

➜ Better caching

➜ Explicit entity modeling

➜ Improved integrations

➜ Commercial support model

It is stronger than dbt Metrics, but still competing in a crowded market.
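For comparison, a MetricFlow definition makes the semantic graph explicit (a hedged sketch following the dbt Semantic Layer spec; the table and column names are illustrative):

```yaml
# Illustrative dbt Semantic Layer (MetricFlow) definition — note the
# explicit entity modeling that the old spec lacked
semantic_models:
  - name: orders
    model: ref('orders')
    defaults:
      agg_time_dimension: order_date
    entities:
      - name: order_id
        type: primary
      - name: customer_id
        type: foreign
    dimensions:
      - name: order_date
        type: time
        type_params:
          time_granularity: day
      - name: region
        type: categorical
    measures:
      - name: revenue
        agg: sum
        expr: amount

metrics:
  - name: revenue
    label: Revenue
    type: simple
    type_params:
      measure: revenue
```

Declared entities let the engine reason about joins and grain instead of blindly templating GROUP BYs.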

Cube (Open-Source Alternative)

Cube focused on performance-first architecture.

➜ Built-in pre-aggregation

➜ Multi-level caching

➜ REST and GraphQL APIs

➜ Multi-tenant support

This made it attractive for mid-market teams.

It operates as a headless semantic layer.
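A rough sketch of the idea in Cube's YAML data model (member names are illustrative, and the shape is an approximation of Cube's pre-aggregation syntax rather than a copy of it):

```yaml
# Illustrative Cube data model with a built-in pre-aggregation
cubes:
  - name: orders
    sql_table: public.orders

    measures:
      - name: revenue
        sql: amount
        type: sum

    dimensions:
      - name: region
        sql: region
        type: string
      - name: order_date
        sql: order_date
        type: time

    pre_aggregations:
      - name: revenue_daily
        measures:
          - CUBE.revenue
        dimensions:
          - CUBE.region
        time_dimension: CUBE.order_date
        granularity: day
```

Queries that match the rollup are served from Cube's cache store instead of re-scanning the warehouse on every dashboard load.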

Looker (The Enterprise Benchmark)

Looker never needed dbt Metrics.

Its semantic layer already included:

➜ Exploration-first modeling

➜ Performance optimization

➜ Governance

➜ Deep BI integration

It is expensive and complex.

But it works.

And it demonstrated that a full-stack semantic architecture beats a metrics-only abstraction.

Transform (dbt-Native Semantic Layer)

Transform created a semantic layer specifically for dbt teams; its MetricFlow engine was later acquired by dbt Labs and now powers the dbt Semantic Layer.

➜ Direct integration with dbt models

➜ Query routing and caching

➜ BI tool integrations

It appealed to teams that wanted more structure without adopting a full enterprise BI platform.

The Pragmatic Path: Materialized Metrics in dbt

Many teams chose a simpler path.

Instead of building semantic layers, they:

➜ Created pre-aggregated metric tables in dbt

➜ Materialized them in the warehouse

➜ Queried them directly from BI tools

Benefits include:

➜ Fast queries

➜ Lower warehouse cost

➜ Simpler architecture

➜ Easier testing

It lacks dynamic slicing and APIs, but for many teams it is sufficient.
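A minimal sketch of the pattern as a dbt model (the table, column, and model names are assumptions):

```sql
-- models/marts/fct_revenue_daily.sql
-- Illustrative pre-aggregated metric table, materialized in the warehouse
{{ config(materialized='table') }}

select
    date_trunc('day', order_date) as metric_date,
    region,
    plan_tier,
    sum(amount)                   as revenue,
    count(distinct customer_id)   as active_customers
from {{ ref('stg_orders') }}
where status = 'completed'
group by 1, 2, 3
```

BI tools then query `fct_revenue_daily` directly; slicing is limited to the dimensions baked into the GROUP BY, which is exactly the trade-off described above.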

Sometimes boring wins.

What the Industry Learned

Lesson 1 - Abstraction Must Justify Its Cost

If a new layer adds complexity without dramatic benefit, teams abandon it.

dbt Metrics didn’t provide enough value relative to its complexity.

Lesson 2 - Performance Beats Elegance

Analysts prioritize:

➜ Speed

➜ Accuracy

➜ Ease of use

Code elegance ranks much lower.

Semantic platforms must optimize for runtime performance first.

Lesson 3 - Pre-Aggregation Is Mandatory

Query-time aggregation collapses under scale.

Successful semantic systems rely on:

➜ Rollups

➜ Caching

➜ Aggregate awareness

Performance architecture determines success.

Lesson 4 - Governance Is Organizational

Technical controls cannot solve alignment problems.

Effective metric governance requires:

➜ Clear ownership

➜ Approval workflows

➜ Impact awareness

Without these, version control alone does not prevent chaos.

Lesson 5 - Integration Must Feel Native

“Compatible with Tool X” is not enough.

Integration must:

➜ Support exploration

➜ Respect tool workflows

➜ Enable caching

➜ Maintain performance

Otherwise, users return to native modeling inside BI tools.

The Current Landscape

Small teams typically use:

➜ Materialized dbt models

Mid-market teams often adopt:

➜ Cube

➜ dbt Semantic Layer

Enterprises usually rely on:

➜ Looker

➜ AtScale

API-first or multi-tool organizations often prefer:

➜ Cube

➜ Headless BI platforms

There is no universal winner.

Architecture must match organizational scale.

The Honest Verdict

dbt Metrics failed because:

➜ It relied on query-time computation

➜ It lacked pre-aggregation

➜ Integrations were shallow

➜ Governance was incomplete

➜ Abstraction outweighed benefit

What replaced it:

➜ Pre-computed dbt models

➜ Cube

➜ dbt Semantic Layer

➜ Looker

➜ Headless BI platforms

The experiment wasn’t wasted.

The industry needed it to fail.

Because its failure clarified what real semantic infrastructure requires:

➜ Performance-first design

➜ Deep integration

➜ Organizational governance

➜ Explicit modeling

The graveyard of dbt Metrics didn’t kill the idea of shared metrics.

It matured it.

And the systems we are building today are stronger because of it.


