The Universal
Semantic Layer
Define metrics once in YAML, SQL, or Python. Query them consistently across any data warehouse.
See It in Action
Compatible with metric definitions in 15+ semantic model formats.
Metrics shouldn't be locked inside one tool.
Sidemantic reads metric definitions from Cube, MetricFlow, LookML, and 12 more formats. Query any warehouse: DuckDB, Snowflake, BigQuery, Postgres, ClickHouse. Serve results over MCP or a REST API.
Your team keeps their tools. Your metrics stay consistent everywhere.
models:
  - name: orders
    table: public.orders
    dimensions:
      - name: status
        type: categorical
      - name: order_date
        type: time
        sql: created_at
    metrics:
      - name: revenue
        agg: sum
        sql: amount
      - name: order_count
        agg: count
MODEL (
  name orders,
  table public.orders
);

DIMENSION (name status, type categorical);
DIMENSION (name order_date, type time, sql created_at);

METRIC (name revenue, agg sum, sql amount);
METRIC (name order_count, agg count);
from sidemantic import Model, Dimension, Metric

orders = Model(
    name="orders",
    table="public.orders",
    dimensions=[
        Dimension(name="status", type="categorical"),
        Dimension(name="order_date", type="time", sql="created_at"),
    ],
    metrics=[
        Metric(name="revenue", agg="sum", sql="amount"),
        Metric(name="order_count", agg="count"),
    ],
)
cubes:
  - name: orders
    sql_table: public.orders
    dimensions:
      - name: status
        sql: status
        type: string
      - name: created_at
        sql: created_at
        type: time
    measures:
      - name: revenue
        sql: amount
        type: sum
      - name: count
        type: count
semantic_models:
  - name: orders
    model: ref('orders')
    entities:
      - name: order
        type: primary
        expr: order_id
    dimensions:
      - name: status
        type: categorical
      - name: order_date
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum
        expr: order_amount
      - name: order_count
        agg: count
view: orders {
  sql_table_name: public.orders ;;

  dimension: status {
    type: string
    sql: ${TABLE}.status ;;
  }

  dimension_group: created {
    type: time
    timeframes: [date, week, month]
    sql: ${TABLE}.created_at ;;
  }

  measure: revenue {
    type: sum
    sql: ${TABLE}.amount ;;
  }

  measure: count {
    type: count
  }
}
source: orders is duckdb.table('orders') extend {
  primary_key: order_id

  dimension:
    status is status
    order_date is created_at

  measure:
    revenue is sum(amount)
    order_count is count()
}
id: orders
base_sql_table: public.orders
dimensions:
  - id: status
    type: string
  - id: created_at
    type: timestamp_tz
measures:
  - id: revenue
    func: sum
    of: amount
  - id: order_count
    func: count
type: metrics_view
model: orders
timeseries: created_at
smallest_time_grain: day
dimensions:
  - name: status
    column: status
measures:
  - name: total_revenue
    expression: SUM(amount)
  - name: total_orders
    expression: COUNT(*)
table_name: orders
main_dttm_col: created_at
schema: public
columns:
  - column_name: status
    type: VARCHAR
    groupby: true
  - column_name: created_at
    is_dttm: true
metrics:
  - metric_name: total_revenue
    expression: SUM(amount)
  - metric_name: order_count
    expression: COUNT(*)
name: orders
table_name: orders
schema: public
dimensions:
  status:
    type: string
    sql: ${TABLE}.status
  created_at:
    type: timestamp
    timeframes: [date, week, month]
measures:
  total_revenue:
    aggregate_type: sum
    sql: ${orders.amount}
  count:
    aggregate_type: count
tables:
  - name: orders
    base_table:
      database: analytics
      schema: public
      table: orders
    dimensions:
      - name: status
        expr: status
        data_type: VARCHAR
    time_dimensions:
      - name: order_date
        expr: created_at
        data_type: TIMESTAMP
    facts:
      - name: revenue
        expr: amount
        default_aggregation: sum
      - name: order_count
        expr: 1
        default_aggregation: sum
semantic_model:
  - name: orders_analytics
    datasets:
      - name: orders
        source: public.orders
        fields:
          - name: status
            expression:
              dialects:
                - dialect: ANSI_SQL
                  expression: status
    metrics:
      - name: revenue
        expression:
          dialects:
            - dialect: ANSI_SQL
              expression: SUM(orders.amount)
orders:
  table: public.orders
  dimensions:
    status:
      expr: _.status
    created_at:
      expr: _.created_at
      is_time_dimension: true
  measures:
    revenue:
      expr: _.amount.sum()
    count:
      expr: _.count()
Built for correctness
Everything you need to centralize metric definitions and serve consistent answers everywhere.
Import from Anywhere
Cube, dbt, LookML, Malloy, Hex, Rill, Superset, Omni, Snowflake Cortex, AtScale, GoodData, Holistics, ThoughtSpot, BSL, and OSI. No vendor lock-in.
YAML, SQL, or Python
Author your semantic layer in the format your team prefers. All three produce identical query behavior.
Fan-out Correctness
Symmetric aggregates prevent double-counting across one-to-many joins. Correct results by default, not by accident.
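To see why symmetric aggregates matter, here is a minimal, self-contained Python sketch of the fan-out problem they solve. The data and names are hypothetical; the point is that a one-to-many join duplicates parent rows, so a plain SUM over the joined result double-counts, while deduplicating by the parent's primary key before aggregating (the effect a symmetric aggregate achieves in SQL) gives the correct total.

```python
# Hypothetical data: order 1 fans out to two shipment rows after a join.
orders = [
    {"order_id": 1, "amount": 100},
    {"order_id": 2, "amount": 50},
]
shipments = [
    {"order_id": 1, "shipment_id": "a"},
    {"order_id": 1, "shipment_id": "b"},
    {"order_id": 2, "shipment_id": "c"},
]

# Naive join: order 1 appears twice, once per shipment.
joined = [
    {**o, **s}
    for o in orders
    for s in shipments
    if s["order_id"] == o["order_id"]
]
naive_sum = sum(row["amount"] for row in joined)  # 250 -- double-counted

# Deduplicate by the order's primary key before summing,
# which is what a symmetric aggregate does in the generated SQL.
by_key = {row["order_id"]: row["amount"] for row in joined}
symmetric_sum = sum(by_key.values())  # 150 -- correct
```

In real warehouses this deduplication is pushed into the generated SQL rather than done in application code, but the invariant is the same: each parent row contributes its measure exactly once, no matter how many child rows it joins to.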
Automatic Join Inference
Cross-model queries infer join paths from declared relationships. No JOIN keyword needed in semantic SQL.
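Join inference of this kind can be thought of as a shortest-path search over the declared relationship graph. The sketch below is illustrative, not Sidemantic's actual implementation: the `relationships` graph and model names are made up, and a simple breadth-first search finds the chain of joins connecting two models.

```python
from collections import deque

# Hypothetical relationship graph: each edge is a declared
# relationship between two models (e.g. a foreign key).
relationships = {
    "orders": ["customers", "order_items"],
    "customers": ["regions"],
    "order_items": [],
    "regions": [],
}

def join_path(start, target):
    """Breadth-first search for the shortest join path between models."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in relationships.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no declared path between the models

print(join_path("orders", "regions"))  # ['orders', 'customers', 'regions']
```

Because the path is derived from declared relationships, a query touching `orders` and `regions` can be compiled into the right sequence of joins without the user ever writing a JOIN clause.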
Any Warehouse
Native adapters for Snowflake, BigQuery, Postgres, ClickHouse, Databricks, Spark. Plus MySQL, Trino, Redshift, SQL Server, and more via ADBC.
AI-Ready via MCP
Connect LLMs to your metrics through the Model Context Protocol. Built-in guardrails for safe, governed AI access.
Query anywhere
Semantic SQL, Python API, CLI, MCP server, or Postgres wire protocol. Same metrics, same answers.
SELECT customers.region, orders.revenue, orders.order_count
FROM orders
from sidemantic import SemanticLayer

layer = SemanticLayer()
result = layer.sql("SELECT orders.revenue FROM orders")
result.fetchall()
# Connect with any Postgres client
$ psql -h localhost -p 5444
=# SELECT orders.revenue FROM orders;
$ claude mcp add sidemantic \
-- uvx sidemantic mcp
Use in notebooks, BI tools, Slack bots, or any tool that speaks Postgres. One definition, consistent results everywhere.
Ready to try Sidemantic?
Install in seconds. Try the interactive workbench with built-in demo data, no database required.