Not All Semantic Layers Are Built for AI


The semantic layer has moved from niche infrastructure to a boardroom priority. Snowflake announced one. Databricks announced one. Google repositioned Looker as one. Microsoft repositioned Power BI as one.

This is genuinely good news. As ISG analyst Matt Aslett observed in a recent research note, semantic modeling has become a “critical element of several key trends that are shaping the future of enterprise computing.” Gartner echoed this signal in its 2026 top trends for data and analytics, identifying semantic layers as a key strategic focus for data leaders this year.

But when every platform claims semantic capability, the question becomes: which approach actually holds up as infrastructure?

The Map AI Actually Needs

A useful way to think about a semantic layer is the difference between GPS coordinates and Google Maps for a self-driving car. Raw data (tables, columns, schemas) is like a set of coordinates. It tells you where things are, but not how to navigate between them.

A semantic layer is the map. It encodes roads, relationships, constraints, and rules about how movement actually works. For a human, raw coordinates are inconvenient and not very useful. For an autonomous system, they’re unusable. The same is true for AI operating on enterprise data.

Not all semantic layers are built the same way. The architectural choices that felt like edge cases in the BI era are mission-critical in the AI era.

At AtScale, we’ve organized our approach around three core principles: Open, Composable, and Multimodal. Together, they define what a semantic layer must be to function as infrastructure in enterprise deployments.

Open: Context Only Works If It’s Universal

A map that only works with one type of vehicle isn’t infrastructure. Open semantic layers expose their structure through standard interfaces so different systems can all navigate the same model.

Open means supporting the protocols that the enterprise already uses: SQL, MDX, DAX, Python, REST, and now MCP (Model Context Protocol). Each protocol serves a different class of consumers: MDX and DAX for Analysis Services and Excel environments, REST for custom applications, and MCP for AI agents operating in tools like Claude, ChatGPT, or enterprise-built systems.
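To make the multi-protocol point concrete, here is a minimal sketch in Python of how one governed metric might be addressed from a REST client and from an MCP-style tool call. The endpoint shape, tool name, and field names are purely illustrative assumptions, not AtScale's actual API.

```python
import json

# One governed metric, defined once in the semantic layer.
METRIC = "total_revenue"

# Hypothetical REST request body a custom application might send.
rest_request = {
    "metrics": [METRIC],
    "dimensions": ["order_date.month"],
    "filters": [{"dimension": "region", "operator": "=", "value": "EMEA"}],
}

# Hypothetical MCP-style tool call an AI agent might emit
# for the same business question.
mcp_tool_call = {
    "name": "query_semantic_model",  # illustrative tool name
    "arguments": {
        "metrics": [METRIC],
        "group_by": ["order_date.month"],
        "where": {"region": "EMEA"},
    },
}

# Different protocols, same governed definition: both consumers
# reference the metric by name instead of re-deriving it in SQL.
assert rest_request["metrics"] == mcp_tool_call["arguments"]["metrics"]
print(json.dumps(mcp_tool_call, indent=2))
```

The point of the sketch is the shared reference: whichever interface a consumer speaks, the definition of `total_revenue` lives in exactly one place.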

Openness is about ensuring that governed business logic isn’t trapped inside a proprietary environment. As Aslett noted in his ISG research, the proliferation of semantic modeling options “poses a dilemma for many enterprises,” increasing the risk of multiple groups creating multiple models that undermine the goal of agreed definitions. Open standards are the mechanism for preventing that fragmentation at the ecosystem level.

It is also important not to repeat the sins of the past by coupling your semantic layer to a particular application or data platform. A semantic layer needs to live independently of both. Couple it to a data platform and you freeze out every other data source and lock yourself into that platform; couple it to a specific analytics or AI tool and you limit its usefulness and lock yourself into that tool.

Composable: Duplication Is the Root Cause of Inconsistency

Modern mapping systems are built from reusable layers, such as roads, traffic patterns, and points of interest, that can be combined in different ways. Composable semantic models apply the same idea to business logic.

When metrics, dimensions, and business logic are modular, teams can build on each other’s work rather than duplicate it. A “customer” dimension, once defined, becomes the foundation for the sales, marketing, and operations semantic models.

This is also what makes governance work at scale. When definitions change, they change in one place. Every downstream model or semantic object inherits the update automatically. In an environment where agents may be querying across hundreds of metrics and dozens of domains, that inherited consistency is not optional.

AtScale’s Semantic Modeling Language (SML) was designed around this principle. As a YAML-based, object-oriented specification, SML enables semantic models to be versioned, validated, and deployed through CI/CD pipelines as code.
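The composability idea can be sketched in plain Python (standing in for what SML expresses in YAML; the object and field names below are illustrative, not actual SML syntax): a dimension is defined once in a shared library and referenced by name, so a change in the library propagates to every model that composes it.

```python
# Shared object library: each semantic object is defined exactly once.
# Field names are illustrative, not actual SML syntax.
library = {
    "dimensions": {
        "customer": {
            "key": "customer_id",
            "attributes": ["name", "segment", "country"],
        }
    }
}

# Downstream models compose shared objects by reference, not by copy.
sales_model = {"facts": ["orders"], "dimensions": ["customer"]}
marketing_model = {"facts": ["campaign_touches"], "dimensions": ["customer"]}

def resolve(model, library):
    """Expand a model's dimension references into their governed definitions."""
    return {name: library["dimensions"][name] for name in model["dimensions"]}

# A governance change lands in one place...
library["dimensions"]["customer"]["attributes"].append("lifecycle_stage")

# ...and every model that references the dimension inherits it automatically.
assert "lifecycle_stage" in resolve(sales_model, library)["customer"]["attributes"]
assert "lifecycle_stage" in resolve(marketing_model, library)["customer"]["attributes"]
```

Because the models hold references rather than copies, there is no second definition of "customer" to drift out of sync, which is the inheritance property the paragraph above describes.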

Multimodal: Structure That AI and Humans Both Understand

A map works because it presents information in a form both humans and machines can understand. Multimodal semantic layers do the same for enterprise data, supporting both dimensional and tabular modeling so that every consumer, human or machine, can work in the structure that fits their workflow.

Dimensional modeling organizes data into facts and dimensions like time, product, region, and customer. These structures have been the foundation of enterprise analytics for decades because they map naturally to how business users think. They also happen to be exactly what AI systems need. 

LLMs struggle with complex schemas because raw schemas don’t communicate analytical intent. A schema with thousands of tables can’t tell the LLM which are facts, which are dimensions, how hierarchies work, or which aggregation rules apply in which context. Dimensional structures make that explicit.
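As an illustration of what "making intent explicit" means, compare what an agent sees with and without semantic annotation. The structure below is a generic sketch under assumed field names, not any vendor's format.

```python
# Raw schema: table names only, no analytical intent.
raw_schema = ["fct_orders", "dim_customer", "dim_date", "dim_product"]

# Semantic context: the same tables, annotated with the roles,
# hierarchies, and aggregation rules an LLM cannot infer on its own.
semantic_context = {
    "facts": {
        "orders": {
            "table": "fct_orders",
            "measures": {
                "revenue": {"column": "amount", "aggregation": "sum"},
                "order_count": {"column": "order_id", "aggregation": "count"},
            },
        }
    },
    "dimensions": {
        "date": {"table": "dim_date", "hierarchy": ["year", "quarter", "month", "day"]},
        "customer": {"table": "dim_customer", "hierarchy": ["segment", "name"]},
    },
}

def to_prompt(ctx):
    """Render the semantic context as deterministic text for an agent's prompt."""
    lines = []
    for spec in ctx["facts"].values():
        for name, rule in spec["measures"].items():
            lines.append(f"measure {name} = {rule['aggregation']}({spec['table']}.{rule['column']})")
    for dim, spec in ctx["dimensions"].items():
        lines.append(f"dimension {dim}: {' > '.join(spec['hierarchy'])}")
    return "\n".join(lines)

print(to_prompt(semantic_context))
```

The raw list of table names leaves the agent guessing; the rendered context states which columns aggregate, how, and in which hierarchy order, which is exactly the intent a bare schema cannot communicate.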

Tabular modeling extends this principle to broader analytical patterns: flat, flexible structures that support high-performance querying across modern data platforms without requiring a strict star schema. Together, dimensional and tabular modeling cover the full range of how enterprise data is actually organized and consumed.

The result is a semantic layer that supports familiar exploration patterns for business users, while simultaneously providing AI agents with the structured, deterministic context they need to return consistent answers.

The Standard Emerging Underneath

The core challenge of enterprise AI is giving it a map it can trust.

These three principles, Open, Composable, and Multimodal, are the foundation of that map. They are the architectural requirements that determine whether a semantic layer can operate as infrastructure. The industry is converging on that understanding. 

What enterprises should be evaluating now is whether the approach they choose can hold up under the demands of agentic AI: accessible to every application without being tied to a single data source, scalable through modular design, and structured in a way that works for machines and humans.

That’s the standard enterprise AI will be built on.

Join Us at the Semantic Layer Summit

These conversations are exactly what the Semantic Layer Summit is designed for. If you’re evaluating how to build a semantic infrastructure that holds up under the demands of agentic AI, I’d encourage you to register. We’ll dive into open standards, composable modeling, and what it actually takes to move from pilot to production.

>> Register here.
