How can enterprises prepare for a world where AI agents, not just humans, are analyzing data and making recommendations? Every data and analytics leader is grappling with this question, and the answer lies in a universal semantic layer.
In my recent conversation with Jens Kröhnert, Principal Solution Architect at Oraylis, we explored how semantic layers and agentic AI are converging and why enterprises need a stronger semantic foundation before they can safely deploy autonomous systems.
Jens and I both worked with the early Microsoft BI stack, struggled to scale multidimensional models, and learned firsthand that business users want speed, consistency, and self-service. A semantic layer made that possible then, and the same principle holds today.
The difference is that now machines, not just people, rely on that semantic foundation. The stakes are significantly higher.
Machines Need Semantics Even More Than Humans Do
Semantic layers were initially built to enable users to interact with data without writing SQL or understanding complex schemas. But as Jens pointed out, the need is even more acute for AI systems.
Humans bring intuition and institutional knowledge. LLMs don’t understand business logic, dimensional hierarchies, custom metrics, or data quality rules. When they’re asked to navigate raw tables, they guess, and those guesses are often inconsistent and wrong. For enterprise use cases, that’s not only a productivity issue. It’s a governance and risk problem.
AI systems need a universal semantic layer to eliminate ambiguity and deliver deterministic answers.
The Role of the Semantic Layer in the AI Stack
A universal semantic layer provides four essential capabilities:
- A “single source of truth” for business metric definitions
- A mapping from physical data to business concepts
- Consistent, accurate query generation
- A governance control plane for permissions, masking, and lineage
These capabilities ensure that every consumer uses the same logic across BI tools, applications, and AI agents. The semantic layer defines business meaning once and applies it everywhere.
Once essential only for dashboards and analysts, the semantic layer is now also the foundation for AI copilots, autonomous workflows, and agentic systems.
Inside the AI Frontier Framework
To explain how AI will reshape enterprise operations, Jens shared the AI Frontier framework, developed by Microsoft Research. It consists of three phases:
- Plan – Humans define goals, context, policies, and ontology
- Develop – AI generates pipelines, code, and data structures
- Operate – AI executes, monitors, and optimizes systems automatically

Jens’s team already uses automation to generate multiple layers of a modern data platform. The final gap is the generation of the semantic layer itself. This is where open standards like AtScale’s Semantic Modeling Language (SML) become essential. SML provides AI with a structured way to propose, refine, and ultimately help build semantic models programmatically.
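To make that concrete, here is a simplified sketch of how a metric and a dimension might be declared once in a YAML-based semantic model. The field names are illustrative only and do not reproduce SML’s exact schema; a real model would follow AtScale’s published SML specification.

```yaml
# Hypothetical semantic model sketch. Field names are illustrative,
# not exact SML syntax: the point is that business meaning is declared
# once and reused by every BI tool, application, and AI agent.
unique_name: internet_sales
object_type: model

datasets:
  - fact_internet_sales          # physical table mapped to business concepts

dimensions:
  - unique_name: order_date
    type: time
    levels: [year, quarter, month, day]

metrics:
  - unique_name: total_sales
    description: Gross sales amount before returns and discounts
    dataset: fact_internet_sales
    column: sales_amount
    calculation: sum
```

Because the model is plain, structured text, an AI system can draft it, a human can review it, and governance tooling can version and validate it like any other code artifact.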
In the future, AI will not only query semantic models but also help create and maintain them.
What Happens When an LLM Has a Governed Context
During our conversation, I demonstrated how Claude uses the Model Context Protocol (MCP) to interact with AtScale’s semantic layer. MCP gives the LLM a standardized way to:
- Discover semantic models
- Understand their dimensions and metrics
- Execute governed queries
- Synthesize insights using consistent business logic
The difference is profound. Instead of reconstructing joins or inferring definitions, Claude relies entirely on AtScale’s governed semantics, which sharply reduces hallucinations and enables far richer analysis, as the server sketch below illustrates.
For example, a simple prompt such as “Tell me something about sales” led Claude to run a series of guided queries, evaluate the results, and return synthesized insights.
This goes beyond natural language querying. It’s autonomous analytical reasoning grounded in trusted business definitions.
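To show the mechanics, here is a minimal sketch of how such a semantic-layer MCP server might be built with the MCP Python SDK. The FastMCP class and its tool decorator come from the SDK itself; the tool names, arguments, and hard-coded results are hypothetical placeholders standing in for calls to a semantic layer’s catalog and query APIs, not AtScale’s actual interface.

```python
# Minimal sketch of an MCP server exposing semantic-layer tools.
# FastMCP and the tool decorator are part of the MCP Python SDK;
# the tool bodies below are hypothetical placeholders for a real
# semantic layer's catalog and query APIs.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("semantic-layer")


@mcp.tool()
def list_models() -> list[str]:
    """Discover the governed semantic models available to the agent."""
    # Placeholder: a real server would read this from the semantic layer's catalog.
    return ["internet_sales", "customer_360"]


@mcp.tool()
def run_query(model: str, metrics: list[str], group_by: list[str]) -> str:
    """Run a governed query using metric definitions from the semantic model."""
    # Placeholder: a real server would translate this request into the
    # semantic layer's query API, which enforces permissions, masking,
    # and consistent metric logic before returning results.
    return f"{', '.join(metrics)} from {model} grouped by {', '.join(group_by)}"


if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client such as Claude can connect
```

Any MCP-capable client, Claude included, can then discover and call these tools without bespoke glue code, and governance stays in the semantic layer rather than in the prompt.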
Toward Autonomous Operation
Once AI can understand your business context and act within defined guardrails, it can begin to take on controlled operational tasks, such as running experiments, optimizing spending, or suggesting targeted campaigns.
Over time, AI agents will communicate with each other more than with humans. Enterprises that don’t expose their business meaning through a semantic layer risk becoming invisible in an agent-driven digital ecosystem.
This is why semantic governance is becoming a strategic priority, not just a technical one.
How to Prepare Now
To operationalize AI agents across the enterprise, organizations need to start with their semantic foundation. Focus on:
- Establishing trusted metric definitions
- Clarifying business entities and relationships
- Applying consistent governance and access rules
- Adopting open semantic standards like SML
- Enabling AI access through MCP instead of custom integrations (see the client sketch below)
These steps ensure that when AI enters a workflow, it operates safely, predictably, and in alignment with business intent.
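On that last point, the sketch below shows what MCP-based access looks like from the agent side, using the MCP Python SDK’s stdio client. The server script name, tool name, and arguments are hypothetical and assume a server like the one sketched earlier.

```python
# Minimal sketch of an agent connecting to a semantic-layer MCP server
# instead of a custom integration. ClientSession and stdio_client are
# part of the MCP Python SDK; the server script and tool call are hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the (hypothetical) semantic-layer MCP server as a subprocess.
    server = StdioServerParameters(command="python", args=["semantic_layer_server.py"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover governed tools instead of hard-coding SQL against raw tables.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Every query runs through the same governed metric definitions.
            result = await session.call_tool(
                "run_query",
                {"model": "internet_sales", "metrics": ["total_sales"], "group_by": ["region"]},
            )
            print(result.content)


asyncio.run(main())
```

Swapping the agent, the LLM, or the backing semantic layer does not change this contract, which is the practical payoff of standardizing on MCP rather than building one-off integrations.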
Any business preparing for an agent-driven future needs to embrace a semantic-first AI architecture, where meaning, context, and governance are prerequisites for automation.

Ready to get started? Learn more about semantic layers and see AtScale in action.