I’m hosting a session on May 20. Here’s what we’ll dig into.
Agentic AI looks impressive in a demo, until you ask it a question your CFO actually cares about. If an AI agent can’t reliably answer “What was net revenue by region last quarter?” using governed logic, it isn’t an asset. It’s a liability.
Most failures here aren’t model failures. They’re context failures. Enterprises wire strong models into weak data stacks, then act surprised when agents hallucinate numbers, double-count revenue, or ignore row-level security. The model did exactly what it was asked to do. The stack never gave it a trustworthy way to compute the answer.
On May 20, at the virtual Semantic Layer Summit, I’m bringing together people wiring AI into production data stacks to talk about this gap and how to close it. You’ll hear from Ted Kwartler (Accenture), André Balleyguier (Anthropic), Rajiv Shah (OpenHands), Ikechi Okoronkwo (WPP), Justin Lo (Chevron), and Joshua Patterson (NVIDIA) on what really breaks when “agentic AI” meets real enterprise data. They’re the people who get the call when an agent gives a CFO the wrong number.

The session is about one specific layer. Not another orchestration framework, vector store, or knowledge graph. A semantic layer and a computation engine that returns the same answer every time, instead of a confident guess.
Here’s what that looks like in practice. “Net revenue by region last quarter” sounds like one question. Inside a real enterprise it spans multiple systems: sales orders in SAP (vbap_netwr), returns (re_reg_bap), and the general ledger with intercompany eliminations in Snowflake (gl_amt_net_lc). “Net revenue” isn’t any one of those columns. It’s gross orders, minus returns and discounts, minus intercompany, with each transaction converted to USD at the close-of-period rate, rolled up to the region the controller uses, not the one the CRM uses.
That is the semantic part. The layer encodes those definitions once, in business terms, so every agent and every dashboard means the same thing by net revenue, region, and last quarter.
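To make “encodes those definitions once, in business terms” concrete, here is a minimal sketch of what a governed metric definition might look like. This is illustrative only: the class, field names, and source identifiers are hypothetical, not the schema of any real semantic layer.

```python
from dataclasses import dataclass

# Hypothetical sketch of one governed definition of "net revenue",
# written once in business terms so every agent and dashboard
# resolves the same meaning. All names here are illustrative.
@dataclass(frozen=True)
class Metric:
    name: str
    expression: str      # business-level formula, not raw SQL
    sources: tuple       # systems the engine must join
    currency_rule: str   # how local amounts become USD
    rollup: str          # which "region" counts

NET_REVENUE = Metric(
    name="net_revenue",
    expression="gross_orders - returns - discounts - intercompany",
    sources=("sap.sales_orders", "crm.returns", "snowflake.ledger"),
    currency_rule="USD at close-of-period rate",
    rollup="controller_region",  # the controller's region, not the CRM's
)
```

The point of the single definition object is that “net revenue,” “region,” and the FX rule live in one place; nothing downstream re-derives them.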
Then comes the compute. A semantic query engine resolves the question against those definitions, joins the SAP, Salesforce, and Snowflake sources, applies FX and eliminations in the right order, enforces row-level security, and returns one number with an audit trail. Grounded this way, LLM queries can reach 100% accuracy on business questions. Not 40. Not 80. Not 90. 100.
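The compute step above can be sketched in a few lines: apply the definition’s components in a fixed order, enforce row-level security before aggregating, and return the number together with an audit trail. This is a toy illustration under assumed data shapes, not a real query engine; the function, row fields, and rates are all hypothetical.

```python
# Hypothetical sketch of a semantic query engine's resolution step.
# Order matters: row-level security first, then the net-revenue
# formula, then FX conversion, then the regional rollup.
def net_revenue_by_region(rows, user_regions, fx_rate):
    """rows: dicts with region, gross, returns, discounts, intercompany
    (amounts in local currency). Returns (totals_by_region, audit_trail)."""
    totals, audit = {}, []
    for r in rows:
        if r["region"] not in user_regions:   # enforce row-level security
            continue
        net_local = r["gross"] - r["returns"] - r["discounts"] - r["intercompany"]
        net_usd = net_local * fx_rate         # close-of-period rate
        totals[r["region"]] = totals.get(r["region"], 0.0) + net_usd
        audit.append((r["region"], net_local, fx_rate, net_usd))
    return totals, audit

# Illustrative data: the caller is only entitled to EMEA rows.
rows = [
    {"region": "EMEA", "gross": 100.0, "returns": 10.0,
     "discounts": 5.0, "intercompany": 5.0},
    {"region": "APAC", "gross": 50.0, "returns": 0.0,
     "discounts": 0.0, "intercompany": 10.0},
]
totals, audit = net_revenue_by_region(rows, user_regions={"EMEA"}, fx_rate=1.1)
```

Because every caller goes through the same function and the same definitions, the answer is reproducible, and the audit trail shows exactly how each number was computed.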
One last point. The semantic layer that matters is open. It isn’t locked inside one BI tool or one warehouse. It sits above your systems and serves the same governed answer to Tableau, Power BI, internal apps, and a fleet of agents. Same definition, same computation path, every surface that touches it.
If you can’t reproduce the number, you don’t have an answer. You have an anecdote. A CFO won’t sign off on a number that changes between Tuesday and Thursday, and a CISO won’t approve an agent that quietly steps through row-level security. The companies that win the next phase of enterprise AI won’t have the flashiest demos. They’ll have agents that return the same governed answer every time. Built to survive the boardroom.
If you’re pushing agents into finance, operations, or risk, join us on May 20. Come help build the layer that turns anecdotes into answers.