What I saw from the board, and why it matters if you’re betting your career on AI.
When I joined AtScale’s board, I saw what looked like a best-in-class semantic layer: a way to make analytics faster, more consistent, and easier to govern across warehouses and tools.
Over the last few years, I’ve worked with startups and large enterprises on the data infrastructure behind AI. And a pattern kept repeating itself: the teams that get real value from AI are not the ones that simply find data faster. They are the ones that can trust the answer when it shows up in a dashboard, a board deck, or an AI-generated summary. That’s what AtScale actually does for leaders betting their careers on AI. It computes governed answers, so they can stand in front of a board, a CFO, or a regulator and trust the numbers on the slide.
Here’s the problem. Agentic AI today runs on a risky assumption: if you give agents metadata, schemas, and raw data access, they will reason their way to the right answer. In reality, they often do not. You get hallucinated metrics, the wrong time window, or answers built from a quiet mix of governed and ungoverned data. Trust erodes not because the output looks obviously broken, but because it looks almost right. Different AI engines can return different answers to the same question because most systems describe data, but very few govern how the answer should actually be computed. If you want AI to be more than a demo, you have to marry its reasoning and creativity with repeatable computational accuracy.
While all of this was unfolding, AtScale quietly reimagined itself for the AI age, embracing the Model Context Protocol (MCP). Snowflake Ventures led AtScale’s largest equity financing to date, a signal that semantic and computational infrastructure is no longer optional in the modern data stack. At the same time, AtScale joined the Open Semantic Interchange (OSI), an open initiative led by Snowflake that reflects a broader belief: semantic context should be open, portable, and not tied to a single vendor’s ecosystem. In a world where data platforms, OpenAI, Microsoft, Google, and others are all pushing their own “context” stacks, very few engines are designed to operate across those environments rather than force a new one.
The fashionable and accurate word for what AI needs here is context. The market has responded the only way it knows how, with dozens of “context layers” all promising to sit between your data and your agents. As a leader, you’re left with a hard question: which one actually decides the answer you’re willing to stand behind?
This is where AtScale earned my attention. It governs how answers are computed, not just how data is described, so the same question resolves to the same governed answer across analysts, tools, warehouses, and AI systems. That is a very different thing from simply exposing metadata or making data easier to access. It gives enterprises a way to apply shared business logic consistently, without forcing them onto yet another platform.
That mattered to me because it reflects how large enterprises actually operate. They are not going to rip out their warehouse, replace every tool, or standardize on a single AI model. They need an architecture that works across the stack they already have. AtScale delivers that today, across clouds and tools, for some of the largest and most sophisticated companies in the world: customers like Fidelity and The Home Depot, and global brands like Nike that are hiring engineers explicitly to build semantic models in AtScale.
And that is what ultimately made this decision clear for me. With Snowflake Ventures investing, and with leaders like Jay Schuren, Luis Maldonado, Amy Miller, and Bryan Abou-Rjaily joining the company, this no longer felt like an interesting technical bet. It felt like the right place to help enterprises solve one of the most important infrastructure problems in AI.
That’s why I joined.