Navigating the Intersection of Semantics, AI, and Enterprise Innovation with Juan Sequeda

Data-Driven Podcast

Dave Mariani interviews Juan Sequeda, Principal Scientist at data.world. They discuss Juan’s benchmark report on knowledge graphs and semantics, as well as generative AI and its role in the future of enterprise innovation.


It’s really a consumer-oriented technology right now. ChatGPT—we know it’s the first killer app for Gen AI. And like you, I use it. It’s a real productivity booster. So I’m really curious to see how it transitions into the enterprise.

And you guys have always been on the forefront of saying, look, we need to invest in semantics. We need to treat this as a first-class citizen. And let’s be honest, not many people realize that that’s what they should be doing. And you know what? Part of the motivation here is just me getting tired and saying, look, we need to show the evidence that we need to invest in this stuff. I’m gonna put my money where my mouth is, I’m gonna give you the data to do that.

Transcript

Dave Mariani: Okay, welcome everyone to another episode of the Data-Driven Podcast. I’m Dave Mariani, CTO and co-founder of AtScale. Our special guest today is Juan Sequeda, head of the AI lab at data.world. Welcome to the podcast, Juan.

Juan Sequeda: Thank you, David. It’s a pleasure to be here.

Dave Mariani: This is coming off of a great session that Juan gave at the Semantic Layer Summit back in April. Juan, let’s start with you. You’re the head of the AI lab at data.world. What was your path to becoming a data scientist and data guru? How did you get to where you are today?

Juan Sequeda: Well, it’s a long path, and not a traditional one. I’m an academic by heart and training. I started my undergraduate studies in South America, in Colombia; I’m half Colombian, that’s where my parents are from. That’s also where I got exposed to all of this semantics work. Around 2004 or 2005 I bumped into a seminar about the Semantic Web, and that’s how I got introduced to this whole area of semantics and knowledge on the web: this gigantic information space that integrates so much data and knowledge in a very distributed, heterogeneous manner.

Juan Sequeda: That got me really excited. I transferred and ended up at UT Austin, majoring in computer science. From there I kept pursuing this interest in semantics and the Semantic Web stack, the things we know today as RDF, OWL, and SPARQL. I also did a startup around 2006 and 2007, so I’ve always had an entrepreneurial interest, but science is what really drives me. Then I met the person who became my Ph.D. advisor. He’s a database guy, and he said, hey, I’m a database guy, but there’s all this semantic stuff in ontologies and graphs.

Juan Sequeda: What is the relationship between these Semantic Web technologies and relational databases? That was the question he posed to me around 2006, and it really changed my life. It ended up being the reason I did research. I was interested in the problem, loved Austin, and loved working with my advisor, and the path to keep doing that work was a Ph.D. So I did my Ph.D. at the University of Texas at Austin on integrating relational databases with Semantic Web technologies. I’ve worked a lot on web standards; several of the standards that map relational databases to RDF and graphs came out of my dissertation. I also ended up doing a lot of work on query engines, virtualization, and both theory and practice.

Juan Sequeda: So then I figured, if the industry is really going to be interested in this, they’re going to say, hey, we want all this semantic stuff, but my data’s in a relational database. How do I bridge it? And effectively, people started knocking on our door for exactly that. We decided to commercialize the technology. Around 2011 and 2012 I was getting on planes, doing consulting, and talking to folks, and we officially started a company around 2014. We did that for a couple of years, all the semantics and virtualization, and by then the term knowledge graph was starting to become a thing. data.world was actually one of our customers because they were interested in our IP.

Juan Sequeda: When data.world entered the enterprise space around 2018 or 2019, it was very clear that we should join forces. We’re both in Austin and very aligned in our mission and vision, so my company was acquired by data.world in 2019. I’ve been here almost five years now. I like to bridge: bridging communities, bridging technologies, bridging mindsets. That’s one of the things I’ve really enjoyed in my career. So that’s the long story. The short version is that I came in through the academic-startup route, and I still have Ph.D. students and master’s students, I still publish, I still do research, but I like to put that research to work.

Juan Sequeda: My research is inspired by my actual work with customers, finding the pains they have today. Some problems you look at and say, okay, that’s engineering. An engineering problem is a time-and-money problem. A science problem is one where you say, give me time and money and I still don’t know if I can give you a solution. That’s what inspires me. And at some point you do the research and realize, oh, I’ve now turned this into an engineering problem, a time-and-money problem. Let’s go solve it. That’s what I like to do.

Dave Mariani: I love it. That’s why I love coming across another engineer or scientist who loves semantics as much as I do. You gave a great presentation at the summit on the paper you wrote in this area. It’s a long title, a generative AI benchmark on increasing the accuracy of LLMs in the enterprise with a knowledge graph, and it got a lot of buzz. I saw it, read it, and was really impressed by its conclusions. What led you to embark on creating a benchmark study? What was the driving factor?

Juan Sequeda: Dave, folks like you have been in this space way longer than me, right? Maybe twice my career, probably three times, I don’t know. And you guys have always been on the forefront of saying, look, we need to invest in semantics. We need to treat this as a first-class citizen. And let’s be honest, not many people realize that that’s what they should be doing. Part of the motivation here is just me getting tired and saying, look, we need to show the evidence that we need to invest in this stuff. I’m gonna put my money where my mouth is, I’m gonna give you the data to go do that. Folks like you have had this very strong intuition, I’ve always had that intuition, and we were building products around it.

Juan Sequeda: So let’s call that the personal, ego motivation. The second one is that there was so much blah, blah, blah going around. This was over a year ago, and there was so much noise around, oh yeah, we should have LLMs that can generate SQL. I’m like, okay, we gotta stop with the blah, blah, blah and figure this out, because there’s too much marketing, too much bullshit around here. Apologies for my language. So from there, how do I find a systematic approach to produce evidence that can strongly convince people, and not just be another anecdotal example in a little blog post that gets forgotten?

Juan Sequeda: We wanted something foundational. For me, this is all science. There is a very specific unknown that we have, and as cliché as it sounds, science is about taking an unknown and turning it into a known. Then you apply the scientific method: make observations, come up with a hypothesis, design an experiment to find evidence that supports that hypothesis, and share it with the community, because that’s where peer review comes in and you try to convince others. I think that’s exactly what we’ve been able to accomplish with the benchmark. The research question is very specifically: we’re all talking about LLMs being able to do text-to-SQL and question answering over databases.

Juan Sequeda: How well does this really work? To what extent does it work, and where are we today? Folks like you and me, folks who’ve been in this space, say, wait, semantics are needed, and others say, yes, I agree, but we actually need a way to show that it does improve things. So the other question is: to what extent do the semantics, the knowledge, improve that? That’s a specific question. And how did we come up with it? Well, if we observe the real world, everybody’s talking about easy examples. The benchmarks are all easy examples, but the world is all about complexity.

Juan Sequeda: So how do we actually deal with this complexity? The framework we came up with is about understanding the types of questions. We have a quadrant from easy questions to harder questions, easy questions being a list of things and harder questions being metrics and KPIs. Then there’s the easy-to-hard complexity of the schema: do we need a couple of tables, or do I need eight or nine tables? You put these two things together and you get the quadrant, and now I can systematically test questions in each quadrant and analyze them, instead of just making general blanket statements. That’s how we were able to really understand things we didn’t know.
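To make the quadrant concrete, here is a minimal sketch of how the two axes could be encoded. The field names, the five-table threshold, and the sample question are illustrative assumptions, not the benchmark’s actual code.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkQuestion:
    text: str
    needs_metrics: bool    # metrics/KPIs are "hard" questions; simple lists are "easy"
    tables_required: int   # more tables joined means higher schema complexity

def quadrant(q: BenchmarkQuestion) -> str:
    """Place a question in one of the four quadrants described above."""
    question_axis = "hard-question" if q.needs_metrics else "easy-question"
    schema_axis = "hard-schema" if q.tables_required >= 5 else "easy-schema"
    return f"{question_axis} / {schema_axis}"

# Example: a KPI-style question that needs many joins lands in the hardest quadrant.
q = BenchmarkQuestion("What was the claim denial rate by region last quarter?", True, 8)
print(quadrant(q))  # -> hard-question / hard-schema
```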

Juan Sequeda: And at the end of the day, the result is that we have the evidence: by investing in semantics, investing in knowledge graphs, you increase accuracy. The numbers we present show three times the accuracy, and that’s with a very basic prompt strategy, so there’s still so much room for improvement. That’s the exciting thing. We now have a baseline, and everybody’s improving on it. People are validating it and realizing this is the evidence they needed, and that’s how we advance knowledge, and how we advance the industry too.

Dave Mariani: Yeah. What I love is that you actually employed the scientific method, which is so rare these days: you had a hypothesis, you developed a methodology to test it, you documented it in your paper, and then you ran your experiment and produced the results. That’s why I liked it so much. The methodology was clear, so it wasn’t just a blog post; it really was a scientific study. But some of those numbers, when I dug in after your presentation at the Semantic Layer Summit, what struck me, Juan, was not just how much semantics improved the accuracy of LLMs. It’s also the converse: without semantics, you got a big goose egg on some of those results. In other words, the prompts without semantics were completely wrong, as opposed to being improved with them. I was really surprised by that. Were you surprised by some of your results?

Juan Sequeda: I was surprised by that. But let me go back to what surprised me in general. One, when this all started, I was very surprised that these LLMs, GPT-4 in particular (and the other model we were working with handled it well too), know the Semantic Web standards: RDF, OWL, SPARQL. I was happily surprised, but it also makes sense, because these may be considered obscure old standards, yet they’ve been out on the web for 10, 20 years, so the models have been trained on that material. That was the first happy surprise. The other one was seeing that execution accuracy for SQL without any semantics, once you got past about five tables, was zero. That was very surprising. But then I realized, talking to a lot of people using these copilots that are coming out, they say, oh, but this is constrained to a handful of tables. And I’m like, that makes sense.

Dave Mariani: Right

Juan Sequeda: Because they’re saying, yeah, I know we’ll get decent results if it’s three or four tables, but the moment you give it more tables, it’s gonna go bad. So anecdotally, people are coming to that same conclusion. Now the question is, why is this happening? One, we’ll really never know, because we don’t know what’s going on inside these LLMs. But if I sit down and hypothesize: with semantics, you’re making the knowledge, the language, explicit. And in a graph specifically it’s very explicit, because in language you have a subject, a verb, and an object, and in the graph that’s node, edge, node. The relationships themselves become a first-class citizen.

Juan Sequeda: And that’s part of the language. In SQL, the relationships are foreign keys, and they’re implicit. You have to know that these two columns are the ones that match, and they may have different names or codes; a column could plausibly match a lot of things. So it makes sense that once you start joining a bunch of tables, which means more and more relationships, accuracy is going to go down, because you need more of that context and it simply isn’t there. It gets more complicated, so the model just starts to hallucinate and generate things. Again, I don’t know why; that’s my hypothesis for why it’s happening. But at the end of the day, it’s all about having more context, and semantics and knowledge are the context we need. That’s why I tell everybody: if you want to do question answering over enterprise data, you must invest in semantics. If you don’t do that, you’re gonna fail. Don’t fail. Please don’t fail. Don’t be a loser.
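As a rough illustration of the implicit-versus-explicit point, here is the same fact expressed both ways. The schema, names, and prefixes are made up for the example.

```python
# Relational form: the relationship is an implicit foreign key. Nothing in this
# context states that claim.cust_ref points at customer.cust_id.
relational_context = """
CREATE TABLE customer (cust_id INT PRIMARY KEY, name TEXT);
CREATE TABLE claim    (clm_id  INT PRIMARY KEY, cust_ref INT, amount NUMERIC);
"""

# Graph form: the relationship ("filedBy") is a named, first-class edge,
# literally spelled out as subject / predicate (verb) / object.
graph_context = [
    ("ex:claim42",   "ex:filedBy", "ex:customer7"),
    ("ex:customer7", "ex:name",    '"Dana Smith"'),
]

# Handing the triples (or the ontology behind them) to the model as context makes
# explicit what a join would have left implicit, which is the intuition above.
```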

Dave Mariani: Yeah, it’s amazing. All it took was five or six tables to get to zero accuracy. That really hit me hard. But I had the same hypothesis you did: there’s no way that, without semantics, you could just look at tables with foreign keys and figure out how to answer a business question without any context. It just makes intuitive sense to me. That’s why we invested in semantics: people need context, and machines need context too. So I was really happy to see that. What was the reaction to your report, Juan? What range of reactions did you get?

Juan Sequeda: Well, we published it last November and it went viral. I think every single major tech company got access to it. People from different big tech companies pinged me saying, wow, that report you wrote is being shared all over the place. So that’s one. Second, I got a lot of reactions from folks saying thank you, because finally they could go off and say, hey, here it is, here’s the evidence, can you believe me now? Third, it’s folks like you, other vendors too, saying, yeah, this is what we’ve been saying.

Juan Sequeda: And they’ve actually been validating it too. That’s the other great thing: we’re seeing other folks independently replicating and validating the results, which shows we’re all very much in alignment. And I’d argue, think about last October: how often were you hearing about semantics and knowledge graphs in the LLM conversation? Not much. Now it’s everywhere. Every day I’m getting pinged and seeing things on LinkedIn and Medium, more and more posts about knowledge graphs and context and LLMs. And I think I helped contribute to that.

Dave Mariani: You definitely did that.

Juan Sequeda: That conversation is happening right there, and at the end of the day, this is what I really wanted. I want more people to realize this is a critical topic we need to be considering. I will acknowledge, though, that it’s not easy; it’s a paradigm shift, and change is hard. If it were easy, this would all be working already, so there’s a change that needs to happen. There’s going to be a tendency for folks to say, I want this automated, this is all just technology. We will accomplish a bit of that with these copilots doing text-to-SQL: it will generate a SQL query for you, but it’s not going to be for a larger audience.

Juan Sequeda: It’s going to be for a technical audience, and maybe that’s okay. They don’t need a hundred percent accuracy; they’re okay with some code because they’re going to edit it anyway. But if we really want to change the industry, we need to invest in semantics and knowledge graphs. The other reaction is people saying, executives have been asking for this all along. I think we’re getting closer. I’m not going to say we’re there; we’re never going to be there, because the goalposts will always move, but we’re getting closer to achieving a lot of these visions we’ve had for so long. Right now, what’s important is to set expectations correctly. We’ve got to start small and do it, because otherwise the pendulum swings, people have all these high expectations and hype, then it reverses and we go back. The concern I have is that in ten years’ time we’re still shipping spreadsheets around.

Dave Mariani: Yeah, that was going to be my next question. There is a lot of hype, and what’s your take on gen AI? Is it going to change the world, and if it is, what needs to happen for that? It is changing the world, but I see so much hype out there. Every single one of our customers has a budget line item to invest in gen AI, and they don’t even know what that means or what it looks like in terms of where they spend the money. What are you seeing out there? Because you’re right in the middle of all of this, Juan.

Juan Sequeda: Yeah, a couple of things. One, generative AI has already changed the world and will continue to change it. The productivity gains people are already seeing are game-changing. If you’re not using gen AI for your day-to-day work, things like ChatGPT, then you’re being a follower when you could be a leader. I use ChatGPT for every task I have to go through. I ask myself, how could I use GPT for this? I think about it, I use it, and then realize, oh, I just saved a bunch of time. It’s a staple in my household; my wife and I use it all the time.

Juan Sequeda: If we’re asking ourselves a question, we say, wait, hold on, let’s check quickly, and oh, that helped us brainstorm. We save a lot of time, so it’s already a game changer in that respect. That’s number one. And from that there’s going to be so much economic growth; a lot of people are going to make money around this stuff, so for sure it’s already changing the world. We are definitely in a web moment. Imagine it’s 1991 and this thing called the web comes out. A lot of people were skeptical about it, and it changed the world. It’s a mobile moment too, and we’re in that, so we’re always going to have a lot of hype coming in. What’s happening now is that there’s all this low-hanging fruit we’re all focused on, because obviously we want to focus on the easy stuff.

Juan Sequeda: Let’s make easy money fast, which is the right thing to do, but there’s going to be so much more out there. We’re going to start seeing the real nuggets come out, and it’s not going to be a space for everybody. Actually, I think a lot of the productivity stuff is not going to be the huge game changer. Once we get the productivity and understand it, people are going to expect more; productivity is a vitamin, and people will start asking what comes next. I think that’s when we start dealing with harder problems, and that’s where we’re going to see the leaders pushing.

Juan Sequeda: Now, for companies investing in gen AI, this is where you’re going to see: are you a leader or are you a follower? If you’re a follower, it’s, I just need gen AI because everybody’s doing gen AI. Being a leader is, I have this problem I’ve been trying to solve for so long; I know what the pain is, I know what the cost is, I know how much money I’m leaving on the table, and I still haven’t been able to solve it. That’s my focus. How can I use this technology to help me solve that problem? It’s business problem first, then figure out the technology to solve it, versus, let’s figure out how to use this technology. Those are two different approaches: leading with the technology is what followers do; starting with the business is what leaders are going to do. We’re going to start seeing who the leaders are and who the followers are. I say this all the time: decide who you want to be.

Dave Mariani: Yeah, that’s good advice, and I couldn’t agree with you more. I think there’s a lot more in that other bucket of “I gotta be investing in gen AI” as opposed to “here’s a problem we’ve never really been able to crack, and we think we may be able to apply this technology to it.” And it’s risky.

Juan Sequeda: It’s risky to go do these things, right? You have to be innovative. And really, we’re following the same patterns as always: the hype cycles we go through, crossing the chasm around these things. None of this is new; we’ve gone through these cycles before. This is where I’ll actually call out to junior folks: this is where you should really partner up with people who have seen the world a couple of times.

Dave Mariani: Folks who’ve seen these major transitions and can play it out. Exactly. I’d agree with you that this is a major transition, and we don’t really even know yet where this is going; there’s no way to predict it. But it’s definitely in its early phases, and for companies and enterprises that can figure out how to apply it, it’ll be a competitive differentiator. And I’m with you. Look, for me it’s really a consumer-oriented technology right now. ChatGPT, we know, is the first real killer app for gen AI. And like you, I use it; it’s a real productivity booster. So I’m really curious to see how it transitions into the enterprise.

Dave Mariani: It’s not just how it has changed the way we work; in some cases it’s already changing the way we work. I use it to edit my blog posts and to generate some website content and the like, so it is changing things that way. But I’m really looking for that next step. So far I see a bunch of copilots, and those seem to be helpers, assisting you with your normal, everyday tasks.

Dave Mariani: Where does it go from there, Juan? What’s your prediction? How do we go beyond the copilots?

Juan Sequeda: I’m glad you’re bringing this up, because I’ve been thinking and writing a lot about this right now. Look, the copilot is where we are because that’s the technical side. GitHub Copilot helps us write code faster, and we’re looking at these copilots for writing SQL queries. At the end of the day, this is just your traditional text-to-SQL stuff: I have a question and it returns a query that I can edit and use to go get an answer. At that point the LLM, this gen AI, is at the center, and the user personas for that are a smaller group, right?

Juan Sequeda: They’re technical personas. They’re the ones who understand the data and can edit the query. Actually, they don’t want it to be perfect, because they want to keep being busy; they want to say, I’m being productive, but I still need to do my work. They have no incentive to say, oh, this is automating my work. But it is making them more productive, and it gives an explanation, though a very textual one: hey, I took table A, joined it with table B, and then did this thing. That’s where we are right now, but let’s acknowledge that it’s low-hanging fruit, an easier problem to deal with, and especially if we constrain it to a couple of tables, it can do a pretty good job. I’d argue this is tech companies staying in their comfort zone and addressing the technical problems they know about; it’s all about productivity. The other way of seeing this is what I’ve been calling the concierge service.
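Before getting to the concierge service, here is a minimal sketch of the copilot loop just described: schema plus question in, draft SQL out, human review after. The function names are stand-ins, and `call_llm` is a placeholder for whatever model API is being used; this is not any specific product’s implementation.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client of choice here")

def text_to_sql(question: str, schema_ddl: str) -> str:
    """Classic copilot pattern: one prompt, one draft query, no deeper context."""
    prompt = (
        "You are a SQL assistant. Given this schema:\n"
        f"{schema_ddl}\n"
        f"Write a single SQL query that answers: {question}\n"
        "Return only SQL."
    )
    return call_llm(prompt)

# The output is a draft for a technical user to inspect and edit, which is why
# less-than-perfect accuracy is tolerable for this audience.
```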

Dave Mariani: This concierge service is business users asking a question and getting an answer, and they don’t have to know what’s going on underneath the hood. But underneath the hood it’s not just an LLM taking text and generating code. There is a complex agentic AI framework that is planning: who is this persona? What are they trying to ask? What type of question is it? Do I have all the context behind it? Do I need to ask a clarifying question? The LLM happens to be one piece of that puzzle. These agent frameworks are a much broader play, and really what they’re working on is building on the context of your organization.

Juan Sequeda: So you have this context engine that is talking to the brain of your organization, one that knows the people, knows the data, knows the problems you’re trying to solve, because you’re curating that and it learns from it. In this case, accuracy, explainability, and governance are key. If you’re given an answer back, you need to know it’s accurate, and how do you know it’s accurate? Because you can provide an explanation, and you need to know who that persona is, so you’re giving an explanation targeted for that persona. And by the way, maybe the answer is: you need to go talk to Dave.

Juan Sequeda: Perfect. That’s a great answer for somebody at that level. And guess who the personas for this are: the rest of the organization, the non-technical folks who are the majority of your organization. That’s the difference right there, and that’s where we need to head. I think it’s game-changing, because now imagine every single person in the organization has their own trusted advisor. I was talking to one of our colleagues at data.world about this concierge idea, and he said, you know, these are the ad hoc questions you want to answer. Ad hoc has a negative connotation, but if you’re in the middle of an analysis, looking at a dashboard and answering your questions, sometimes the best question comes in that moment.

Juan Sequeda: You have that moment of curiosity and deep creativity, and if you don’t get an answer in that moment, you kind of lose the magic. Maybe you didn’t lose anything, or maybe you lost a lot. I’m calling this the just-in-time question: just like just-in-time inventory management, where you want to minimize backlog and increase efficiency, I want to be able to answer my questions quickly and efficiently. That’s where we’re going. It’s really this concierge service that understands the brain of your organization, with a context engine in the middle. It’s hard, it’s not easy, but that’s what we should go straight for.

Dave Mariani: I love that. I love this concierge service, and I love the fact that you’ve got to add your semantics to give it the right context for the business. But right now the LLMs are being trained on the internet by very large cloud companies that have the resources to do so, both the technical resources and the funding to train these models. How do we make the leap to where an enterprise can train its own LLM? That seems like a big chasm today, Juan, because for that concierge service to work for a particular business, it’s got to be trained on their business. No?

Juan Sequeda: Honestly, here’s where I’ll throw out my position, and I’m happy to get pushback from you or anybody else who’s listening: I don’t think companies will be training their own models.

Dave Mariani: Mm-hmm.

Juan Sequeda: This is why I was talking about the concierge service: you have this context engine, which is really a large puzzle, and LLMs are part of that puzzle, but they’re not the be-all of everything.

Dave Mariani: Okay.

Juan Sequeda: This goes back to traditional, what we call good old-fashioned AI. When you build agents, you have planning, and you define things; a lot of things need to be deterministic. Here’s how we’re defining our agent systems. You ask a question.

Dave Mariani: Mm-hmm.

Juan Sequeda: What should I do? First of all, what type of question is this? Is it a fact-based question? An opinion question? A subjective question? Depending on that, I’m going to take different routes. If it’s a fact-based question, is it a question about the business? Does it require the data? Do I have the knowledge about this? All of these things let me determine what type of question it is. I may be using an LLM for that, but I don’t need anything custom-trained. Then I need to go to the context: does this person even have access to this data? That’s not a question for the LLM. That’s the way to think about it, and that’s the enterprise concierge service, with LLMs as part of all these pieces.

Juan Sequeda: At the end of the day, these are just state machines we’re defining; this is classic computer science. There’s nothing really groundbreaking there. The groundbreaking thing is that we’re doing this at scale and bringing in these LLMs that can do a lot of great stuff for us and automate many more things. So my point is, we’re not going to be training our own models, because frankly it’s just too expensive, and there aren’t enough people who understand how to do that training; you’re not going to be able to hire somebody who knows how to train these things.
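As a rough sketch of the mostly deterministic routing described above, the control flow below classifies the question, checks access and context, and only then decides what to run. The question types, helper objects, and keyword stub are assumptions for illustration, not data.world’s implementation.

```python
FACT, OPINION, SUBJECTIVE = "fact", "opinion", "subjective"

def classify_question(question: str) -> str:
    # In a real system this step might itself call an LLM; a keyword stub keeps the sketch runnable.
    factual_cues = ("how many", "total", "rate", "average", "last quarter")
    return FACT if any(cue in question.lower() for cue in factual_cues) else SUBJECTIVE

def answer(question: str, user: str, context_engine, access_policy) -> str:
    """Concierge-style flow: the LLM is one piece, not the center."""
    qtype = classify_question(question)
    if qtype != FACT:
        return "Routing this to a person or a discussion rather than the data."
    if not access_policy.allows(user, question):
        return "You don't have access to the data needed to answer this."
    grounding = context_engine.lookup(question)   # semantics / knowledge graph lookup
    if grounding is None:
        return "I need to ask a clarifying question first."
    return grounding.run_query()                  # deterministic execution of the grounded query
```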

Dave Mariani: Yeah, that’s my sense too. I love that it’s an ingredient in a much larger equation, so I’m with you on that. That’s a great take. Well, Juan, we’re out of time here. This has been a great talk; I know I’ve learned a lot, and I hope the listeners have too. Is there any parting wisdom you want to share with the listeners out there?

Juan Sequeda: First of all, my T-shirt always says “Honest, No BS.” My only plug here: also listen to our podcast, Catalog & Cocktails, the honest, no-BS, non-salesy data podcast. Just be honest, no BS. You know when you’re BSing, right? Strive for excellence, push people, and have more of these fierce conversations. When people ask you to go do something, ask why, push them, and figure out how to tie it directly to business value. People know when they’re BSing. So just be honest and no BS.

Dave Mariani: I love it. It says it all right there on your shirt. So, hey, thanks, Juan, for talking with me today, and thanks, everybody, for listening. Stay data-driven, everyone. Have a good day.

Juan Sequeda: Thank you. Bye.
