Episode 9 of The Applied AI Podcast

Jacob Andra and Alexandra Pasi, Ph.D., discuss agentic AI

About the episode

Two major ideas are shaping the next era of artificial intelligence: agentic AI and neurosymbolic AI. Talbot West CEO Jacob Andra and Lucidity Sciences CEO Dr. Alexandra Pasi bring together their complementary perspectives.

They unpack the confusion surrounding the term “agentic AI.” The most common misuses fall into three categories.

1. Digital employee. This use assumes an AI can fully replace a human role. In practice, jobs consist of overlapping tasks that depend on judgment, context, and social understanding. Substituting a human one-to-one with an AI system oversimplifies work and introduces risk.

2. AI interacting with humans. Many products describe themselves as agentic simply because they interact with people. Yet a chatbot or outbound assistant is not necessarily intelligent or autonomous. Interface does not equal agency.

3. Autonomous executor. Another common assumption is that any AI that performs tasks independently qualifies as agentic. Yet many autonomous systems, such as rule-based workflow triggers, involve no AI at all, so autonomy alone does not confer agency.

Jacob proposes a definition that is specific enough for real-world planning: an AI function able to complete a task as part of a larger ensemble or capability. This definition treats agentic systems as modular and composable. Each agent performs a defined function within a coordinated network of systems. This approach moves the conversation from vague marketing language to measurable performance outcomes.
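
This modular, composable framing can be sketched as plain function composition. The agent names and stand-in logic below are hypothetical illustrations of the pattern, not Talbot West's implementation.

```python
# Minimal sketch of "agentic AI" as modular, composable functions:
# each agent completes one narrow task inside a coordinated ensemble.
# The agent names and stand-in logic here are hypothetical.

def extract_agent(document):
    """Narrow task: pull simple fields out of raw text."""
    return {"text": document, "word_count": len(document.split())}

def classify_agent(fields):
    """Narrow task: attach a label based on the extracted fields."""
    fields["label"] = "long" if fields["word_count"] > 5 else "short"
    return fields

def route_agent(fields):
    """Narrow task: decide what happens next in the pipeline."""
    return "route:" + fields["label"]

def ensemble(document):
    """The 'agentic' system is the coordinated whole, not any one agent."""
    return route_agent(classify_agent(extract_agent(document)))

print(ensemble("a very short note"))  # route:short
```

Each function is narrow and replaceable; the "agency" lives in the coordinated ensemble, which is what makes performance measurable per component.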

From there, the discussion turns to large language models. Both Jacob and Alexandra acknowledge their extraordinary power but also their limitations. LLMs have made AI accessible to everyone through natural language, allowing rapid knowledge retrieval, summarization, and idea generation. At the same time, language itself is a constraint. Human language was not built for exact quantitative reasoning or precise logical relationships. LLMs lose reliability when they are asked to maintain long context or handle tightly coupled data. The guests agree that these models should be viewed primarily as interface layers that help people and organizations communicate with structured information systems.

The conversation then transitions to neurosymbolic AI, which combines neural networks and symbolic reasoning into a single architecture. The neural components are probabilistic and pattern-oriented. They generalize and infer. The symbolic components operate on defined rules and logical constraints. They ensure structure, coherence, and traceability. When combined you get an intelligent system that is both adaptive and verifiable.
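
As a rough sketch of this hybrid pattern: a probabilistic "neural" component proposes candidates, and a symbolic rule layer accepts only those that satisfy hard constraints. The dosing domain and its limits below are invented purely for illustration.

```python
import random

# Hedged sketch of the neurosymbolic pattern: a probabilistic component
# proposes answers, and a symbolic rule layer only lets through answers
# that satisfy hard, defined constraints. Domain and limits are invented.

random.seed(0)  # deterministic for the example

def neural_propose():
    """Stand-in for a learned model: proposes a dose by sampling."""
    return random.uniform(0.0, 200.0)

def symbolic_valid(dose):
    """Hard rule layer: the defined constraints the answer must satisfy."""
    return 10.0 <= dose <= 100.0

def propose_until_valid(max_tries=100):
    """Neural proposes; symbolic checks; only validated output escapes."""
    for _ in range(max_tries):
        dose = neural_propose()
        if symbolic_valid(dose):
            return dose
    return None

print(round(propose_until_valid(), 1))  # always within [10, 100]
```

The adaptive part can be arbitrarily creative, but nothing leaves the system without passing the verifiable layer, which is the "adaptive and verifiable" combination described above.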

Dr. Pasi explains how this concept has deep roots in earlier AI research. In some early mathematics experiments, language models were paired with formal systems like Lean to verify every logical step. In modern enterprise applications, this same hybrid pattern provides a way to reconcile innovation with control. It creates a bridge between the flexibility of learning models and the accountability required by governance and compliance.

Jacob shares two Talbot West use cases that illustrate these ideas. The first involves enterprise evaluation and roadmapping. Many organizations have complex, organically grown processes and data flows that are difficult to map or optimize.

The second example is BizForesight, a platform to help business owners understand and improve company value. It combines document ingestion, interviews, and machine learning within a defined symbolic framework. The symbolic layer enforces valuation logic and methodological integrity, while the neural layer interprets unstructured data and provides adaptive recommendations.

Episode transcript

Welcome to episode nine of the Applied AI Podcast. I'm your host, Jacob Andra. Today I bring back Dr. Alexandra Pasi, machine learning scientist, CEO of Lucidity Sciences, and all-around extremely well-informed person on topics of artificial intelligence and machine learning.

Jacob Andra: 

Welcome Dr. Pasi. It's good to have you again.

Lexi Pasi: 

Yeah, glad to be back, Jacob.

Jacob Andra: 

Yeah, so we had such a good first podcast. I got so much great feedback from that. Everybody loved it. And you know, it seems like when we get our minds together, we see things similarly, but we both come from such different angles. Uh, I'm much more from the business application side. I'm not a machine learning scientist like yourself. And, um, you know, you come from that deep research perspective. So it's great, I think, to bring the two minds together.

Lexi Pasi: 

I agree. I mean, I think the challenge of the era is forming that bridge, right? And that's actually, um, one of the things I've been thinking about a lot when it comes to agentic AI, so I'm sure we'll talk about that a little bit.

Jacob Andra: 

Yeah, you're jumping right to it. Agentic AI is what we're gonna talk about, so go right ahead.

Lexi Pasi: 

Wow. Well, okay. Yeah, let's jump right in. I mean, I think that the first place to start, kind of like we did last time, and where most conversations I have about AI start, is: what is AI? And, you know, I wrote about this recently. I've kind of given up on trying to put some formal bounds around that concept. Um, because, you know, I think we can talk a lot about the tools that are being deployed and we can talk a lot about the use cases. But one of the things that I've noticed is that when it comes to how people think about AI, and I think agentic AI is a great sub-example of this, people tend to try to describe it in terms of a class of function, or describe it in terms of a class of form with tools falling under that. And if you try to force it into a definition under either of those two paradigms, they don't really match up with each other, right? There's some overlap, um, but they don't really match up. And so I think that one of the things we do have in common, despite coming from very different backgrounds, is being very objective-focused. And for me, that's the right approach to talking about AI in general. Um, and I think it's the right way to understand agentic AI.

Jacob Andra: 

I absolutely agree, and I came into this wanting to sort of explore, um, to your point, that people kind of get these very fuzzy, muddled categories. Uh, it might be enlightening to just go over some of the ways that the term is used, 'cause it's used in so many very distinct ways. And I think when people use it, they don't realize what they're actually doing. They think there's just this accepted definition of what agentic AI is, but there's not. So, um, you can probably think of some ways; I can think of a few off the top of my head. One is as a replacement for humans. So, um, there's kind of now this new term in business contexts of a digital employee, like: if you hire this AI and call it a digital employee, you don't need a human to do that specific role, 'cause this AI will do it. And very much agentic AI is used as a kind of synonymous term for that. And you've probably seen that a bit. Um, and I think that's a misleading approach. I don't actually think any of the AI capabilities are able to do a human employee's job completely and replace human expertise. It's just a very muddled way of thinking about agentic AI, from my perspective.

Lexi Pasi: 

Yeah, I tend to agree with that. I mean, I think we talked a little bit about this last time, um, this idea that, you know, maybe when it's a really strong labor market and everybody's hiring, you sort of create these really specialist niche roles, and on paper they look like sets of tasks that should be easily automatable. Um, but then as people try to become more streamlined and efficient, uh, those tasks, and really the constellation of tasks that an employee is asked to perform, become a lot more subtle and difficult for technology to automate. And so I think that, um, the idea that you can replace a human sort of risk-free, uh, or, you know, replace a human one-to-one, is not going to be a very strong AI strategy for the vast majority of organizations.

Jacob Andra: 

Absolutely. I mean, obviously we know that, uh, for example, large language models, which are the class of AI that's getting the most usage and the most airtime in the press, um, certainly can be a force multiplier, so that one human can potentially do the job of multiple humans in the past. Um, but it's not replacing anyone. Um, so that's one common use of agentic AI. And here's another: an AI that interfaces with people, such as customers. And so you often hear this in the context of a customer service agent, uh, an outbound sales agent, you know, making outbound calls. And you hear it used like, oh yeah, I need an AI agent for this, or a company is saying, yeah, we offer AI agents for that. And again, I think this is a misuse of the term agentic AI. It's not a particularly useful, uh, category, in my opinion. You've probably seen this as well, Lexi.

Lexi Pasi: 

Yeah, I think this is actually one of the most common starting places for a lot of people, or at least it was. I do think it's falling out of vogue, but for a while there, as organizations were trying to figure out, like, what's our AI strategy? Uh, the go-to was chatbots. And I think, you know, in a lot of ways it was really just 'cause it was kind of easiest to do: you know, I'm gonna set up a little vanilla RAG system, it's gonna have access to my knowledge base, it's gonna talk to my customers. And that was pretty accessible. That was sort of the first, you know, um, out-of-the-box way that people were trying to use this, besides just, like, going straight to ChatGPT. Uh, but I do think that, you know, we've very quickly reached the limits of what those patterns can do. Like, if you've already implemented it thinking that was gonna totally revolutionize and transform your organization overnight, you're probably sitting there a little bit disappointed, thinking, like, okay, what's next?

Jacob Andra: 

Yeah, totally agree with that. So then I have one more way that the term is used that I think is, again, um, a sort of misunderstanding, and then I'll get to the one, um, that I think is actually the correct definition or use of the term agentic AI or AI agents. And I'm curious to know your opinion on this as well. But one more common one that I think is misleading is, um, the idea that, um, an AI can go complete a task autonomously with no human involvement, and that that capability constitutes agentic AI. Like, book me a flight to Fiji, or, um, go do this market research, you know, and I don't need to be involved. I just tell it what to do; it goes and does it, and then it comes back with a result. Um, while that is certainly useful, in my opinion results are mixed on how well these, um, autonomous capabilities, uh, function. Some are better than others, and I think we're still in the early stages of that. Um, AI able to autonomously complete some function is useful in a lot of business contexts, dangerous in others. Um, there's a wide spectrum, but again, I don't think this is the most useful way of defining what agentic AI is. What are your thoughts on that?

Lexi Pasi: 

Yeah, I agree. I think that, you know, the definition is sort of subject to your own risk tolerance, um, which can change over time, change over organization, change over context. And I don't necessarily see that there has to be any AI component to a system performing something autonomously, right? That's done all the time. So it starts to get a little bit fuzzy. It's like, okay, well, I'll use an LLM and then I'll not pay attention to it. That seems like a kind of flimsy basis for a definition.

Jacob Andra: 

Absolutely, because long before LLMs hit the market, you had all of these integrations and triggers. You know, you can use platforms such as Zapier to connect this up with that: if I get an email with this subject line, I want you to do this action, or, you know, send this alert, or, you know, whatever. You can wire up all kinds of workflows that have absolutely no, uh, LLM or AI involvement whatsoever. And, you know, you could, by this definition, call those agentic, right? But I agree with you that that's not the most useful, um, way of defining it. So I have my, uh, pet definition of what is the most useful way of thinking about agentic AI. Before I say mine, I'd be curious to hear yours. Do you have one?
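
The kind of non-AI automation Jacob describes can be sketched as a simple trigger/action rule with no model involved at all. The rule table and field names below are illustrative, not Zapier's actual API.

```python
# Sketch of rule-based automation with zero AI: a fixed trigger/action
# table keyed on email subjects. Rule keywords and actions are invented.

RULES = {
    "invoice": "forward_to_accounting",
    "outage": "send_alert",
}

def handle_email(email):
    """If the subject matches a rule keyword, return the configured action."""
    subject = email["subject"].lower()
    for keyword, action in RULES.items():
        if keyword in subject:
            return action
    return None  # no rule matched; do nothing

print(handle_email({"subject": "Invoice #1042"}))  # forward_to_accounting
print(handle_email({"subject": "Server outage"}))  # send_alert
```

Fully autonomous, entirely deterministic, and no learning anywhere, which is why autonomy alone makes a poor definition of agentic AI.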

Lexi Pasi: 

You know, this is tough, 'cause I don't, um... I think that with a term like this, you know, it comes down to the fact that a lot of terms and language don't have a meaning outside of their usage, right? That's kind of my philosophy of language; I'm certainly not the only one. And it's just one of those terms whose definition is so widely spread that I choose to focus a little bit more on, like, what are people trying to do when they talk about agentic AI? And a lot of times it just means: I am trying to accomplish a nuanced task with a lot of different moving components that have to feed into the evaluation. Um, and some of those components are going to involve AI in some way. And I think that's probably the broadest I can get in a definition that sort of encompasses all of these things. Um, beyond that, I think it's, you know, again, this kind of split between: do we describe it by its form? Do we describe it by its function? Uh, and I don't really have a strong personal opinion on one or the other being the right way.

Jacob Andra: 

Yeah, so that's a good point. I'll tell you my favorite definition, and the one I think is the most functionally useful. And you tell me if you think it's more describing it based on form or function, or whether you think this is a particularly useful way or not. Feel free to totally disagree, but here's my favorite: um, an AI function that is able to complete a task, and that task could be very narrow in nature, as part of some larger ensemble or capability.

Lexi Pasi: 

That aligns pretty well with my intuition for the term, honestly. I think that, you know, you can have, um, in that sense quite a lot of things qualify, just through the broadness of the term AI. I kind of think that's the right way to think of it. Like, I think there were a lot of people that tried to hold a stronger line for a while, like, no, only these things are agentic, or only these things are AI. We talk a lot about AI transformation and the AI revolution, and I used to kind of roll my eyes at that, because I think that a lot of things that were getting muddled were around AI as LLMs, right? And I didn't see the LLMs themselves as being, like, you know, necessarily the primary driver of this revolution.

Jacob Andra: 

Yeah, in the first episode, when I had you on about a month ago, we went deep into the disambiguation of AI as much more than large language models. And that is such an important distinction; I couldn't agree more with, um, the importance of that. And so, yeah, another, I think, fundamental misstep when people talk about agentic AI is they're usually just referring to some large language model, uh, product, usually like a SaaS offering that somebody's created, and like, hey, it's agentic because it'll make calls for you, or it's agentic because it'll, you know, book flights and you don't need to be involved. Right. Um, so that seems to be kind of heavily weighted in the conversation.

Lexi Pasi: 

And I think that, you know, what I've come to realize is that the AI revolution is really about people coming to see the role of technology as something different. And that's it; it could be any technology. You know, I used to think of technology as, like: I need to communicate, so I'm going to go communicate through this technology. And that's the role of technology. And now technology becomes much more integrated in our organizations, to the extent that, you know, it becomes part of the information flow, part of the model of our businesses and our world, in a way that can't really be extricated so cleanly into, like, this is the one functionality. Um, and so I've become a lot more empathetic to people describing this, you know, general broad AI revolution through the understanding that it's about changing the way we understand technology's role within the things that we do. And because the LLMs really, I think, captured the public's imagination on that front and opened people's eyes to, like, what could we do? Uh, they've been sort of the symbol for this new way of thinking. But I don't think it's anything restricted to, you know, out-of-the-box LLMs.

Jacob Andra: 

Absolutely. That's such a great point. And you know, over the last many decades, the US, and much of the developed world, has transitioned to a largely knowledge-based economy. Um, and so I like what you said about the technologies themselves being so inextricably woven with the knowledge that they're mediating that it's hard to even separate the technology from the product. And, um, there is that, you know, kind of deep interweaving of the two. So that's a really cool perspective. Um, one of the things... go ahead.

Lexi Pasi: 

I was gonna add onto that and say that I think when it comes to agentic AI, what the zeitgeist of the word agent points to is that sort of sprawling integration, right? Of all of these different components, um, that changes the way we interact with technology and the knowledge that it, you know, sort of serves as the logistics network for.

Jacob Andra: 

Yeah, no, that's a great point. To circle back to what you were saying a moment ago about large language models: it's really weird, because, you know, I'm going on a lot of panels these days and talking to a lot of people, and it's like I'm giving two messages that almost are the guardrails for how I think about large language models. On the one hand: let's look at how amazing they are. Like, how awesome is it that an average person now can interact with a powerful technology in this way, with natural language? Before, you had to be a programmer, uh, a scientist, you know, um, a machine learning expert to even interact with these capabilities. So how amazing is it that everyday people, to your point a minute ago, can have direct access to interact with some form of artificial intelligence, even if it's, you know, one small sliver of the AI universe? They're still able to interact with it. Um, and with layering, and we're gonna get to that in a minute, 'cause we're gonna talk about neurosymbolic, which I'm excited about, you can even have natural language interacting with deeper layers of AI: not just the LLM, but with the LLM being a translating layer to get you to deeper layers. Right? So that's so cool, and I don't want to downplay that. I don't want to minimize how awesome that is. And LLMs are incredibly capable of a lot of things. So I feel like on one hand I'm almost, uh, hyping how awesome LLMs are, but on the other hand, I'm documenting, through all of the ways I'm using them in real-world scenarios, how absolutely weak they are at so many things. And so it's like both messages at the same time. I almost feel like I'm schizophrenic, because I'm saying how amazing they are and how absolutely terrible they are at so many things. Uh, especially once you start getting to large context. I mean, they just absolutely fail, uh, right and left, uh, in a lot of different ways.
And I'm documenting the specific ways, structural, logical ways, and I'm planning on publishing about this. Um, so it's almost a detractor type of paper, but really I'm trying to showcase that this stuff is nuanced. It's not one or the other. You have to have some nuance, and you have to have some understanding about what they're actually good at and what they're not.

Lexi Pasi: 

I, yeah, I mean, I very much live in that world too, and I think it's in many ways a kind of lonely world to live in, but it's the one that I see more and more people gravitating to, because it's like: the more you get exposure to these things, and the more you understand how they work, the more that's just the inevitable position. And I think that a lot of that gap, um, between sort of, you know, how incredible they are and how limited they are, is actually the gap of language. And I talk about this a lot, um, when it comes to LLMs, which is that there's really kind of, like, two gates that must be passed through, um, for an LLM to be capable of a thing, right? And the first gate... I won't even get to the second gate, 'cause the second gate is about, like, math, the math that drives the LLMs. Setting that...

Jacob Andra: 

Maybe you can get to it at a high level. I'm curious.

Lexi Pasi: 

And, but I think that the primary gate is so powerful a limitation that it's just, like, the only one that needs to be paid attention to, uh, to a large extent, which is the gate of language, right? Um, language is incredibly powerful. It implicitly embeds so much meaning, um, you know, at least subjectively felt meaning, that it's easy to kind of, like, not think about what it is. But when you actually really start to get down into doing these nitty-gritty little tasks, especially really quantitatively nuanced tasks, tasks with a lot of numbers, tasks with a lot of details, that's not really what the syntax of our language is designed to do. Right? Like, language was designed, or evolved, to be able to accommodate, like, coordination between large hunting parties, for instance. Um, it's not really something that's intended to, um, optimize along, you know, 200 different components of a large manufacturing or logistics network, uh, down to, like, a very high level of precision. That's just not at all the kind of thing that created language. And I think that LLMs are incredible from a user point of view, because they allow us to bridge into, you know, deeper technology through a format that we're very familiar with. We see the world through language. We interact with each other through language. It feels very natural. It feels very human. That's a double-edged sword, for as powerful as it is, right? Because we often don't have to process the nuance or process the detail of what we're trying to do when we enter into it with a natural language prompt. So I actually got an email ad campaign from ChatGPT recently that opened that way. It was like: you don't have to have a question. Just start. And, like, that's true. I actually think that's a big part of the value prop of LLMs.
I also think that's where a lot of the danger comes from, because it can coax us into thinking we've kind of thought through a problem in all of its nuance, um, when we haven't. And so one of the benefits of coding something up is that you really had to get specific, to tell the computer explicitly what to do. So I think that's exactly where both the power and the danger come in, is that now we're...

Jacob Andra: 

Yes.

Lexi Pasi: 

using these systems to try to bridge that gap, and it's a very, very delicate gap to bridge.

Jacob Andra: 

Yeah, that's a great point. And I like how you bring that, uh, almost evolutionary, anthropological lens to, uh, human language and the way humans communicate. 'Cause you're right: modern society is the tiniest blip on the evolutionary timescale. And so we have no precedent for a large manufacturing organization, to say nothing of a, you know, 8,000-dimensional vector embedding model, you know, something like that, right? It's just beyond our ability to even wrap our minds around, practically.

Lexi Pasi: 

Exactly. I mean, it's the reason that we develop, like, Gantt charts and OKRs and stuff like that. Like, can you imagine if you had to run an organization without using a spreadsheet, right? Just by talking about all of the things that were happening in it? It would be just mind-bogglingly inefficient and overwhelming.

Jacob Andra: 

Absolutely. Let's transition into neurosymbolic AI. I have a feeling that this is going to be a growing trend. It's one that I'm leaning heavily into. It's one that my company, Talbot West, is exploring actively. Um, I wanna hear your definition of it, but it seems to me that part of what's going to be driving the wave of neurosymbolic AI is precisely the, uh, shortcomings of large language models: the ways in which they just are inadequate to many types of tasks we need them to do, we wish they could do, and they can't do. And the neurosymbolic route is sort of a way of, um, almost propping up or compensating or helping extend or augment their capabilities, uh, is what it seems to me. I'd love to hear your perspective, and then how you would, almost like, define the term neurosymbolic AI.

Lexi Pasi: 

Yeah, I'm a big fan of neurosymbolic methods. I've been a big fan of neurosymbolic methods from the beginning. Um, I think a lot of people don't realize it, but some of the early advances that LLMs made in, like, you know, the Math Olympiad problems, being able to do math and prove math, um, came from these neurosymbolic methods. In particular, they were combining LLMs with a language called Lean. Um, and just to describe, sort of at a very high level, what the symbolic part of neurosymbolic is: so "neuro," uh, is, you know, neural networks, um, so using some of those frameworks. Neural networks are at the basis of LLMs themselves, of deep learning; if you hear that, it's just, like, you know, a bunch of neural networks kind of working in concert together, uh, so to speak. The symbolic component is very structurally different. So the neural component is probabilistic, right? That's more in the spirit of the sort of, like, traditional statistical machine learning. The symbolic part is more in the spirit of, like, math or computer programming. So most people don't deal a lot with symbolic systems. Um, most of the work that we do is, uh, a little bit more probabilistic. Most of the thinking that we do is often kind of probabilistic: you know, we're thinking of things that might happen based on things we've seen happen. Symbolic reasoning is about deductive reasoning using a set of symbolic rules. So you're leaning into the syntax of the thing, right? You've got a bunch of symbols that are allowed, you've got the rules by which those symbols can be manipulated, and then you have this automated way to look at a set of symbols and say whether its deduction from prior sets of symbols was valid. So you can go through this, you know, chain of reasoning that's so tight that you don't have to make reference to the external world.
You can just look at the symbols and the rules for how those symbols can be manipulated in order to see whether that was a valid, you know, move or deduction. And the neurosymbolic systems combine both of these methods, right? You get the advantage of these more probabilistic methods that allow you to make these kind of, like, big inferential leaps, but then you get the rigor, uh, and the validity that comes from having the symbolic component to sort of check that each of those steps was made in a way that is consistent with the logic, really.
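
The idea of checking a derivation purely against symbols and rules, without reference to the external world, can be sketched with a toy rewrite system. The two rules below are invented for illustration and are vastly simpler than a real proof assistant like Lean.

```python
# Toy symbolic checker: a fixed set of rewrite rules over strings, and
# a verifier that checks each step of a derivation follows from the
# previous one by applying exactly one allowed rule. Rules are invented.

RULES = [
    ("A", "AB"),  # an A may be rewritten as AB
    ("B", "BB"),  # a B may duplicate itself
]

def one_step_valid(before, after):
    """Is `after` reachable from `before` by applying one rule once?"""
    for lhs, rhs in RULES:
        for i in range(len(before)):
            if before.startswith(lhs, i):
                candidate = before[:i] + rhs + before[i + len(lhs):]
                if candidate == after:
                    return True
    return False

def derivation_valid(steps):
    """Check the whole chain by looking only at symbols and rules."""
    return all(one_step_valid(a, b) for a, b in zip(steps, steps[1:]))

print(derivation_valid(["A", "AB", "ABB"]))  # True
print(derivation_valid(["A", "BB"]))         # False
```

In the neurosymbolic pattern, a probabilistic model would propose the chain of steps, and a checker like this would accept or reject each one, which is how validity survives the inferential leaps.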

Jacob Andra: 

Yeah, that's a great way of explaining it. And I don't have your formal academic background and probably the rigorous type of language you bring to bear to describe it. But one way that I like to think of it is that the neural networks can be just very loose in terms of, uh, being tethered to reality, right? They're very associative, and that's actually part of their power, but it's also part of their downfall; it's where hallucinations come from. So they're very associative, and symbolic systems are, um, very, like, rigid and tethered. And so it's kind of the combination of the two where you get, uh, the best of both worlds and the strengths of both. Would you say that's fairly accurate?

Lexi Pasi: 

That's exactly right. Um, and so I think that on paper it's kind of the perfect solution. The challenges of neurosymbolic systems, I'd say there's a couple. One is that it can be fairly computationally intensive, depending on how it's done and sort of the scope of the project; that tends to be less of the barrier. Often the barrier for implementing this in a lot of contexts is just that it is very difficult to symbolically formalize a system. So mathematicians did it at the turn of the century, right? Uh, and that took a lot of work. And math is a quite rigorous and defined field already. Um, trying to put that kind of rigor and structure around the domains that businesses would be interested in is doable; it just takes more work than, I think, you know, you would expect to put in trying to just use an LLM off the shelf.

Jacob Andra: 

Absolutely. And there are so many applications for a good neurosymbolic system. I mean, we could extrapolate many. I wanna get to one specific use case that's very core to what Talbot West does, but you could think of many. Like, for example, pharmaceutical research: the symbolic system could be anchored in all of the known, uh, chemical interactions, all of the known, uh, compounds, uh, the formal logic behind, uh, the drug discovery process, and all of this. And then the neural could be very loose and associative. And that's what's gonna find you new combinations that you've never thought of before. That's what's going to extrapolate out and do a lot of inference and, you know, come up with revolutionary ideas that can then be tested, et cetera. Right? But if it's not anchored in some underlying kind of, uh, symbolic structure to keep it tethered to reality, those associations are gonna be far less useful. They could be just extremely fanciful and not at all really applicable. And so you can think, across almost any industry, a very high-functioning neurosymbolic system would be incredibly valuable.

Lexi Pasi: 

Yeah, and I think that, you know, what you're describing is really kind of a generalization of what a lot of people might describe as neurosymbolic systems. So I think that, you know, one of sort of like the. Most canonical examples of a neurosymbolic system is the one I described that was used to solve math oly problems. I'm gonna like make a leap of reasoning and then I'm gonna use this paradigm of lean to check that that was correct. And if I'm using that paradigm, I know for a fact that what I end up with is valid because that's the power of lean in this formalization. The other way that you describe doing things, which it actually can be applied much more broadly, doesn't come with that. guarantee of validity, but it still structures and constraints the problems that for most business applications, you're still going to come up with an answer that's, you know, usable and reliable. Um, and so I think that in that paradigm, and I, I love this. Version because this version is one, it's very flexible. You can use a lot of different technologies. It doesn't even have to technically be a neural net. Um, so you don't have to have that neural, uh, component for the pattern to hold. But two, it really capitalizes on the domain expertise of the organization. Right. And in a lot of cases, that's that organization's differentiation. That's your moat. You know, you've been putting decades into gaining this knowledge. Um, and so this provides an opportunity for you to codify that in a way that AI and machine learning can capitalize on. So drug discovery is a great example, right? You can. Use that chemical knowledge to decide what components are relevant to then put into a machine learning model so that as it's forecasting out, it's going to be doing that on the basis of things that are more relevant. Um, that's the case for, you know, if you're trying to like forecast revenue, um, you want to be putting in the things that are relevant for your revenue, right? 
I want to be putting in different components of marketing spend, and some numbers related to different kinds of sales activity in the regions where it's happening. I don't want to then put in a bunch of additional superfluous information that's not relevant, because that's going to increase the amount of noise that the model I'm trying to train probabilistically has to find its patterns around.

Jacob Andra: 

Yeah, that's great. Um,

Lexi Pasi: 

Okay.

Jacob Andra: 

my business partner, Steve Karafiath, came up with what I thought was a cool analogy for neurosymbolic. Picture the early single-celled organisms that evolved in the oceans: just squishy little single-celled organisms. There are even squishy multicellular organisms in the ocean, such as octopuses and jellyfish. But on land, where gravity applies, a squishy organism doesn't fare as well. You start needing a skeleton, some structure to hold it up. If we were just blobs, we wouldn't do that well. So you could liken the symbolic aspect to the skeleton that gives it a form, a structure, and everything else would be the neural network in that analogy.

Lexi Pasi: 

You know, I think if that's the pattern we're going to be thinking about, which is probably the right pattern for most business problems, or at least a significant chunk of them, then you're actually more in the realm of machine learning, where that neural component doesn't have to be an LLM and doesn't even technically have to be a neural network. It can be any machine learning algorithm you put in there. The symbolic elements are found in your world model of what the relevant pieces of information are to that outcome. To bring in one other set of technical tools you might use in this context: say I'm creating a predictive model, or a model that automates a label, like a diagnostic model. I want to diagnose a certain condition. I'll use the diagnosis example, just because we've already talked a lot about prediction. What I need to know is: what are the inputs that go into making that diagnosis? What the probabilistic machine learning algorithm is intended to do is find the way those inputs combine to generate the diagnosis. One tool you can use to understand those inputs, to understand what generates a certain disease state, is something called a DAG, a directed acyclic graph, which lets you draw out the causal connections in the problem. What goes into this certain type of heart condition developing? I can put that into the input space. Or what are the telltale signs of this disease from a biological perspective? I can put that into the input space too. And then the machine learns the patterns between these things to produce the diagnosis based on the historical data.
So that is a pattern that doesn't require necessarily an LLM and if that's the route you wanna take, it's a really good way for businesses to, one, be able to train their own custom models from the ground up in a very kind of accessible way. And two, really utilize their own differentiated knowledge and produce a little bit of a moat around that model and that product.
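The DAG-driven input selection Lexi describes can be sketched in a few lines of Python. This is a minimal illustration, not a real clinical model: the condition, the features, and the causal edges are all hypothetical placeholders for an expert's world model.

```python
# Sketch: using a hand-drawn causal DAG (domain knowledge) to pick the
# inputs for a diagnostic model. Everything here is a hypothetical example.
from collections import deque

# Directed edges: cause -> effects, encoding the expert's world model.
causal_dag = {
    "family_history": ["arterial_stiffness"],
    "smoking": ["arterial_stiffness", "resting_heart_rate"],
    "arterial_stiffness": ["heart_condition"],
    "resting_heart_rate": ["heart_condition"],
    "coffee_brand": [],  # superfluous: no causal path to the outcome
}

def causal_inputs(dag, outcome):
    """Return every node with a directed path into `outcome` (its ancestors)."""
    parents = {}
    for cause, effects in dag.items():
        for effect in effects:
            parents.setdefault(effect, []).append(cause)
    ancestors, queue = set(), deque(parents.get(outcome, []))
    while queue:
        node = queue.popleft()
        if node not in ancestors:
            ancestors.add(node)
            queue.extend(parents.get(node, []))
    return sorted(ancestors)

features = causal_inputs(causal_dag, "heart_condition")
print(features)  # coffee_brand is excluded, so it can't add noise
```

Only the ancestors of the outcome node become model inputs; anything with no causal path to the diagnosis, like the deliberately irrelevant `coffee_brand` feature, never reaches the training data.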

Jacob Andra: 

That's great. I love that, because one of the things Talbot West is really big on is helping organizations differentiate through standardizing their accumulated tribal knowledge and then making it accessible and queryable. To date, we've mostly done that through RAG-type setups. But as you know, that's large language model based, and if you get a large enough body of knowledge, it starts to get a little bit squishy; it's not the most reliable in terms of retrieval. Plus, you can't extend it in some of the interesting ways the architecture you're proposing would allow. So I like that.

Lexi Pasi: 

Well, and even if we want to stick with RAG, there are ways these worldviews can help a RAG system perform much better. When you think about vanilla RAG, you have this very large knowledge base and you're chunking it up into fairly arbitrary blocks. Then, when a user queries it, the system vectorizes the query, checks it against all the blocks, and pulls the ones it thinks are most relevant based on how similar those vectors are. With a more nuanced approach to RAG, I might actually have a picture of what kind of information is going to be relevant for a certain type of query, and I can describe that through an ontology; you hear ontology-driven RAG discussed a lot. What that can accomplish is this: I have a user query, and there are cues within that query that indicate where, within that picture of knowledge, information, or functionality, the query lives. I can tag my data according to those different attributes, and when I go to search, I can run that search off those attributes. That bakes the institutional or domain knowledge on top and creates a much more powerful RAG system, even across a very large knowledge base.
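The ontology-tagged retrieval Lexi contrasts with vanilla RAG can be sketched as a filter-then-rank step. This is a toy illustration under heavy simplification: the chunks, the tags, and the two-dimensional "embeddings" are all hypothetical stand-ins for a real vector store.

```python
# Sketch: ontology-tagged retrieval vs. pure vector-similarity search.
# All chunks, tags, and toy 2-d "embeddings" are hypothetical examples.
import math

chunks = [
    {"text": "Q3 revenue by region", "tags": {"finance", "sales"}, "vec": (0.9, 0.1)},
    {"text": "Vacation policy",      "tags": {"hr"},               "vec": (0.8, 0.2)},
    {"text": "Pipeline conversion",  "tags": {"sales"},            "vec": (0.2, 0.9)},
]

def cosine(a, b):
    """Cosine similarity between two 2-d vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, query_tags, top_k=1):
    # Ontology step: keep only chunks whose tags intersect the query's
    # inferred attributes, THEN rank the survivors by vector similarity.
    candidates = [c for c in chunks if c["tags"] & query_tags]
    candidates.sort(key=lambda c: cosine(c["vec"], query_vec), reverse=True)
    return [c["text"] for c in candidates[:top_k]]

# A sales-flavored query: vanilla RAG might surface "Vacation policy"
# because its vector happens to be close, but the tag filter rules it out.
print(retrieve((0.85, 0.15), {"sales"}))
```

The design point is ordering: the symbolic tag filter runs before the fuzzy similarity ranking, so a chunk that is vector-near but semantically off-topic never reaches the candidate set.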

Jacob Andra: 

It sounds like that's almost a prototypical neurosymbolic setup, because you're anchoring the squishiness of the large language model to something. In this instance, it's sort of an indexing function you're describing.

Lexi Pasi: 

Yeah. You know, it's funny, I hadn't thought of ontology-driven or graph-driven RAG as a neurosymbolic system, but I think there's a solid argument that it is. You have these more rigid symbolic worldviews, and then you have the retrieval function and the generative function of the LLM. So I like that. I'm all for expanding the use of these terminologies to the extent that it really helps capture patterns that enable function.

Jacob Andra: 

Yeah, let me describe two use cases Talbot West is currently working on, where I think we're actively pursuing some neurosymbolic approaches. I'd love to hear your perspective on these, if you're game.

Lexi Pasi: 

Absolutely.

Jacob Andra: 

Okay. These are actually very related, so I'll rattle both of them off, and because they're so interconnected, you can weigh in on both. The first: with our clients, a lot of times we do an evaluative process. We're not just going in guns blazing, "let's deploy AI in your organization." There are so many intricate dependencies. I love that you used that example early on of not being equipped to deal with the complexity of a very large manufacturing organization. Our first task is to get a handle on the complexity: all the dependencies and adjacencies, and the direction those dependencies flow. It's like a tangled mess of spaghetti where everything depends on everything else, and usually it's grown in this organic manner without a lot of forethought. There's so much to get a handle on before we can recommend next steps in any kind of intelligent way. We do use large language models a bit in this process, obviously being careful not to share private data and all that. But we're running into some of these limitations: they're just so squishy, and they can't handle that much context. When you're talking to a mid-size organization with this level of complexity, the body of context gets astronomical. There's no way a large language model can hold it all in context. It starts forgetting facts; it gets squishy even when it has access to a lot of context. It can't remember what depends on what. So it's a lot of human cognitive load to do this process, and we would love a neurosymbolic architecture that helps anchor all of the known facts about the business.
As those become more and more established, it becomes this bedrock, if you will, that populates with the known facts, relationships, dependencies, and the directions those flow, so there's no squishiness about those. Then, with sort of an indexing or tagging system, you bring in a large language model as needed: let's now map this specific subtopic of how the new ERP data sources flow, where they need to flow from, and how ready they are. In this body of knowledge we probably have a lot of information about that, but it needs to be traced back through all of these precursor conversations. So you get the idea: there's just a lot to map. You can take any one thread and follow it, and you want that firmly anchored in known facts, not getting all squishy. That's the first use case. The second: we're building a platform called Biz Foresight that essentially helps business owners know how to value their company and get it ready for sale. It's AI powered. There's an AI chatbot the customer interfaces with, but it's also ingesting, both through document uploads and conversations with the business owner, a lot of facts and information about the business, and populating machine learning models so it can better inform, guide, and direct the business owner. It's based on a very defined, very robust methodology; the question is how to translate that into the right machine learning. This is a neurosymbolic system because, just like my first example, under the surface the business facts about this particular owner's business need to be anchored, validated, and verified. The large language model needs to reference the known facts, and it needs to be building out a profile of the business.
It can't get squishy with the facts, the logic, or the methodology it's all based on. So you can see how those two are related. There are a lot of other cases we're looking at for neurosymbolic systems, but these are the two front-burner ones. I'd love to hear your thoughts on them and how you might approach either or both.
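The anchoring pattern Jacob describes, a bedrock of validated facts with directed dependencies, where the LLM only receives the slice relevant to one subtopic, could be sketched roughly as follows. All of the fact IDs, statements, and dependencies below are hypothetical examples, not Talbot West's actual system.

```python
# Sketch of a "bedrock of known facts" store: validated business facts
# with directed dependencies. Instead of handing an LLM the whole
# (astronomical) context, you trace one thread and hand it that slice.
# All names and facts here are hypothetical illustrations.
from collections import deque

class FactStore:
    def __init__(self):
        self.facts = {}       # fact id -> validated statement (not squishy)
        self.depends_on = {}  # fact id -> ids of prerequisite facts

    def add(self, fact_id, statement, depends_on=()):
        self.facts[fact_id] = statement
        self.depends_on[fact_id] = tuple(depends_on)

    def context_for(self, fact_id):
        """Trace a thread back through its precursor facts, in order."""
        seen, queue, thread = set(), deque([fact_id]), []
        while queue:
            fid = queue.popleft()
            if fid in seen:
                continue
            seen.add(fid)
            thread.append(self.facts[fid])
            queue.extend(self.depends_on.get(fid, ()))
        return thread

store = FactStore()
store.add("erp", "New ERP needs regional sales feeds", depends_on=["sales_db"])
store.add("sales_db", "Sales data lives in the legacy CRM", depends_on=["crm"])
store.add("crm", "CRM is maintained by the ops team")

# Only this small, verified slice would go into the LLM's prompt:
print(store.context_for("erp"))
```

The symbolic store, not the model's context window, is the system of record; the LLM forgets nothing because it is never asked to remember, only to reason over the traced slice it is handed.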

Lexi Pasi: 

Yeah, I think these kinds of high-level questions of business strategy and operations are a really good example of a place where you can templatize a lot of these questions. If you have a good picture of how a business works, especially in a particular industry, and you've got a bunch of different configurations it can take, then you can go to a large collection of unstructured data, conversations with the business owner, and parse these things through in a way that makes sense, to populate this underlying model of what a business looks like and how a business operates, and then configure it in the right way for how this particular business operates. So I think that's a good example of such an application. And I'm always a big fan of this: if you're thinking about how to understand your business in this symbolic way, and you're doing solid financial modeling to the point that you can do financial scenario planning, you're already thinking about your business in this way. Then it's a matter of taking that knowledge and structuring it, in order to use it in combination with some of these more probabilistic tools to understand your business better, or to use that as the interface, rather than me going and running a bunch of different scenarios through my spreadsheet. It's funny, I think people probably don't realize how much they're already doing the kinds of activities that would allow them to architect out an agent for their use case. And in a lot of ways, I think it's easier for organizations to build their own agents. If you have a robust framework, you can really configure it to the customer, so that's still a possibility. But it takes more work to abstract something out to
apply to multiple businesses than it might to create a system for your own. So when businesses are thinking about how to approach agentic AI, off-the-shelf tools may not always be the best place to start. They may seem easy to access, but getting in there, getting your hands dirty, asking what this business is actually doing and how it's pieced together: if you do that difficult work up front, you may find that building an agentic system, or building out some AI tooling to help answer these questions, is a much lighter lift than it seems. But if you go in without having done any of that heavy lifting, without trying to understand your business, without abstracting or formalizing to the point where you can introduce these tools, and you're just going to "do AI" in a bunch of places until something works, you can burn through a lot of money with no ROI that way.

Jacob Andra: 

Absolutely. It's something we see all the time. And look, I don't want to poo-poo off-the-shelf tools, because oftentimes you can find a fit where they solve an immediate pain point. But I look at those as shallow implementations. They're not usually integrated into the heart of the business; they're deployed at a surface level, solving an immediate pain point and creating some efficiency. And great, if you find the right one that's getting you some ROI, that's perfect. But we're usually looking at how to integrate much more into the actual workings of the business, much more customized. There are all kinds of trade-offs, of course. If you deploy a custom solution, it has to be hosted and maintained, and there are all those costs. But to your point, if it's dialed into the business and actually integrated with what they really need, much more than an off-the-shelf solution could be, then the ROI is probably going to be there, and it's going to be well worth the additional cost and expense. But all of that has to be evaluated. It's a non-trivial problem to even do the evaluative process and intelligently recommend what's going to be best for a given company.

Lexi Pasi: 

That's kind of the chicken-or-the-egg problem, right? If you know your business well enough to quantify the ROI of a given AI initiative, it's probably going to be a successful initiative. If you don't know your business well enough to do that evaluation, you probably don't have a shot at making it impactful. And I think that's one of the difficulties: even being able to model the ROI of one of these spanning AI initiatives is itself an indication of the likely success of that initiative in the first place.

Jacob Andra: 

That's a brilliant point and uh, I think that's a great one to end on. Thanks so much for coming on the podcast again. This has been a great conversation.

Lexi Pasi: 

Absolutely. Thanks for inviting me back.

Industry insights

We stay up to speed in the world of AI so you don’t have to.

Resources

Subscribe to our newsletter

Cutting-edge insights from in-the-trenches AI practitioners

About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.

Info

The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
