Episode 11 of The Applied AI Podcast

Jacob Andra and Bennett Borden discuss constitutional AI

About the episode

Talbot West CEO Jacob Andra interviews Clarion AI CEO Bennett Borden on ensemble AI approaches. 

Bennett Borden served eight years as a CIA data scientist identifying patterns in digital trails. He then went to Georgetown Law and specialized in automated decision systems. Now, as CEO of Clarion AI, he runs the only law firm that operates as both legal counsel and development shop, building AI systems that drive business value while maintaining legal compliance.

This episode explores multi-agent AI architectures. Borden explains constitutional AI, developed by Anthropic, which programs AI behavior through plain language directives rather than thousands of lines of code. Building with generative AI resembles forming psychology rather than writing deterministic algorithms.

Jacob pushes on the practical challenges of large context windows, where language models become unreliable when processing massive amounts of information. He describes the wobbliness that emerges when models forget what's over here when they're processing over there, and discusses neurosymbolic approaches that use ontological skeletons to help LLMs maintain context. This leads to a deeper discussion of ensemble architectures where specialized agents handle bounded contexts rather than expecting single models to manage everything.

Real implementations combine retrieval augmented generation with constitutional AI and adversarial oversight modules that audit primary agent behavior. These patterns, where modules challenge each other's findings rather than simply cooperating, create robust outcomes that monolithic systems cannot match.

The conversation covers practical enterprise applications. Back office automation handles repetitive, data-centric tasks where companies apply the same judgments repeatedly. Knowledge worker augmentation transforms how lawyers, consultants, and accountants work. Borden estimates 80% of legal work can be better handled by AI, freeing professionals to focus on the quintessentially human 20% that requires judgment and strategic thinking.

Jacob probes the definition of agentic AI, noting that almost no one knows what they mean when they use the term. He identifies at least four or five common but conflicting connotations. Borden clarifies that agentic AI is fundamentally a recommendation engine on steroids, where an AI subcomponent makes decisions based on parameters it's given as part of a larger orchestration. This aligns with Talbot West's emphasis on coordinated systems rather than autonomous agents making high stakes decisions without oversight.

Data value extraction emerges as a critical theme. Companies sit on information locked in emails and file systems. Properly curated knowledge bases combined with constitutionalized AI surface insights that distinguish products and services. A retail client's app pulls weather and event data to adjust operations dynamically, increasing cookie production before predicted afternoon rushes. Borden describes predictive compliance systems that monitor for behavior patterns correlating with fraud.

The discussion addresses ensemble architectures that scale from individual modules to nested systems of systems. Specialized modules handle discrete tasks, feeding into domain ensembles that synthesize insights. Higher level meta-ensembles correlate patterns across domains, identifying coordinated activities invisible when viewing any single domain alone. Both speakers emphasize explainability and human oversight, with clear audit trails for every decision.

Talbot West delivers Fortune 500 AI consulting to midmarket and enterprise organizations through its FRAME methodology and Cognitive Hive AI architecture.

Episode transcript

Jacob Andra: 

I'm here with Bennett Borden of Clarion AI. Bennett, thanks for being on the show. Why don't you tell us a little bit about yourself?

Bennett: 

Thanks so much, Jake. I'm excited to be here. So I am a lawyer and a data scientist, and I became a data scientist kind of by accident. I was almost done with my degree, and these two guys in suits were there and they're like, hey, we're from a federal agency and we're recruiting, do you want to talk to us? And I'm like, sure, your jobs are awesome. And so it turned out they were from the CIA, and they wanted me to work in a data analytics shop that they were just setting up. And I'm like, hmm, I don't know anything about data. I hate math. I hate statistics even more. And I failed calculus. They're like, yeah, we know. That's not why we want you. So this is the early nineties, right? 1992, and the agency was setting up a shop to take advantage of a new kind of very individualized data trail that was being created. If you think of the early nineties, the World Wide Web was first coming out of academia, email was new, cell phones were just coming on the market in giant brick bags, you know, the leather bags you could carry around and be so cool. But they rightfully saw that that digital trail was simply going to get richer and richer and richer. So what could we learn about individuals and groups of individuals based on this digital trail? That's what they were starting to do, and so I spent the next eight years doing that. Right? Could you identify good guys or bad guys, whatever that meant that day? Could you predict their behavior? Could you influence, or even undermine, their behavior? And so it was

Jacob Andra: 

CIA.

Bennett: 

Yes, yes. Literally working for the CIA for eight years. But tremendous, like crazy cool science at the time, right? Now, Jake, as you know, everybody, every social media company, every cookie, every app tracks and tries to predict and influence what we do. But at the time it was really quite cutting edge, and very interesting work. But I'd always wanted to go to law school. And so after eight years there, I went to Georgetown Law for a law degree, and then on to NYU for a graduate degree in data analytics. So my whole career I've been this data scientist lawyer, and about 10 years ago I focused my practice entirely on automated decision making systems, or augmented decision making systems. So, old fashioned algorithms. Everybody talks about the algorithm that runs your life. Well, there's lots of them that run your life. Think every time you apply for a mortgage or a loan or a job, everything you see on your social media page, everything you see in the advertising on a webpage, that's all because these algorithms have put you into a group of similar people, and they treat all the people in that group the same way. And so there's fairness around that, there's law around that. And so what I did was counsel clients on how do you design, monitor, and test these algorithmically based decision making systems. Then when gen AI came out, I represented all the big gen AI companies, and so I was very involved in all the meetings at the White House and these congressional hearings and EU Parliament and AI ministries around the world, especially in the Global South, and really came to believe very strongly that we are at this tremendously important, pivotal moment in the history of mankind, right? That we need to learn not only how to technologically control these incredibly powerful, wonderful, miraculous AI systems, but also what's the public policy around that?
Like what laws and regulations do we need? And so we do a great deal of work with federal agencies and state regulators and legislatures, really around the world. And so we launched Clarion for that reason: to be able to do things that I simply couldn't do from the big law platform that I was at before.

Jacob Andra: 

Yeah, that's great. I was gonna jump to Clarion AI. It seems to sit at the intersection of generative AI capabilities, legal coverage, you know, the legal ramifications of AI, and some amount of actually provisioning companies with AI capabilities. So you seem to sit at a nice intersection there. There's some overlap with what my company Talbot West does with AI enablement, but you seem to have your own unique niche. Why don't you talk a little bit more about what you're doing there at Clarion.

Bennett: 

Thanks. Yeah. You know, we love this idea, right? We're literally the only law firm that we've ever found that is also a dev shop. We're an AI-only boutique law firm. So clients call us up and say, hey, we have this question: can we do this with AI, and what are the legal, privacy, and regulatory ramifications? But we also set up entire AI enablement strategies. What we always say to our clients is, we help you decide how to decide to use AI. So things like a steering committee and AI policies, but really the meat of it is: how do you properly assess the benefits and risks of a particular use case? Is that risk acceptable? Can it be reasonably mitigated? And is that acceptable to the company? And then the other side of it is what we call our foundry. This is the dev shop. We actually build AI based systems, especially multi-agent systems, trying to orchestrate the emergent behavior that comes out of these systems, which is just crazy cool. So we have both the counseling side and then the builder side of what we do.

Jacob Andra: 

Yeah, that's great. And you're mostly working with generative AI and large language models, pairing multiple of these together to work on some kind of larger, complex task or workflow. Is that correct?

Bennett: 

Yeah, though we're finding more and more that these kind of hybrid systems work best. Because good old fashioned classical AI, as of, you know, three years ago, works great, right? These classification and recommendation engine systems work really well on their particular line of sight. And so very often we are taking classical AI systems and using their output as an input into a more generative system. You get this layered kind of approach to understanding your data better. And then on the generative AI side, it's all agents, and really, how do you form the psychology of these agents? Because it's a very different world from algorithms, where you're able to write 10,000 lines of code and know precisely, exactly what it's gonna do. That is not how you build with generative AI. It is so much like you're forming the psychology of an agent. And then of course you build in the other agents that oversee the behavior of the primary agent.
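As a rough sketch of the layering Borden describes, here's what classical-then-generative might look like in miniature. Everything here is a hypothetical stand-in: the rule-based classifier substitutes for a real classical model, and the generative step is reduced to assembling a prompt that carries the classical output forward.

```python
# Hybrid layering sketch: a deterministic classical classifier runs first,
# and its label becomes structured context for a generative step.
# All names are illustrative; the model call itself is omitted.

def classical_classifier(ticket: str) -> str:
    """Old-fashioned rule-based classification: precise, auditable, deterministic."""
    text = ticket.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "access"
    return "general"

def generative_prompt(ticket: str) -> str:
    """Feed the classical label into the generative layer as part of the prompt."""
    label = classical_classifier(ticket)
    return f"Category: {label}\nDraft a reply to this ticket:\n{ticket}"
```

The design point is that the deterministic stage stays testable on its own, while the generative stage inherits its output rather than re-deriving it.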

Jacob Andra: 

Yeah, there's so much you said there that I want to dive into. I mean, first of all, I love how you talked about pairing generative AI with some of the other classical AI machine learning models that are much more deterministic and less prone to the types of shenanigans that large language models are. That's something we talk about a lot and promote heavily, so we're very aligned there with your approach. I wanted to jump into agentic AI, which you brought up, 'cause everyone's talking about it. It's the big buzzword right now. A couple podcast episodes ago I talked with a machine learning researcher, Dr. Alexandra Pasi, about how almost no one knows what they mean when they say agentic AI. They're just sort of jumping on a bandwagon. And I've identified at least four or five common connotations that people mean, such as AI that interfaces with a customer, or AI that does something autonomously without you needing to be involved. I don't think any of these are a great definition for agentic AI, and I want to hear yours. If somebody asks you, what is agentic AI, what is the clearest definition you would give for this term?

Bennett: 

I agree with you completely, and this is one of the reasons why I love the work that Talbot West does. You guys think about the world properly and distinctly and very deliberately, and it's one of the reasons why I've been so impressed with what you do, Jake.

Jacob Andra: 

Thank you.

Bennett: 

And so at its very basic level, right, think about an agent: a lawyer is an agent. There's a principal and there's an agent, and the agent does stuff that the principal wants. And so agentic AI is really just a recommendation engine on steroids. Here's an easy example. We built a chatbot for the National Center on Sexual Exploitation. Like many organizations, they had tremendous amounts of data and guidance for children and for parents, teachers, police officers, policy makers, right? But their data was non-dynamic. It's just on their website, kind of buried in places. And so we took all of this data, put it into a retrieval augmented generation database, and then you point a model at it and say, hey, this is where I want you to get your answers from, and only here. So that cuts out all the hallucinations and all the junk that's in these LLMs' brains. But then you constitutionalize that agent. Constitutional AI is this concept that Anthropic came up with 18 months or so ago, and we have found it to be by far the most effective way of controlling the behavior of these generative AI systems. There are some great resources out there, especially on YouTube coming out of Anthropic, and you can read the original research paper if you really want to dive into the science. But you literally program the prime directive and the laws of robotics into these agents. And then the real key to it, and this is where a lot of people fall down, is that you have to build in sensors that measure how these systems are performing and spit out metrics that prove the agent is acting within parameters, whether it's legal, ethical, internal policy, whatever it is, right?
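A minimal sketch of the pattern Borden describes here, RAG plus a plain-language constitution, might look like the following. The keyword retriever stands in for a real vector store, the function names are illustrative, and the actual model call is omitted.

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by keyword overlap (stand-in for vector search)."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

# Plain-language constitution: behavioral rules, not code.
CONSTITUTION = [
    "Answer only from the provided context; if the context is silent, say so.",
    "Be accurate and compassionate.",
]

def build_prompt(query, documents):
    """Assemble the constitutionalized, retrieval-grounded prompt for the model."""
    rules = "\n".join(f"- {r}" for r in CONSTITUTION)
    context = "\n".join(retrieve(query, documents))
    return f"Rules:\n{rules}\n\nContext:\n{context}\n\nQuestion: {query}"
```

The point is that both the grounding ("only here") and the behavioral directives arrive as plain language in the same prompt, rather than as thousands of lines of code.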

Jacob Andra: 

Yeah, so a sort of auditing and explainability function, a feedback loop to make sure these things are staying within parameters. And I totally agree with that. If I may, it almost sounds like, and this is kind of my own favorite definition of an agent, 'cause again, this is a loose term, and then I want to get back to this constitutional AI thing you're talking about 'cause it's fascinating. The most understandable or sensible idea of an agent would be an AI subcomponent that has to make decisions, but it makes those based on parameters it's given, as part of a larger orchestration or in service of a larger orchestration.

Bennett: 

Exactly. That's a perfect description of it, because it really is an orchestration. So, you know, a chatbot, a knowledge agent, whether internal or external facing, that's fairly straightforward. But you still want it to be accurate, and you want it to express the ethos of the company, right? It's not just giving a correct answer, but giving an answer in the way that the company wants itself to be represented, whether internally or externally.

Jacob Andra: 

Absolutely.

Bennett: 

And the other side of it is we always build two agents at least, right? So you've got the primary agent that does the thing, and then you've got another agent that does nothing but oversee the thing. And this adversarial AI, or warrior bots as they're sometimes called, we have found to be tremendously powerful in guaranteeing the behavior of these agents.
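Here's a toy sketch of that primary/overseer pairing. Both "agents" are stubs rather than model calls, and the grounding check is a naive stand-in for what a real adversarial auditor would evaluate.

```python
# Two-agent adversarial pattern: a primary agent answers, an oversight
# agent audits the answer against the source context, and ungrounded
# output is rejected. All functions are illustrative stand-ins.

STOPWORDS = {"what", "is", "the", "a", "of"}

def primary_agent(question, context):
    """Stub primary agent: return context sentences sharing a content word with the question."""
    q = set(question.lower().split()) - STOPWORDS
    return [s for s in context if q & set(s.lower().split())]

def oversight_agent(answer, context):
    """Stub adversarial overseer: flag any claim not grounded in the source context."""
    return [claim for claim in answer if claim not in context]

def run(question, context):
    """Primary answers; overseer audits; anything flagged causes a rejection."""
    answer = primary_agent(question, context)
    return answer if not oversight_agent(answer, context) else []
```

In a real system both roles would be separate model calls with separate constitutions; the shape that matters is the audit gate between generation and delivery.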

Jacob Andra: 

Yeah, even with the looseness of large language models? Because one thing I've found is that large language models are very prone to get super wobbly and loose the more context you give them, and so if the

Bennett: 

Precisely.

Jacob Andra: 

context you give them is really large, have you found that this constitutional approach is helping them manage large contexts and stay on track better?

Bennett: 

Very much so. And it's so interesting. The way you start with constitutional AI is that you give the bot a persona statement, and this is just plain language text, a prompt, right, that you're putting in. So for the NCOSE bot, we were like: you are a chatbot that interacts with parents, children, teachers, leaders, and police officers to give accurate answers on questions relating to child sexual exploitation. That was its persona, and just giving it that, you'll be amazed. And you can do this in just a chat session, right? Everybody listening should play with this. Give it a persona.

Jacob Andra: 

You can build custom GPTs or Claude Projects or Gemini Gems and do all the stuff that you're saying. Absolutely. What I'm interested in is the counterbalancing of them with each other, and I'm still a skeptic, having really pushed these things to their limits with large amounts of context, and I'm talking pretty massive, where they start forgetting what's over here when they're over there and they just can't hold it all. We're exploring some neurosymbolic approaches with an underlying ontological skeleton that kind of props them up and supports them, helps them keep more context in memory. So yeah, very interested. Talk more about these large context projects and how they're performing when you use this constitutional approach.

Bennett: 

We've run into the same problem, right? The bigger the context, the more, I love your phrase, wobbly it gets. And so we actually learned a lot from how DeepSeek went about creating their large language models, this ensemble of experts, as opposed to just one giant model. So when we build in larger contexts, we often will build a bunch of different agents that have specific tasks and specific roles, so their contextualization is limited, and you get better results out of it. But then it gets really, really interesting when you've got these multi-agent systems, where, if you aren't really careful with how you define their constitution and behavior, the agents will compete with each other or they'll hide information from each other. But if you get it right, you get this incredible emergent behavior. This is why I keep coming back to: this is forming the psychology of these systems. And you need, like, one ring to rule them all, right? There's this overarching orchestration bot, I guess, that monitors how each of the agents is acting and can tweak their constitutions automatically to get better results, based on feedback from the user or just the effectiveness of what's coming out of the QC bots.
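A toy version of that orchestration loop might look like this, assuming hypothetical names throughout: a specialized agent with its own constitution, a QC bot that scores the output, and an orchestrator that appends a corrective rule and retries when the score falls short. The stubbed `run` method simply shortens its output as the rule list grows, standing in for a real model responding to stricter instructions.

```python
class Agent:
    """A specialized agent with a bounded role and a plain-language constitution."""
    def __init__(self, role, constitution):
        self.role = role
        self.constitution = list(constitution)

    def run(self, task):
        # Stub for a model call: output gets shorter as the constitution grows stricter.
        limit = max(3, 12 - 2 * len(self.constitution))
        return " ".join(task.split()[:limit])

def qc_score(output, max_words=8):
    """Toy QC bot: penalize overly long outputs (stand-in for real behavioral metrics)."""
    return 1.0 if len(output.split()) <= max_words else 0.0

def orchestrate(agent, task):
    """Orchestrator: run the agent; if QC fails, tweak its constitution and retry."""
    output = agent.run(task)
    if qc_score(output) < 1.0:
        agent.constitution.append("Be more concise.")
        output = agent.run(task)
    return output
```

The key shape is that the orchestrator never edits the agent's code, only its plain-language constitution, and the QC metric closes the loop.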

Jacob Andra: 

Yeah,

Bennett: 

So it's—

Jacob Andra: 

It's a central orchestrator, and we do similar with some of our ensemble approaches, so I totally get what you're saying. And you're right, it becomes much more about kind of tweaking the system and making sure you've set everything up right than it is anything else.

Bennett: 

And it's really, you know, we have found that the less you try to control these things, the better they do. When constitutional AI first came out, old fashioned coders would write these paragraphs of persona statements, and every constitutional rule was like a giant paragraph, and it just freaked its brain out. So we started doing some testing to try to get these constitutional rules down to the least amount of information possible. One of the things we tested with the NCOSE bot: we ran a thousand questions through it and had it score itself on each one. And then we just asked it, hey, when we said be compassionate, how did you interpret that instruction? And it describes its thinking, and you can literally have a conversation with it and tweak that understanding. Then we would feed it back three different things it generated, that it gave a high, medium, and low compassion score, and say, explain to me why you scored these differently. And it would explain itself, and then you can say, well, pay more attention to this and maybe, you know, not so much attention to that. And you watch it transform itself. Because if you think about these large language models, it knows every definition of compassionate that human beings have ever described, that it's snorkeled into its brain, right? So much expertise is there.
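The loop Borden describes can be sketched roughly as follows. The self-scorer here is a keyword stub standing in for the model rating its own output, and the rubric dimension and function names are hypothetical.

```python
# Self-evaluation loop sketch: score a batch of answers on a rubric
# dimension, bucket them, and build a reflection prompt asking the
# model to explain the score differences. All stubs are illustrative.

def score_compassion(answer):
    """Stub self-scorer: a real system would ask the model to rate its own output."""
    soft = sum(w in answer.lower() for w in ("sorry", "help", "support", "safe"))
    return "high" if soft >= 2 else "medium" if soft == 1 else "low"

def bucket(answers):
    """Group answers by their self-assigned score."""
    buckets = {"high": [], "medium": [], "low": []}
    for a in answers:
        buckets[score_compassion(a)].append(a)
    return buckets

def reflection_prompt(buckets):
    """Feed one example per bucket back and ask the model to explain itself."""
    picks = {k: v[0] for k, v in buckets.items() if v}
    lines = [f"{k.upper()}: {v}" for k, v in picks.items()]
    return "Explain why you scored these differently:\n" + "\n".join(lines)
```

In practice the reflection prompt goes back to the model, and the answers it gives are used to tighten or loosen the constitutional rule being tested.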

Jacob Andra: 

And there's an interesting dynamic there where it might know all of this, but it doesn't reflect on it unless you ask the right probing questions and the iterative process you described. I love doing that with large language models. You use them to refine themselves, to reflect back to you. It's, it's brilliant. I wish more people understood how to do that. And so you iterate with the large language model itself, feeding the right questions back to it, having it reflect on its own performance to self-improve. And I love that.

Bennett: 

So interesting. Like, honestly, Jake, I get goosebumps every time I work with these systems. It's just so cool to watch their behavior emerge. Do you remember the movie A.I., way back, with Haley Joel Osment?

Jacob Andra: 

I, I think so vaguely.

Bennett: 

Yeah. So it's like this little robot kid, basically this AI kid, and its psychology, it learns over time and it develops over time. And that's really what this is like. And anybody can do it, because unlike algorithms, where it takes massive coding, ugh, that world of typing 10,000 lines of code, this ain't that.

Jacob Andra: 

Yeah, yeah,

Bennett: 

It's literally plain language prompting and then encoding it as a persona statement or these constitutional rules.

Jacob Andra: 

Yeah, absolutely. I wanna take the conversation in the kind of applied AI direction. I want to hear some of the really awesome real life use cases where you're helping companies deploy AI that's actually driving significant value within the organization. I wanna hear real world stories from you. Do you have any of those to share?

Bennett: 

Sure, yeah. You know, a lot of the benefits that companies are first looking for are kind of back office efficiencies. The best use cases for AI are tasks that are highly repetitive, data-centric, and where you're applying the same judgments over and over again, right? So you're seeing a lot of projects around processing of HR requests, vacation or paternity leave. Plus this is where the hybrid comes in so well. You're just speeding things up, and you get better consistency much faster, which greatly improves your capacity to do more work.

Jacob Andra: 

Yeah.

Bennett: 

So, a lot of kind of back office stuff.

Jacob Andra: 

Yeah. And let me ask you a question before we move on to the next category, 'cause a lot of that back office stuff is a pretty known quantity. I mean, from one company to another, managing vacation policy, et cetera, all the HR stuff is pretty similar across companies. So now you have an entire universe of SaaS products out there that have AI baked into them, and they're trying to solve all of these problems. So with your clients, when they say, hey, I wanna make this process more efficient, or you're doing some kind of discovery and that's kind of a low hanging fruit, are you going out there and helping them evaluate existing SaaS products? Are you going right to custom solutions? How are you approaching that?

Bennett: 

Both, but we tend to find that custom solutions are better. Because SaaS products, by definition, and there's some really great ones out there, but they're generalized. They're meant to apply to a bunch of different companies, and so the company almost has to adapt its processes to the product.

Jacob Andra: 

Yeah, and there becomes almost a trade off, right? With a custom solution, you have to invest more into it, then you have to host and maintain it, but you get it very customized to you. And I find that can cut either way, where sometimes it is better to go custom, and sometimes it is better to accept good enough and go SaaS. So that's an interesting one. When you're building your custom ones, are you setting them up with their own hosting and usually then handing it off to their internal teams? Like, here you go, now the internal IT team takes that over and manages it with their own internal infrastructure?

Bennett: 

Almost always. Though there are times when some companies want us to manage it.

Jacob Andra: 

Yeah.

Bennett: 

Like, one of my favorite things that we have built, we call it Aletheia, which means to unconceal, in Greek. Basically we built a series of modules. The first one is a data intake, so we monitor the world. We built it for ourselves first, because we wanted to know everything that was going on in AI, worldwide. So think scraping news sites, government websites, social media, Reddit, all the places where people talk about AI. It pulls that data in, then it runs through a series of classification modules: is this regulatory, is it a technological development, whatever. Then it runs through a series of grading bots, because some stuff we care about and some stuff we don't, so we're building in what we care about. And then there's this recommendation engine at the end, and it can be anything from, you know, write this client alert or LinkedIn post, or put together a digest for a specific client. And what we found is that we build everything modularly, almost like Lego bricks. So by changing what we're looking for in the first module, and then changing the classification, grading, and recommendation, you can monitor anything: your company, your competitors, where regulation is headed, any market or supply chain, rare earth metals, like anything,
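A Lego-brick sketch of that four-stage pipeline, intake to classification to grading to recommendation, might look like the following. Every module is a trivially simplified stand-in; the point is only the swappable-stage shape, where replacing any one function retargets the whole monitor.

```python
def intake(sources):
    """Stage 1: pull raw items in (stubbed as flattening in-memory feeds)."""
    return [item for feed in sources for item in feed]

def classify(item):
    """Stage 2: tag each item (stand-in for a classification module)."""
    return "regulatory" if "regulation" in item.lower() else "technology"

def grade(item, interests):
    """Stage 3: keep only what we care about."""
    return classify(item) in interests

def recommend(item):
    """Stage 4: turn a graded item into an action."""
    return f"Draft client alert: {item}"

def pipeline(sources, interests):
    """Chain the modules: intake -> classify -> grade -> recommend."""
    return [recommend(i) for i in intake(sources) if grade(i, interests)]
```

Swap the intake for a different scraper or the grading for different interests, and the same skeleton monitors a competitor, a supply chain, or a regulatory docket.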

Jacob Andra: 

Yeah, you've built

Bennett: 

the recommendation.

Jacob Andra: 

a generalizable classification engine, which is really, really cool.

Bennett: 

That last part, the recommendation engine, we're finding can actually not just tell you, here's what you should do, but it can do some of that stuff. Like, there's a typhoon in the Sea of Japan, and so it's gonna have an impact on your supply chain, so reroute your orders through here, here, and here. With a human in the loop, of course, but then it can literally connect to your supply chain system or order system or whatever. So it's truly agentic when it gets to that recommendation layer.

Jacob Andra: 

Yeah, and I'm glad you mentioned human in the loop, 'cause at key points that is so important. You don't want to turn these systems loose. I want our audience to understand that almost never, especially with high stakes types of situations, are you gonna turn an AI system loose. So I just wanna make that clear: human always in the loop, but AI doing a significant percentage. But let's move on. You talked about back office, and you talked about this recommendation engine. What are some other instances or ways where you're seeing AI drive significant value in the enterprise?

Bennett: 

It's especially in the knowledge worker class, right? So take lawyers. We are dealers in information. That's all we do, no matter what your practice area is. You take in information to analyze your client's situation, you compare it to the law, right, how does their situation compare to what they should be doing, you add your legal acumen in, and then you deliver an information based product, whether it's a licensing agreement, a contract, an argument in court, a deposition outline. They're all information based. And our research shows, like many reports you've seen out there in the world, that 80% of what a lawyer does today is better done by AI. Now, the 20% is quintessentially and essentially human, right? And so we always tell our clients: think Iron Man, not Terminator. You're not unplugging a person and plugging in a bot. You're taking a person that you've spent time and effort and money on training, and you are surrounding them with enabling technology that augments and extends their capabilities in ways they couldn't do on their own.

Jacob Andra: 

That's the way we think about it too, that augmentation. Absolutely.

Bennett: 

And so think lawyers, management consultants, accountants, all of the knowledge worker class that are information based and making decisions or recommendations over and over again. Those are just beautifully optimizable by AI.

Jacob Andra: 

The legal tech landscape is just booming. Again, back to the SaaS products, there are now tons of SaaS products for law firms, accounting firms, every knowledge discipline under the sun, where these SaaS products are coming in to fill a lot of these voids using AI. And of course you can do custom solutions as well. But I wanna take it to, say, your average upper middle market to enterprise organization that has a lot of different departments. They have a legal department, which is easily disrupted by a lot of this AI technology. They have a finance department, so there's financial analysis. They have back office HR, which you touched on. So if you take the average enterprise or upper middle market company that has all these different divisions and departments, where are you seeing the most value driven for a company like that with AI solutions?

Bennett: 

Yeah. You know, a lot of it is understanding your customers. Like, for your services group, there's so much information in your data, in your interactions with clients, that's locked up in your email and Slack and, you know, OneDrive or Google Drive. That data is just locked away. And so where we have found some of the most powerful transformations is actually in the provision of goods and services: getting that information and putting it into a well curated database, right, because garbage in, garbage out, and putting it into these knowledge bases that you then point a model at. And different models do different things, they have different strengths, so very often you'll use an ensemble of models, because they do have different strengths, and you get insight out of it. It's like where we are seeing pharmaceutical companies and material science companies coming out with new molecules and new antibodies and new treatments, interventions, right? Because they've been able to capture this data and point a model at it and get these incredible insights out of it. So beyond just back office efficiencies, which are fairly obvious, it's: what information do you have about how you are providing goods and services, and how can you unlock that information by making it available to an LLM, properly constitutionalized and QC'd, to distinguish yourself and make your products better?

Jacob Andra: 

I love that. And that's one near and dear to my heart. I talk a lot about the proper use of data, knowledge management. And I would even expand out further. So yes, customer-facing information, where you can now collect that, put it in one place, and then find the patterns within it. Absolutely low-hanging fruit. But there's tons of other stuff: supply chain data, production data if you're a manufacturer, logistics routing data, all this stuff, right? Companies are sitting on so much data that they don't actually know what to do with. A lot of times when we come in, the strategy is just identifying what that opportunity is, maybe one they haven't even thought of, and then how you would unlock that value. And so you've talked about one avenue for that. There are many others, but I like to use the analogy of: imagine you bought a ranch and you discovered that the ranch had this rich vein of gold, but it was loosely distributed. Very fine gold particles distributed with the rest of the soil, and it had to be refined to even extract the value. So the value is there, but it has to have something done to it. You can't just take a wheelbarrow of this soil to your bank and deposit it; they're not gonna accept it. But if you do the work to extract the value from it, you then have massive value sitting there. And that's the way I see this knowledge management, data value extraction process. Companies are sitting on tons of it. And often when people think AI for companies, they are thinking the efficiencies, the back office HR stuff, the making humans more efficient, but they're not even thinking about the data unlock and the management of knowledge, and how that's a huge, huge force multiplier. So I love that you brought that up. Do you have any other... go ahead.

Bennett: 

Yeah, and how you refresh that, right? So a lot of people, like, you've seen some headlines: "RAG is dead." Well, that's stupid. Of course it's not dead. RAG is by far... but what they're saying is that a static database...

Jacob Andra: 

And just for our audience, I'll clarify: retrieval augmented generation, where you give a large language model access to a knowledge repository. Yeah, it's not dead. Go on, continue your thought.
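
For readers who want the mechanics behind Jacob's definition, here's a minimal sketch of the retrieval step: score stored documents against the query, then stitch the best matches into the prompt sent to the model. Real systems use embeddings and a vector store; the keyword-overlap scoring and sample documents here are simplifications for illustration:

```python
# Minimal retrieval-augmented generation (RAG) skeleton. Documents are
# scored by word overlap with the query, and the top matches are stitched
# into the prompt. Production systems use embeddings and a vector store.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    query_words = tokens(query)
    scored = sorted(documents,
                    key=lambda doc: len(query_words & tokens(doc)),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund policy: customers may return goods within 30 days.",
    "Shipping: orders leave the warehouse within 2 business days.",
    "Warranty: electronics carry a 1-year limited warranty.",
]
prompt = build_prompt("What is the refund policy for returns?", docs)
```

The resulting prompt grounds the model's answer in the retrieved context rather than in whatever it memorized during training.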

Bennett: 

Yeah, so the point of those articles is: if you gather information once and put it into a RAG database and point a model at it, that will give you whatever knowledge is in there at that time. But often you want to refresh that, right? You want to add new knowledge to it. And so you've gotta build in a way to do that, to recognize what's new, is it valuable, and does it need to go into these systems? Right? I think the coolest thing that we've ever built is what we call predictive compliance systems. It's really Minority Report, but... so, you know, I was a white collar criminal investigator for the first 10 years of my career, right? Using data to figure out who did what, whose fault it is, and how much trouble you're in. And it's easy to catch somebody who's done something wrong once they've done something wrong. The critical path is: can you predict that someone's about to do something wrong and intercede to prevent it? You can. If you have enough data, you can model anything. And so we built this system that basically taps into email, collaboration systems, things like that, and watches for certain kinds of behavior. That behavior correlates very highly with somebody committing fraud or somebody ripping off the company, inflating their expenses, whatever. So you can actually, with pretty good accuracy, figure out when something's about to go wrong. Now, how you intercede in that is tricky, right? You can't fire someone for something they almost did. But there's all kinds of ways to intercede, to head that off, that still stay legally compliant. So we called this project Minority Report internally, but, you know, that's a little creepy. So, predictive compliance. That is a pretty cool use of both classical AI combined with generative AI.
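
The classical-AI side of a system like Bennett's predictive compliance idea can start as simply as baselining behavior and flagging outliers. This sketch uses a z-score over per-employee expense amounts; the data, the threshold, and the choice of a personal historical baseline are assumptions for illustration, not Clarion AI's actual method:

```python
# Sketch of a classical anomaly score for expense monitoring: flag recent
# expenses that deviate sharply from an employee's own historical baseline.
# Data and the 3-sigma threshold are illustrative assumptions only.
from statistics import mean, stdev

def zscore_flags(history: list[float], recent: list[float],
                 threshold: float = 3.0) -> list[float]:
    """Return recent amounts more than `threshold` standard deviations
    above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in recent if (x - mu) / sigma > threshold]

history = [42.0, 55.0, 48.0, 51.0, 46.0, 53.0, 49.0, 50.0]  # typical claims
recent = [47.0, 52.0, 180.0]                                # 180 is out of pattern
flagged = zscore_flags(history, recent)
```

A flagged amount isn't proof of wrongdoing, which echoes Bennett's caution about how you intercede: the score is a trigger for review, not for action against the person.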

Jacob Andra: 

And for things like that, the way we like to approach it is you're gonna use, as you call it, more classical AI to find the patterns and be able to do the predictions. That's not something where a large language model would be particularly strong. But then a large language model can be the interface, communicating those findings between the human and the other parts of the ensemble. So I love that approach. And I would even expand what you're saying about predictive compliance: you can predict almost anything if you have the right data set to train your machine learning models on. There are many, many things you can predict, so you can expand that out. There's a whole universe of predictive capabilities out there.

Bennett: 

And we have found, especially, Jake, that yes, there's a tremendous amount of knowledge inside of a company, but then when you enrich that with external data, you can really start to create value. So, like, weather patterns, supply chains, disruptions, political disruptions in countries, whatever it is. For example, just a really simple example: there is a client of ours who is like a retail food shop. They serve food and drinks, and they have an app for the general managers, who are usually very young. These are 20-somethings who are running these little stores. So they have an app that has their checklist: here's what you're supposed to do to open, and so on, right? But it's non-dynamic. And so what we did is worked with them and pulled in things like weather data, event data. Like, is there a rodeo in town? Is there a NASCAR race? Whatever it is. And hooked it also to their point of sale system. So it can say, "Hey, you had a morning rush that was higher than usual, so instead of putting in 12 batches of cookies, put in 20 batches of cookies to meet your afternoon rush." Right? So it's that kind of stuff that you really can't unlock until your own knowledge is enriched by external data sources.
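
The dynamic-checklist logic Bennett describes amounts to scaling a baseline prep plan by live signals. This toy version combines a morning point-of-sale signal with an event-calendar flag; the multipliers and numbers are made-up assumptions, not the client's actual formula:

```python
# Illustrative version of a dynamic prep recommendation: scale the
# afternoon batch count by the morning rush and the local event calendar.
# The 1.25 event uplift and all figures are assumed for illustration.

def recommend_batches(base_batches: int, morning_sales: float,
                      typical_morning_sales: float,
                      event_in_town: bool) -> int:
    """Scale the baseline plan by the morning POS signal and event data."""
    multiplier = morning_sales / typical_morning_sales
    if event_in_town:
        multiplier *= 1.25  # assumed uplift when a rodeo/race is nearby
    return round(base_batches * multiplier)

# A morning rush 40% above normal, plus an event in town, turns a
# 12-batch baseline into a 21-batch recommendation.
plan = recommend_batches(base_batches=12, morning_sales=1400.0,
                         typical_morning_sales=1000.0, event_in_town=True)
```

The point is less the arithmetic than the architecture: internal data (POS, checklists) enriched with external feeds (weather, events) turns a static app into a recommender.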

Jacob Andra: 

And then the more of those data sources you pull in, the stronger the signal becomes. 'Cause sometimes you get a very weak signal or indication from one data source. You correlate it with another, the signal gets stronger; correlate it with 10 others, and you have a very, very strong signal.
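
Jacob's point about accumulating weak signals can be made concrete with a naive-Bayes-style log-odds update: under an independence assumption, each signal multiplies the odds, so one weak signal barely moves a low prior while ten of the same strength are decisive. The prior and likelihood ratios below are illustrative:

```python
# Combine independent weak signals via log-odds (naive Bayes style).
# Each signal contributes its likelihood ratio; evidence accumulates
# additively in log space. Prior and ratios are illustrative values.
import math

def combine(prior: float, likelihood_ratios: list[float]) -> float:
    """Posterior probability after applying independent likelihood ratios."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# One weak signal (LR = 2) barely moves a 5% prior; ten of them are decisive.
p_one = combine(0.05, [2.0])
p_ten = combine(0.05, [2.0] * 10)
```

Real data sources are rarely fully independent, so a production model would learn the joint structure rather than multiply ratios naively, but the intuition matches what Jacob describes.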

Bennett: 

Exactly.

Jacob Andra: 

Great. Well, I think this has been a fascinating conversation. You and I could probably talk for days on these topics. We definitely speak the same language. We'll have to have you back, but this is probably great for one conversation. Thanks so much for coming on.

Bennett: 

Absolutely, Jake. I've always been very impressed with your thinking and Talbot West's thinking, and so it's nice to geek out with people who know what they're doing.

About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.

The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
