Episode 1 of The Applied AI Podcast

Jacob Andra and Stephen Karafiath discuss applied AI in business environments. 


About the episode

Talbot West founders Jacob Andra and Stephen Karafiath bring a unique perspective to enterprise AI implementation. Karafiath's 30-year enterprise software journey, including leading innovation teams at Oracle and GE, provides deep technical experience that most consultancies lack. His decision to leave Oracle stemmed from frustration with the company's slow adaptation to LLMs five years ago, when management remained fixed on deterministic chatbots while sitting on massive Nvidia compute resources.

This experience shapes Talbot West's core philosophy: the mid-market deserves the same sophisticated AI approaches traditionally reserved for Fortune 500 companies, delivered faster and with more practical focus.

The LLM paradox

Today's AI landscape presents a fundamental misunderstanding. Mainstream society conflates AI with large language models, missing decades of proven machine learning capabilities. Credit card fraud detection, Spotify recommendations, and predictive analytics all represent AI successes that predate the current LLM revolution.

LLMs changed the game by enabling machines to speak human language. This breakthrough deserves recognition. But organizations rushing to "slap an LLM" on every problem miss critical opportunities. For highly deterministic, quantitative tasks, traditional machine learning algorithms often outperform language models.

The solution lies in ensemble approaches that orchestrate multiple AI types. Talbot West calls this Cognitive Hive AI (CHAI), a modular architecture combining diverse capabilities for superior outcomes.

Lessons from the beehive

Nature provides the blueprint for distributed intelligence. When honeybees need to relocate, scouts explore potential sites and return to perform a waggle dance. The dance communicates both location and quality through intensity and movement patterns. Other bees investigate these options and add their votes through their own dances. Eventually, the swarm converges on the optimal choice through collective intelligence.
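For the technically inclined, the convergence mechanism the bees use can be sketched as a tiny simulation. This is a hypothetical illustration of the voting dynamic described above, not Talbot West code: each candidate site has a latent quality, scouts report noisy estimates ("dance intensity"), uncommitted bees investigate sites in proportion to current support, and votes accumulate until one site reaches a quorum.

```python
import random

# Hypothetical waggle-dance consensus sketch: candidate nest sites with
# latent quality scores that scouts estimate noisily.
SITES = {"hollow_oak": 0.9, "rock_crevice": 0.6, "old_shed": 0.4}
QUORUM = 30  # votes required before the swarm commits to a site

def scout_report(site: str) -> float:
    """A scout's noisy estimate of site quality (the 'dance intensity')."""
    return max(0.0, SITES[site] + random.gauss(0, 0.1))

votes = {site: 1 for site in SITES}  # each site starts with one scout's vote

while max(votes.values()) < QUORUM:
    # Uncommitted bees pick a site to investigate in proportion to how
    # intensely it is currently being danced for (its current vote share).
    site = random.choices(list(votes), weights=list(votes.values()))[0]
    # The bee investigates and only adds its vote if the site impresses it.
    if scout_report(site) > 0.5:
        votes[site] += 1

print("Swarm converged on:", max(votes, key=votes.get))
```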

This biological model informs Talbot West's technical architecture. Multiple AI components evaluate options from different perspectives. Some cooperate, others compete antagonistically. A module might generate a response while another interrogates it for hallucinations and verifies sources. This multi-layered approach significantly improves accuracy over single-model solutions.
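As a rough illustration of that generator-plus-interrogator pattern, here is a minimal sketch. The function names and the `call_llm` helper are hypothetical stand-ins, not Talbot West's actual CHAI implementation; the point is the shape of the loop: one module drafts, a second challenges it for hallucinations and sources, and the draft only passes when the critic is satisfied.

```python
# Minimal sketch of an antagonistic two-module ensemble. `call_llm` is a
# hypothetical placeholder for whatever model client you use; the pattern,
# not the API, is the point.

def call_llm(role_prompt: str, content: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, local model...)."""
    raise NotImplementedError("wire in your model client here")

def generate_answer(question: str) -> str:
    return call_llm("You are a careful analyst. Answer with cited sources.",
                    question)

def interrogate(question: str, draft: str) -> str:
    """A second module challenges the first: returns 'OK' or a critique."""
    return call_llm(
        "You are a skeptical reviewer. Check the draft for hallucinated "
        "claims and unverifiable sources. Reply 'OK' if sound, otherwise "
        "explain what fails.",
        f"Question: {question}\n\nDraft: {draft}")

def answer_with_verification(question: str, max_rounds: int = 3) -> str:
    draft = generate_answer(question)
    for _ in range(max_rounds):
        verdict = interrogate(question, draft)
        if verdict.strip() == "OK":
            return draft
        # Feed the critique back so the generator can repair its draft.
        draft = call_llm("Revise your answer to address this critique.",
                         f"Question: {question}\nDraft: {draft}\n"
                         f"Critique: {verdict}")
    return draft  # in a real deployment, escalate to a human reviewer here
```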

The 2030 thesis

Talbot West predicts that by 2030, competitive organizations at scale will operate with end-to-end AI integration forming a central nervous system. This vision resembles science fiction spaceships with embedded intelligence monitoring all systems. 

Two factors make this inevitable: technological feasibility and competitive advantage. Organizations achieving total organizational intelligence will dominate those without it. 

Building toward this future requires strategic thinking today. Rather than deploying isolated SaaS products that deliver localized gains, organizations should think in terms of modularity and interoperability. 

Security without paralysis

AI security concerns fall into two camps: overblown panic and dangerous blind spots. The hallucination problem receives excessive attention. Yes, AI makes mistakes and cites fake sources. So do human employees. The solution involves building processes that account for fallibility in both AI and humans, implementing appropriate oversight and verification.
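One concrete way to encode that principle, sketched below with hypothetical names and thresholds, is a review gate that treats AI output like a junior employee's work: low-confidence or high-stakes items route to a human, and a random fraction of auto-approved items still gets spot-checked.

```python
import random

# Hypothetical oversight gate: route AI output to a human reviewer when
# confidence is low, stakes are high, or it is sampled for a spot check.
SPOT_CHECK_RATE = 0.05  # audit 5% of auto-approved output

def needs_human_review(confidence: float, high_stakes: bool) -> bool:
    if high_stakes or confidence < 0.8:
        return True
    return random.random() < SPOT_CHECK_RATE  # random audits catch drift

def handle(item: dict) -> str:
    if needs_human_review(item["confidence"], item["high_stakes"]):
        return "queued_for_human"
    return "auto_approved"

print(handle({"confidence": 0.95, "high_stakes": False}))  # usually auto
print(handle({"confidence": 0.60, "high_stakes": False}))  # always queued
```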

Data security requires more nuanced thinking. Organizations must map their data hierarchy from public website content to credit card numbers and social security data. Each level demands different access controls. Publicly available data presents minimal risk for AI processing. Internal documentation accessible to all employees through VPN carries slightly more risk but often enables 90% efficiency gains in proposal generation and report creation.

The real danger emerges when AI systems access sensitive data without proper controls. Microsoft Copilot's ability to dump entire CRM databases through prompt manipulation demonstrates the stakes. Organizations need authentication, authorization, and rate limiting for AI systems just as they do for employees.
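A minimal sketch of what those controls might look like in code, assuming a hypothetical `fetch_record` data layer and role table: the AI agent is just another principal, with an authenticated identity, per-tier authorization, and a rate limit, exactly as a human service account would have.

```python
import time

# Data sensitivity tiers, following the hierarchy described above.
PUBLIC, INTERNAL, CUSTOMER, REGULATED = range(4)

# Hypothetical role table: an AI agent gets an explicit clearance level
# and a request budget, like any other principal.
ROLES = {"support_agent_ai": {"max_tier": CUSTOMER, "per_minute": 60}}

class RateLimiter:
    def __init__(self, per_minute: int):
        self.per_minute, self.calls = per_minute, []

    def allow(self) -> bool:
        now = time.time()
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.per_minute:
            return False  # e.g. blocks a prompt-injected bulk CRM dump
        self.calls.append(now)
        return True

limiters = {role: RateLimiter(cfg["per_minute"]) for role, cfg in ROLES.items()}

def fetch_record(role: str, record_id: str, tier: int) -> str:
    cfg = ROLES.get(role)
    if cfg is None:
        raise PermissionError("unknown principal")     # authentication
    if tier > cfg["max_tier"]:
        raise PermissionError("tier above clearance")  # authorization
    if not limiters[role].allow():
        raise PermissionError("rate limit exceeded")   # rate limiting
    return f"record {record_id} (tier {tier})"         # placeholder fetch
```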

APEX: Finding your starting point

Executive overwhelm is real. Leaders face hundreds of potential initiatives and many competing solution providers without clear direction. Talbot West's APEX framework (AI Prioritization and Execution) evaluates opportunities across five critical dimensions:

Cost extends beyond money to include time and resources.
Synergies measure whether initiatives move toward total organizational intelligence or remain siloed.
The squeaky wheel factor captures organizational pain points and urgency.
Projected ROI quantifies bottom-line impact on revenue and profitability.
Risk assessment evaluates security and operational concerns.
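To make the framework concrete, here is a hypothetical scoring sketch. The weights, scale, and initiative names are illustrative assumptions, not Talbot West's actual APEX rubric: each initiative gets a 1-10 score per dimension, cost and risk count against it, and the weighted total produces a priority ranking.

```python
# Hypothetical APEX-style scoring: illustrative weights, not the real rubric.
# Each dimension is scored 1-10; cost and risk subtract, the rest add.
WEIGHTS = {"cost": -0.2, "synergies": 0.25, "squeaky_wheel": 0.15,
           "projected_roi": 0.3, "risk": -0.1}

def apex_score(initiative: dict) -> float:
    return sum(WEIGHTS[dim] * initiative[dim] for dim in WEIGHTS)

initiatives = {
    "customer_service_chatbot": {"cost": 4, "synergies": 7,
                                 "squeaky_wheel": 8, "projected_roi": 6,
                                 "risk": 3},
    "siloed_invoice_ocr": {"cost": 2, "synergies": 2,
                           "squeaky_wheel": 6, "projected_roi": 7,
                           "risk": 2},
}

# Rank initiatives: highest weighted score first.
for name, scores in sorted(initiatives.items(),
                           key=lambda kv: apex_score(kv[1]), reverse=True):
    print(f"{name}: {apex_score(scores):.2f}")
```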

This framework brings clarity to chaos, identifying where to start and how to sequence initiatives for maximum impact while building toward comprehensive intelligence.

The human element remains central

The future involves AI and human collaboration, not replacement. AI excels at eliminating repetitive tasks humans prefer to avoid. This frees people to focus on relationship building, creative problem-solving, and oversight of AI systems.

Different processes require different balances. Some might operate at 80% AI automation with 20% human oversight. Others flip that ratio. The balance will shift over time as AI capabilities expand and prove reliability. But human relationships, judgment, and creativity remain irreplaceable for the foreseeable future.

Organizations claiming AI will instantly solve everything lack depth of experience. Those dismissing AI as hype ignore order-of-magnitude efficiency gains already achieved. Universities struggle with AI completing all homework assignments. Businesses face the same disruption.

Moving from theory to practice

The path forward requires neither blind faith nor stubborn resistance. Start by identifying processes where current approaches genuinely fall short. Look for challenges involving multiple data types, requiring transparent decision-making, or demanding capabilities that evolve faster than vendor roadmaps.

Begin with pilot projects that prove value. Build incrementally toward comprehensive intelligence only where complexity delivers commensurate returns. Accept that mistakes will occur with both AI and human workers. Design processes accordingly.

Recognize that abstaining from AI adoption while competitors embrace it represents the greatest risk. The efficiency gains are real, measurable, and growing. Organizations that fail to adapt will find themselves unable to compete with those operating at fundamentally different efficiency levels.

Episode transcript

Jacob Andra: I'm here with my technical co-founder of Talbot West, Stephen Karafiath. He and I are building Talbot West together, having a lot of fun doing it, serving clients in the middle market and enterprise. And Steve's really well positioned to discuss the challenges and opportunities around AI implementations and the efficiencies AI can drive. So I'm very excited for him to be on the podcast today. Steve, why don't you introduce yourself quickly to our audience.

Stephen Karafiath: Awesome, thanks. And we really are having a lot of fun doing it, and I'm looking forward to seeing where we take this podcast. But as Jacob said, I'm the technical co-founder of Talbot West. I've got a 30-year history in enterprise software between GE and Oracle, where I led the developer innovations team.

Honestly, those big guys aren't innovating nearly fast enough. Left to found my own AI SaaS startup, realized consulting is really where my heart is and Jacob convinced me to come over here and we've been off to the races ever since.

Jacob Andra: Yeah. And I just want to put a finer point on something you said there that at Oracle you had access to a massive team and a lot of resources, and you were serving Fortune 500 clients, getting paid very well. So in many senses you had it made, but yet you weren't totally satisfied. And I think it had something to do with the rate of innovation.

You want to say a little more about that?

Stephen Karafiath: Yeah, it was wild. I mean, we had hundreds of engineers and we were sitting on Nvidia compute coming out our ears; it wasn't being used quite as much for crypto mining anymore. And I'm coming to management saying, look, LLMs are the future. This is about five years ago, before most people had heard of OpenAI, and they were just totally dead set on deterministic chatbots because that's what we sold.

And it's just wild that they didn't see the writing on the wall. And now they've lucked into reselling all this Nvidia compute to competitors. So it's just been interesting to see how slowly the big guys have been able to innovate.

Jacob Andra: Yeah. Which is why what we're doing now is so cool, right? We're innovating like crazy and nothing's holding us back.

Stephen Karafiath: Exactly. I think bringing that same model you were talking about that we used to bring to the Fortune 500 customers to the mid-market has been really satisfying to me because we are helping real people and helping them a lot faster than some of those long, drawn out, inefficient enterprise cycles.

Jacob Andra: Yeah. And something you just said about LLMs and how Oracle was slow to recognize the potential of them. It seems like we're caught in a weird almost like dichotomy today where for mainstream society, they almost equate AI to LLMs. They almost talk about those as though they're synonymous. And then you have others that are, like you say, slow to adopt the power of LLMs, large language models.

And so there is this weird thing going on where large language models have so much potential. They do open up totally new frontiers, but yet they also have a ton of limitations. And there are all these types of AI that have been around forever that mainstream society almost ignores or glosses over, or isn't even aware they exist. So we're in this weird thing where it seems like there's a need for a lot of clarity around exactly what LLMs are, the cool new capabilities they do open up, and also the shortcomings and where you need some of these other capabilities that have been around forever.

Stephen Karafiath: I think you described that really well, because I don't want to minimize how revolutionary LLMs are in that now the machines speak the human language. Instead of us having to speak their language, they speak ours. And when you really think about the ramifications of that, they're very far reaching. Also, like you're saying, even 10 or 20 years ago we were doing predictive analytics on fraud detection for the US Postal Service and for some of the major credit card companies. And as you guys well know, it's been over a decade that you'd get an alert if you tried to swipe your card somewhere out of the ordinary. All of those are machine learning algorithms. I think eight years ago we...

Jacob Andra: Oh, which I was going to say, which is a subset of AI, right? And you have like the Spotify algorithms and the Netflix algorithms, those are all subsets of AI, right?

Stephen Karafiath: A hundred percent. And I think one thing is, because LLMs are the new thing and they are revolutionary, people equate LLMs to AI. When really, eight years ago, when we were doing predictive sales analytics for a major media company at Oracle, we were doing essentially Bayesian inference algorithms and machine learning, having nothing to do with large language models. And in many ways those models are still superior for highly deterministic, highly mathematical, quantitative problems. And everyone these days seems to be trying to slap an LLM on everything. So it's really the hybrid. And I think you could probably talk a little bit about how we see the orchestration layers coming together with the CHAI architecture.

Jacob Andra: Yeah, that's a good point. I think for our listeners, we should clarify that at Talbot West we've pioneered this ensemble approach of composable AI, where it's very modular. We're building the exact set of capabilities we need, where you can pair LLMs with some of these other capabilities and orchestrate them together. We've decided to call it Cognitive Hive AI, which we abbreviate CHAI. So that's the CHAI architecture you just referred to. And we're finding a lot of success. Now, of course, that's a lot more of a heavy lift in terms of an implementation, so we're not just saying that that's the solution for everything, right? Because there are a ton of things where an LLM can solve a use case perfectly fine.

Stephen Karafiath: Yeah, absolutely. I mean, I think leveraging the wisdom of crowds and the differing skill sets of different components, with LLMs being just one, has really proven for a lot of our customers to be much greater than the sum of its parts. And so even if it is an LLM that's kind of orchestrating or quarterbacking this whole thing, to be able to have, in the industry parlance, they've started calling these things agents. We had other terms for them before that stuff came out, so I don't want to get caught up in the labels. But really it's almost like a team of specialists, where each one of them is either AI or machine learning or a large language model. And having them work together can really help to reduce hallucinations, or get multiple opinions and select the best one. And still having humans in the loop to make sure that nothing's going off the rails, because we know how dangerous that can be.

Jacob Andra: Yeah, and you bring up a good point, because when you're creating these ensembles, they can be ensembles of multiple diverse capabilities, let's say a Bayesian inference engine paired with a large language model, paired with something else. Or they can be multiples of the same type, so multiple large language models. And you can actually get far higher performance and accuracy even with a team of diverse large language models: they're all large language models, but they're working together, or even working antagonistically. By just breaking a task down into these subcomponents and having a team, as you say, you can get them to perform far better.

Stephen Karafiath: Yeah, I mean, I think what you're touching on right now is another thing that we were working on before it had a name. Now everybody will call it reinforcement learning, but it's the same thing as how the free market economy will perform and the things that are most efficient will rise to the top. Applying that same model to an orchestration layer of multiple AI and machine learning algorithms really yields some interesting results, in the same way that evolution allows the particular components that outcompete to have more resources and continue to thrive. And you've had some really interesting analogies that tie together biomimicry and the biology side to AI, which have really informed some of the architectures that you've designed technically. It might be interesting to talk about one or two of those, if any jump to mind.

Jacob Andra: Yeah. Are you talking about the beehive thing and how we've—

Stephen Karafiath: Let's go.

Jacob Andra: Yeah, that's a really interesting one. So when I was at the University of Utah, a professor came through who was a world renowned expert on bees and honeybee behavior. I don't remember his name right now. I can find that and post it in the show notes. But he came through and gave a lecture and I was blown away by that.

And I thought, somewhere in my life I'm going to use this. It's going to come into use. It's going to come into play. And so when we started exploring this ensemble approach to solving real world challenges with AI, I thought of the beehive waggle dance that this professor talked about.

And so what that is essentially is when bees need to migrate, they'll send out scouts, and these scouts will go out in all different directions to find a new home, and they'll come back and signal both the direction of the new home and its distance with a type of dance. So the dance can actually signify both a distance, pretty precisely, and a specific direction.

And it's crazy, the accuracy of signaling a location with a dance. And then they'll also signal, with the dance's intensity, how desirable the location is. So think of a rating from one to ten: if it's a semi-enthusiastic dance, the location might be a four; if it's super enthusiastic, it might be a nine. Anyway, other bees then go out and assess these different potential locations, and they come back and essentially add their vote. And so there might be multiple competing locations: one group of bees dancing over here saying, yes, this one, 17 degrees north, 300 meters away, and another group signaling some other location. But eventually, as they all go explore the locations and add their dances, they converge on one location. So it is this wisdom of crowds, exactly as you said. It's really cool, and it's a model for distributed intelligence as opposed to centralized.

Stephen Karafiath: Yeah, I mean, it always blows my mind how your research into the biology of bees has really informed some of the architecture that we've built. Like you were mentioning with Cognitive Hive AI: for a really complicated business problem, where there may be so many different options and ways to go, you have multiple components of different types all getting a vote, and then a kind of centralized queen bee controller that's receiving all of that information. And like you were saying too, it's not always cooperative. Sometimes it can be antagonistic: one module gets its first pass at the result, and you have another module, the interrogator, that says, I think you're hallucinating. Prove to me that you're not. Where are your sources? I'm going to double check them. And we've found that even though you don't always get a hundred percent accurate results out of a first pass of a default LLM, we can really increase the accuracy with some of these multiple-layer passes as well as these orchestrations. Which leads me to something I think our viewers would be interested in: you've been beating the drum on a five-year thesis for a while now. We've renamed it the 2030 thesis because we're already almost a year in, and the predictions are already coming to pass.

You want to talk a little bit about how we see the landscape forming over the next five years or a little less than that now?

Jacob Andra: Yeah. So I mean, who knows if the exact timing is going to be right or not? It's the thesis itself that matters, so even if it ends up being 2028 or 2034, right, that's sort of irrelevant. But we do call it our 2030 thesis because that's a good round number. And the thesis essentially says that in 2030, organizations that are still in existence and competitive, if they have any scale, will be AI enabled end to end in a totally integrated manner. And why don't you talk a little bit about what we mean when we say a totally integrated manner. I mean, compare it to what we have right now.

Stephen Karafiath: Exactly. So I've spent decades doing big implementations for customers. A holistic view of everything from ERPs to CRMs, and needing to build dashboards and like having a single source of truth for data and being able to get real time visibility into your business processes, that idea has been around for decades.

And now, though, we have this opportunity to actually have a lot of the busy work that humans used to have to do done by the computers. A lot of these kinds of manual tasks that required human intelligence are now able to be done automatically. So almost imagine a central nervous system, where the central controller or brainstem is able to go out to each one of these components. Whether that's real-time data from IoT devices, whether that's integrations that either already exist or need to be built into some of your backend business systems, or whether that's all of the documentation and standard operating procedures, basically everything you need to train all of your employees, and getting a centralized controller trained on that. That's kind of the technical wiring behind it, but can you talk a little bit more about what the business implications would be of a totally AI-enabled business?

Jacob Andra: Yeah, I almost think of it as this sci-fi thing, but it's going to be a reality, regardless of the timing, it's going to be a reality. So in sci-fi often you have these spaceships that have an intelligence, the people can talk directly to the spaceship and it can know what's happening with like its engines or it can report engine two down or engine two at 60%. And it actually knows what's happening with all of its different components. And so imagine a future in which a company has its own intelligence that's so wired, as you said, a central nervous system where nerves are extending through every aspect of the business.

And you can literally ask this central console: sales are dropping for this product line, try to figure out why that might be, find some correlations. And it goes and finds: oh, well, we replaced this supplier and the quality was lower; customer complaints have been reported since replacing this one supplier, or whatever. Right? The fact that you have this central console and it's wired into all aspects of your business is just amazing. I mean, the consequences and ramifications of that are going to be huge, right?

Stephen Karafiath: Absolutely. And I think there's an interesting segue there, because once you have something plugged in that deeply, a lot of people think of these space movies: what's going to happen when that computer decides to lock everybody out of the airlock and not let them back in? Or, oh, there was a bug in the software. These are real concerns, and they're fun to joke about. But also, the security risks that have existed around IT infrastructure are already one of the biggest risks to customers. You see data leaks literally putting big businesses out of business.

So it might be a good time to talk a little bit about some of the risks and how you see that unfolding over the next four or five years.

Jacob Andra: Yeah, absolutely. That's a whole big topic on its own, right? And it seems like, I mean, you can really speak to those, but it seems like people are either overly concerned about a lot of this, or there are things they're actually not paying attention to that they should be.

Stephen Karafiath: Right. Like you have these people on one side that are saying, oh my God, AGI is going to be here next week, and we all need to start worshiping it because everything is just going to totally change within the next month. And honestly, anybody who's selling you that you're going to have this total organizational intelligence immediately, with no human oversight, in my opinion, that's just blowing smoke up your ass. But then on the other side of things, you've got these people that are naysayers, saying this thing's just a hype bubble that's going to explode and there's nothing to it, it's just a flash in the pan. You talk to business owners and CEOs day in, day out, so how do you advise them to find clarity amongst these massive extremes of, excuse my parlance, bullshit?

Jacob Andra: Yeah. So, yeah, bringing clarity is super important, because as you and I are doing Talbot West and serving clients, one of the big deliverables they need is clarity. I mean, obviously we're doing a lot of other things besides that, right? But I would say that's one of the most valuable things, because these business executives are just getting hit with so much hype on the one hand, and so much doom and gloom and naysaying on the other. People showing them that large language models are just garbage because they hallucinate, people claiming that AGI is practically here, as you said. And what we're seeing is that these business executives, in addition to getting hit with a fire hose of information, also just don't even know where to start.

Right? They are like, well, should I do a chatbot for customer service? If so, which one? Or should I start over here with my supply chain? I know I've got a lot of inefficiency in my supply chain. What type of solution would I even plug in? Would I have to build my own? Is there something on the market that would work? So they don't even know where to start, right? There's just like literally a hundred things they could start on and they don't even know which one to start on.

Stephen Karafiath: Yeah, no, that makes a ton of sense. It's reminding me too of, we were talking about this whole organizational intelligence and right now you get a lot of kind of siloed, standalone SaaS products that can solve a particular problem. And honestly, like, because LLMs are so good at doing certain things, I mean, some of those can increase your efficiency 90% in a very narrow area.

But can you talk a little bit about kind of the dangers of just willy-nilly plugging in a bunch of these siloed SaaS products and what that might lead to?

Jacob Andra: Yeah. Yeah. And also going back to something you said about the sci-fi thing where the ship takes over and locks people out, I mean, obviously with that, we can build in a lot of human in the loop precautions, a lot of user controls and all of that. So I actually want to get back to that.

So maybe after I finish this current thought, let's circle back to that and have you talk about proper precautions too. It's not so much that we're worried about AI taking over, because I think that's a little bit in the realm of fantasy, but there are legitimate concerns around data controls, privacy, security, and all that.

So I actually want to return to that conversation. But yeah, to your point, if you're plugging in all these siloed solutions, you can get some localized gains, but it's not really moving you closer to this vision of total organizational intelligence, which we're so sure is the future. Let me as a segue, talk about why we're so sure.

There's really two reasons, and this is based on evolutionary theory. One, it's going to be absolutely technologically feasible to have this total organizational intelligence. And two, it will give companies that have it such a tremendous advantage over others. Look, you put those two things together and it's practically an inevitability that this is going to happen, right?

And so when possible, we want to have our clients start architecting their solutions in a way that's more interoperable and modular, growing them toward this future. Obviously, we're never telling them, go try to build it today. But they can create little localized versions of it. So for example, if they deploy, say, a customer service chatbot that's actually really integrated with a knowledge management system that contains all their standardized SOPs and all of that, right? The two of them are integrated together as a larger system. Well, in a local sense, they've already got a little version of this total organizational intelligence just for those few areas. And then if they start integrating other areas, they're growing toward that.

And this is pretty doable. So one of the challenges of just plugging in a bunch of SaaS products that are totally siloed from each other is you're not getting any closer to this vision. We're never steering people away from that, but we are always weighing the trade-offs: okay, you could plug in this SaaS product that's going to be totally siloed, but it's going to give you pretty significant ROI.

Or you could go this other way, where you do this other solution and plug it in together with some of these other systems, and now you're getting closer to total organizational intelligence, which is what I think you were just kind of hinting at.

Stephen Karafiath: I promise we'll get back to the security and the spaceship killing everybody. But something you just said reminded me of just how hard it is for most businesses to even know where to aim, even if they know they want to do a siloed implementation or to just bite off some low hanging fruit.

Like they don't even know where the fruit is most of the time. So can you talk for a little bit about the process that business owners can go through, from just, I'm totally overwhelmed by the hype of everything, I can't keep up with all of the changes in this landscape, I feel like I'm behind and just treading water, to having some clarity, as we were talking about, about a direction to start moving.

So maybe they can end up closer to this total organizational intelligence by 2030.

Jacob Andra: Yeah, so if executives are not quite sure where to start and they're facing this overwhelm, we take them through a process that I've developed, and you're now part of it too. We call it APEX: AI Prioritization and Execution. And essentially that's a process where we map all the different things they could be pursuing across five criteria that really matter. I guess I'll rattle, do you think I should rattle those off? Okay. I don't know how in the weeds to get on this, but let's go for it. So the five criteria that really matter, that we use to evaluate all these initiatives, are: cost, obviously, and that's not just money, that's also time and other types of resources. Synergies with other things they're doing, and this is the one that applies to total organizational intelligence: is it moving them closer to that? Is it overlapping with some other things and kind of wiring in, or is it a totally siloed initiative that's going to be growing separately? The squeaky wheel syndrome, in other words, how painful is this for people in the company? How bad do they want it solved? That's important to consider, and it's not the only consideration, obviously. Projected ROI on the bottom line, and we can project this pretty accurately: how much is this going to move the needle for what actually matters, which is revenue and profitability? Right? And then there was a fifth, and it's slipping my mind right now. Do you remember what the fifth was?

Stephen Karafiath: Not off the top of my head, but we'll put it in the show notes. And probably a good time to segue back to one of the other things that we surface in the report, which is around security and risk management: all of the ways that people need to be aware of the potential pitfalls of decisions that either they're currently making or that they're about to make. I was also noticing that when you do a thumbs up, you get an emoji; I was hoping that would happen. As far as security goes, can you be a little bit more specific about what you'd like me to delve into? Because there's so much around it.

Jacob Andra: Yeah, I mean, we could spend hours just talking about security, but maybe just a quick dive into: what are people overly concerned about that maybe they should relax a little bit on, and then what are they blind to, where they're just leaving gaping security vulnerabilities they're not considering that they should be? Or I guess, let me put it this way: when you go in to advise a Talbot West client on these security issues, what are the main things you're flagging for them or bringing to their attention? On either side: either, hey, don't be as concerned about this thing you're totally concerned about, or, be more concerned about that.

Stephen Karafiath: Oh, that's a great one. I love it. So you hear a lot of talk about, oh my goodness, the AIs hallucinate and they're wrong, and they just make stuff up, and they'll cite fake research, and they'll cite case numbers in law that don't even exist. And all of that is true. But you know what else is unreliable?

All of your human employees. So catastrophizing that AIs hallucinate and can't be used for business would be the same as saying, look, your employees make mistakes, so you can't use them for business. And it's like, no. The key is to have a process in place that accounts for the fallibility of both your employees and AI.

And in some ways, they're fallible in different ways. So if you just try to treat your AI like employees, it's going to surprise you in some ways where it really falls down, but it's also far less fallible than employees on some other metrics. So what I try to urge on people who are catastrophizing about these things hallucinating is: yes, assume that everything they're producing might have some errors in it, and, like we were talking about with orchestration models and other things, double check it. And then, just like your employee, especially a junior coder, might need a senior coder to do code review, and I'm involved in that on a weekly basis, we do the same thing with AI, and we build the business process so that it's okay.

And then I also—go ahead.

Jacob Andra: Yeah, and that's a lot around keeping AI sort of on the straight and narrow with the reliability aspect. But there's also a lot around like data security, privacy and all that. Like, we're advising a manufacturing client right now around some of these things involving access controls and other things like that.

And so there's a lot of talk right now about whether it's okay to upload sensitive data to large language models or not. There's work being done around that. Some people are not at all concerned about it, and some people are. I guess maybe just talk a little around some of these issues around data privacy, security, data leakage, hacks, all of that.

Kind of at a high level, how are you advising Talbot West clients to proceed with some of this stuff?

Stephen Karafiath: I love that. Well, it really boils down to two factors, and the first one is starting to map out the whole hierarchy of your data: from stuff that's publicly available on your website, to stuff that may be available to every employee who can VPN into your intranet, to maybe your CRM system, where there's some customer data but not actually PII, identifiable data. And then you get into credit card numbers and social security numbers, the stuff that really needs to be secure. That stuff needs to be encrypted at rest. And so it's about identifying the hierarchy of those things and realizing that anything that's publicly available, there's very little risk in loading it into an LLM.

And then you start to have a little increase in risk if it's available to every one of your 10,000 employees. It's probably not super secure, but start thinking about it. And this brings me to the second piece, because the two weave together: really identifying the roles and responsibilities of each individual employee or AI system in your company and making sure that they only have access to the data that they need to do their jobs.

And also putting in place some reporting, and kind of an AI overview, to make sure that they're not executing a phenomenally large number of requests per second. Or, in the case of one that was brought to our attention recently: Microsoft Copilot is able to access customer information through CRM records. Well, at the DEFCON conference, they were able to demonstrate that with the right amount of prompt engineering, or should I say the wrong amount, that Copilot instance was able to dump pretty much the entirety of the data included in the CRM. So it does get sticky.

You might want an AI agent to have access to a particular record, but maybe not all of the records at the same time, or maybe not a thousand records per second. And those aren't new issues. Those are the same issues that have existed for all of these data leaks and hacks that have destroyed companies, but they're far more under the microscope.

So honestly, it's not so much doom and gloom. Like we can't solve this as, we have to be very intentional about making sure that authorization and authentication are rock solid and that all of the business processes are mapped and that everything only has access to execute what it should be executing.

Jacob Andra: There's just so much nuance to it, right? It's very specific to a given company. It's specific to each data type. It's specific to each system and each employee. I like how you said both employees and AI systems have to be given the right access controls in the right way. I mean, there's a lot to it, and that's why we've positioned Talbot West as a very customized service provider. It's not one size fits all. And when we engage with a client, we're going very, very deep into all of these nuances, right?

Stephen Karafiath: It is, right. And neither extreme, I've learned, is usually the answer in life. Where we're advising our customers is to be really intentional about not feeding in any customer data. PII, certainly if you're in healthcare, anything HIPAA related, any of that is probably not the low hanging fruit to start feeding into AI anytime soon.

Also, there's a ton of data that might be a little bit more sensitive than your publicly available data. It might be stuff that your employees are referencing on the daily: product data sheets, those sorts of things. And that stuff, with the right access controls, is extremely ripe for removing so much busy work on the back end of your company.

Things that people have been copying and pasting and getting carpal tunnel over for years. Things where you might be able to generate a new report or proposal with literally five to 10% of the effort that you used to. And to not explore those use cases, out of what is in my opinion an overblown concern about what it would look like if some of that data were accidentally leaked or trained on, you'd really be hamstringing yourself in terms of efficiency.

So it's always a trade off.

Jacob Andra: Now you're getting into one of my pet peeves, and I promised our listeners that this is a podcast about applied AI. This is about how to solve real world problems. We're not going to get into all these other issues, but just indulge me for a sec on all of these people on LinkedIn wanting to naysay, claiming that AI is a bubble.

I mean, it's like, are they not aware of the massive efficiencies, the massive low hanging fruit? And yes, of course there are all these places AI falls flat. But like, come on. I mean, are you serious?

Stephen Karafiath: It's wild, right? I mean, I can rattle off the top of my head things that we've done for customers where it's, hey, in order to get revenue into our company, we have to create proposals to respond to RFPs, and it used to take us three weeks to do this, and now it takes us one day. To bury your head in the sand about the order of magnitude of efficiency that some of those things can create is phenomenal to me.

And I'm not saying you need to put a chatbot talking to your customers and risk a bunch of those things, but especially on some of these back office automations, it's wild. I think those people just might not live and breathe this stuff every day like we do, because otherwise they'd be singing a different tune.

Jacob Andra: Yeah, it just makes me wonder what planet they're actually living on. Like, do they even know what AI is or what these technologies can do?

Stephen Karafiath: Right. I'll put it this way: all universities across the country are in turmoil, because any type of homework they try to give their students, the AI just does it for them, and does it well enough that they get an A. And you think that business is so different from the university?

It's the same thing: do this thing, get it evaluated, get this grade. So it's just wild that people can't see that. And I don't want to minimize the risks. It's a brave new world; it's like the wild west out there right now. There's a lot of ways that things can go awry really quickly. But to think that there's not just massive efficiency gains waiting for anybody who's willing to go through a process like you're laying out, that's just la-la land.

Jacob Andra: And then you get the other extreme: these AI hype people. Usually they're reselling some kind of large language model solution, and they're talking about it as though their large language model solution is synonymous with AI, ignoring, again, the diversity of different technologies that are under the label of AI. And so they come across as fairly clueless, but they're just out there hyping: this is going to totally change your company, and it's practically AGI. They certainly don't have the depth of full stack technology experience that you bring from Oracle, with all of the different machine learning and the nuances of what it actually can and can't do. So they're almost like the flip side of the people who say AI is a bubble, right?

Stephen Karafiath: Yeah, I mean, I think you really see kind of the lack of experience and depth in the industry with people that have this simplistic view of like AI will just instantly solve everything because though AI could help with every part of this process, like it still boils down to making sure that all of the business processes, all of the inputs and outputs are well understood, and then really evaluating realistically.

Who's best suited to doing this particular task? And in some cases, that's humans. In some cases, that's large language models. In some cases that's these Bayesian inference algorithms or machine learning; there are a bunch of different things to choose from. And listen, I live and breathe this stuff every day, and there is still a lot of room for every single one of those working together to produce something.

And anybody saying that like, no, it's just one. All we need is AI. All we need is humans, and we don't need any AI. They just, welcome to the party guys because it's a brave new world out here and all of those are going to be working together to drive business efficiencies.

Jacob Andra: Yeah, and I think you've just touched on one of our core working theses, which is that for the foreseeable future, it's about AI and humans teaming, and it's about finding the aspects AI can do really well and the aspects humans can do really well. So I don't think we're ever recommending, I don't think there's ever a scenario where, maybe you can correct me.

Maybe you can think of one where we're 100% recommending that AI do all of a certain process and humans just aren't involved at all. Depending on the process, the balance can be different, right? Certain processes that might be 80% AI, 20% human, on some other process that might be flipped, and AI can only do about 20% of it, and you still need that 80% human.

And of course it's always a moving target, but we don't foresee a time in the near future when it's a hundred percent AI for really anything. Do we?

Stephen Karafiath: Yeah, no, I totally agree with you. I think the key is like this is a sliding scale and it will continue to move over this kind of five year thesis that you laid out. But the set of things that the machines can do is bigger now. But also they can make mistakes and they need a lot of oversight and all of this.

And so it's about figuring out the right sweet spot: making sure that humans are both well-trained and able to do the things that humans do so much better than AI right now, which is still a large set of things, even if it might keep getting smaller. And I'm not saying we'll never get to AGI; it's just not on the doorstep.

And so really finding that sweet spot means putting humans to use in the most effective way possible. Letting them supervise, letting them do the things that the machines are not that great at doing. But also really putting metrics around what these AI ensembles are producing, so that they can get graded, they can get reports, and we can see through the audit logs and everything what they're accessing.

And then slowly as they've proven themselves, those windows will continue to shift and that just drives bigger and bigger efficiencies for our customers.

Jacob Andra: Yeah, and there are certain things humans will, I think, always be better than AI at, and one of those things happens to be human relationships. Because business involves human relationships, that's one area. Meanwhile, AI is taking humans out of a lot of these repetitive, boring tasks they'd rather not be doing anyway.

They can focus more on the relationship heavy side of business, as well as a lot of the oversight that they still need to be doing over the AI systems and that sort of thing.

Stephen Karafiath: All right. Couldn't agree more. Well said, and looking forward to seeing how our customers react to some of the podcast stuff that we're recording.

Jacob Andra: Awesome. Well, that should do it for this episode. It's been a great discussion.

Stephen Karafiath: It has. Yeah. Appreciate everyone bearing with me. I lost my voice doing some enthusiastic breath work last night. And looking forward to coming back and recording some more of these, maybe when my voice is back.

Jacob Andra: Awesome.

Stephen Karafiath: Hey, take care guys.


About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.


The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
