Episode 2 of The Applied AI Podcast

Jacob Andra interviews Kevin Williams on AI governance and risk mitigation. 

About the episode

Kevin Williams, founder and CEO of Ascend Labs AI, joins Jacob Andra to explore the practical landscape of enterprise AI adoption. Williams brings direct experience from both implementing machine learning in his previous direct-to-consumer businesses and now helping organizations ranging from small companies to those with tens of thousands of employees integrate AI effectively.

Kevin's four pillars of enterprise AI adoption

Williams identifies four primary areas where organizations successfully deploy AI, moving beyond the hype to actual implementation:

1. Development acceleration

The most straightforward wins come from AI-assisted development tools. Well-trained development teams using tools like Cursor, Windsurf, or GitHub Copilot see 30-40% productivity gains. Williams recommends starting with "bug hunts" using AI to tackle the backlog of minor issues sitting in Jira that never get addressed. This approach builds team confidence while delivering immediate value.

The critical caveat: developers need solid technical foundations to effectively leverage these tools. Without proper scoping abilities and the skills to handle the final 10-15% of project completion, "vibe coding" can create more problems than it solves.

2. Product integration

Organizations must integrate AI capabilities into their core offerings to remain competitive. This means adding LLM layers for data aggregation, implementing AI-driven features that improve user experience, and increasing product stickiness. Service businesses can deliver better experiences at higher margins by incorporating AI into their delivery mechanisms.

Williams notes that companies closest to existential threats from AI (marketing agencies, advertising firms, modeling agencies) move fastest in this area. They recognize the imperative to reinvent their fundamental value propositions.

3. Workflow optimization

The majority of Ascend Labs' work focuses here, ranging from basic GPT training to sophisticated automation systems. The key metric Williams tracks is cycle time reduction. Marketing teams that previously spent six months on video production can compress timelines to three months by automating early-stage ideation and iteration.

A concrete example from the insurance industry: one company processes tens of thousands of commercial policy binders monthly. Instead of manual auditing for signature compliance, they combine OCR technology with LLM interpretation to flag the 1% of documents requiring human review. The auditor focuses on anomalies rather than spreading attention across thousands of routine documents.
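The episode doesn't specify the company's stack, but a minimal sketch of such a pipeline, assuming Tesseract for OCR and the OpenAI chat API for interpretation (the model choice, prompt, and confidence threshold are all illustrative), might look like this:

```python
# Hypothetical sketch of the OCR + LLM flagging pipeline described above.
# Assumes `pip install pytesseract pillow openai` and OPENAI_API_KEY set.
# A production system judging handwritten signatures would likely also send
# the page image to a vision-capable model, since OCR text alone may not
# capture a signature mark reliably.
import json
from pathlib import Path

import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()

def binder_needs_review(page: Path) -> bool:
    """Return True if a human auditor should look at this policy binder page."""
    # Step 1: OCR the scanned signature page into raw text.
    text = pytesseract.image_to_string(Image.open(page))

    # Step 2: Have an LLM interpret the OCR output, returning JSON so the
    # verdict is machine-checkable rather than free prose.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You audit commercial policy binders. Given OCR text of a "
                    "signature page, reply with JSON: "
                    '{"signed": true or false, "confidence": 0.0 to 1.0}'
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    verdict = json.loads(response.choices[0].message.content)

    # Step 3: Route anything unsigned, or low-confidence, to the human
    # auditor; the routine ~99% never hits a person's desk.
    return (not verdict["signed"]) or verdict["confidence"] < 0.9
```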

Sales augmentation represents another major opportunity. Dynamic coaching tools ensure salespeople follow playbooks, while automated systems handle CRM data entry and follow-ups.
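As an illustration of the CRM side (the webhook URL and payload fields below are invented; the episode names no particular stack), an automation might extract follow-up tasks from a call transcript and push them into the CRM so the rep never re-keys notes by hand:

```python
# Hypothetical sketch: sales-call transcript -> structured follow-ups -> CRM.
# The CRM endpoint and field names are placeholders, not a real product's API.
import json

import requests
from openai import OpenAI

client = OpenAI()
CRM_WEBHOOK = "https://crm.example.com/api/tasks"  # hypothetical endpoint

def log_follow_ups(transcript: str, deal_id: str) -> None:
    # Ask the model for structured tasks rather than prose meeting notes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract follow-up tasks from this sales call transcript. "
                    'Reply with JSON: {"tasks": [{"title": "...", "due_days": 0}]}'
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    tasks = json.loads(response.choices[0].message.content)["tasks"]

    # Push each task to the CRM; the salesperson stays on the phone.
    for task in tasks:
        requests.post(CRM_WEBHOOK, json={"deal_id": deal_id, **task}, timeout=10)
```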

4. Strategic future orientation

The most forward-thinking organizations examine how AI will reshape their entire industry ecosystem. Williams describes a company that transforms proprietary data into documents for hedge funds. The uncomfortable reality: their customers immediately digitize these documents for their own LLM systems. The company's core value (access to proprietary data) remains intact, but their delivery mechanism faces obsolescence.

This pillar encompasses change management beyond internal operations. Companies need to anticipate how their customers will consume information differently, how new competitors might emerge from previously unviable niches, and how vendor relationships might shift as AI reduces barriers to entry.

The human element: making AI adoption stick

Despite the technology's promise, Williams confirms what many practitioners observe: most organizations struggle with adoption. Even large, sophisticated companies in industries that "should know better" often haven't started meaningful AI implementation.

Success requires more than dropping licenses on desks. Williams insists on organizational manifestos from leadership, clear policies establishing guardrails, and comprehensive literacy programs. The approach must be intentional: identify tasks that drain time and energy, then use AI to reduce that friction, not necessarily to automate entire workflows, but to complete work more efficiently.

The payoff comes when employees experience their "aha moment": the realization that AI can eliminate the administrative burden that keeps them from their zone of genius. A salesperson hired for their phone skills shouldn't spend 40% of their time in CRM systems. An insurance auditor shouldn't manually review thousands of routine documents when AI can surface the few requiring attention.

Managing risk in the enterprise

Williams emphasizes that risk management cannot be an afterthought. Organizations need a clear understanding of when they're dealing with "high-risk applications" (healthcare, consumer credit, insurance, hiring) where regulatory compliance and potential litigation create serious exposure.

Common pitfalls include employees unknowingly using free AI tools for sensitive data, creating hiring bias through poorly designed resume screening, or failing to maintain proper audit trails for AI-driven decisions. The upcoming wave of state regulations (Texas in January 2026, Colorado in February 2026) will require heavy compliance layers for certain applications.

The solution starts with universal AI literacy similar to mandatory cybersecurity training. Employees must understand basics like avoiding "free Chinese AI tools" and recognizing hallucination risks. For high-risk applications, organizations need involvement from general counsel, proper logging and auditing systems, and governance frameworks that extend potentially to board level.
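On the logging-and-auditing point, one minimal pattern (the fields and JSONL storage here are assumptions, not a framework from the episode; real deployments would follow counsel's retention and access requirements) is to wrap every model call in an append-only audit record:

```python
# Hypothetical sketch: an append-only audit trail for AI-assisted decisions,
# so any prompt, response, and responsible user can be reconstructed later.
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # illustrative; use durable, access-controlled storage

def record_ai_decision(user: str, purpose: str, prompt: str, response: str) -> None:
    entry = {
        "timestamp": time.time(),
        "user": user,
        "purpose": purpose,  # e.g. "resume_screen" marks a high-risk use case
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
    }
    # JSON Lines: one record per line, append-only, easy to audit or replay.
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```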

The path forward

Williams' message balances enthusiasm with pragmatism. The opportunities are real: development teams can be more productive, products can be more compelling, workflows can be more efficient, and organizations can position themselves strategically for an AI-driven future. But success requires deliberate action: clear leadership commitment, comprehensive training, thoughtful risk management, and patience to let adoption develop organically.

For organizations feeling FOMO about falling behind, Williams offers reassurance: while action is necessary, you're not alone in just getting started. The key is beginning with clear wins, then expanding systematically as the organization builds confidence and capability.


Episode transcript

Jacob Andra: Welcome. I'm your host Jacob Andra, and I have with me Kevin Williams. Kevin, why don't you tell a little bit about yourself and what you've got going on with applied AI these days.

Kevin Williams: Sure, Jacob. Again, I'm Kevin Williams. I'm founder and CEO of Ascend Labs AI. I have a background in machine learning from an applied perspective. That seems to be the thing that we're gonna be talking about. I used to be a direct to consumer brand owner, and I applied machine learning in my previous companies.

After I exited those companies, I found myself looking for my next road. And Mr. Altman obliged me by dropping OpenAI in what feels like five internet minutes after I exited. So I pivoted pretty much everything I was doing into generative AI. Ascend does two major things. Three or four if you really want to count.

On one side we do training and literacy. We work with companies of all sizes, from just a few employees up to, now, tens of thousands of employees, to level up large swaths of their populations with foundational AI skills. We run 12-week ambassador programs, not boot camps, where we try to level up select populations, and now annualized programs.

And then the other side of the business is implementation. Lo and behold, if you provide a lot of literacy and education around these technologies, people start coming up with really good ideas, and they need organizations that can help them implement those. That can be strategic, where I serve as a fractional CAIO, or highly tactical, as far as building workflows really deep in the organizational structure.

Policy-wise, I've been heavily involved with the Utah Office of AI Policy and the Responsible AI Initiative through the U. Been in the right rooms as far as making the voice of workforce development heard. And there are a lot of really exciting things that are going on in this state around AI. So it's been a fun ride.

Jacob: Yeah, that's great. And you said a lot of great things there. One thing I wanna circle back to a bit later is this part about workforce development. You and I are both in the trenches helping companies get efficiencies with AI, drive value creation with AI. And one of the big sticking points I think we both see is getting people to actually use it and use it effectively.

So I wanna circle back to that, but I wanted to actually kick things off with kind of just an exploration of, so you're big on generative AI. Your whole practice is based around getting organizations literate and using generative AI. What are some of the biggest actual use cases for generative AI? And you can be specific to given industries, you can talk in general across industries, or a bit of both, but where are you seeing the biggest opportunities for value creation with generative AI?

Kevin: So I'll preface this by saying that there is a lot of FOMO that's going on out there where people are really like, oh my gosh, I'm falling really far behind, and all these other people are doing all of these interesting things. And I tell you, I walk into rooms every single week of surprisingly large and sophisticated companies in verticals where they should know better, they really aren't doing anything yet.

So it's still early days as far as the adoption curve, which I think is being reflected a little bit in how people are starting to think about the timelines here. So for those of you who have FOMO, know that yes, you do need to get on this, but no, you aren't the only one who isn't on this. But organizations that are adapting or adopting the technology, I generally see four different pillars that they tend to lean into.

The first and the most obvious, which frankly gets a lot of the press and a lot of the valuation in the markets, is in dev implementation. Like it's sort of face palm obvious when you see the gains that a well trained and well enculturated dev team can make when they're leaning into AI assisted tooling.

We're talking 30 to 40%. That's what a well-trained and well-enculturated dev team will see if they lean into AI-assisted IDEs: Cursor, Windsurf, the full extent of GitHub Copilot, whatever it might be. Again, leadership is really important, but you see people rampaging forward with what I like to call bug hunts.

So as a way to get their feet wet, the tech leadership will rally around a new technology and be like, hey, we're not gonna actually integrate this into all of our development process yet, because that's big and scary, and all of your heads are going to explode. So instead what we're going to do is we're going to do a sprint. And we're going to learn together through that sprint, through identifying and quickly resolving all of these tier three type bugs that have crept into our platform and are sitting in Jira, and nobody is ever getting to them. So let's use these technologies to smash all of those bugs.

And lo and behold, because they're generally like niggling little issues, the technology's well suited to solving those problems. And then everybody starts getting more enthusiastic about it, and then they start experimenting more with auto complete functions and getting into sort of the vibe coding land. There's a whole topic there within dev structures that everybody needs to be aware of. That vibe coding is probably not all that it's cracked up to be quite yet, but you can get so far using just the AI IDE enablement and be able to point to an ROI right away. So that's one that's dev. Like if you—

Jacob: I wanna comment on that one really fast and we'll get to your other—

Kevin: Yeah.

Jacob: So your first one is around using it for dev and the vibe coding thing. We don't have to jump in too much to the deep end of vibe coding, but it does seem like a huge force multiplier if you know how to use it correctly.

But it seems like also that devs need to actually be good devs to do effective vibe coding. That for people who have no knowledge of development to jump in and try to do vibe coding, they're probably gonna introduce a lot of bugs, a lot of vulnerabilities, a lot of bad code by not knowing what they don't know. But good devs using AI tools to force multiply what they can output and knowing how to check it and use these tools correctly is a huge clear win.

Kevin: Oh, I totally agree. And the ability to MVP, to create a minimum viable product of an idea, even as a manager, such that you can test out whether this is actually going to do what you want, can be really liberating in the right structures. But like anything in dev, most important is scope, having a really solid scope. And then second is probably your closing ability.

And both of those are really fragile in vibe coding. For people who don't have a traditional dev background, if you're not particularly good at writing a good technical scope, you're going to get yourself into a world of hurt in a hurry through vibe coding because you're going to try and iterate it like you're chatting with ChatGPT, and they make it easy to do that, but without necessarily locking in the right approach to it such that it has guardrails around it.

The other part is the last five, 10, 15%, where it gets super painful to tie off these projects, and you end up with a lot of different dependencies in the structure that can be super painful for people to deal with.

Jacob: No, that makes perfect sense. All right, let's go on to your next—

Kevin: Pillar number two. So pillar number two that I see is in product, and this is also obvious. Depending on what your product is, and your product might be yourself, it's leaning into integrating AI tools or AI approaches into your product itself. So if your SaaS product is gathering a bunch of information from somewhere, the obvious application is: how do you use an LLM layer to aggregate? How do you add AI-driven features? How do you make the user experience better and smoother and faster, such that you can increase the stickiness of your product, maybe fight off some of those wolves that are circling around, offering better, cheaper, faster options around your product. And then the next, where we spend a lot of—

Jacob: A little bit on that one. 'Cause I think that's a really good one and let's comment a bit on that one. Yeah, at Talbot West, we focus heavily on that as well. And even service businesses, they may not have a product, but they may be offering a service. There's almost every company out there, regardless of what they're offering, whether it's a product or a service, they can probably deliver it either cheaper, better, at higher margin to themselves or give the customer a better experience or somehow improve what they're doing by integrating AI into it. I mean, there are very few exceptions to that rule. Would you agree?

Kevin: There are very few exceptions to that rule. The imperative to do so may differ based on the industry and how close it is to an existential threat from some of these technologies. You find that the early adopters are in industries where the leadership has realized, oh my gosh, this is gonna change everything about the way we do business. So they've leaned into it first. The bigger the business, the slower the industry, probably the more opportunity there is for disruption.

Jacob: No, that's great. I just wanted to dwell on that for a moment. Okay. Pillar number—

Kevin: Yeah. Pillar number three, where we spend a lot of our time, is workflow optimization. And this could be things as basic as teaching people how to use GPT and GPTs or Gemini Gems to accomplish little things in their day-to-day world, to accomplish things faster. So it's a little bit related to product, if you are your own product; a lot of the stuff we're talking about bleeds over between them. But it's accomplishing more by identifying tasks that are essentially transformation of data with some limited human involvement, and finding automations such that you can bypass that step and find the inflection points where people really need to be involved.

And there's endless opportunity here from basically drag and drop workflow tools like n8n, which is very similar to Make or Zapier. But in general, one of the things that we like to see organizations do is to try and shorten cycles. There's been a lot of press recently about, oh, AI projects aren't yielding the results that people expected. Well, it depends what you're measuring. Like, beyond other criticism of the MIT study that's been floating around, like unless you actually fired somebody or unless you actually increased revenue, you're not measuring the right things. So how do you know whether something was successful?

So one of the ROI factors that I like to lean into is reduction in cycle time. So we do so many different things in businesses that we do because we do them, because we've always done them, and we're handing information off from one source to another. We get together and we have a meeting about setting the meetings, and then we ideate and like weeks and weeks and weeks go on.

And marketing is particularly guilty of this, where if you've ever been involved in a video production project, these things always take 6 to 12 months because of all the different moving pieces, whereas if you can automate or augment the automation part of the early part of the process, you can squeeze the whole process down. So it doesn't have to take six months, it might still take three months to do the production part that you're handing off to another partner. But your team can be loaded for bear because everyone's coming to the table having done a better job of ideating and iterating and bringing better ideas to the fore before you're pulling the trigger on the rest of it.

And it is a little bit of a leadership challenge to capture those gains because we do things the way we do things because we do things. So it takes some cultural change beyond the individual person's workflows to capture some of those advantages and create opportunities looking forward.

Jacob: I really like that and that's great. What's pillar number four?

Kevin: So pillar number four is, it's interesting, it's sort of a bucket, which is what I like to call future orientation around AI, and companies that are on the existential edge of all of this. So marketing agencies, advertising agencies, modeling agencies: it's pretty obvious that they're going to be in trouble unless they reinvent their entire reason for being, right? It's leaning into what changes will be wrought by AI in my industry, in my company, in my customer base, and how am I going to accommodate those?

So a lot of well-established companies have marketing layers that have a heavy dependency on search engine optimization, for example. And in a world of LLM driven discovery, it isn't entirely clear how you're going to surface your identity in that world. So you need, you may need to change significant parts of your go to market strategy, and you may find yourself leaning into workflow and workplace development activities in sales and marketing that you wouldn't have otherwise because you recognize that you're on a bit of a slippery slope.

But looking around with how your consumers are going to change, how are your consumers consuming the information you have. We work with a large organization that essentially has proprietary data on one side and customers on the other. And there are hundreds of people in between who take all of this proprietary data, turn it into these really beautiful documents, and then the salespeople sell those to hedge funds and whoever else buys these things.

But essentially on the customer side, they're doing the digital equivalent of slitting the back of the document, scanning it all, deconstructing it, and putting it into whatever documents they're putting it into. All of that qualitative activity in between is almost irrelevant. So for this company, what it's doing is very valuable to the people on this side, and it has access to this incredibly valuable resource on that side. So it has a reason for being, but its end product in another few years is very unlikely to look anything like it does now, because the end product will need to be absorbed by its customers' LLM layer and then redeployed in new ways. And it probably won't have a lot of those hundreds of people sitting in between.

And then your competitors: who's moving faster, who's doing what, what does it mean if somebody leans into this? I think that you and I both see this in the vendor layer, that there are a lot of people who have the chutzpah to be jacking up all of their prices based on some sort of AI feature. And while they have this nice distribution advantage that they're leaning into, what happens if there are a bunch of new insurgents into the marketplace that disrupt their position?

One I just heard last week is somebody who is building a CRM for automotive restoration, and apparently the automotive restoration business has relied on automotive repair CRMs, and there's a big difference between the two for some reason that I don't really know about. But nobody really cared to develop that product for the restorers, even though it's probably a hundred million dollar market or something like that, but it wasn't worth the time of HubSpot or Salesforce or anybody to do that. Well, now it's worth the time of somebody to do that. And those people are using CRMs, so they're going to take a bite out of those other providers that were unknowingly providing these services, and that could disrupt some of those business models as well. So disruption is sort of the name of the game there as far as looking forward.

Jacob: Yeah, so pillar four is really about helping them see into the future a bit, see around the corner, and strategically position themselves to not get disrupted.

Kevin: So, and it's not necessarily me, I mean, I do do that, like from a strategic perspective, I help them try and think about that. But when you think about applied AI, it's not just about the technology, it is about the change management. And these are organizations who are recognizing that change management is not just about the internal bits, it's also about the external bits. And they're inviting that idea to the table such that it's part of the conversation and it's really great when you see that sort of integration. It's also super rare.

So I would say, of these pillars, anybody doing dev should at least be experimenting with pillar one. Like, you just need to. And if your dev staff is reluctant about it, you gotta find ways around it to demonstrate that this can work, because it is an easy win. You're not gonna do away with your whole dev department, but you can get easy wins.

Then if you have a product where these things are needed, so like that CRM example, you better figure out how you're going to integrate these tools in a way that your customers are going to be excited to stick around with your product and not have them dribbling off.

Workflows, start little. Have people experiment on their own, solve some of their own problems, and then start gently expanding those shortening cycles, et cetera.

Jacob: I like that. And I like what you said about change management for that pillar number four. And it gets back to what we started out with that I said I wanted to circle back to, which is the human element, right? So we deal with a lot of technology, but ultimately, if this technology isn't working for the humans in an organization and those humans aren't deploying it successfully, it doesn't amount to much. Right.

And so what are you seeing with getting people to get on board with this? I know I see that it's really hard. In most companies that we work with on the Talbot West end, you have your few champions that want to jump on it and really use it for all it's worth. And then you have a bell-shaped curve: that middle area where there's a significant portion that use it somewhat, and then a lot of your stragglers that aren't really that interested in using it at all. So what are you seeing on that front?

Kevin: It is tough. A lot of what we've seen resembles what you're describing there. It does start from the top. I like to see an organizational manifesto. At this point I'm almost insisting on it, that it come, usually from the CEO, establishing how we're going to think about these things, shining a light on the fact that it is going to bring change to the organization, and I don't care what organization you are, it's going to bring change, and that there are reasonable guardrails in place. So having some sort of a policy: we have a manifesto, then we have some guardrails as far as how we actually use this stuff, so we're avoiding some risk in the organization.

And then it's literacy, and of course I'm talking my own book there, but I see this all the time, where you get a CTO who's really excited about technology and they talk the CFO into supporting getting, I don't know, 500 OpenAI licenses or an enterprise set of licenses, and then they just drop them on the organization and they're like, have fun, guys. And they're not realizing that although folks like us are living and breathing this, most organizations are accustomed to a tool orientation that isn't appropriate here, because any fool can get into GPT and prompt a little bit and get some interesting responses. But to really use the tools to the extent of their capability, or even the first third of their capability, requires a lot more training and exposure to how to really think about using them.

And that helps get people's feet wet. And then I like to talk about, I sort of have it in my keynotes. I literally call it my face palm slide, which is, okay, cool. I just told you about all of these neat things that you can do in your organization and now you have FOMO worse than ever before, but get GPT licenses, teach people how to use GPT, and importantly, how to build things like custom GPTs or projects, and then give them a little bit of latitude to try and find things in their daily life that they don't like to do. The things that suck their time that they hate or that are impediments to the things where they wanna spend their time and have them not necessarily automate that whole task, because you know, and I know that's generally not going to work, but use the tool to solve some of the pain, to get the task more expeditiously completed, and then that frees up time for them to do other things.

And you see this aha moment when people do that, but only once they actually understand how to do it. And although, again, it's not that hard to use a lot of these tools on a basic level, it's not Excel, where you open it up and you put in a formula; you need to interact with it on a different level. And once you get that unlock (and you'll see it in organizations that have done broad-based literacy development and license deployment), when you combine those things together, you start seeing ROI, as long as you know how to define the ROI. Okay, everybody's a little bit faster, and then they go like, eh, you know, I don't know. That's not necessarily great. Although the people are happier. There are some—

Jacob: I mean, having the people happier and having higher quality of life, I guess, could be a win right there, even if nothing changes with the company's bottom line. I mean, if people like working there more, that could already be somewhat of a win. Right?

Kevin: Yeah, yeah. No, totally. Every organization's different that way. It really is. But most people are overburdened or at least feel overburdened, and most people are overburdened with these things that kind of stink. Like think about hiring, like what we do in organizations, it's sort of nuts. Like you interview somebody to be like a salesperson and why do you hire them? Because they are awesome on the phone, they could sell snow to Eskimos, whatever it is, right? And then what do you do? You hire them and they spend 40% of their time in HubSpot or Salesforce, just loathing, like tracking things. You want that person on the phone doing what they do best and what they like to do, right? Particularly in that case, but you load them down with the administrivia of a modern organization. So if you can pull away some of that administrivia, you're putting them back in their zone of genius. And if they can spend more of their day in that place that made you hire them in the first place, that is a huge win for them and a huge win for you.

Jacob: Absolutely. Yeah, and I like how you mentioned custom GPT. Or, you know, Claude has custom projects, Gemini has gems, and it's amazing how much you can get done with these things. For the types of engagements Talbot West does, we consider all those like lowest tier opportunities. You know, anything that you can do with a custom GPT, you get all that stuff implemented first, and then you go onto your next tier, which might be some custom stuff, right? And we also do that as well. But all this lower tier stuff, it's amazing how many people don't know how to effectively build custom GPTs and deploy them, right? And so there's just so much low hanging fruit there. But there's also some risk, which I want to get to in a minute. 'Cause I know you do a lot with de-risking AI use.

Before we get to that topic, I'd love to hear you throw out a few real-world examples. You can anonymize them if you need to. Maybe from buckets three and four. So, you know, maybe just quickly run through a few examples of product augmentation, specific ways a product was augmented, and also specific workflow improvements driven by AI implementation.

Kevin: On the product side, it's sort of what I was saying earlier. Recognizing that generative AI is fantastic at dealing with unstructured data is an important realization. And unstructured data is data that sort of sits outside of general tables. So I see a lot of organizations sifting around for opportunities in unstructured data. It might be customer feedback, it might be customer service transcripts, things like that, that they can pull pieces from to amplify the customer experience.

As far as internal workflows, sort of again, what I was describing before, it's about smoothing out whatever that process is. So we're working with a big insurance company and they have to deal with tens of thousands of binders for commercial policies every month. And they have issues with compliance internally about is the document signed? If it's not, I mean, insurance is not exactly scintillating, but if the client hasn't signed the document and the white van gets in an accident, it creates a real problem for both the client and the insurance company at the end of the day.

So their overall growth might be constrained by their ability to audit a certain percentage of their overall policies within their risk thresholds. So that takes a poor soul who's going in and basically manually looking at thousands of different documents. So if you can use a combination of image scanning technology, so OCR on top of the policies, and then add an LLM layer to it that interprets the data—is that a signature? Or is that like a tea stain that's on the document? Which is it? And you know, 99% of the time it's all good, but let's keep our auditor focused on the 1% that has some sort of anomaly with it.

So that's a theme of a lot of these workflows at the moment is keeping humans in the loop at the right inflection points such that they can, again, it's not—I'm not saying insurance auditor is a zone of genius, but keeping them in the place where they're looking at the right documents at the right time as opposed to their attention being spread all over the place. Essentially anything that is shuffling data around. You see them in SMEs, you see people using them for customer service response templating training.

I would say that sales augmentation is a huge deal. We have a practice that's focused around augmenting go-to-market teams, and one of the first things that I tell people is you need to be gathering all of that unstructured data and then doing things with it. So using a good tool that isn't just one of the transcription tools we have around, but that does dynamic coaching: is your salesperson actually adhering to their playbook? Can you give them coaching and feedback in response? And then, as opposed to them having to go to HubSpot or copy and paste things around, is there a clean integration from the platform that gets all the follow-ups from the calls into HubSpot and keeps them on task? So, I mean, literally thousands of different things.

Jacob: Yeah, there are literally thousands, like you say, of workflows and implementations like that. But yes, that sort of shines a light on some of these low hanging fruits where large language models do them exceptionally well. Not perfectly as we need to point out, right? So totally agree with what you say about a human still needing to be in the loop. It's just that they need to be in far less of the loop than they were before, right? And that's a good way to look at it.

But yeah, let's segue right into the de-risking of it because obviously there's a lot that can go wrong with deploying large language models into enterprise contexts and a lot of your practice is centered around de-risking that and putting the right guardrails in place. So let's jump right into that aspect of it.

Kevin: So it is funny because I'm obviously very enthusiastic about this, and I know you're very enthusiastic about this as well, so I'm like, do all of the things. It's so cool and like, you know—

Jacob: Yeah.

Kevin: Stuff will happen, and I'm sad to say that the fun police are definitely showing up. It is very obvious the larger the organization is, or the more adjacent it is to sensitive information. There are actually some regulatory terms around this: high-risk applications. They're generally obvious. They're like mental health, healthcare, consumer credit card data, insurance, hiring, et cetera. Like, this stuff's kind of scary to think about it getting out of the organization.

And major enterprises are leaning into their OpenAI enterprise accounts. So the one thing that I will clarify for the audience is that real risk versus perceived risk are a little bit different. There is a non-zero chance, if you talk to people who build the models, that that data could be used in training. If a non-zero chance, even a 0.00001% chance, is unacceptable based on the application of the data (HIPAA comes to mind, because people can go to jail for HIPAA violations), then you run smack into risk and governance.

And a lot of this is really poorly understood because we've sidestepped nationally any sort of federal regulation around these, about how we treat these topics. Instead, we've left it to the states and there are several laws that are coming online over the next few months, Texas, January 1st, Colorado February in '26, that require a really heavy layer of compliance as far as reporting a lot of different issues and what—

Jacob: I wanna point to what you're saying, which is that not every application of AI runs into these issues. And so a big part is discerning which applications need special attention to these guardrails, where you can get into some seriously hot water really fast, and which applications are relatively benign. They may still need a human in the loop, but you're not up against some of the issues you're talking about. So I think step one is knowing which of these categories your use case falls into. Right.

Kevin: I would say that from a generic perspective, everyone needs a general understanding of general AI risk. Just like you will not walk into any even mid-size organization today as a new employee and not be subjected to cyber training. Don't pick up a USB in the parking lot and plug it in. Don't use data in this way. Like it's just not gonna happen. Like you will get trained on this stuff.

And we're providing a similar level of assurance for organizations of all sizes, such that their employees will understand the basic risks that fall outside of these severe high-risk areas. And these are things like data discovery, putting your information in the wrong tools. Friends don't let friends use free Chinese AI tools; terrible idea. But people don't really realize that. And the stats back it up: something like 80% of employees under 30 are BYOAI. The tools are there, and they're cheap, and they're using whatever tool it is to help them accomplish their job. And that creates risk in the organization. Hallucinations, which I know you've talked about before, can create risk in the organization. Drift, et cetera. There are all of these things that a general counsel or your organizational counsel is gonna feel a heck of a lot better about if everybody in the organization has some understanding of what this stuff is. So when the bad things inevitably happen, and depending on the organization, they will inevitably happen, you will have some way of managing that.

Jacob: Yeah. And I think I agree with you there that some level of AI literacy and understanding of both the strengths and weaknesses of generative AI should be mandatory for everyone. And so I'll totally give you that. That's absolutely common sense. And then you have these special high risk categories that are in a tier of their own. So let's talk about some of those and help navigate those.

Kevin: So this gets challenging because you have people who are building the models, the rocket scientists who are building OpenAI and et cetera. And then you have people deploying the models. And if you're just deploying the model in a straightforward sort of way, and you're not really transforming it and you're not doing anything that's particularly complicated, then you're probably fine. But if you are transforming the way that the model works and transformation of the model can be on a couple of different fronts, it can be through fine tuning of the model to your particular use case. It can be actually through your prompt layer and the type of information that you're asking the model to yield can create real problems if you're dealing with things like hiring.

So a totally typical application that so many organizations are doing is to deal with this influx that's happening from AI generated resumes. And they're fighting AI with AI and they're vetting things. And if you didn't know better, the way that you would think about that is to say, hey, you know, this is my perfect resume that I wanna match and let's scan through all the other resumes and then find the people who match closest to that. Well, you just injected bias into your hiring process, and I promise you that if your GC or your CHRO knew about it, their head would probably explode if they understood that that was actually bias that you interjected in there.

So you probably have additional scrutiny that you need to follow if you're a product. And generally these are for larger deployers, by the way. Most of the laws are written with a threshold of 50 employees. An employee threshold makes no sense, but states are states; they're gonna do their own thing. But if you're over 50 employees and you're doing these more complicated manipulations of the model, you're going to be subject to additional scrutiny, and that means liability.

This is going to be a total smorgasbord for IP trolls who are out there—not IP necessarily, but litigation trolls who are out there looking for violations of various laws and filing class actions and things like that. So I have major concerns about what that's gonna look like as these laws start coming online.

Jacob: I wanna go back to something you said and challenge it just a little bit, which is if you're not really tampering with the models, you're okay. If you are tampering with them, you need to be really careful. I agree that the more you're tampering with the model, you are injecting new surface area for different types of risk. But let's go back to the scenario where you're just straightforwardly deploying a model without fine tuning or otherwise messing with it that much. I mean, I would argue that in some of these sensitive areas, you still have tremendous risk. And I don't think you meant to say that those weren't there, but I just want to go back and let's talk about those. 'Cause there are risks with uploading certain types of data to a commercial large language model. Even if you opt out of the do not use my data for training, et cetera. Right? Who knows how airtight that actually is. There are other types of risks around hallucination. I mean, really in these kind of sensitive use cases, you have a ton of risk, even if you're not tampering with the model. Right.

Kevin: Yes, absolutely. Again, even just prompts: that exercise I just described with hiring, you could do that with just some basic automations and GPTs, and you might be falling afoul of either state or federal employment laws by doing so. So you need to think about it. And then anything in these high-risk categories, you just fall into a different realm when you're dealing with highly sensitive data. It could be secured data, it could be health data, it could be, like, formula-for-Coke-type competitive R&D data. Leakage is where companies really get themselves bound up.

You know, this gets practical on the ground with not using external models. So one of the things you'll find is a proactive organization that buys all these GPT licenses; people will still be using their same browser, and when they go to log into the corporate account, their personal account, which is the free account, is the one it defaults to. And they don't even realize that they're putting sensitive information into a model that is explicitly telling them that it's going to train on that data. How that data comes out the other side is an item for debate. But yeah, risks abound. Again, this is what I'm saying about the fun police: I want all this stuff to happen right away, but I also acknowledge that there's a reason a lot of these organizations are completely paralyzed.

Jacob: Yeah. And you know, at Talbot West when we work with our clients, I mean, we don't want to come too far on the side of put the brakes on because we do want them to adopt these technologies. We just want them to adopt them safely and smartly, and not open up new risks and vulnerabilities for themselves, which I know is what you are doing as well. So you're both encouraging adoption, but you're saying pay attention to these specific things, don't fall afoul of them. And what are you putting in place to help companies kind of navigate that, thread that needle of still getting the benefit of the adoption, but not falling afoul of these landmines.

Kevin: For companies that have a risk orientation, it is first risk literacy. So that's the first place to start. The type of program that we run provides guidance from senior executives, to managers who are managing other people using these tools, to rank-and-file people who need to know the basics. And once those pieces are in place, you have to decide whether or not you have dev-oriented responsibilities as far as logging and auditing and et cetera. So that takes pretty heavy involvement with the general counsel's office, and depends on the appetite for risk management within the organization.

And then on a technical level, it's getting in there and working with the teams to make sure that those pieces are in place. And that's an industry that's likely to be huge as far as doing that risk mitigation from a data management perspective. It's already in place for SOC 2 and some of these other big standards where companies are relatively used to doing audits and things. This is just another flavor of that, but they have to understand the difference in the flavor because of the nature of the technology and how slippery the technology can sometimes be to define.

Jacob: Yeah. And so putting the right corporate governance frameworks is also a big part of that. Right. And I think you do a lot of that in addition to the training and education component.

Kevin: Yeah, exactly.

Jacob: With—

Kevin: Yeah. What are the—it starts with the manifesto and the policy and the policy is going to be reflected in the governance approach that the organization is taking, and that can drive its way all the way up to the board.

Jacob: Okay, great. Well, I know that we've only scratched the surface. There are so many other issues to delve into. Subtopics of all of this. And so I think we have done enough damage for one day at least, kind of teasing some of these issues out. There's a lot more to explore. Maybe we'll have to have you on in the future to dive deeper into some of these areas. But I really appreciate you joining.

Kevin: Awesome. It was great. I always love these conversations. My wife's tired of hearing about it, so any opportunity I have to get it off my chest in like a 45 minute period is a winner for me.

Jacob: There you go. Glad I could oblige. Thanks for being here.

Kevin: Of course.

About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.


The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
