Episode 7 of The Applied AI Podcast

Jacob Andra interviews Scott Peiffer of i4Ops about data security in AI deployments. 

About the episode

Most enterprise AI projects fail because companies hold back their data. They spend hundreds of thousands of dollars training models on sanitized datasets, afraid to expose sensitive information. They get generic answers that create no competitive advantage.

In this episode, Scott Peiffer from i4Ops cuts through the AI hype to discuss how to deploy AI systems that actually create value while keeping proprietary data secure.

What you'll learn

Scott Peiffer brings 35 years of data storage and security experience from Intel, NetApp, and now i4Ops. He explains why the current approach to enterprise AI deployment produces disappointing results and what companies should do instead.

The FOMO problem

Companies receive mandates from the C-suite to "do AI" without clear objectives or strategy. Research shows 90% of these models fail to deliver value because organizations train them on limited data subsets, withholding their most valuable information out of security concerns. Employee data, sales conversations, customer support transcripts, and strategic documents remain locked away, resulting in AI systems that cannot deliver insights specific to the business.

The challenge compounds when companies lack a systematic approach. They bolt new AI tools onto poorly designed foundations without addressing underlying digital infrastructure issues.

Why digital transformation comes first

Successful AI deployment requires a foundation in broader digital transformation strategy. Companies need to start with a clear end vision, map current systems and processes, and create a stepwise progression rather than bolting new tools onto poorly designed foundations. This means defining where you want to go (higher efficiency, preparing for acquisition, competitive advantage), understanding your current state through systems mapping, and identifying a practical path forward that does not break the bank or disrupt operations.

Knowledge management as competitive advantage

The future requires every competitive organization to maintain an in-house, fine-tuned RAG system trained on company-specific knowledge. This means addressing fundamental questions about documentation, data quality, and information flow before implementing AI solutions. Scott emphasizes that approximately 75% of companies now use local data models rather than cloud solutions when dealing with sensitive information. The security wrapper stays in private data centers where organizations maintain complete control.
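The retrieval loop behind such a system can be sketched in a few lines. This is a toy illustration only, not i4Ops' or Talbot West's implementation: it assumes an in-memory document store and bag-of-words cosine similarity standing in for a real embedding model and vector database.

```python
from collections import Counter
from math import sqrt

# Toy in-memory store standing in for a company knowledge repository.
DOCS = {
    "refund-policy": "Refunds are approved by the finance team within 14 days.",
    "vendor-review": "Vendors flagged as high risk require a second sign-off.",
    "onboarding": "New hires complete security training in week one.",
}

def _vec(text: str) -> Counter:
    # Bag-of-words term counts; a real system would use embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; return the top k ids.
    qv = _vec(query)
    ranked = sorted(DOCS, key=lambda d: _cosine(qv, _vec(DOCS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Ground the model's answer in company-specific context only.
    context = "\n".join(DOCS[d] for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key design point is the last function: the language model never answers from generic knowledge; it answers from retrieved company documents, which is what makes the response specific to the business.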

The data security gap

While data at rest and data in transit receive encryption protection, data in use remains vulnerable. When you download an Excel file to analyze it, that data sits unencrypted on your machine. It can be copied, manipulated, or sent to a competitor, by accident or malicious intent. And when employees ask public AI models to summarize files, that unencrypted data gets ingested into public language models.
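The gap shows up in a few lines of code. The XOR "cipher" below is a deliberately toy stand-in for real at-rest encryption such as AES; the point it illustrates is that analyzing data forces it into plaintext in memory, where it is exposed.

```python
# Illustrative only: a toy XOR cipher stands in for real at-rest
# encryption. The lesson is the same either way: analysis requires
# decrypting the data into plaintext memory.
KEY = 0x5A

def encrypt(data: bytes) -> bytes:
    # Data at rest: ciphertext on disk or in backup.
    return bytes(b ^ KEY for b in data)

def decrypt(blob: bytes) -> bytes:
    # Data in use: plaintext in memory, free to copy or exfiltrate.
    return bytes(b ^ KEY for b in blob)

record = b"salary,2024,184000"
at_rest = encrypt(record)
in_use = decrypt(at_rest)  # must be plaintext before you can analyze it

assert at_rest != record   # protected while stored
assert in_use == record    # fully exposed while being used
```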

i4Ops' approach

Rather than plugging holes after they appear, i4Ops uses a patented virtual machine approach that starts with a default of zero data egress. Data cannot leave the protected environment unless explicitly whitelisted, regardless of credentials or authentication methods.
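The default-deny posture can be sketched as a simple policy check. The volume names here are hypothetical, and this is only an illustration of the concept, not i4Ops' patented mechanism.

```python
# Default-deny egress: a write is permitted only if its exact
# (source, destination) pair appears on the whitelist. The volume
# names are hypothetical examples.
ALLOWED_DESTINATIONS = {
    ("volume-a", "volume-b"),  # e.g. a copy for a data science project
}

def egress_allowed(src: str, dst: str) -> bool:
    # Default is zero egress: only whitelisted pairs pass, regardless
    # of the credentials of whoever requests the write.
    return (src, dst) in ALLOWED_DESTINATIONS

assert egress_allowed("volume-a", "volume-b")       # whitelisted path
assert not egress_allowed("volume-a", "usb-drive")  # blocked by default
```

Note the contrast with conventional data loss prevention, which enumerates known-bad destinations: here nothing is permitted until a data owner explicitly adds it to the list.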

Where AI creates the most value

Beyond the obvious cost savings in customer support and repetitive tasks, AI delivers transformational value when companies train models on their complete proprietary datasets to solve specific business problems. Scott describes how his team solved a weeks-long coding problem in hours by training a model exclusively on their kernel code. They asked two questions and had their answer.

Episode transcript

Welcome to episode seven of the Applied AI Podcast. I'm your host, Jacob Andra. Today's episode covers data privacy and security, AI deployment, and other enterprise concerns. Enjoy.

Jacob Andra: 

I'm here with Scott Peiffer of i4Ops. Scott, tell us a little bit about yourself and what you're up to.

Scott Peiffer: 

Yeah, my name is Scott Peiffer. I've been doing the data storage and data technology thing for about 35 years, and near and dear to my heart is AI and taking care of data. I work for i4Ops. We are a data security and AI security company, and our mantra is: we keep the data in. That's what we're gonna talk about.

Jacob Andra: 

That's great, and we'll get into data security and the good work you guys are doing in a little bit. First I wanted to talk about something that's near and dear to both of our hearts, which is this idea that right now AI is a bit of a FOMO thing. A lot of companies feel desperate that they need to be doing AI because it's the hot thing and they don't wanna miss out, but they don't really know what they should be doing, and a lot of 'em aren't going about it in the most intelligent, systematic, or cohesive way. Do you have thoughts on that?

Scott Peiffer: 

Yeah, I've read a lot of reports where companies are getting mandates from the C-suite: go do AI. Well, everybody's got some level of AI going on anyway. There's shadow AI: I'm using Perplexity, I'm using ChatGPT or Gemini. This mandate to go use AI to fix things is being driven by FOMO, like you said, where people go, okay, I'm gonna do this. And I just shake my head. They're spending $200,000, $300,000 trying to set something up without a clear objective of what's gonna happen. The expectation is that they're going to find great value, but 90% of these models don't turn up huge value, because, in the research we've done, they hold back. They say, I'm only gonna share a little bit of my information to train that model, and think it's good enough, 'cause I really don't want to disclose my really important secret stuff: my employee information, my sales data, my conversations on chats with customers and support. So they hold back, and they're just not getting value from that project.

Jacob Andra: 

Absolutely, I think that's true. At Talbot West, we consult clients on true digital transformation, which, as an aside, I think is a big part of the conversation here: for an AI strategy to really get traction and create results, it needs to be based in a broader digital transformation discussion. Otherwise companies are just running to, as they say, put lipstick on a pig, without really looking at the underlying digital transformation that needs to take place at a deeper level. We always say, when we're engaging with a client, start with an end vision of where you're trying to get to and why you're doing this. That could be some vision of a future state that's fairly clearly articulated, where things are much more efficient than they are now. It could be that they want to prepare the company for sale and get a higher exit by demonstrating a certain level of technological efficiency and sophistication to a buyer. Whatever that is, they need to clearly define it. Then clearly define where they currently are, which involves mapping their systems and processes. Then look at the delta between the two and find a natural, stepwise progression that's not gonna break the bank or kill them, where they can roll this out in an incremental way. To me that's super fundamental, 'cause otherwise you're just running to bolt some shiny new gadget onto a foundation that's maybe not very well thought out to begin with. A lot of these businesses grow in a very haphazard way, and nobody ever takes the time to put the pieces together the way they should. So a more fundamental re-imagining of how things could be is important here.

Scott Peiffer: 

I think you touched a point that's important here, so let's highlight it. When you train a model, when you do the diligence, you want to train it to speak your language, your business. You can ask questions, but one word may mean something different in your business than in somebody else's. So having a locally trained model, a model that's trained on your information, and maybe we can get into RAG and stuff later, but where you're refreshing the data constantly so it's always fresh and it's about your business, that's when you start to gain value from what you're doing. You have the ability to throw all of your information in, train the model, and keep it local. So when, you know, I use the term Dexter, the new kid on the block, who's gonna say, I'm gonna impress the boss and ask ChatGPT some questions, he gets a generic answer. But when you ask a local model that's been trained appropriately, you get a specific answer that's relevant to your business and your customers and your environment, and you're gonna get such a better return on investment.

Jacob Andra: 

Absolutely. What you're talking about there is really organizational knowledge management, which is so fundamental. I think in the future it's gonna be table stakes for every organization that wants to be competitive to have an in-house, fine-tuned RAG system. Probably neurosymbolic, meaning it's anchored in some kind of ontology or knowledge graph, a basis of truth for that company, but also fine-tuned, given access to the company's knowledge repository, and able to give answers to anyone in the company who needs them. Of course, there'll be appropriate permissions in place based on user roles and access and all of that, but really a free flow of information that is very specific to that company. It's not generic, and that requires a much deeper rethinking than just doing AI or signing up for an AI tool. You have to actually look at the state of your knowledge repository. Do we have things well documented? For most companies, the answer is no. And so there's a whole cascade of dependencies and precursors that have to be addressed. Again, this gets more to true digital transformation rather than just, hey, let's do AI.

Scott Peiffer: 

I just finished a conversation with a gentleman who's working with the government in Canada, and he used a great analogy. He told a quick story, if you don't mind. He's got divers who defuse bombs: underwater mines. One of the guys in a certain country was defusing a mine, and they got the bomb done, but the detonator detonated, and he lost some fingers. Well, a couple days later, another person was doing the same thing. It was only two days later, but that information couldn't flow through, so he also lost fingers. They handled the bomb correctly, but they didn't get the detonator. So when we're talking about a company, and you're losing your fingers, you want to have the most current, fresh information possible. There are a lot of companies out there, very large companies, that have a policy: no AI. You cannot use generic AI, and there's a good reason for that. But as you said, if a company has the ability to look holistically, they create a model where everybody inside of corporation ABC now has questions and answers, and it's constantly learning. That is a hundred percent freshness in the learning model for the corporate use case, and it's secure because it's your model, your language, your employees. Yes, you have to put security around it, but that model is the future. Companies will train, augment, and stay fresh on all their information, and nobody loses their fingers.

Jacob Andra: 

A hundred percent. I'm glad you told that story. One of our clients, which I'll keep anonymous, had a similar situation. There were no fingers involved, but it was a case of well-known information that wasn't passed to the right people; the right people didn't have access to it. It was regarding a vendor with some known quality control issues whose deliverables needed a higher level of scrutiny than the company's normal vendor scrutiny. But the fact that this particular vendor had been escalated to a higher scrutiny level wasn't passed on to the right people, so they signed off on some stuff that ended up creating a huge liability for the company. Had they just known this one fact, hey, this vendor needs to be scrutinized at code red instead of code green, they would've caught this stuff, and that risk wouldn't have been passed on to the company. So it's just a case of the right people having the right information at the right time. It's that level of visibility, just like you're saying with the divers.

Scott Peiffer: 

So it could be losing a couple of fingers, something very physical, or it could be a stock price hit. You know, without something that's augmented specifically, or dialed in specifically, to your business, you can make a choice that has a material impact on your company value or a customer, a scenario where recovery isn't easy, if it's possible at all.

Jacob Andra: 

Absolutely. You have a pretty impressive background in technology and, you know, data stuff. Why don't you tell us a little bit about your background and where you come from? 'Cause I think that plays into this conversation.

Scott Peiffer: 

Yeah, I got to learn technology at Intel Corporation. This goes way back, like 30-some years ago, before it was what it is today. It was a technology expertise training ground, and I got the opportunity to work with some of the best and the brightest. We started an internal division on storage. Now, if you think about compute and Intel, everything's chips and stuff, but a chip has to have the right I/O; it has to have the right storage. So my job for 15 years was to really drive the data value of things. This was back in the days when everything was about faster, faster, faster. Fast forward to my next role, and I worked at NetApp in data management. I got the opportunity to run ONTAP, which is a very complex data management system: security operations, DevOps, SAN, NAS, file, block, object, all the different pieces of remote replication and business continuity. All those things, again, come back to the data. Our job is to protect the data; data has value. Fast forward again, and at i4Ops, running operations and product, it's about data. So Jacob, when Talbot West goes and does a digital transformation, your goal is to create value from that data, right? You want to help a company take what they have, securely, and create value. You can't just lock it away in a safe. It sounds good, it's super secure, but you gotta use it. For most of my career, I've had the opportunity to figure out how to take data and use it to create value. Where we are today, we're really talking about securing AI. We talked about the market for AI, how it's growing, and how everybody needs to use it. I talk a lot about securing it such that anybody who's using it can do it in a secure environment, meaning data doesn't escape, it's not ingested by a third party, and it doesn't allow some nefarious person to steal your data, 'cause that happens on a daily basis.

Jacob Andra: 

I want to get into the specific mechanics of how you're doing that for your clients in a minute, 'cause that is very important. But first I wanna put a finer point on the importance of data and how useful it is. I've been saying that data is the new oil. It's so valuable in the age of AI, and getting even more so. So anytime you have a proprietary dataset, you wanna be using it. If it's just tucked away somewhere, it's no use, right? You're not actually realizing the value of it, just like oil under the ground isn't actually driving any value. Another analogy I sometimes use is that the data has to be cleaned up, and often you have to extract the insights from it, almost like refining. If you have ore in the ground, it's not as valuable; it might have a very low percentage of valuable minerals in it. But once you refine it, pull those minerals out, and concentrate them, then you have something much more useful. A lot of times what we're doing is working with clients to figure out how to extract value from a lot of messy data. All of which is to say that the data absolutely needs to be used for it to be meaningful or valuable in any way. And to your point, in order to use it, you wanna do that securely, and that's where you come in. So I'd love to hear, first of all, if you have any comments on that, and also what you guys are doing specifically to keep it secure.

Scott Peiffer: 

Sure. Let's go back to the oil analogy. Oil's fantastic, right? It's a commodity, it's out there. When you refine it, you can put it in your gas tank and use it. You can do a lot of really fun things, whether that's playing in the water, driving a car or a boat, whatever. You get value from it. Having that commodity available gives you the option of doing something. When you start thinking about data in this perspective, data in a primary storage solution, whether it's NetApp, Dell EMC, or Hitachi, it doesn't matter, is typically encrypted. It's protected, it's really locked down so that very few people have access to it. Then you have your shared information, and this is a real problem. It could be a SharePoint or whatever. There are studies showing that 50% of the companies in the US were sharing up to 1,000 files with sensitive information in them. Not because they want to; they just don't know better that those credentials allow people to get into a SharePoint folder that has home addresses. Now, maybe that's not a big deal, but I personally don't want to put my address, my phone number, my Social Security number, or my medical records somewhere every employee can get to. It could be things more specific to the business: a salesperson might drop something on there that says, you know, big customer corporation ABC decided to come to us because of two points in, you know, whatever. Those are things you just don't want out. So shared data needs to be protected, and rights need to be granted to it. Data in primary is encrypted. Data in backup is encrypted. Data in flight is encrypted. But when I grab information because I want to do a data science project or some research, it's unencrypted, because I need it to be unencrypted.

You might say, well, homomorphic encryption's there, and I can run applications on encrypted data. Nobody can afford that. So let's talk reality. I'm gonna download an Excel file, and that Excel file is not encrypted while I'm using it. If I have access to that Excel file, I can take it, copy it, maneuver it, manipulate it, push it, send it to a competitor or whatever, by nefarious design or because I just made a mistake. So data in use is a huge vulnerability for every company. That might be me as Dexter, the new product person, grabbing a file and saying, hey, could you summarize that file for me? And now all that stuff I just read, which is not encrypted, gets ingested into a public language model. So i4Ops comes in, and we're like, hey, this is really important. We think AI drives value, drives business, but we want to make it so Dexter doesn't grab a file and take it out. We also know there are tens of billions of passwords, usernames, and credentials on the Russian marketplace, on the dark web, and for just a few hundred bucks I can go get a whole lot of them for you, say 20,000 of 'em for 150 bucks, or maybe a thousand bucks. So credentials are no longer good enough to protect my data, because at some point we're gonna get hacked. If you don't believe me, just set up a Google alert for data breaches. You'll get a daily list of six to twenty breaches that probably include somebody you've used: your hotel, your car rental, your bank, your favorite place to shop or buy movies. Your credentials are gonna get hacked. And once they've hacked or purchased these things, a hacker doesn't have to hack anymore; they just buy the credentials and log in. Then what happens? They can steal your data and walk away, because of credentials.

Jacob Andra: 

I mean, I guess 2FA is designed to kind of protect against that, right?

Scott Peiffer: 

It is, and it's fairly effective, but it's not fully effective. Two weeks ago there was a company that got hacked, and the attackers didn't steal data; they stole the multi-factor authentication tokens. Why did they take tokens? Because they had a whole bunch of credentials. So now they can go into a company they had the credentials for, and with the tokens, the MFA, they can answer those prompts more quickly than you would notice. Hey, did I just get something on my phone? Oh, ignore it. It's done so quickly that I don't have a chance to say, no, that wasn't me, 'cause my phone always asks me that question: was this you that logged in? With those credentials and tokens, people can get in and start to take your data, and they're gonna do the most logical thing: grab as much as they can, as quick as they can. Peace out, I'm out. And you may never know they were there. That information is then sold again, back on the dark web, or it's used as a competitive disadvantage to you. It may kill your stock price, it may take your customer list, it might expose your sales pricing. There's stuff in there that just doesn't need to get out, so we stop it from getting out, regardless of what your credentials are. Jacob, when you log in, we're gonna let you do what you need to do, but we're gonna stop you from getting data out. That's the whole purpose of i4Ops: we wanna make sure that AI happens, that a data science project or your customer billing happens, with oversight so that data doesn't get out, based only on what we choose to let in or out.

Jacob Andra: 

So talk more about how that works and how you guys are able to do that.

Scott Peiffer: 

Sure. Let's keep this at a very high level, basic functionality. Everything in a computer is an input or an output. When I put data in, it's an input; when I want to take it out, it's an output. Every one of those I/Os, inputs and outputs, goes through a certain structure in the operating system. It's a patented approach, but we simply get in the way of writes. So if you want to write something, and it says, I wanna write this customer file to this location, and it's inside of our environment, you write customer files from volume A to volume B: perfectly normal for a data science project or for a copy. Now say I wanna write it to a temp file so I can put it on my thumb drive. We are going to see that as a write that is not allowed to go outside of this box we create. You cannot take anything out of that box unless it's written on a whitelist, and that whitelist says you can only move it to this directory, this volume, or this location. So it's very prescriptive.

Jacob Andra: 

A clarification question there: does the box roughly correspond to an organization and the parameters they set?

Scott Peiffer: 

We will call a box a virtual machine.

Jacob Andra: 

Okay.

Scott Peiffer: 

You can make a virtual machine as large as you want. It can run an Oracle database, a vector database. It can run billing software, HR software. It could be data science: PyTorch, Python scripts, whatever. You can do anything you want inside a virtual machine, just like a server, and that's the box. Those are the boundaries. So you log in, you can do your thing, we can share things, but to get anything out of that box, it has to be on a list, and only a program manager or a data owner puts the yeses and nos on that list. Now, just to sharpen that point: data loss prevention and endpoint detection and response typically plug as many holes as they know about, and you have to update that constantly. We start with a default of no holes. It doesn't leave, period, unless it's on that list, which is a couple of yeses. So it's the other side of the coin. Companies do penetration testing; that's how people get in, and we're gonna assume those companies do their thing. We take the opposite approach, which is exfiltration testing. Exfiltration simply means unwanted data egress. We make it so that you can't take the data out unless it's on that little whitelist.

Jacob Andra: 

Yeah, that makes sense. So really that's broadly applicable, not just for AI applications; really anything company-wide. But you're focusing especially on AI applications because it seems like there's a larger risk profile associated with them. So let's take the example of a large language model that a company wants to deploy, just like we were talking about. It's trained on their data, it's theirs, but it's connected to a cloud, let's say a private cloud instance that's relatively secure. Does the cloud instance live inside this virtual machine also?

Scott Peiffer: 

So you can.

Jacob Andra: 

How's that set up?

Scott Peiffer: 

You can deploy it either way. Typically, with a large language model, you can have the data sitting outside and use pointers, so I'm going to use that database as the information for this AI model. Trying to stay simple here: I've got pointers to it, it's living in the cloud, doing its thing, but when I access it, I have to access it through the virtual machine, and the virtual machine will vet that yes, I am Scott, and I'm the person who should be coming in. That's my multi-factor authentication, my identity and access management. So I have credentials, but when I want to ask the model a question, and I did this today for one of my demos, who am I, talking to a local language model and asking, are you a human, whatever, when I start asking those questions, nothing leaves the environment. Everything stays in. If I ask it a question such as, explain this file, a normal language model will go off and look at a repository of data somewhere outside of this VM. We have it such that none of that data gets out; only the inputs come in, and then the answer, and so forth. It's simple. We prefer to run the large language model inside the VM. We don't have to bring all of the data in, 'cause we can use ETL with Linux. You just pull the data in, or at least pointers to the data, so it stays on primary storage, and we get to run the models and keep them secure. Nothing goes out, and we protect your investments.

Jacob Andra: 

Okay, yeah, I'm trying to envision that. So the large language model is running inside the virtual machine, and the cloud instance in this case is outside the machine, although, as you point out, you could bring that inside as well if you wanted to; I assume there would be some additional cost associated with that. So probably the most logical thing is to run the cloud instance outside. And then your solution is essentially a gating function: as data passes in and out of this box, it's gating it and verifying that it has the right permissions associated with it.

Scott Peiffer: 

Correct, and we don't do it by a file name or by your credentials, because you may not be who you say you are.

Jacob Andra: 

Yeah.

Scott Peiffer: 

Deepfakes are too good. So, we do it by capacity, or it can be by instance; it doesn't matter. The reality is, if you're running a model, whether you've trained 10 billion parameters or it's a million parameters, it doesn't matter: running a model doesn't take that much capacity. Depending on your needs, you might use GPUs, 'cause we have GPU support; you might be running something very complex and have a powerful setup. The recent reports I've read say about three quarters of companies are using local data science or data models, because cloud is cloud. Everybody knows there's risk with cloud security. It's not that you don't trust it, but when you're talking about sensitive information, you want to be hypersensitive. So you put the wrapper around the model locally, and you run it in a private data center. Super easy: we can deploy in about an hour, hour and a half. We don't get into the reads and writes of everybody in the corporation, but we do it based on the areas that are most important: your HR, your sales, your models, your IP. I have a consulting business, and all of my information is protected this way, so my clients' information is secure.

Jacob Andra: 

That's awesome. We've talked about doing some work together, and I think we're going to bring you in on some projects we're working on, because I think some of our clients are gonna be a great fit for this. So I'm excited to work together in more detail on some of these solutions.

Scott Peiffer: 

So, Jacob, let me throw a little bone out there. This isn't a plug for Talbot West, but if you're going to do an AI project, go find a Talbot West specifically, somebody like that, who is going to help you train the model locally, tune it for your company, tune it to your language, and secure it in a way where the overhead's small and the data can be used securely. Nobody's going to nefariously take it out. Nobody's gonna go, oops, I shouldn't have done that. You're not gonna have a messy situation where you have 14 copies of the same data. It's in one environment, it's gonna be clean, it's gonna be protected, and you will have fresh data. Can you do this on your own? Yeah, you can. You can put parameters in place and say, I'm gonna plug the holes, I'm going to have policy. But people don't follow policy. They just don't. Half the companies have people using agentic AI or models out there that are just not approved by the company, and that's a data leak every single time.

Jacob Andra: 

Totally agree. One more question to throw at you: where do you see AI technologies driving the most value in companies, from what you've seen in your own experience?

Scott Peiffer: 

Well, most of the conversations I hear are about it taking care of things like customer support, because then I don't have people tied up answering very simple, repetitive questions. The real big value that I see, which nobody talks about, is when you start to look at something like sales churn. It's a data science project; you can call it data science or AI, but it's AI: I'm gonna train on my information and ask it questions. I don't wanna do that in a way that allows my information to go out. So you want an internal model that is completely safe, understanding your sales churn or answering questions. We did this in our company. We had a code problem we just couldn't figure out, so we trained a model just on our kernel code. It was very specific. We asked two questions, we had an answer, and we fixed it, right? It was a problem we'd struggled with for weeks. And no way, no how were we giving our IP out to Perplexity to say, hey, how do you do this little thing here? No. So it took us a couple of hours to find the right data, throw it in, train a model, and then we asked it two questions and had the answer. That can happen everywhere, and that's where I see AI becoming the biggest value. It knows your company, it knows your business, it's trained well, and you can ask it questions. It's not just saving the junior person on the chat line; it's creating real company value.

Jacob Andra: 

I think that's such a good point, because so often people think of it as, you know, sort of saving time or cutting costs or, you know, needing fewer people, but it can actually create capabilities that wouldn't even have been possible without AI. So yes, it's all of the above, but I think this capability enhancement piece is, um, a welcome addition to what seems like a lot of hand wringing about how AI is gonna take all the jobs. Um, it's actually gonna create some new capabilities as well that don't even exist today and do some really cool things. And, you know, not to minimize some of these, uh, social concerns about displacement and all that, 'cause those are real in any, uh, technological wave. All the waves of technological innovation that have come along since who knows when have, uh, have brought that with them, and it's real, it's worth talking about. But, um, there's the bright side as well, um, which you highlighted.

Scott Peiffer: 

So, you know, as you wade into the pool of AI and, and the, the new digital transformations, you have to go in boldly, but have a plan. Don't just throw your stuff out for the, the cheapest, lowest-cost plan. You can train a Llama model, yes, you can do different, you know, different ways to get there, but have a plan. Buy a good consulting company to train your data locally. Let it speak your language, and don't forget to secure it, and then throw everything at it. Don't hold back on your information. The more information a model has, the better it's gonna be at giving you value. So throw your data at it, get value, and go bold.

Jacob Andra: 

I like that. Go boldly and have a plan. There's the tagline of the day. Thanks for joining, Scott.

Scott Peiffer: 

Hey, it was a pleasure. Best of luck in your, uh, your growth.

Industry insights

We stay up to speed in the world of AI so you don’t have to.


Subscribe to our newsletter

Cutting-edge insights from in-the-trenches AI practitioners

About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.

Info

The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
