Episode 10 of The Applied AI Podcast

Jacob Andra and Stephen Karafiath discuss the impact of AI on the labor market

About the episode

While people fear wholesale workforce replacement, the actual transformation is far more complex and ultimately more optimistic for organizations willing to adapt strategically.

This episode cuts through the hype to examine three distinct zones of AI capability. First, tasks where AI excels at things humans never could do well, like fraud detection algorithms or protein folding analysis. Second, uniquely human domains like relationship building and creative problem solving across diverse contexts. And third, the contested middle ground where AI augments but doesn't replace human workers.

Jacob Andra and Stephen Karafiath share real insights from Talbot West's consulting work, including an aerospace manufacturer case where their top recommendation wasn't an AI solution at all. It was hiring a human to orchestrate digital transformation across departments. This reveals a fundamental truth: the future isn't humans versus AI. It's humans working with AI as force multipliers.

Large language models get conflated with AI itself, but they represent one narrow slice of available technology. They excel within certain domains but fail catastrophically when pushed beyond those boundaries. That's why Talbot West pursues two complementary approaches to expand AI capabilities beyond current LLM limitations.

Neurosymbolic AI combines neural networks with symbolic logic structures. Think of AlphaGo, which paired a neural network exploring game possibilities with a mathematical language enforcing the rules. The neural component provides creativity and pattern matching. The symbolic structure keeps everything grounded in reality and prevents hallucinations.
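The division of labor described above can be sketched in a toy example: a stand-in "neural" scorer proposes candidate moves, while a deterministic rules layer decides which moves are legal at all. Everything here (the board encoding, the scoring stub, the function names) is a hypothetical illustration of the pattern, not AlphaGo's actual implementation.

```python
import random

def neural_score(board, move):
    """Toy stand-in for the neural component: assigns a heuristic score
    to each candidate move. In a real system this would be a trained network."""
    random.seed(hash((tuple(board), move)))  # deterministic within a run, for the demo
    return random.random()

def legal_moves(board):
    """Symbolic component: the rules of the game, encoded deterministically.
    Only empty squares are legal; the neural side cannot override this."""
    return [i for i, cell in enumerate(board) if cell == " "]

def choose_move(board):
    # Neural creativity proposes; symbolic structure constrains.
    candidates = legal_moves(board)
    return max(candidates, key=lambda m: neural_score(board, m))

board = ["X", " ", "O", " ", "X", " ", " ", "O", " "]
move = choose_move(board)
assert board[move] == " "  # the symbolic layer guarantees the move is legal
```

However fanciful the scorer gets, it can only ever rank moves the rules layer has already approved, which is the "grounding" role the symbolic side plays.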

Cognitive Hive AI takes a different approach by orchestrating multiple specialized AI modules into coordinated systems. A single large language model might serve as just one small component, perhaps handling translation between machine language and human users. Other modules handle specific tasks like sentiment analysis, predictive analytics, or compliance monitoring. Together, they create business capabilities no single AI could achieve alone.
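As a sketch of that orchestration idea, assuming toy stand-ins for each module (none of these names or rules come from Talbot West's actual architecture): a sentiment module and a compliance module each do one narrow job, and an LLM-style component only translates the combined machine output into a human-readable sentence.

```python
def sentiment_module(text):
    # Stand-in for a dedicated sentiment model.
    negative = {"delay", "defect", "complaint"}
    hits = sum(word in text.lower() for word in negative)
    return "negative" if hits else "neutral"

def compliance_module(text):
    # Stand-in for a rules-based compliance check: flag banned phrases.
    banned = ["guarantee", "risk-free"]
    return [phrase for phrase in banned if phrase in text.lower()]

def llm_translation_module(result):
    # The LLM as one small component: turning machine output into prose.
    return f"Sentiment: {result['sentiment']}; compliance flags: {result['flags'] or 'none'}."

def orchestrate(text):
    # The orchestrator routes the input through each specialist module,
    # then hands the structured result to the translation layer.
    result = {
        "sentiment": sentiment_module(text),
        "flags": compliance_module(text),
    }
    return llm_translation_module(result)

print(orchestrate("Customer complaint about a shipping delay."))
# → Sentiment: negative; compliance flags: none.
```

The point of the design is that each module is swappable: the sentiment stub could be replaced with a real model, or a human review step, without touching the rest of the pipeline.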

The MIT study claiming 95% of AI projects fail to see ROI likely reflects implementations that lacked this level of strategic thinking. When you bring proper analysis and architecture to AI deployment, strong returns follow. Talbot West's customer feedback suggests near-universal satisfaction when projects are scoped correctly from the start.

Organizations face a choice in how to handle this productivity multiplier. The short-term approach fires people and maintains current output with fewer workers. The strategic approach keeps the workforce intact and uses AI augmentation to scale operations dramatically without proportional headcount increases. Companies taking the second path position themselves for massive competitive advantage.

This gets incredibly nuanced when you consider all the variables at play. Different job types face different displacement risks. Various AI technologies have different strengths and limitations. Neurosymbolic systems excel at different tasks than ensemble architectures. Single machine learning algorithms solve different problems than large language models. Understanding these distinctions matters enormously when planning organizational transformation.

You absolutely need humans in your company, but the nature of their work will shift. AI involvement will vary dramatically across roles from 1% to 100% depending on the specific tasks and available technology. Success requires bringing rigorous analysis to determine exactly where and how AI augments your workforce.

Episode transcript

Stephen Karafiath: 

Well, we really need to talk about this manufacturing story.

Jacob Andra: 

Manufacturing story.

Stephen Karafiath: 

You know, that aerospace manufacturer that, uh, we were finding all the AI opportunities for, but your top recommendation ended up being hire a human.

Jacob Andra: 

Oh yeah, that was really cool. So we knew going in we were gonna find all of these opportunities for AI to make them way more efficient. What we didn't expect is that our number one recommendation was: get somebody in here who can own digital transformation and work cross-functionally across all your departments to drive these initiatives forward.

Stephen Karafiath: 

Well, and don't forget that you already had someone in your network to, uh, plug into that position for them.

Jacob Andra: 

Ha ha. That was kind of cool, like a puzzle piece fitting into place. Uh, had a good friend whose company had just wound down its operations, and he was just on the market and had the perfect skillset.

Stephen Karafiath: 

Right? It's those types of network connections that are invaluable.

Jacob Andra: 

That's right. But I do think that this highlights something really cool, which is everyone is afraid of AI taking all the jobs, but it illustrates that humans are still gonna be needed. I think the ways they're going to be needed is shifting though.

Stephen Karafiath: 

Right. I mean, humans have always needed to shift based on the changing technical landscape. This one's no different.

Jacob Andra: 

Absolutely. And you know, if the entire goal was just keeping as many humans employed as possible, of course we could create all kinds of really lousy jobs that no one wanted to do, uh, just for the sake of giving people jobs. I mean, we could outlaw backhoes and put everybody out digging ditches, or, you know, outlaw calculators and computers and get everybody calculating all this stuff by hand. You could come up with all these examples. That's never been the goal.

Stephen Karafiath: 

Yeah, I mean, outlawing technology has, uh, never led anywhere good. Um, but there are real concerns here and I'm excited to dig into 'em with you.

Jacob Andra: 

The concerns are legitimate and, uh, we don't wanna deny that. But the main thing we're here to do is guide business executives in how to approach these technologies in a nuanced way.

Stephen Karafiath: 

Right, and the good news is there is way more opportunity for the, uh, business executives as well as individuals than there ever has been.

Jacob Andra: 

That's right. So one thing I think gets really glossed over in this discussion around job displacement and AI taking jobs is that this is not, uh, equally applicable across all job types. There are these different zones or categories. I mean, there are things that AI does exceptionally well that have no human precedent, and nobody's worried about that. I bet you can think of a few examples.

Stephen Karafiath: 

Right. I mean, I think some examples include, um, you know, AI finding new patterns in protein folding, or, uh, coming up with new or novel technology that didn't used to exist. Nobody's worried, because that's not displacing a human job.

Jacob Andra: 

Exactly. Or you have your recommendation engines in Netflix and Spotify, uh, fraud detection algorithms for your credit card company that are flagging transactions they think are suspicious. That sort of thing. Nobody's, um, you know, concerned about those taking human jobs 'cause those aren't things that humans could ever do well in the first place. So that's one category. And then you have, on the opposite end, um, the types of things that humans still do uniquely well, that AI isn't really doing well at all, and that's not really at risk either. So it's the middle category, um, that everyone's kind of concerned about. But first, let's talk about this other category, kind of on the opposite extreme of the fraud detection algorithm: the things that are uniquely human. Maybe AI can be leveraged a little bit, you can use a large language model as an ideation partner, but these are roles like relationship building, you know, jobs that involve heavy human relationships, or other types of things humans do really well. Um, you can probably think of some examples there.

Stephen Karafiath: 

Yeah, I mean, I think especially, you know, with what we do, where we're the trusted advisors that look over all of this. We use AI as a tool, but, uh, I think we know better than anybody the limitations, where they're falling flat, where years of experience, decades in a particular industry or doing particular types of relationship building, um, just cannot be simulated by a computer. At least not these ones.

Jacob Andra: 

Yeah, and any role that involves a high amount of creativity that needs to be applied in different types of contexts. So, you know, for example, AI can get really creative if you want to call it that, at creating music or literature or whatever. But can you take that same AI and apply it to a totally different domain that has no precedent and can that carry over? No, we're not seeing that.

Stephen Karafiath: 

Yeah, I mean, I think the promise of artificial general intelligence, you know, uh, seems tantalizingly close. But the farther we get along with these, you know, specific LLMs that everybody's excited about, the more limitations we see, um, that they're not able to actually generalize their intelligence from, uh, one category to the next.

Jacob Andra: 

Yeah, and there is a lesson here for job security: if you want to have a secure job, um, one, get good at leveraging LLMs to just be faster and better at what you do, but also get really good at this kind of relationship building and general intelligence: synthesizing information, working across very diverse domains, and carrying insights over from one to the other. Uh, those are the types of things that have a lot of job security.

Stephen Karafiath: 

Yeah. And I'd say for those people that are actually willing to learn and adapt, it's better than even just security. You're gonna have better jobs, you're gonna be more effective, you're gonna get paid more, um, than the people that are just kind of sticking their head in the sand like an ostrich, or are so afraid of everything that, uh, they're not willing to learn or adopt.

Jacob Andra: 

Absolutely. And so let's dive into this middle category, 'cause it is where all the anxiety comes from. And this is things like, you know, graphic design, writing, um, honestly, a lot of the knowledge management type work that the US economy, and the economies of developed nations across the world, are based on. A lot of that is at risk of being displaced by AI. But even there, I think there's a lot of nuance. I don't think that AI is gonna totally take over any of those jobs, but it certainly makes enough inroads that you need far fewer people in those roles, is the way I see it. Do you see it similarly?

Stephen Karafiath: 

I do, you know, and an example of that that's near and dear to my heart is software development. So, you know, we're already at the point where a lot of these tools are as good as, let's say, a drunk junior developer would be. And so somebody who just did a coding bootcamp and doesn't have a ton of experience, um, that is not a very valuable commodity when AI can do that sort of thing. Um, but also, um, the senior developers, people that have architecture experience, people that understand what's a hallucination and what's not, and, uh, what makes sense from a business perspective, are more valuable than they've ever been.

Jacob Andra: 

Yeah, absolutely. So even in that middle category, humans are still needed, just far fewer of them, because one human can leverage AI capabilities to do far more, but AI is not good enough to completely take over these jobs. I mean, you live in a world of coding. I come from a background of a lot of, uh, content creation and marketing. And in my domain, AI is nowhere near good enough to output quality content. And even across the new releases of these large language models, uh, you know, Gemini, ChatGPT, et cetera, as they release new models, those models aren't getting much closer. Um, they improve on certain parameters, but on other parameters they might even backslide, or make no progress on some of the fundamental limitations they have. But they certainly enable a good human to do these creative tasks far faster, when you learn how to leverage them.

Stephen Karafiath: 

Yeah, and I think you're really hitting on something, which is we see so many people on the extremes: either people that believe that AI can do anything and that humans are no longer necessary, but usually those guys are trying to sell a solution that's probably not gonna work, um, and then on the other side are the kind of naysayers who think that, you know, AI is just a parlor trick and doesn't have any real value. Um, and really I think, you know, everything we've done, with so much experience with actual companies, um, is showing us that the middle road approach is the truth.

Jacob Andra: 

Absolutely. And so I think for our audience, you know, of business executives trying to figure out how to navigate this, I think they're gonna wonder a lot about, you know, how is this gonna change my company? How is this gonna change my workforce? Will I still need humans? How many humans? What types of roles? What types of tasks? Um, I think that's what we can really speak to here, which is: yes, you'll still need humans. In that middle zone of, uh, the types of things that large language models are encroaching on, you will need fewer humans. Um, and you can look at that two ways. The less desirable way is you can lay off a bunch of your people because the few that remain can do the job of all of them. And the other way, which we try to steer companies toward, is keep your same workforce, grow your company to much higher volume with the existing workforce, and have your people leverage these tools so they can be far more productive.

Stephen Karafiath: 

Exactly. I think there's such an optimistic message for both the employers and employees, uh, who are willing to adapt, where the employees are becoming at least 2x more effective, sometimes up to five or ten times more effective, and their employer can actually afford to pay them more, generate far more revenue, um, and not have to increase headcount the way they would've had to, in any previous economy, to correspond with that amount of growth. Um, it allows them to be agile, lean, um, and really rely on the tools, which is what they are, um, with a workforce that is, uh, actually embracing them. Um, that is the gold standard that we're trying to advise companies towards.

Jacob Andra: 

So one of the things we're running into is not only are we not worried about AI taking people's jobs, but large language models, which a lot of people conflate with AI, are just one type of AI. They're the top candidate for taking people's jobs, and they really suck at a lot of things. In fact, they're nowhere even close.

Stephen Karafiath: 

We had super high hopes, but we're starting to see the limitations and the unraveling, um, as we're pointing them at more complicated use cases.

Jacob Andra: 

Exactly. Within certain narrow domains, they do extremely well and can be quite convincing. And in other domains, or as soon as you start to push the boundaries, they just fall flat right and left in all kinds of ways. And we're documenting a lot of those. I keep saying I'm gonna publish a paper on this, and I will eventually. Not only are we not worried about large language models taking people's jobs. I mean, yes, as we said earlier, they're gonna be force multipliers that enable fewer people to get more work done. But in terms of complete job replacement, complete role replacement? Not happening. Um, so we're actually trying to push the capabilities further to get them to do far more. And even then, we're not expecting that, you know, they're gonna completely take people's jobs.

Stephen Karafiath: 

Yeah, I think, you know, we keep asking LLMs to improve, but even when their limitations are clear, um, they keep falling into the same traps. It's almost like, um, they have an addiction to a certain level of thinking, which I think you, uh, are gonna highlight in your, uh, research paper, right?

Jacob Andra: 

Definitely will. So you and I and Talbot West's research team are pursuing a couple of tracks for getting, uh, better, more advanced capabilities out of artificial intelligence technologies. And these fall into two categories. Um, the first is neurosymbolic, and the second is cognitive hive AI, which we abbreviate CHAI. And these are very complementary, but separate, initiatives, um, and they can pair together very well. Why don't you quickly explain what neurosymbolic AI is?

Stephen Karafiath: 

Sure. So neurosymbolic is a combination of neural networks, which is the neuro side, and symbolic, which is more traditional deterministic architecture, uh, symbols. So think of, uh, the creativity, the pattern matching, the ways that, uh, language can be, um, strung together. That's all the neural networks, and that's a lot of what LLMs do. Um, but they get super loosey-goosey with the facts. They're not actually grounded in any objective reality. It's why they hallucinate so often: a very creative pattern-matching engine just out there blooming with creativity. Um, but neurosymbolic means pairing all of that creativity with some actual kind of grounding and some discipline around the whole thing.

Jacob Andra: 

Yeah. And I think the core of that is that the symbolic component is enforcing a specific architecture or a specific, um, ontological reality. An early example of that was AlphaGo. It's what beat Go, right? And it was a prototypical neurosymbolic architecture. AlphaGo, I believe, had a neural network exploring all the different permutations and combinations of the game. And then underneath that, the symbolic structure was actually a mathematical language called Lean that kept it fixed in the realities of the game. And so the, uh, the neural component could go explore all of these possibilities, but then it had to check in with the underlying symbolic language that told it what was actually feasible and what wasn't. And between the two of those, it figured out the game really fast.

Stephen Karafiath: 

A hundred percent. I think, um, you're referring to a kind of Monte Carlo simulation and, um, a tree of probabilities that it was, uh, mapping based on the neural side of things. And it's complicated math, but really, combining those two seems to be, um, an expansive path forward.

Jacob Andra: 

Yeah, so neurosymbolic is one avenue that can move the capabilities of AI forward. And then the other is cognitive hive AI, which is our approach to ensemble AI, or pairing different AI and machine learning capabilities together, because, as we said, obviously there is much more out there than large language models or even neural networks. Um, so why don't you talk a little bit about cognitive hive AI, what that is, and why we're so hopeful about that approach?

Stephen Karafiath: 

Absolutely. I mean, I think the key features are being modular and explainable. LLMs are just one module that could make up a complicated CHAI architecture. You can have traditional machine learning, um, Bayesian inference algorithms, mathematical constructs, and also you could have, um, you know, workflows with humans in the loop. Um, and all of those things kind of orchestrated together.

Jacob Andra: 

This can be extrapolated out to all sorts of business use cases where multiple modules, multiple capabilities working together, can suddenly get incredibly effective. And a large language model might only be one very small component in this entire architecture. It might be the translation component that allows human users to interact with the central controlling module, because it passes, or translates, between machine language and human language, for example.

Stephen Karafiath: 

Yeah, a hundred percent. I mean, there are so many different modules that can be combined, whether that's, you know, sentiment analysis on social media posts, or the traditional kind of, uh, big data lake, Hadoop-type analysis that happens on, say, uh, you know, a loyalty shopper card and what people's purchasing habits are.

Jacob Andra: 

So the reason we're going into some of this techie, nerdy stuff is to show that all of this gets incredibly nuanced. You have different categories of human and AI capability, different zones in which humans excel, AI excels, or humans and AI team together, and then all these different AI capabilities, right? What large language models are really, really good at and what they're not good at, what a neurosymbolic system can be good at, what a CHAI architecture can be good at, what a single machine learning algorithm can be good at. I mean, it gets super complicated how all of these things layer together.

Stephen Karafiath: 

Yeah, I mean, I think it makes my head hurt and we do this all day, every day, so there is really a lot here to cover.

Jacob Andra: 

Yeah, and so I guess the overarching message is it's a very nuanced picture. You're definitely gonna need humans in your company, a hundred percent, but the nature of the jobs those humans fill will shift, and AI will be involved to different degrees across those different roles. There will be these gradations of AI: a hundred percent AI, one percent AI, fifty percent AI. And the type of AI, whether it's a single large language model, a constellation of machine learning capabilities, or neurosymbolic, is also going to matter.

Stephen Karafiath: 

And that's all predicated on an understanding, A, that AI is far more than just LLMs, and what all of these other components are, and then, B, um, what their strengths and weaknesses are compared to humans.

Jacob Andra: 

Exactly. And so it gets very multidimensional in the levels of analysis that need to be, uh, applied to this. Bottom line, the future is human-AI teaming. But the way that teaming looks is very, very different from company to company, role to role. AI is a lot of things, not just one thing, and that all has to be evaluated and scoped throughout.

Stephen Karafiath: 

I mean, uh, giving your, uh, human employees the equivalent of a powerful AI exoskeleton or Iron Man suit, um, is extremely valuable, um, and it has to be done clearly and intentionally and directed right.

Jacob Andra: 

Yeah, it makes me think of that MIT study that said 95% of AI projects don't see ROI. I sort of suspect they didn't bring the nuance and level of analysis we're talking about here to those projects, because honestly, if you do bring this level of analysis, nuance, and scoping, I can't see how you couldn't get incredible ROI from those applications.

Stephen Karafiath: 

I mean, I can see what they're saying in that study, but, you know, just from informal feedback from our customers, I'd say it's been the opposite: you know, 95% have been thrilled with, uh, what's been coming out. So I think it really does come down to setting expectations and really understanding and mapping, um, the landscape that you're operating in.

Jacob Andra: 

95% satisfied? I haven't seen the five percent that aren't. Who are they? Have you been hearing complaints?

Stephen Karafiath: 

I assume that they're out there and just maybe not, uh, not willing to come forward, but.

Jacob Andra: 

Uh, I'm gonna call, I'm gonna call BS on that.

Stephen Karafiath: 

I love it. Well, you know what? Unless I can trot one out for you as evidence on a future podcast, I'll let you have the win on that. But I'll be looking. I'm gonna look for these guys.

Jacob Andra: 

All right. You do that. All right. Thanks for coming on, Steve.

Stephen Karafiath: 

Yeah, absolutely.



About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.

Info

The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
