Episode 12 of The Applied AI Podcast

Jacob Andra and Dr. Alexandra Pasi discuss Lumawarp beating the TabArena benchmark

About the episode

Lumawarp delivers 7% higher accuracy than leading ML models while running 300+ times faster. On the TabArena HELOC default prediction benchmark, it topped the accuracy leaderboard while training on a gaming laptop in about an hour. Competing methods required hundreds of hours on large compute clusters to achieve worse results.

This breakthrough shatters the accuracy/speed tradeoff that has constrained machine learning for decades.

In this episode, Talbot West CEO Jacob Andra sits down with Dr. Alexandra Pasi, CEO of Lucidity Sciences (a Talbot West partner), to explore how Lumawarp achieves these results and what it means for enterprises building AI systems where precision is non-negotiable and milliseconds matter.

The technology employs a novel mathematical framework grounded in partial differential equations and geometric manifold regularization. Rather than relying on deep learning or tree-based methods that struggle with sparse or imbalanced data, Lumawarp constructs optimal kernels directly from training data. The result: superior pattern recognition with microsecond inference times, deployable on edge devices without sacrificing accuracy.

In this conversation, we cover:

Benchmark results showing Lumawarp outperforming XGBoost, MNCA, and other leading models on structured data tasks

Why a few percentage points of accuracy improvement translates to millions of dollars in fraud detection, clinical decision support, and risk modeling

Microsecond inference enabling real-time applications in high-frequency trading, robotics, and predictive maintenance

Edge deployment capabilities for wearables, industrial sensors, and environments where cloud connectivity isn't reliable

The critical difference between models optimized for linguistic plausibility (LLMs) versus mathematical precision (Lumawarp)

How the Talbot West and Lucidity Sciences partnership works: Lumawarp solves the prediction problem, Talbot West solves the deployment problem

As Dr. Pasi explains, traditional ML forces you to choose: fast models sacrifice accuracy, accurate models require massive compute. Lumawarp sits completely outside that tradeoff curve, delivering both simultaneously.

For high-stakes applications where 90% accuracy means a 1-in-10 failure rate, and 99% accuracy means 1-in-100, that difference determines whether you can deploy ML at all.
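The reliability arithmetic here is worth making explicit. A minimal check, using the episode's round numbers (90% and 99% are illustrative figures from the conversation, not quoted benchmark results):

```python
# Failure rate is just 1 - accuracy; the reliability gain is the ratio of the two.
def failure_rate(accuracy):
    return 1.0 - accuracy

baseline = failure_rate(0.90)            # a 1-in-10 failure rate
improved = failure_rate(0.99)            # a 1-in-100 failure rate
reliability_gain = baseline / improved   # roughly a tenfold reduction in failures
```

The point is that a nine-percentage-point accuracy gain near the top of the scale is not a 9% improvement in reliability; it cuts the failure rate by an order of magnitude.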

This episode is essential viewing for executives evaluating AI investments, data scientists looking beyond the LLM hype cycle, and anyone building systems where accuracy and latency both matter.

About the Guest:
Dr. Alexandra Pasi is CEO and co-founder of Lucidity Sciences and sits on the Talbot West advisory board. A PhD mathematician, she spent over a decade advancing the mathematical foundations of machine learning before pioneering the GPU-parallelizable geometric manifold regularization techniques that became Lumawarp. Her work has demonstrated real-world impact across healthcare (predicting hospital-acquired conditions), finance (high-frequency trading), and scientific research (particle physics detection).

Episode transcript

Welcome to the Applied AI Podcast. I'm your host, Jacob Andra, CEO of Talbot West. I'm really excited for today's guest. She's been on the podcast three times, and she's developed an incredible technology that has beaten a lot of machine learning benchmarks. On today's episode, we talk about the Lumawarp model she's created, how it's beaten the benchmarks in both speed and accuracy, and some of the use cases it would be useful for in industry.

Jacob Andra: 

Welcome back to the Applied AI Podcast for the third time. It's great to have you back.

Lexi Pasi: 

Yeah. Great to be here.

Jacob Andra: 

So we just announced a partnership between our two companies, which I'm really excited about.

Lexi Pasi: 

Yeah, we are too. I think it's gonna be a great synergy between the two teams.

Jacob Andra: 

Exactly. We can help companies figure out how to apply your Lumawarp machine learning model, which is, of course, what we're here to talk about. So why don't you introduce that model? What is Lumawarp?

Lexi Pasi: 

Lumawarp implements a new mathematical and hardware paradigm for machine learning. There are a lot of machine learning and AI tools out there. A lot of them sit at the application layer; a lot of them are about how to deploy machine learning models. But at the very foundational layer, there's this question of how machine learning models actually find patterns in data, right? Because that's what machine learning is all about: taking data, learning patterns from it, and then applying that in new scenarios and situations. So there are a couple of factors that come into play when you're talking about how to build the best machine learning model. When it comes to the algorithms themselves, there are two dimensions of performance that are both really important in application but have often counterbalanced each other. I would summarize those two variables as accuracy and efficiency. Efficiency can mean speed; it can mean size. Accuracy can be measured in a variety of ways, and it also includes how well the model performs in new situations. But those two broad categories are the important parts of a model, right? How good is the model? How fast is the model? And within the existing approaches to training a machine learning model, there's often been this tradeoff between accuracy and efficiency.

Jacob Andra: 

So traditionally you've had a pulley effect: if accuracy goes up, speed goes down; if speed goes up, accuracy goes down. These things move in an inverse relationship to each other.

Lexi Pasi: 

If I were to plot accuracy on one axis and speed on the other, what we've historically seen looks something like this. And I'm borrowing some of these examples from the quite exhaustive listing of machine learning approaches put together by some of the researchers at Amazon and others involved with the TabArena benchmark. You might see something like XGBoost, a very familiar, canonical machine learning algorithm, sit very high in speed but a little lower in accuracy. And this is just illustrative. Whereas something like MNCA, which you actually don't see as often, largely because it is so slow and computationally intensive, might be a little higher in accuracy, but that comes at the cost of both inference and training speed. It's a much slower, more computationally intensive model. You'll often see models like that toward the top of the accuracy benchmarks and lower down on the speed metrics, and the faster models flip those positions. So overall, when you look across all of the algorithms, the trend looks like a line or a curve where, as you increase the speed, you decrease the accuracy, and vice versa. Exactly the pulley effect you just described. The big innovation here is that when you look at where Lumawarp sits, we are completely outside of that trend line. We are able to achieve speed comparable to something like XGBoost while having, in many cases, chart-topping accuracy comparable to something like MNCA. And where those large ensemble models took, in many cases, several hundred hours to train on a very large cluster, Lumawarp trained on the same dataset, with better performance, on a gaming laptop in a matter of an hour.
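To make "outside the trend line" concrete: a model breaks the tradeoff when no other model beats it on both axes at once, i.e. it sits on the accuracy/speed Pareto frontier. Here's a minimal sketch of that check; the model names and scores are invented for illustration and are not actual TabArena figures:

```python
# Hypothetical (accuracy, speed) scores, both normalized so higher is better.
models = {
    "fast_baseline": (0.78, 0.95),  # an XGBoost-like model: fast, less accurate
    "slow_ensemble": (0.86, 0.05),  # an MNCA-like model: accurate, slow
    "middle_ground": (0.80, 0.40),  # sits on the usual tradeoff curve
    "outlier":       (0.87, 0.90),  # breaks the tradeoff: fast AND accurate
}

def is_pareto_optimal(name, scores):
    """A model is Pareto-optimal if no other model is at least as good on
    both axes and strictly better on at least one."""
    acc, speed = scores[name]
    for other, (a, s) in scores.items():
        if other == name:
            continue
        if a >= acc and s >= speed and (a > acc or s > speed):
            return False  # dominated by `other`
    return True

frontier = [m for m in models if is_pareto_optimal(m, models)]
```

With these made-up numbers, `slow_ensemble` and `middle_ground` are dominated by the outlier, which is the shape of the claim being made about Lumawarp.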

Jacob Andra: 

So a huge difference in speed as you point out. Tell us about what this set of benchmarks is, kind of at a high level and how significant this is.

Lexi Pasi: 

Yeah, so this is a new benchmark called TabArena. It's been out since around June or July. The importance of this benchmark is that we haven't had a particularly robust canonical benchmark for tabular learning problems. So it's one we've been evaluating our model against, and we found some really interesting results. The primary result is the one we've just been talking about: you see a disruption of the normal accuracy/speed tradeoff. Lumawarp is able to perform at high speed while, in many cases, actually exceeding the accuracy of some of the largest ensemble models.

Jacob Andra: 

In this benchmark, there was a dataset around people who had taken out HELOCs, home equity lines of credit, on their homes, and the task was predicting who would default on their HELOC. The dataset included who actually did default. Then you take another similar dataset with totally different people who also had HELOCs, some of whom defaulted and some of whom didn't. You strip off the labels for who ended up defaulting, have the models predict from that dataset who defaulted, and then compare the predictions against the actual defaults.
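The holdout evaluation Jacob describes can be sketched in a few lines. The records and the one-feature threshold "model" below are toy stand-ins invented for illustration; the real HELOC dataset and the benchmarked models are far richer:

```python
# Toy holdout evaluation: train on labeled data, hide the test labels,
# predict, then score predictions against the held-out truth.
train = [
    {"debt_ratio": 0.9, "defaulted": True},
    {"debt_ratio": 0.2, "defaulted": False},
    {"debt_ratio": 0.8, "defaulted": True},
    {"debt_ratio": 0.3, "defaulted": False},
]
test = [
    {"debt_ratio": 0.7, "defaulted": True},
    {"debt_ratio": 0.1, "defaulted": False},
    {"debt_ratio": 0.4, "defaulted": False},
]

def fit_threshold(rows):
    """'Train' a one-feature model: the midpoint between the mean debt
    ratio of defaulters and of non-defaulters."""
    pos = [r["debt_ratio"] for r in rows if r["defaulted"]]
    neg = [r["debt_ratio"] for r in rows if not r["defaulted"]]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, rows):
    """Predict from features alone, then compare to the held-out labels."""
    hits = sum((r["debt_ratio"] >= threshold) == r["defaulted"] for r in rows)
    return hits / len(rows)

threshold = fit_threshold(train)
acc = accuracy(threshold, test)
```

The benchmark's structure is exactly this loop, just with real features, real models, and accuracy (among other metrics) compared across entrants.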

Lexi Pasi: 

So I think that's a great explanation of the problem, a very typical risk prediction problem that we run into in machine learning. And one of the things that I really like about this example from the dataset is that it's fairly straightforward to quantify the returns on accuracy. If each of these is, say, a $50,000 loan, and you have a few thousand of them across your portfolio, a couple of percentage points of increased accuracy can translate into several million dollars. And what we actually saw within that benchmark is that we were outperforming the top performers.
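As a back-of-the-envelope check on that claim (the loan size, portfolio size, and accuracy gain below are the illustrative figures from the conversation, not real portfolio data):

```python
# Rough dollar value of an accuracy gain on a loan portfolio.
loan_value = 50_000       # dollars per loan (illustrative)
portfolio_size = 4_000    # "a few thousand" loans (illustrative)
accuracy_gain = 0.02      # "a couple of percentage points"

# Each newly correct call is treated as one loan's worth of avoided loss.
# This is a simplification: it ignores recovery rates and the cost of
# wrongly declining good borrowers.
extra_correct_calls = portfolio_size * accuracy_gain
value_of_gain = extra_correct_calls * loan_value
```

With those assumptions, 80 additional correct calls on $50,000 loans is $4 million, which is where "several million dollars" comes from.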

Jacob Andra: 

And so how accurate was Lumawarp?

Lexi Pasi: 

We were topping the list against all of the different models, including many that had been ensembled, trained, and tuned over the course of hundreds of hours, while we were, again, just training on a gaming laptop over the course of an hour. So we were able to beat the benchmark on that dataset against all of these different models while still being very accessible in terms of compute.

Jacob Andra: 

Yeah, so low compute, fast, and more accurate. That seems like a winning combination to me, and it does seem like there are going to be a lot of high-ROI use cases across a variety of industries where this could now be applied: anytime you need to predict an outcome where, as you pointed out, there's a monetary value associated with higher rates of accuracy, where the stakes are high for being wrong and you really want to be as accurate as possible. I can think of a lot of healthcare applications, medical diagnostic applications, applications within the intelligence community where you need to correlate signals and predict, financial modeling, financial prediction, market predictions, right? You can probably think of a lot of others.

Lexi Pasi: 

Yeah, definitely. And I think there are many cases where, let's say, your baseline model has 90% accuracy. That seems pretty good for a lot of applications, but a one-in-10 failure rate is an absolute no-go in a lot of higher-risk contexts, right? Whereas at 99% accuracy, that nine-percentage-point increase translates into only a one-in-100 failure rate. Thought of that way, it's a tenfold increase in reliability. So there are a number of applications where getting that improvement is actually the difference between being able to safely or strategically deploy a machine learning model versus not.

Jacob Andra: 

Exactly. And not only do you want it to be accurate, but you don't want to wait 500 hours, with a giant compute cluster running for those 500 hours, to even get your prediction. So anytime you need low-latency, immediate predictive ability.

Lexi Pasi: 

Old intelligence is not really intelligence; after a certain point it goes stale and becomes useless. And especially when you're looking for low latency, in an application like robotics or something like real-time trading, the speed at which you're able to make a decision with your machine learning model and return it to the endpoint is really critical, so that latency can really matter. And even independent of that, how efficiently you're able to retrain and re-gather data to keep up to date with the latest information is also a really important factor in deployability.

Jacob Andra: 

That's a really critical point. And I was also just thinking of edge computing: if you don't want to have to send this off to a cloud and wait for a response, or, in a high-stakes situation, if you don't want to be dependent on cloud infrastructure, you have the option to deploy this on an edge device, right on premises, and it'll still run. It'll be even lower latency, and obviously much more reliable, because you're taking a lot of the complexity out of the equation.

Lexi Pasi: 

Exactly. And now this brings in another variable that is often related to that speed category but is worth touching on specifically, which is model size, because model size is often the determinant of what types of devices you can actually deploy on. If you're talking about wearables, in many cases you're actually required to have a cloud connection just because of the size of those models: they don't really fit on a wearable device, or they can't run inference with any kind of speed on one. And one of the effects of using this improved mathematical framework for machine learning is that you get high-fidelity information compression, which manifests as smaller, faster models that maintain the highest degree of accuracy, better than state of the art in many of these cases.

Jacob Andra: 

The fact that it can run on a tiny device opens up whole new categories of usage that wouldn't even have been possible before. So I love that. And then, just for our audience, I think it's worth touching on that this is not even remotely related to large language models; it's an entirely different branch of machine learning. You've talked at length about the unwieldiness of trying to convert large amounts of tabular data into something that large language models can consume. And I don't think you're anti-large-language-model; you see that they have a place in the ecosystem, as do I. We've talked at length about this, and you've been on the podcast, you know, now three times. For the purposes of this episode, give just a summary of how you see that landscape and how you see the role of Lumawarp and similar technologies compared and contrasted with large language models.

Lexi Pasi: 

Obviously there are a lot of applications for large language models, especially those really close to linguistic processing. But when your data comes in a more structured form, or even as image data, it's at its highest fidelity when it stays close to the original form in which it presented itself, rather than having to pass through a linguistic layer. So you're really talking about taking the data that is most directly relevant to the problem, keeping it in that form, and running it through a machine learning model that can learn those specific patterns.

Jacob Andra: 

Trying to use a large language model for everything, converting all data into something a large language model can easily consume, is the equivalent of forcing a self-driving car to compose a sonnet or a haiku about everything it senses on the road before it can react. Machines can consume data in native machine formats. Why force them to convert to something humans would understand when they can communicate much more effectively on that level? You introduce the large language model only where it makes sense to do so. I'm very excited for this partnership we have. The way I see it, you've created the world's most incredible engine, the most fuel-efficient, high-powered engine, and we help companies assemble the car that engine can go into, if they need that assembly. To me, it's a perfect match. I'm so excited for what 2026 holds and for what you guys have built.

Lexi Pasi: 

Yeah, we are too. And I, I think that's such a good analogy. I think Talbot West does a great job at envisioning the car that can really leverage the world's most powerful engines, uh, to get people where they're trying to go.

Jacob Andra: 

Exactly. Well, thanks so much for coming on again. It's a pleasure as always.

Lexi Pasi: 

Always a pleasure. Jacob, thanks so much for having me.


About us

Talbot West provides digital transformation strategy and AI implementation solutions to enterprise, mid-market, and public-sector organizations. From prioritization and roadmapping through deployment and training, we own the entire digital transformation lifecycle. Our leaders have decades of enterprise experience in big data, machine learning, and AI technologies, and we're acclaimed for our human-first element.


The Applied AI Podcast

The Applied AI Podcast focuses on value creation with AI technologies. Hosted by Talbot West CEO Jacob Andra, it brings in-the-trenches insights from AI practitioners. Watch on YouTube and find it on Apple Podcasts, Spotify, and other streaming services.
