The SaaS AI Trap: Why Fast Answers Are Costing Enterprises Millions

In Episode 194 of SaaS Backwards, host Ken Lempit sat down with KG Charles-Harris, Founder & CEO of Quarrio, to discuss "the SaaS AI trap": the belief that fast answers are good enough, when the real advantage comes from trustworthy, decision-grade intelligence, and why most AI tools fall short. As KG put it:

"An organization is really a decision and action machine - we've spent hundreds of billions centralizing data, but we just can't get access to that data to turn it into information when and how we need it." 

Enterprises poured an estimated $30–40 billion into generative AI in 2024, yet a 2025 MIT study found 95% saw little to no measurable return. That's because most AI today generates statistically likely answers, not verified ones. That's fine for drafting emails. It's not fine for decisions that move capital, trigger workflows, or feed regulatory filings. Quarrio was founded on the conviction that probabilistic AI cannot be trusted in those situations.


Q: KG, you've been building AI and data companies for decades. What problem is Quarrio actually solving and why hasn't it been solved already? 

After selling my previous business intelligence and data integration company – a platform comparable to Domo – I still couldn't get the information I needed, when and how I needed it. We had built the software. I had some of the smartest people in the world around me. And I was still waiting hours, days, sometimes weeks for an answer to a basic business question.

Rather than assume I knew why, my former VP of marketing and I interviewed approximately 120 past customers – executive teams, business unit leaders, analysts, supply chain, IT, and sales teams. The finding was consistent across all of them: 

"Business users just want an answer to a question. They don't want to learn how to use different tools. They just want an answer now, that influences the next step of what I'm about to do." 

Quarrio was built on that insight, and on one further constraint grounded in user behavior: any software that takes more than three minutes to learn sees adoption fall precipitously. 

The bottom line: Even as the CEO of a data company, I couldn't get the answers I needed. After 120 customer interviews, the finding was universal: people want an answer, not another tool to learn. 


Q: You draw a sharp distinction between probabilistic and deterministic AI. Can you explain that, and why it matters for business decisions? 

AI has existed since the 1950s and developed along two distinct paths from the start. One path (rule-based, symbolic systems) is deterministic. The other (machine learning: neural networks and transformer-based LLMs) is probabilistic. The core problem with applying probabilistic AI to enterprise decision-making is structural, not incidental: 

"Multiplying a probability with another probability and doing that 1.7 billion times doesn't make it more accurate. It just makes it more difficult for you to identify when it's inaccurate and how inaccurate it is." 
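The compounding KG describes is easy to see with a toy calculation. This sketch is my illustration, not a Quarrio calculation: if each step in an automated workflow is independently correct with some probability, the end-to-end accuracy of a chain of steps decays multiplicatively.

```python
def chained_accuracy(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in a chain of independent steps is correct."""
    return per_step_accuracy ** steps

# Even high per-step accuracy erodes quickly over a multi-step workflow.
for acc in (0.85, 0.95, 0.99):
    for steps in (1, 5, 10):
        print(f"{acc:.0%} per step over {steps:2d} steps -> "
              f"{chained_accuracy(acc, steps):.1%} end-to-end")
```

At 85% per step, a ten-step workflow is right end-to-end less than 20% of the time, and, as the quote notes, nothing in the output signals which runs were the wrong ones.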

Today's leading LLMs and SLMs deliver between 65% and 85% accuracy. For brainstorming, summarization, and content creation, that is useful and genuinely valuable. But when AI is making or automating business decisions, especially as agentic systems take on autonomous workflows, that accuracy ceiling is not a quirk. In a BCG survey of 1,250 companies, 66% of organizations cited "insufficient model accuracy and reliability" as a roadblock, and "only 5% of companies get substantial value from AI". A more direct way to consider things:

"Think about automating a payment process and having hallucinations in that. That's unacceptable." 

The bottom line: Probabilistic AI tops out at 65-85% accuracy. When AI is making or automating decisions, not just drafting text, that margin of error isn't a feature limitation. It's an operational liability. 

Q: You talk about compressing the cycle from data to decision to action. What does that actually mean competitively? 

The framing comes from Jim Cates, our Chief Product Officer and Information Architect, who led the team that created SQL and DB2 and built underlying technologies for IBM Watson. His framework: shorten the cycle time to information, and you shorten cycle time to decision, to action, and ultimately to results. 

Today, the average ad hoc report in a mid-size to large US enterprise takes 2+ weeks to produce – and typically only those with access to an analyst team can get one at all. Quarrio answers the same question in a second, with 100% accurate, auditable results available to anyone in the organization. The competitive implication I pose to prospects is direct: 

"How do you think the competitive environment will look between those two firms? One second versus 2+ weeks?" 

Global AI spending is projected to exceed $300 billion annually within the next few years, and enterprises that build decision velocity into their operations (and not just AI capability) will compound that advantage across every function. 

The bottom line: The gap between 2+ weeks and 2 seconds isn't a productivity improvement – it's a structural competitive advantage that compounds across every decision in the business.


Q: You've talked about data security as a risk with LLMs that most enterprises aren't seeing. What is it? 

The risk is intellectual property leakage, and it is operating silently in most organizations right now. Most businesses' core intellectual property is in their business processes: 

"How does Goldman Sachs do things versus Morgan Stanley? On the surface these businesses look similar, but how the businesses actually function internally – that's their secret sauce." 

When employees use public LLMs to ask questions about internal operations, those processes are being exposed to the model. Quarrio's architecture eliminates this entirely: it sits behind the organization's existing security footprint, introduces no new cybersecurity infrastructure, and never exports data. Your questions, your processes, and your data stay inside your own environment – a design decision that is increasingly relevant as regulatory scrutiny of AI decision systems grows across financial services, healthcare, and government. 

The bottom line: Every query an employee sends to a public LLM is potentially leaking the company's operational secret sauce. Quarrio's architecture ensures data and questions never leave the organization. 


Q: Most enterprises are already invested in Salesforce, SAP, Tableau, and PowerBI. Does Quarrio replace any of that? 

No, and I'm emphatic on this point. Quarrio was deliberately built to sit alongside existing systems. It plugs into structured enterprise data sources (CRM, ERP, financial systems, and risk databases) and returns answers in under two seconds without requiring a new data warehouse, transformation project, or changes to existing security architecture. The reason we entered the market through sales teams and Salesforce is strategic: "If you make sales happy, you make everyone happy. But if you make finance happy, you've made finance happy and it stays there." 

Salesforce itself incubated Quarrio for several years – their former AI group leader invested immediately after seeing a prototype, their former CTO invested shortly after, and we operated from Salesforce's offices for three years. From that beachhead, deployments expand naturally into finance and marketing. 

The bottom line: Quarrio starts with sales, the fastest-moving, most ROI-visible part of any enterprise, and expands from there, making existing systems more valuable rather than replacing them. 


Q: Who in the organization actually benefits, and how large is the decision-making opportunity? 

One of the most striking realities in enterprise decision-making is how concentrated it currently is, and how much that concentration costs. As I see it: 

"The executive team and board take 1% of the decisions that are strategic. Everything else is the rest of the organization." 

That 99%, the line-of-business managers and the teams around them, are the people who today wait 2+ weeks for a report they may never get unless they have dedicated analyst support. Our goal is to put accurate, instant, auditable answers in the hands of every one of those decision-makers. The economic logic follows directly: organizations with visible AI strategies are twice as likely to see AI-driven revenue growth, and 3.5 times more likely to realize strategic AI benefits, compared to those without a coherent approach. Democratizing access to accurate data, at every level of the organization, is how those strategies become operational reality. 

The bottom line: 99% of enterprise decisions are made by line-of-business managers, not executives. Making that group faster and more accurate is the real competitive lever, and it is still largely untouched. 


Q: The agentic AI wave is accelerating. What does that mean for enterprises that haven't gotten their data foundation right? 

The stakes of inaccurate AI are rising rapidly. As AI moves from answering questions to taking autonomous actions – approving transactions, triggering capital allocations, executing workflows – the tolerance for probabilistic error effectively drops to zero. McKinsey's 2026 research found that security and risk concerns are the top barrier to scaling agentic AI, with inaccuracy specifically cited by nearly three in four organizations. Gartner has predicted that 40% of agentic AI projects will be cancelled by 2027, with hallucination and governance failure among the primary causes. As I put it at our launch:

"Once AI systems start approving transactions, triggering capital allocations, or executing workflows, enterprises cannot tolerate the probability-driven guesswork of GenAI. The future belongs to AI that can prove what it did and why." 

Every agentic deployment built on a probabilistic foundation carries that risk forward. 

The bottom line: Agentic AI doesn't make the accuracy problem smaller, it makes it consequential. An autonomous agent that acts on a hallucinated answer isn't a UX problem. It's a liability event. 


Q: You've spent nearly a decade in R&D. How did you know it was time to go to market? 

Honestly, it's more art than science. We ran proofs of concept and pilots with auto parts manufacturers in Germany, oil companies in the Middle East, major utilities, and investment banks, testing the system in genuinely diverse enterprise environments before launch.  

There is always one more capability to build. Our public launch in February 2026 was timed to a convergence of market conditions: GenAI ROI disappointment reaching critical mass, agentic AI deployments accelerating the need for a trustworthy execution layer, and enterprise leaders facing an AI-or-fall-behind imperative with no clear framework for what 'good AI' actually means. 

The bottom line: Years of R&D and global pilots validated Quarrio's technology. The market timing – GenAI disillusionment, agentic risk, and a $300 billion AI spend wave – validated the moment. Note that OpenAI took seven years before launching ChatGPT and has raised over $168 billion along the way, which tells you just how much capital is chasing probabilistic AI that still can't guarantee a correct answer. At Quarrio, we have a stewardship ethos: when the use cases are mission-critical – financial decisions, supply chains, regulated industries – there is no margin for "good enough." The technology has to be right, every time, before you put it in front of the world. 


Q: What's the single most important thing you want executives to take away from this conversation? 

The tools generating the most attention – LLMs and SLMs – are genuinely valuable for the right applications. But 75% of executives rank AI as a top three strategic priority while only a quarter report meaningful value from their AI initiatives. The gap between investment and return is not a technology problem. It is a matching problem: applying the right kind of AI to the right kind of task. 

What enterprises actually need isn't just faster AI, it's what I call 'decision-grade intelligence': answers that are accurate enough, fast enough, and auditable enough to act on with confidence. The organizations that build that capability into every level of their operations will not be catching up. They will be setting the pace. 

The bottom line: Speed without accuracy is just a faster way to make a bad decision. The race isn't to the most capable AI, it's to the most trustworthy one. 


Listen to the full episode - SaaS Backwards, Episode 194: "The SaaS AI Trap: Fast Answers, Bad Decisions" - on Buzzsprout, Apple Podcasts, Spotify, and YouTube. 



Copyright © 2015–2026 Quarrio. All rights reserved.