
Not All RAG is Created Equal (Part 1)

By Pat Calhoun, Chief Executive Officer | April 1, 2025

RAG—short for Retrieval-Augmented Generation—has become the default answer to every question on how to make AI in the enterprise smarter, safer, and more accurate. And to be clear: it’s a powerful advancement. With RAG, large language models (LLMs) don’t rely solely on their training data. Instead, they retrieve real-time information from trusted sources and generate answers grounded in that content.
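For readers who like to see the moving parts, here is a minimal sketch of that pattern in Python. Everything in it (the naive keyword-overlap retriever, the `llm` callable) is an illustrative placeholder, not Espressive's implementation; production systems typically use vector embeddings and a hosted model.

```python
# A minimal RAG sketch: retrieve relevant articles, then generate an
# answer grounded in them. All names are illustrative placeholders.

def retrieve(query: str, index: list[dict], top_k: int = 3) -> list[dict]:
    """Rank knowledge articles by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        index,
        key=lambda doc: len(terms & set(doc["text"].lower().split())),
        reverse=True,
    )[:top_k]

def answer(query: str, index: list[dict], llm) -> str:
    """Ground the model's answer in retrieved content, not training data."""
    context = "\n\n".join(doc["text"] for doc in retrieve(query, index))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt)  # llm: any callable that maps a prompt string to text
```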


But let’s get one thing straight:

Just saying you “use RAG” doesn’t mean you’re delivering the same employee experience.

That’s where the conversation starts to get interesting. Because while RAG is the engine behind many virtual agents and enterprise search tools, what happens after retrieval is where the real difference lies.

What Most People Mean by RAG

In most cases, RAG is applied to enterprise search. An employee asks a question, the system finds a relevant knowledge article, and the LLM summarizes the answer in natural language.


And for unstructured content—things like policy documents, HR guidelines, and benefits pages—this process works really well. At Espressive, we call this capability the Content Interrogator. It powers natural Q&A over existing documentation, across systems like Microsoft SharePoint, Confluence, ServiceNow, or even public sites.


For example, if someone asks:

  • "Can I work from home four days a week?"
  • "Will the company reimburse my desk?"
  • "Do I need VPN if I'm remote?"

Our virtual agent, BaristaGPT, reads the actual Work From Home Policy and responds with a concise, accurate answer—no links to click, no portals to navigate. Just a direct, human-like conversation based on real content.


So yes, this is RAG—and it’s valuable. But it's not the full story.

What Happens When the Content Isn't Just Text?

Let’s say the question is about reimaging a laptop, fixing a VPN connection, or troubleshooting why someone can’t connect to Wi-Fi. These aren’t policy questions—they’re procedural, multi-step, and often include branching logic based on conditions like device type, network status, or operating system. Resolving them isn’t about dumping 30 steps on the user and hoping they find the right one.

In many cases, these documents look simple on the surface, but they contain hidden complexity:

  • "If the user is on Windows, do X; if the user is on Mac, do Y."
  • "If step 6 fails, skip to step 12."
  • "Only proceed if connected via VPN."

That’s the kind of structure that traditional enterprise search—even with RAG—can’t handle. Summarizing these documents flattens the logic. Linking to them forces the employee to self-navigate. And neither of those leads to resolution.
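To see concretely why a flat summary loses the plot, consider one way the branching in a runbook might be represented. The `Step` structure below is a hypothetical illustration, not an actual article format:

```python
# A toy representation of a runbook with conditions and failure branches.
# The field names and example steps are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Step:
    number: int
    instruction: str
    condition: tuple[str, str] | None = None  # e.g. ("os", "windows")
    on_failure: int | None = None             # step to jump to if this fails

runbook = [
    Step(1, "Open network settings"),
    Step(2, "Launch the Windows VPN client", condition=("os", "windows")),
    Step(3, "Launch the Mac VPN client", condition=("os", "mac")),
    Step(6, "Test the connection", on_failure=12),
    Step(12, "Escalate to the service desk"),
]
```

Summarize that document and the jump from step 6 to step 12 simply disappears.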


What’s needed is a system that can:

  • Recognize the structure and conditional logic in the content.
  • Understand where the user is in the process.
  • Ask the right questions to determine the correct path.
  • Confirm progress before moving on.

That's exactly what RunbookIQ does.
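Continuing the toy `Step` sketch above, a guided, multi-turn walkthrough could look something like the loop below. This is an assumed illustration of the pattern, not RunbookIQ's actual logic:

```python
# A toy walkthrough over the runbook sketched earlier: skip steps whose
# condition doesn't match the user's context, confirm each step, and
# follow failure branches. Illustrative only, not RunbookIQ's code.

def walk(runbook: list[Step], context: dict[str, str]) -> None:
    steps = {s.number: s for s in runbook}
    order = sorted(steps)

    def next_step(n: int) -> int | None:
        later = [k for k in order if k > n]
        return later[0] if later else None

    current: int | None = order[0]
    while current is not None:
        step = steps[current]
        if step.condition and context.get(step.condition[0]) != step.condition[1]:
            current = next_step(current)  # condition not met: skip this step
            continue
        done = input(f"Step {step.number}: {step.instruction}. Did that work? (y/n) ")
        if done.strip().lower() != "y" and step.on_failure is not None:
            current = step.on_failure  # e.g. "if step 6 fails, skip to step 12"
        else:
            current = next_step(current)

walk(runbook, context={"os": "windows"})
```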

Same Engine, Two Different Experiences

Here’s where it gets interesting: both Content Interrogator and RunbookIQ start the same way—using RAG to find the most relevant knowledge article.


But once the content is retrieved, BaristaGPT makes a real-time decision:

  • If the article is unstructured content, it's routed to Content Interrogator, which extracts the relevant information and answers the employee's questions in natural language.
  • If the article is a structured, step-by-step guide, it's handed off to RunbookIQ, which converts it into a dynamic, multi-turn conversation—walking the user through each step, confirming success, and adapting the flow as needed.

This means BaristaGPT doesn’t just retrieve content—it understands the nature of that content and tailors the experience accordingly. Even better, you don’t need to change your articles at all: BaristaGPT figures out the best path in real time.
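One way to picture that post-retrieval routing decision is below. The regex heuristic and function names are assumptions for illustration; BaristaGPT's actual classification is not disclosed here:

```python
# A hedged sketch of routing after retrieval: articles that look
# procedural go to a guided walkthrough, everything else to plain Q&A.
# The heuristic is an illustrative stand-in for real classification.

import re

def looks_procedural(article: str) -> bool:
    """Guess whether an article is a step-by-step guide."""
    numbered = len(re.findall(r"^\s*\d+[.)]\s", article, flags=re.MULTILINE))
    branches = len(re.findall(r"\b(if|skip to|only proceed)\b", article, re.IGNORECASE))
    return numbered >= 3 or branches >= 2

def route(article: str) -> str:
    return "RunbookIQ" if looks_procedural(article) else "Content Interrogator"
```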

Not All RAG is Created Equal — And That Matters

Many vendors today claim to support RAG. What they often mean is, “We retrieve documents and summarize them.” This is fine for simple Q&A, but it’s not enough when the goal is problem resolution.


Employees don’t care whether the system used RAG, fine-tuning, or prompt engineering—they just want their issues solved. And solving issues requires knowing when to give an answer, when to guide a process, and when to escalate.

With Espressive, it’s not one-size-fits-all. It’s one brain, two powerful experiences:

  • Content Interrogator for natural Q&A over unstructured content.
  • RunbookIQ for guided troubleshooting and multi-step workflows. 

Both are powered by RAG. But they solve completely different problems.

The Bottom Line

Saying you use RAG is like saying your car has an engine—it tells you nothing about where it can take you.


What matters is what you do once the content is retrieved.


At Espressive, we’ve built a platform that doesn’t just use RAG. We’ve built one that understands how to use it differently—depending on whether the employee needs information, instruction, or resolution.


Because in the end, not all RAG is created equal—and your employees deserve better than just “search.”
