AI Hallucinations: Amusing at home, unwelcome at work
How we’ve built accuracy and confidence into Coda Brain.
![](https://sanity-images.imgix.net/production/4bc8337e031529f956577a00e0d76754621ef9b7-1040x1000.png?w=1200&auto=format%2Ccompress)
Glenn Jaume
Product Manager at Coda
AI · 6 min read
Why hallucinations don’t cut it in enterprise AI.
When using AI at work, the tolerance for inaccuracy is extremely low; it’s critical to have full confidence in the responses and to be able to verify their truthfulness. Without that trust, the benefits and efficiencies of AI are severely undermined.

Imagine you’re putting together a presentation for your senior leadership, and you need to collect sales numbers for the past quarter. You ask whatever AI you’re using to gather this data for you. Right before the presentation, or worse, during it, you realize that the AI has hallucinated a few sales deals that don’t actually exist. Or, worse still, you’ve already made decisions based on that false data. Now you’ve lost all trust in the AI. Next time, you’ll hesitate to use it, or you’ll waste tons of time fact-checking its answers, which rather defeats the point of using AI in the first place. Clearly, this isn’t acceptable, and that’s why the bar for accuracy in enterprise AI is so high.

How we’ve built confidence into Coda Brain.
Right from the start of building Coda Brain, our turnkey enterprise AI platform, we knew that trust and accuracy were non-negotiable. There are four features we’ve built into Coda Brain to ensure the responses it gives are accurate and verifiable:
- Increasing relevancy with RAG.
- Showing our working with citations.
- Keeping the human in the loop.
- Protecting security with permissions-awareness.
1. Increasing relevancy with RAG.
RAG (Retrieval-Augmented Generation) is an AI technique that’s rapidly gaining popularity, especially in enterprise AI. Rather than answering from the model’s memory alone, RAG directs the AI to retrieve specific content or data first, then use those sources to answer the user’s question. Coda Brain uses RAG to pull relevant information from your docs and any tools you’ve connected to Coda before generating insights, responses, or actions based on them. For example, when you ask a question like “Should we build a desktop app?”, Coda Brain will collect relevant context like meeting notes, writeups, Jira tickets, Salesforce data, and more. Then, it will synthesize that information into a concise response to your question.

![](https://sanity-images.imgix.net/production/402b72794737fa6c6b7195c67fd8b9c932166b88-1566x912.png?w=2000&auto=format%2Ccompress)
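To make the retrieve-then-generate flow concrete, here’s a minimal sketch of a RAG loop in Python. To be clear, this is not Coda Brain’s implementation: the `Source` type, the keyword-overlap retriever (standing in for a real embedding index), and the `generate` stub are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str   # e.g. "Desktop app research writeup"
    origin: str  # e.g. "Coda doc", "Jira", "Salesforce"
    text: str

def overlap(question: str, text: str) -> int:
    # Crude keyword overlap, standing in for embedding similarity.
    return len(set(question.lower().split()) & set(text.lower().split()))

def retrieve(question: str, corpus: list[Source], k: int = 5) -> list[Source]:
    # Rank sources by relevance to the question and keep the top k.
    return sorted(corpus, key=lambda s: overlap(question, s.text), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stand-in for a call to whatever LLM you use; provider-specific in practice.
    raise NotImplementedError

def answer_with_rag(question: str, corpus: list[Source]) -> str:
    # Retrieve first, then generate an answer grounded in what was retrieved.
    sources = retrieve(question, corpus)
    context = "\n\n".join(f"[{s.origin}: {s.title}]\n{s.text}" for s in sources)
    prompt = (
        "Answer the question using ONLY the sources below. "
        "If they don't contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The key design choice is that the prompt is grounded in retrieved sources, and the model is told to admit when those sources don’t cover the question, rather than inventing an answer from memory.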
2. Showing our working with citations.
Speaking of real, verifiable data, the second approach we’ve taken with Coda Brain is citations. We’ve made Coda Brain’s responses as transparent as possible, so you know exactly where each answer comes from and can adjust if needed. Behind the scenes, when we ask the AI for a response, we not only retrieve the right set of sources, but also direct the AI to annotate where each part of the response comes from. These annotations are then shown to you as citations. If you hover over a citation, it will show what was retrieved and from where. This means you can verify the results yourself, and it helps prevent hallucinations.

![](https://sanity-images.imgix.net/production/d12df6a44dd63f5eb4db685cfbb924070e9870e8-1820x900.png?w=2000&auto=format%2Ccompress)
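One common way to implement this kind of annotation, sketched below in Python and reusing the hypothetical `Source` type from the RAG example, is to number the retrieved sources, instruct the model to tag each claim with a source number, and then parse those tags back out for display. The prompt wording and the `[n]` marker convention here are assumptions, not Coda’s actual mechanism.

```python
import re

def build_cited_prompt(question: str, sources: list[Source]) -> str:
    # Number each source and ask the model to tag every claim with [n].
    numbered = "\n\n".join(
        f"[{i}] ({s.origin}) {s.title}\n{s.text}"
        for i, s in enumerate(sources, start=1)
    )
    return (
        "Answer using only the numbered sources below. After every claim, "
        "append the supporting source number in brackets, e.g. [2].\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

def extract_citations(answer: str, sources: list[Source]) -> dict[int, Source]:
    # Map [n] markers in the answer back to the retrieved sources, so the UI
    # can show what was retrieved, and from where, when you hover a citation.
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    return {n: sources[n - 1] for n in cited if 1 <= n <= len(sources)}
```

Because every marker maps back to a concrete retrieved source, a claim with no citation, or one pointing at a source that doesn’t support it, is easy to spot and check.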
3. Keeping the human in the loop.
Further to our goal of transparency, we’ve built Coda Brain to automatically show you the steps it takes as it works. For example, if you’ve asked for all sales opportunities over $10k, you might see that it identified Salesforce as the source, then which table it chose, which filters it applied, and so on. You can then adjust this if needed, say if you want to refine further to just North American opportunities, or if you want to see data from a different tool instead. This approach, called “human-in-the-loop”, makes it easy to fine-tune the AI output to your specific needs. It also means you can be confident that what you’re seeing isn’t a hallucination, as you know exactly where the information came from.

![](https://sanity-images.imgix.net/production/bf9b0c2609801a8d23b81b68306c585798a1686e-2040x1980.png?w=2000&auto=format%2Ccompress)
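One way to surface steps like these is to have the AI emit a structured query plan rather than opaque results, so each step can be shown, and edited, before anything runs. Here’s a minimal sketch; the `QueryPlan` shape and its field names are illustrative assumptions, not Coda Brain’s internal format.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class QueryPlan:
    # An inspectable plan the AI proposes before any data is fetched.
    source: str               # e.g. "Salesforce"
    table: str                # e.g. "Opportunities"
    filters: tuple[str, ...]  # e.g. ("amount > 10000",)

def describe(plan: QueryPlan) -> str:
    # Render the plan as the step-by-step trace shown to the user.
    lines = [f"Source: {plan.source}", f"Table: {plan.table}"]
    lines += [f"Filter: {f}" for f in plan.filters]
    return "\n".join(lines)

# The AI proposes a plan for "opportunities over $10k"...
proposed = QueryPlan(
    source="Salesforce",
    table="Opportunities",
    filters=("amount > 10000", "stage = 'Open'"),
)
print(describe(proposed))

# ...and the human refines it before execution, e.g. narrowing to North America.
refined = replace(proposed, filters=proposed.filters + ("region = 'AMER'",))
print(describe(refined))
```

Because the user approves the plan, not just the final answer, a wrong table or a hallucinated filter gets caught before it ever shapes the result.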