What Are Hallucinations?
AI hallucinations are confident-sounding statements that are factually incorrect.
Common examples:
- "Brand X was founded in 2015" (actually 2018)
- "Brand X costs $99/month" (actually $49/month)
- "Brand X integrates with Salesforce" (no such integration exists)
- "Brand X is headquartered in San Francisco" (actually Brussels)
These aren't deliberate lies - the model isn't trying to deceive anyone; it simply generates what it predicts is correct. But they can seriously mislead potential customers.
How Detection Works
During each audit, VectorGap:
1. **Extracts claims** from AI responses (facts, numbers, statements)
2. **Searches your Knowledge Base** for related content
3. **Compares claims to facts** using semantic similarity
4. **Flags contradictions** with severity ratings
5. **Records evidence** for your review
Detection requires a populated Knowledge Base. Without reference data, we can't verify claims.
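To make the flow concrete, here is a minimal sketch of the retrieve-and-compare step. It is illustrative only: the `embed` stub, the toy character-frequency vectors, the `check_claim` helper, and the 0.8 threshold are all assumptions for the example, not VectorGap's actual implementation.

```python
from dataclasses import dataclass
from math import sqrt

def embed(text: str) -> list[float]:
    """Stand-in embedding: a toy character-frequency vector so the sketch runs.
    A real pipeline would call an actual sentence-embedding model here."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 when either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

@dataclass
class Finding:
    claim: str          # statement extracted from the AI response
    matched_fact: str   # closest Knowledge Base entry
    similarity: float   # how semantically related the claim and fact are
    contradiction: bool # whether the related fact disagrees with the claim

def check_claim(claim: str, knowledge_base: list[str], threshold: float = 0.8) -> Finding | None:
    """Retrieve the most related KB fact; if it is related but disagrees, flag it."""
    if not knowledge_base:
        return None  # no reference data, so the claim cannot be verified
    best_fact, best_sim = max(
        ((fact, cosine(embed(claim), embed(fact))) for fact in knowledge_base),
        key=lambda pair: pair[1],
    )
    if best_sim < threshold:
        return None  # nothing in the KB covers this claim
    # A production system would decide "contradiction" with an NLI model or
    # LLM judge; a plain string comparison is only a placeholder.
    return Finding(claim, best_fact, best_sim, contradiction=(claim.strip() != best_fact.strip()))
```

The key idea the sketch captures: a claim can only be flagged when the Knowledge Base contains something related enough to compare it against, which is why an empty Knowledge Base yields no findings.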
Severity Levels
Detected hallucinations are categorized:
**Critical**
Major factual errors that could lose sales:
- Wrong pricing
- Claimed features that don't exist
- Incorrect availability
**High**
Significant inaccuracies:
- Wrong company details
- Outdated product information
- Misattributed capabilities
**Medium**
Minor issues:
- Slightly off numbers
- Incomplete information
- Information that is dated but not strictly incorrect
**Low**
Potential issues requiring verification:
- Ambiguous claims
- Partially accurate statements
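For illustration, a triage helper that mirrors the buckets above might look like the following. The category names and the mapping are hypothetical examples, not VectorGap's internal classification logic.

```python
# Hypothetical mapping from claim category to the severity buckets described above.
SEVERITY_BY_CATEGORY = {
    "pricing": "critical",
    "feature_claim": "critical",
    "availability": "critical",
    "company_details": "high",
    "product_info": "high",
    "capabilities": "high",
    "numeric_detail": "medium",
    "incomplete": "medium",
    "dated": "medium",
    "ambiguous": "low",
    "partial": "low",
}

def severity_for(category: str) -> str:
    """Map a claim category to a severity bucket, defaulting to 'low'."""
    return SEVERITY_BY_CATEGORY.get(category, "low")
```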
Responding to Hallucinations
When you find hallucinations:
**Immediate:**
1. Document the specific claim and source
2. Verify against your actual facts
3. Check if this appears across multiple providers
**Short-term:**
4. Update your website with clear, crawlable facts (see the structured-data sketch after this list)
5. Create GEO content specifically addressing the incorrect claim
6. Add content to authoritative third-party sites
**Ongoing:**
7. Monitor subsequent audits for improvements
8. Set alerts for recurrence
9. Build citations over time
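One common way to publish clear, crawlable facts (step 4) is structured data markup. The sketch below builds a schema.org Organization snippet in Python; the brand name, founding date, address, and URL echo the examples earlier in this article and are placeholders for your own verified details.

```python
import json

# Placeholder facts; replace with your own verified company details.
facts = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Brand X",
    "foundingDate": "2018",
    "url": "https://example.com",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Brussels",
        "addressCountry": "BE",
    },
}

# Embed the corrected facts as JSON-LD so crawlers and AI systems can read them.
snippet = f'<script type="application/ld+json">\n{json.dumps(facts, indent=2)}\n</script>'
print(snippet)
```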
AI systems update gradually. Expect 4-8 weeks for corrections to propagate.