
Interpreting Audit Results

Understand your perception audit results and identify improvement opportunities.

Results Overview

After each audit, you'll see:

- **BPI Score** - Your overall 0-100 perception score
- **Metric Breakdown** - Individual scores for each of the six metrics
- **Provider Comparison** - How scores vary across ChatGPT, Claude, etc.
- **Trend Data** - Changes since your last audit
- **Full Responses** - Complete AI responses for each query
- **Detected Issues** - Hallucinations, inaccuracies, negative signals
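If it helps to picture the pieces together, the sketch below shows one audit result as a structured record. It is purely illustrative: the field names (`bpi`, `metrics`, `providers`, and so on) are assumptions, not the product's actual export or API schema.

```python
# Illustrative shape of one audit result. All field names are hypothetical;
# consult the actual export/API for the real schema.
audit_result = {
    "bpi": 72,                 # overall 0-100 perception score
    "metrics": {               # individual scores for each metric
        "accuracy": 81,
        "credibility": 68,
        "visibility": 55,
        "recommendation": 74,
        # ...remaining metrics
    },
    "providers": {             # how scores vary per LLM
        "chatgpt": 75,
        "claude": 69,
    },
    "previous_bpi": 70,        # for trend comparison with the last audit
    "responses": [...],        # full AI responses for each query
    "issues": [                # detected hallucinations, inaccuracies, negative signals
        {"type": "inaccuracy", "detail": "Outdated pricing quoted"},
    ],
}
```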

Reading Your BPI

Your BPI score tells you how well AI represents your brand:

**80-100: Excellent**
AI systems are strong advocates. Minor optimization opportunities.

**70-79: Good**
Solid foundation. Address specific weak metrics.

**50-69: Needs Work**
Significant gaps in perception. Prioritize improvement.

**Below 50: Critical**
AI may be actively harming your brand. Immediate action needed.

Context matters: compare against industry benchmarks and competitors, not just absolute scores.
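If you want to bucket scores programmatically, for a dashboard or an alert, a minimal sketch of the bands above could look like this. The function name and return labels are just for illustration.

```python
def bpi_band(score: int) -> str:
    """Map a 0-100 BPI score to the interpretation bands described above."""
    if score >= 80:
        return "Excellent"   # AI systems are strong advocates
    if score >= 70:
        return "Good"        # solid foundation, address weak metrics
    if score >= 50:
        return "Needs Work"  # significant perception gaps
    return "Critical"        # immediate action needed

print(bpi_band(72))  # -> "Good"
```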

Metric Analysis

Each metric highlights different issues:

**Low Accuracy** (below 60)
AI is stating incorrect facts. Common causes:
- Outdated pricing
- Wrong feature descriptions
- Competitor confusion

Action: Upload current docs to the Knowledge Base

**Low Credibility** (below 60)
AI doesn't present you authoritatively. Common causes:
- Limited online presence
- Few authoritative citations

Action: Build presence on high-authority sites

**Low Visibility** (below 60)
You're not appearing in category queries. Common causes:
- Weak SEO/GEO foundations
- Competitors dominate the space

Action: Create GEO-optimized content

**Low Recommendation** (below 60)
AI knows you but doesn't recommend you. Common causes:
- Negative reviews or sentiment
- Stronger competitor positioning

Action: Address negative signals, improve social proof
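As a rough automation of the checklist above, you could flag any metric under the 60 threshold and attach its suggested action. The mapping mirrors the guidance in this section; the dictionary and function names are hypothetical.

```python
# Suggested actions from this section, keyed by metric name (hypothetical keys).
WEAK_METRIC_ACTIONS = {
    "accuracy": "Upload current docs to the Knowledge Base",
    "credibility": "Build presence on high-authority sites",
    "visibility": "Create GEO-optimized content",
    "recommendation": "Address negative signals, improve social proof",
}

def weak_metrics(metric_scores: dict[str, int], threshold: int = 60) -> dict[str, str]:
    """Return the suggested action for every metric scoring below the threshold."""
    return {
        name: WEAK_METRIC_ACTIONS.get(name, "Review this metric's responses")
        for name, score in metric_scores.items()
        if score < threshold
    }

print(weak_metrics({"accuracy": 81, "visibility": 55, "credibility": 58}))
# -> {'visibility': 'Create GEO-optimized content',
#     'credibility': 'Build presence on high-authority sites'}
```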

Provider Variations

Expect variation between providers. Each has different:
- Training data and cutoff dates
- Response styles and biases
- User bases and contexts

A 10-15 point spread between providers is normal. Larger spreads indicate provider-specific issues worth investigating.

Example: If ChatGPT scores 75 but Claude scores 55, investigate what Claude is saying differently.
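To spot provider-specific issues, you can compute the spread between the best- and worst-scoring providers and compare it against the normal 10-15 point range. This is a sketch; the 15-point cutoff and the function name come only from the guidance above.

```python
def provider_spread(provider_scores: dict[str, int]) -> tuple[int, str, str]:
    """Return the score spread plus the highest- and lowest-scoring providers."""
    high = max(provider_scores, key=provider_scores.get)
    low = min(provider_scores, key=provider_scores.get)
    return provider_scores[high] - provider_scores[low], high, low

spread, high, low = provider_spread({"chatgpt": 75, "claude": 55})
if spread > 15:  # beyond the normal 10-15 point range
    print(f"Investigate what {low} is saying differently ({spread}-point gap vs {high})")
```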

Tracking Changes

Trend data shows how perception evolves:

**Positive trend** (5+ point increase)
Your GEO efforts or content updates are working. Keep going.

**Stable** (within 5 points)
No major changes. May indicate a plateau - time for new initiatives.

**Negative trend** (5+ point decrease)
Something changed. Check:
- Competitor activity
- Negative press or reviews
- AI model updates
- Your own content changes

Set up alerts to catch significant changes between audits.
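The same 5-point rule can drive a simple trend check between two audits; the names below are illustrative, not part of the product.

```python
def classify_trend(previous_bpi: int, current_bpi: int) -> str:
    """Classify the change between two audits using the 5-point rule above."""
    delta = current_bpi - previous_bpi
    if delta >= 5:
        return "positive"  # GEO efforts or content updates are working
    if delta <= -5:
        return "negative"  # check competitors, press, model updates, your own content
    return "stable"        # possible plateau; consider new initiatives

print(classify_trend(70, 76))  # -> "positive"
```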

Need more help?

Check out the Academy for in-depth courses or contact support.