The Perception Score Methodology
How VectorGap measures AI brand perception across accuracy, visibility, sentiment, and consistency.

Why Perception Scoring Matters
When ChatGPT, Claude, or Gemini discusses your brand, its response shapes buyer perception. But how do you measure whether those responses are helping or hurting your business?
The Perception Score provides a standardized framework for evaluating AI brand representation across multiple dimensions.
The Four Dimensions
Accuracy (0-100)
What it measures: Whether AI states factually correct information about your brand.
We compare AI responses against your verified knowledge base. Every claim is evaluated:
- Correct: AI statement matches verified facts
- Incorrect: AI statement contradicts verified facts (hallucination)
- Unverifiable: AI makes claims we cannot confirm or deny
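As a rough illustration of how claim-level results could roll up into a 0-100 Accuracy score, here is a minimal Python sketch. The function name, and the choice to exclude unverifiable claims from the denominator, are illustrative assumptions rather than VectorGap's exact scoring rule.

```python
from collections import Counter

# Labels matching the three claim categories above.
CORRECT, INCORRECT, UNVERIFIABLE = "correct", "incorrect", "unverifiable"

def accuracy_score(claim_labels: list[str]) -> float:
    """Return an Accuracy score on a 0-100 scale.

    Assumption: unverifiable claims are excluded, so only claims that
    could be checked against the knowledge base count toward the score.
    """
    counts = Counter(claim_labels)
    verifiable = counts[CORRECT] + counts[INCORRECT]
    if verifiable == 0:
        return 0.0  # no checkable claims; real handling may differ
    return 100.0 * counts[CORRECT] / verifiable

# Example: 7 correct claims, 2 hallucinations, 1 unverifiable claim.
labels = [CORRECT] * 7 + [INCORRECT] * 2 + [UNVERIFIABLE]
print(round(accuracy_score(labels), 1))  # 77.8
```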
Visibility (0-100)
What it measures: How prominently your brand appears in relevant AI responses.
Sentiment (0-100)
What it measures: The tone AI uses when discussing your brand.
Consistency (0-100)
What it measures: Whether different AI systems agree about your brand.
The Composite Score
The overall Perception Score combines all four dimensions with equal weighting:
Perception Score = (Accuracy + Visibility + Sentiment + Consistency) / 4
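Expressed as a minimal Python sketch of the same equal-weight average (the function name and sample inputs are illustrative):

```python
def perception_score(accuracy: float, visibility: float,
                     sentiment: float, consistency: float) -> float:
    """Equal-weight average of the four dimension scores (each 0-100)."""
    return (accuracy + visibility + sentiment + consistency) / 4

# Example: strong accuracy and sentiment, weaker visibility and consistency.
print(perception_score(92, 60, 85, 71))  # 77.0
```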
Ready to monitor your AI perception?
See exactly what ChatGPT, Claude, and Gemini say about your brand.
Get Started Free