AI Visibility

Per-Model AI Performance Analytics

Learn how to use the per-model breakdown in Citation Analytics. Understand why ChatGPT and Google AI Overview are included in every plan, and how AI Model Add-ons extend your coverage to Claude, Perplexity, and Gemini.

Ayzeo Team March 23, 2026 8 min read

Key Takeaways

  • ChatGPT and Google AI Overview are included in every subscription — they reach over 1 billion users per week combined
  • Per-model data reveals exactly which AI platforms are citing your brand and which ones are ignoring it
  • Claude, Perplexity, and Gemini are available as add-ons for broader analysis as those platforms grow
  • The All Models aggregate is the arithmetic mean across all enabled models — not a snapshot of the latest run

Why Per-Model Data Changes Your Analysis

Your aggregate AI visibility score tells part of the story. But when you break it down by model, you often find a very different picture beneath the surface.

Example: Your overall visibility rate is 63%. But ChatGPT shows you in 91% of queries while Google AI Overview shows you in only 35%. A 91% rate you want to defend and a 35% rate you need to diagnose call for completely different actions, and the average hides both.
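For illustration, here is a minimal sketch of how a simple mean across models can mask a large per-model gap (the numbers are hypothetical):

```python
# Hypothetical per-model visibility rates, in percent
rates = {"ChatGPT": 91.0, "Google AI Overview": 35.0}

# The aggregate mean looks healthy on its own...
aggregate = sum(rates.values()) / len(rates)

# ...but the spread between models tells the real story
spread = max(rates.values()) - min(rates.values())

print(f"Aggregate: {aggregate:.0f}%")   # 63%
print(f"Spread: {spread:.0f} points")   # 56 points
```

The single aggregate number gives no hint that one platform is nearly ignoring the brand, which is exactly why the per-model breakdown matters.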

Ayzeo now surfaces visibility rate, citation rate, mention rate, and sentiment separately for each AI model in your project. Use the model selector at the top of your Citation Analytics dashboard to switch between individual platforms or the All Models aggregate.

What's Included in Every Plan

Every Ayzeo subscription includes analytics for the two AI platforms with the largest audiences in the world. You don't need to configure anything — they're enabled on all projects by default.

ChatGPT
Included in all plans

OpenAI's ChatGPT surpassed 1 billion weekly active users in 2025, making it the most widely used AI assistant on the planet. ChatGPT searches the web via Bing to answer queries in real time, meaning your Bing search presence directly affects your ChatGPT visibility.

Google AI Overview
Included in all plans

Google processes over 8.5 billion searches per day, and AI Overviews now appear at the top of results for hundreds of millions of queries worldwide. This is already the default experience for the majority of Google users — making it one of the most consequential AI surfaces for any brand.

Note on Google AI Overview: Google decides per query whether to show an AI Overview. Niche or highly specific queries sometimes return no Overview at all. Your AI Overview metrics in Ayzeo are therefore based only on queries where Google actually returned an AI Overview — not padded with zeros for queries where it was absent.
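That denominator rule can be sketched with a simplified per-query record (the field names here are illustrative, not Ayzeo's actual schema):

```python
# One record per tracked query: did Google return an AI Overview,
# and did our domain appear in it? (hypothetical data)
results = [
    {"overview_shown": True,  "domain_visible": True},
    {"overview_shown": True,  "domain_visible": False},
    {"overview_shown": False, "domain_visible": False},  # no Overview: excluded
    {"overview_shown": True,  "domain_visible": True},
]

# Only queries where an Overview actually appeared enter the denominator
shown = [r for r in results if r["overview_shown"]]
visibility = sum(r["domain_visible"] for r in shown) / len(shown)

print(f"AI Overview visibility rate: {visibility:.0%}")  # 67%, not 50%
```

Padding the absent queries with zeros would report 50% here and unfairly penalise the brand for queries Google never answered with an Overview.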

Deeper Analysis with AI Model Add-ons

The AI landscape is not standing still. New platforms are gaining millions of users every month, and each represents a distinct audience with distinct search behaviour. For teams that need broader coverage, Ayzeo offers AI Model Add-ons that extend your analytics to additional platforms:

  • Claude (add-on): Enterprise & professional users, fast-growing. Strong in B2B, legal, finance, and research use cases.
  • Perplexity (add-on): Research-oriented, rapidly growing. Surfaces sources prominently, so cited brands get visible link attribution.
  • Gemini (add-on): Growing across the Google ecosystem. Deep integration with Google Workspace & Android.

When should you add more models? If your audience includes technical professionals, researchers, or enterprise buyers, Claude and Perplexity are disproportionately relevant. If you rely on Google's ecosystem for traffic, Gemini is a natural extension. Adding a model early means you have trend data from the moment that platform becomes significant for your industry.

How to Use the Model Selector

In your Citation Analytics dashboard, a model selector appears at the top of the page whenever your project has more than one AI model enabled. Click any model tab to scope all KPI cards, trend charts, competitor data, and prompt results to that specific model.

1. Select a model tab

The selector appears next to the time range buttons. Click ChatGPT, AI Overview, or any enabled add-on model. All metrics on the page instantly update to show only that model's data.

2. Compare metrics across models

Switch between models to spot gaps. A low visibility rate on one model but not another points to a specific indexing or content issue, not a general brand problem.

3. Use All Models for the big picture

Select All Models to see the arithmetic mean across every enabled model that returned data. This is your overall AI footprint — useful for reporting and trend tracking when you don't need to isolate a specific platform.

What Each Metric Shows Per Model

Each model view shows the same four metrics — but scoped exclusively to that platform's responses. Differences between models are expected and informative.

Visibility Rate

How often your domain appears in this model's search results for your tracked prompts. A low visibility rate on one model often points to a technical or indexing issue specific to the underlying search engine that model uses — not a general brand problem.

ChatGPT uses Bing; Google AI Overview uses Google; Perplexity uses its own crawled index.

Citation Rate

Of the times your domain appeared in this model's search results, how often the model actually visited your URL. A low citation rate on one model compared to others can mean your content format suits some models better than others — or that your meta descriptions are not compelling enough to trigger a click-through on that platform.

Mention Rate

How often this model names your brand in its final answer. Significant differences between models reveal which platforms have stronger brand recognition for you — often tied to their training data sources. A model trained on a corpus where your brand is well-represented will mention you more consistently.

Sentiment

The tone of your brand's mentions on this specific model. Different AI platforms draw on different source corpora — a model that uses review sites heavily may show more negative sentiment than one that prioritises editorial content. Check sentiment per model to identify if a specific platform is presenting your brand poorly.
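The relationship between the three rate metrics can be sketched with simple counts. This is a toy model of the definitions above, not Ayzeo's internal pipeline, and the counter names are illustrative:

```python
# Toy counters for one model's run over a project's tracked prompts
prompts_run = 50      # prompts the model answered
appearances = 30      # runs where our domain surfaced in the model's search results
visits = 12           # of those appearances, runs where the model fetched our URL
brand_mentions = 20   # runs where the final answer named our brand

visibility_rate = appearances / prompts_run   # how often we surface in search
citation_rate = visits / appearances          # of surfaced results, how often we're read
mention_rate = brand_mentions / prompts_run   # how often the answer names us

print(f"visibility {visibility_rate:.0%}, "
      f"citation {citation_rate:.0%}, "
      f"mention {mention_rate:.0%}")
```

Note that citation rate is conditional on an appearance, so its denominator is smaller than the other two — a point worth remembering when comparing the metrics side by side.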

Want the full metric definitions? Read Understanding AI Visibility Metrics for a detailed explanation of how each metric is calculated.

How the "All Models" Aggregate Works

When you select All Models, Ayzeo calculates the arithmetic mean of each metric across every enabled model that returned data for your project. Only models that actually ran queries for your prompts are included — models with no data are never padded with zeros.

1. Per-model KPIs are computed first

Visibility rate, citation rate, and mention rate are calculated independently for ChatGPT, Google AI Overview, and every enabled add-on model.
2. Results are averaged across models

Each model with available data contributes equally to the aggregate. This ensures no single model dominates the overview just because it ran more recently.
3. Historical trends reflect the daily average

The trend chart is rebuilt as a day-by-day mean across all models, so you can track how your aggregate performance has moved over time without any model skewing the line.
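The averaging step can be sketched roughly like this. It is a simplified illustration: models that returned no data are simply absent from the input rather than counted as zero.

```python
# Per-model visibility rates for one day; a model that returned no data
# is omitted entirely, never padded with 0.0 (rates are hypothetical)
per_model = {
    "ChatGPT": 0.91,
    "Google AI Overview": 0.35,
    "Perplexity": 0.58,
    # "Claude": no data this day -> simply not present
}

def all_models_aggregate(rates: dict) -> float:
    """Arithmetic mean across models that returned data; each weighs equally."""
    return sum(rates.values()) / len(rates)

print(f"All Models: {all_models_aggregate(per_model):.1%}")
```

Because every model with data contributes equally, a platform that ran fewer queries on a given day still carries the same weight in the aggregate as one that ran many.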

Enable AI Models for Your Project

ChatGPT and Google AI Overview are already enabled on every project. To add Claude, Perplexity, or Gemini, go to your Project Settings and find the AI Models section.

Frequently Asked Questions

Q: Do I need to set anything up to see per-model data?

A: No. If you already have a project with prompts, your next visibility check will automatically populate per-model analytics. The model selector appears in Citation Analytics as soon as more than one model is enabled on your project.

Q: Why does my ChatGPT score look different from my Google AI Overview score?

A: ChatGPT uses Bing as its search index; Google AI Overview uses Google's index. Your rankings in each search engine directly affect your visibility per model. Differences are signals, not failures — they tell you exactly which search engine to improve for.

Q: The All Models aggregate looks different from what I expected — why?

A: All Models now calculates a true arithmetic mean across all enabled models that have data. Previously, it showed results from whichever model ran most recently per prompt — which could mean a single model dominated the aggregate. The new calculation is more representative of your full AI footprint.

Q: Can I see historical trends per model?

A: Yes. The trend chart in Citation Analytics shows historical performance for whichever model (or aggregate) you have selected. Historical data is available from the first time that model ran queries for your project.

Q: Can I disable a model I'm not interested in?

A: You can manage which models run queries in Project Settings. Disabling a model stops future queries but preserves all historical data already collected.

Q: Are per-model results included in white-label reports?

A: Yes. PDF reports include the model breakdown so you can present per-platform performance to clients alongside the overall aggregate.

Read the Full Feature Announcement

Learn why per-model data changes your optimization strategy and how to act on gaps between platforms.

Per-Model AI Visibility Analytics — Blog Post