A live, model-by-model breakdown of where ChatGPT, Claude, Perplexity, Gemini and other AI search models get the answers they give. Aggregated across every analysis run on BrandInsightAI in the rolling 30 days.
7 models covered · 12 weekly snapshots · 30-day rolling window · 12-week trend depth
Cross-model bias toward the major platforms
Each cell shows the percentage of that model's citations that come from this domain in the current window. Brighter cells = stronger preference. The sparkline below each value traces the last 12 weeks of rolling trend (where data exists).
| Source | Gemini | Perplexity | Google AIO | Google AI Mode | Claude | ChatGPT | Grok | All LLM Avg |
|---|---|---|---|---|---|---|---|---|
| youtube.com | 0.5% | 1.7% | 3.9% | 2.8% | 0.0% | 0.0% | 0.6% | 1.5% |
| reddit.com | 0.8% | 0.0% | 3.1% | 1.6% | 0.0% | 0.1% | 0.8% | 1.0% |
| google.com | 1.2% | 0.3% | 0.3% | 2.3% | 0.1% | 0.0% | 0.1% | 0.9% |
| facebook.com | 0.0% | 0.0% | 1.1% | 0.8% | 0.0% | 0.0% | 0.2% | 0.3% |
| wikipedia.org | 0.2% | 0.2% | 0.1% | 0.1% | 0.6% | 1.5% | 0.2% | 0.3% |
| tiktok.com | 0.0% | 0.1% | 0.7% | 0.7% | 0.0% | 0.0% | 0.0% | 0.2% |
| instagram.com | 0.0% | 0.2% | 0.6% | 0.6% | 0.0% | 0.0% | 0.1% | 0.2% |
| linkedin.com | 0.0% | 0.0% | 0.2% | 0.3% | 0.1% | 0.0% | 1.0% | 0.1% |
Social media leaderboard
Which social platforms are powering AI search? Rankings are share of all social citations across every model in the rolling 30 days. Click any name below the podium to see its per-model breakdown.
3.6% of all AI citations in the rolling 30 days point to a social platform.
Not yet cited in this window: Twitter (legacy) · Snapchat · Threads · Discord · Tumblr · Bluesky · Mastodon.
News publisher leaderboard
Which news publishers are AI search models actually quoting? Rankings are share of all news citations across every model in the rolling 30 days. The list is hand-curated to include only genuine journalism brands; major absentees with zero citations are listed at the bottom.
2.8% of all AI citations in the rolling 30 days point to a news publisher.
Not yet cited in this window: The Economist · Al Jazeera · Sky News · Politico · The Atlantic · HuffPost · Vox.
Comparison & reviews leaderboard
Which decision-driving destinations — comparison sites, review platforms, consumer associations — do AI models lean on? Rankings are share of all comparison/review citations across every model in the rolling 30 days.
2.6% of all AI citations in the rolling 30 days point to a comparison or review site.
Academic & research leaderboard
Where AI search reaches for authoritative scholarly content: universities, research institutes, journals, government science. Rankings are share of all academic/research citations across every model in the rolling 30 days.
1.2% of all AI citations in the rolling 30 days point to an academic or research source.
Not yet cited in this window: Science · JSTOR · PubMed / NCBI.
Marketplaces leaderboard
Multi-vendor commerce destinations AI search points buyers toward — global retail, classifieds, services. Rankings are share of all marketplace citations across every model in the rolling 30 days. Brand-only stores are deliberately excluded.
1.7% of all AI citations in the rolling 30 days point to a marketplace.
B2B research & consulting leaderboard
The voices AI search reaches for on business strategy, market sizing and industry analysis: Big 4 consultancies, research firms, financial-services research, wire services. Rankings are share of all B2B research citations across every model in the rolling 30 days.
0.5% of all AI citations in the rolling 30 days point to a B2B research or consulting source.
What's counted: every citation/source URL returned in an AI model's response during the rolling 30 days (the 4 most recent ISO weeks, aggregated), deduplicated to a single registrable domain per response (so news.bbc.co.uk, www.bbc.co.uk and bbc.co.uk all roll up to bbc.co.uk).
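The domain rollup described above can be sketched in a few lines. This is a simplified illustration, not the actual pipeline: a real implementation would resolve registrable domains against the full Public Suffix List (e.g. via the tldextract package), whereas the two-level suffix set here is a tiny hand-picked sample.

```python
from urllib.parse import urlparse

# Hand-picked sample of two-level public suffixes; a real pipeline
# would load the complete Public Suffix List instead.
TWO_LEVEL_SUFFIXES = {"co.uk", "com.au", "co.jp"}

def registrable_domain(url: str) -> str:
    """Collapse a URL's hostname to its registrable domain."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    if len(parts) >= 3 and ".".join(parts[-2:]) in TWO_LEVEL_SUFFIXES:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

# All three hostnames from the example above roll up to bbc.co.uk,
# so one response citing all three counts the domain only once.
cited = ["https://news.bbc.co.uk/a", "https://www.bbc.co.uk/b", "https://bbc.co.uk/c"]
per_response_domains = {registrable_domain(u) for u in cited}
```

The per-response set is what makes the dedup "once per response" rather than once per URL.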
Models covered: Perplexity, ChatGPT (OpenAI search), Claude (Anthropic search), Gemini, Google AI Overview, Google AI Mode and Grok, i.e. the seven models that natively return citation/source data. Chat-only modes and models that don't return source URLs (such as DeepSeek) aren't represented here.
Quality gate: a domain must appear in at least 2 distinct projects to be included in the aggregations. This keeps the headline data cross-cutting rather than a reflection of a single dataset.
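The quality gate amounts to counting distinct projects per domain and filtering. A minimal sketch, assuming hypothetical (domain, project_id) citation records (the field names are illustrative, not the actual schema):

```python
from collections import defaultdict

# Hypothetical citation records: (domain, project_id).
citations = [
    ("bbc.co.uk", "proj-a"),
    ("bbc.co.uk", "proj-b"),
    ("example-blog.com", "proj-a"),
    ("example-blog.com", "proj-a"),  # same project twice: still 1 distinct project
]

# Sets deduplicate repeat citations within the same project.
projects_per_domain = defaultdict(set)
for domain, project in citations:
    projects_per_domain[domain].add(project)

# Keep only domains seen in >= 2 distinct projects.
kept = {d for d, projects in projects_per_domain.items() if len(projects) >= 2}
```

Using a set per domain is what makes the threshold "distinct projects" rather than raw citation volume.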
Major platforms table: a curated set of cross-cutting domains (Reddit, YouTube, Google, Facebook, Wikipedia, TikTok, Instagram, Medium, LinkedIn, Trustpilot) shown as a per-model heat map. Brighter cells = stronger preference. Hover any sparkline data point to see that week's value.
Social leaderboard: a wider set of social platforms ranked by share of social citations only. Top 3 sit on a podium; ranks 4+ expand on click. Platforms not yet cited in this window are listed at the bottom.
News leaderboard: a hand-curated set of journalism brands — built from the actual top 200 cited domains plus a list of major global news publishers we'd expect to see, so absentees are visible. Same podium / expandable layout as the social leaderboard.
Trend window: sparklines on the major-platforms table show the last 12 weekly snapshots, one Monday-to-Monday ISO week each.
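The Monday-to-Monday ISO-week arithmetic behind the trend window is straightforward with the standard library. A minimal sketch (function names are illustrative, not from the actual pipeline):

```python
from datetime import date, timedelta

def iso_week_start(d: date) -> date:
    """The Monday that starts d's ISO week."""
    return d - timedelta(days=d.weekday())

def rolling_window_weeks(today: date, n: int = 4) -> list[str]:
    """Labels for the n most recent ISO weeks (current week included), oldest first."""
    monday = iso_week_start(today)
    mondays = [monday - timedelta(weeks=i) for i in range(n)]
    return [
        f"{m.isocalendar().year}-W{m.isocalendar().week:02d}"
        for m in reversed(mondays)
    ]
```

For example, a run on Thursday 2024-01-18 covers 2023-W52 through 2024-W03; with n=12 the same function produces the 12-week sparkline window.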
Refresh cadence: a background job recomputes the current week's snapshot every hour, so the rolling 30-day view is never more than an hour stale.
Social media leaderboard
| Rank | Platform | % of all AI citations | % of social citations |
|---|---|---|---|
| 4 | TikTok | 0.2% | 6.5% |
| 5 | Instagram | 0.2% | 6.0% |
| 6 | LinkedIn | 0.1% | 3.1% |
| 7 | Medium | 0.1% | 2.6% |
| 8 | Quora | 0.1% | 1.6% |
| 9 | Substack | 0.0% | 0.6% |
| 10 | Pinterest | 0.0% | 0.3% |
| 11 | X (Twitter) | 0.0% | 0.1% |