TL;DR

Claude ignores most brands because they are under-represented at the entity level (weak Wikipedia/Wikidata presence) and under-cited by the high-authority sources Claude trusts (mainstream press, industry analysts). In the case study below, a mid-size B2B SaaS went from 0 to 47 Claude citations across 60 tracked prompts in 90 days by fixing four signals: their Wikipedia article, their Organization schema, their llms.txt file, and three new analyst-grade third-party reviews.

Anthropic's Claude is the pickiest of the major AI assistants when it comes to citing brands. Where ChatGPT will recommend a wide tail of niche tools and Perplexity will surface anything in the top three SERPs, Claude consistently defaults to a small set of well-established names — and ignores everything else. This is not a bug. It is the deliberate consequence of how Claude is trained and how it weights citations. Once you understand why, the path to fixing it is straightforward.

This article covers what brand citations in Claude actually look like, the three real reasons your brand is being ignored (and whether Gemini ignores you for the same reasons), a 90-day case study with the exact tactics that worked, and a measurement framework for tracking your Claude AI visibility over time.

"Across 50,000 commercial AI responses we analysed in Q1 2026, Claude was the model most likely to cite the same handful of brands per category. The mean number of distinct brands surfaced per query was 4.1 for ChatGPT, 5.8 for Perplexity, but only 2.7 for Claude."

What is Claude AI visibility?

Claude AI visibility is a measurement of how often, how prominently, and how positively your brand is mentioned inside answers generated by Anthropic's Claude model when users ask category, comparison, recommendation, or vendor-selection questions. It is the Claude-specific dimension of Generative Engine Optimization (GEO).

Visibility breaks into three observable signals:

  • Mention rate — the percentage of relevant Claude responses that name your brand at least once
  • Position — where your brand appears in the response (first mention typically receives 4× the user attention of a fifth mention)
  • Framing — whether Claude describes you positively, neutrally, comparatively, or as a worse alternative

A brand with strong Claude AI visibility is mentioned in 30%+ of relevant prompts, in the first three positions, with neutral-to-positive framing. A brand with poor visibility is mentioned in under 5% of prompts or only in negative comparisons ("unlike X, the leading providers offer...").
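
If you want to compute these three signals yourself from a batch of saved Claude responses, the sketch below shows one way to do it in Python. The keyword lists, the competitor-based position proxy, and the 120-character framing window are illustrative assumptions rather than a standard; treat it as a starting point, not a finished scorer.

```python
from dataclasses import dataclass

# Illustrative word lists: an assumption for this sketch, not a standard.
POSITIVE = ("recommended", "popular", "leading", "well suited")
NEGATIVE = ("unlike", "lacks", "weaker", "less mature")

@dataclass
class PromptResult:
    prompt: str
    response: str

def score_visibility(brand, competitors, results):
    """Mention rate, average first-mention position, and crude framing counts."""
    mentions, positions = 0, []
    framing = {"positive": 0, "neutral": 0, "negative": 0}
    for r in results:
        lower = r.response.lower()
        idx = lower.find(brand.lower())
        if idx == -1:
            continue  # brand not named in this response
        mentions += 1
        # Position = 1 + number of competitors named before our first mention.
        positions.append(1 + sum(1 for c in competitors
                                 if 0 <= lower.find(c.lower()) < idx))
        # Framing heuristic: keywords within ~120 characters of the mention.
        window = lower[max(0, idx - 120): idx + 120]
        if any(w in window for w in NEGATIVE):
            framing["negative"] += 1
        elif any(w in window for w in POSITIVE):
            framing["positive"] += 1
        else:
            framing["neutral"] += 1
    return {
        "mention_rate": mentions / len(results) if results else 0.0,
        "avg_position": sum(positions) / len(positions) if positions else None,
        "framing": framing,
    }
```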

Why does Claude ignore my brand in recommendations?

Three root causes account for almost every case of "Claude won't mention us." They are listed in order of prevalence.

1. Weak entity representation

Before Claude can cite a brand, it must have an internally coherent entity model for that brand — a stable representation of who you are, what category you operate in, where you are based, and what you do. This entity model is built from structured knowledge sources Anthropic feeds into training: Wikipedia, Wikidata, Crunchbase, GitHub, and academic publications. If your brand is absent from these or appears inconsistently across them, Claude has nothing concrete to anchor to.

Test this fast: ask Claude "What is [your brand name]?" If it says it does not know, or describes you incorrectly, you have an entity-level problem and no amount of content publishing on your own site will fix it. The fix has to come from improving your representation in the structured knowledge graph itself.
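
To run that spot-check programmatically instead of in the chat UI, a minimal sketch using the official anthropic Python SDK looks like this. The model name is a placeholder to swap for whichever Claude model you test against, the brand name is hypothetical, and an ANTHROPIC_API_KEY environment variable is assumed.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def entity_check(brand: str, model: str = "claude-3-5-sonnet-latest") -> str:
    """Ask Claude the bare entity question and return its answer text."""
    message = client.messages.create(
        model=model,  # placeholder; substitute a current Claude model
        max_tokens=500,
        messages=[{"role": "user", "content": f"What is {brand}?"}],
    )
    return message.content[0].text

if __name__ == "__main__":
    answer = entity_check("Acme Projects")  # hypothetical brand name
    print(answer)
    # Red flags: "I'm not aware of...", a description of the wrong company,
    # or the wrong category. All are signs of an entity-level gap.
```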

2. Thin third-party authority

Claude's training data places enormous weight on where a brand is mentioned, not just how often. Mentions in the New York Times, MIT Technology Review, Gartner reports, Wired, or research papers carry orders of magnitude more weight than mentions in mid-tier blogs or low-traffic SEO content. This is by design — Anthropic explicitly trains Claude to prefer authoritative sources.

This is also why a brand can rank well in Google but still be invisible to Claude. Google rewards relevance and authority signals at the page level. Claude rewards entity-level authority across the corpus. Different game.

Our complete guide on building brand authority for AI assistants covers the source hierarchy in detail.

3. Training-data recency gap

Claude's knowledge cut-off is not a single date. Different parts of Claude's knowledge update at different cadences depending on Anthropic's training and fine-tuning pipeline. Brands that launched, rebranded, or pivoted recently are at the highest risk of being either absent from training data altogether or represented by stale information that no longer reflects what they actually do.

Claude's web search tool partially compensates for this — when enabled, it fetches live information — but the tool is invoked selectively and many user prompts never trigger it. So you cannot rely on web search to overcome a training-data gap on its own.

Why does Gemini ignore my brand? (And how is it different from Claude?)

The reasons overlap but the weightings are different, so the same brand can have wildly different visibility in the two models.

Signal                     | Claude                                | Gemini
---------------------------|---------------------------------------|-------------------------------------------
Real-time SERP signals     | Low (web search invoked selectively)  | Very high (integrated with Google Search)
Structured knowledge graph | Very high (Wikipedia, Wikidata)       | High (incl. Google Knowledge Graph)
Mainstream press / analyst | Very high                             | High
Recency                    | Lags by months to years               | Often current (via SERP)
Long-tail / niche brands   | Rarely cites                          | Cites if SEO-strong

Practical implication: if you have strong traditional SEO but weak Wikipedia / press presence, you will likely be cited by Gemini and ignored by Claude. If you have strong press / analyst coverage but weak SEO, the inverse. The brands cited consistently across both have invested in both signals. For a deeper look at how Google's model selects citations, see how Gemini selects sources.

What do brand citations in Claude actually look like?

There are three observable forms.

Type 1 — Named-brand mention in answer text. Claude writes the brand name directly into its response, often in a comparative list. Example: "For B2B project management, the most commonly recommended tools include Asana, Monday, ClickUp, and Linear." This is the most valuable form because it requires no further user action and is unambiguous.

Type 2 — Inline web-search citations. When Claude invokes its web search tool, it appends links to the sources it consulted. These appear as small reference markers in the answer text and link out to the source page. Useful for traffic but typically a smaller share of total citations than type 1.

Type 3 — Structured comparison. When asked direct comparison questions ("X vs Y"), Claude often produces a structured comparison — sometimes a table — with named brands, features, and rough rankings. Brands that consistently appear in these comparisons enjoy outsized share-of-voice because comparison answers receive higher user attention than open-ended ones.

Case study: 0 → 47 Claude citations in 90 days

Brand: Mid-size B2B SaaS in the project-management category (~$15M ARR, 80 employees, US-based)
Starting position: 0 mentions across 60 tracked Claude prompts
Final position (day 90): 47 mentions across the same 60 prompts (78% mention rate)
Total investment: ~120 hours of work + ~$4,800 in third-party costs

The brand had decent traditional SEO (DR 58, ranking on page 1 for ~40% of category keywords) and was already mentioned regularly in ChatGPT and Perplexity. But Claude was a black hole — over a 30-day baseline measurement, Claude never named them once across our 60-prompt test set. Diagnosing it took an afternoon; fixing it took 90 days. Here is exactly what moved the needle.

Step 1 — Diagnostic audit (week 1)

We ran the standard GEO audit on all five major models and surfaced four specific gaps for Claude:

  • The brand had a Wikipedia article but it was a stub (3 sentences, 2 citations, flagged for notability)
  • No Wikidata entry at all — meaning the brand had zero structured-knowledge-graph presence
  • Their Organization JSON-LD schema was incomplete (no sameAs, no founder, no foundingDate)
  • Press coverage was concentrated in industry blogs, with only one mainstream-press mention in the previous 18 months

Step 2 — Wikipedia and Wikidata fixes (weeks 2–4)

The team commissioned a Wikipedia editor (sourced from a reputable agency, not paid editing — full disclosure) to expand the article from a stub to a properly cited overview using existing high-quality references the brand already had. The article grew from 3 sentences to 8 paragraphs with 14 citations. The notability flag was removed in week 4.

In parallel, they created a Wikidata entry mirroring the Wikipedia article, with structured properties for industry, headquarters, founders, founding date, and product category. This single change — the Wikidata entry — drove the largest single jump in Claude visibility we observed (10 → 22 citations between week 4 and week 6).
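
If you want to verify that a new entry actually resolves to the right entity, Wikidata's public search API makes that a one-call check. A minimal sketch, with a hypothetical brand name:

```python
import requests

def wikidata_lookup(brand: str) -> list[dict]:
    """Search Wikidata for the brand and return candidate entities."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": brand,
            "language": "en",
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"id": hit["id"], "label": hit.get("label"), "description": hit.get("description")}
        for hit in resp.json().get("search", [])
    ]

for hit in wikidata_lookup("Acme Projects"):  # hypothetical brand
    print(hit["id"], "-", hit.get("description"))
# No results, or a description for a different company, means the
# structured-knowledge-graph gap is still open.
```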

Step 3 — Organization schema overhaul (week 4)

Their existing Organization schema was the bare minimum. They expanded it to a full @graph with Organization, WebSite, SoftwareApplication, FAQPage, and BreadcrumbList nodes, all cross-referenced via @id. This made their on-domain content unambiguously machine-readable. See our guide on structured data for LLMs for the schema patterns that move the needle.
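
As a rough illustration of the cross-referenced @graph pattern, the sketch below assembles a cut-down version in Python and serialises it for a <script type="application/ld+json"> tag. Every name, URL, date, and identifier is a hypothetical placeholder, and only three of the five node types are shown; the FAQPage and BreadcrumbList nodes follow the same @id cross-referencing pattern.

```python
import json

BASE = "https://example-brand.com"  # hypothetical domain

graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": f"{BASE}/#organization",
            "name": "Example Brand",                                 # placeholder
            "url": BASE,
            "foundingDate": "2017-03-01",                            # placeholder
            "founder": {"@type": "Person", "name": "Jane Doe"},      # placeholder
            "sameAs": [
                "https://www.wikidata.org/wiki/Q000000",             # placeholder IDs
                "https://en.wikipedia.org/wiki/Example_Brand",
                "https://www.linkedin.com/company/example-brand",
            ],
        },
        {
            "@type": "WebSite",
            "@id": f"{BASE}/#website",
            "url": BASE,
            "publisher": {"@id": f"{BASE}/#organization"},
        },
        {
            "@type": "SoftwareApplication",
            "@id": f"{BASE}/#app",
            "name": "Example Brand",
            "applicationCategory": "BusinessApplication",
            "provider": {"@id": f"{BASE}/#organization"},
        },
    ],
}

print(json.dumps(graph, indent=2))  # paste into a <script type="application/ld+json"> tag
```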

Step 4 — Publish llms.txt (week 5)

They published an /llms.txt file at their domain root listing their About page, key product pages, and three flagship long-form articles in markdown form with summaries. This had no immediately measurable impact on Claude, but it established a baseline best practice, and other AI models (notably Perplexity) began surfacing the file within weeks.
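
llms.txt is just a markdown file served from the domain root, so publishing one can be as simple as the sketch below. The structure follows the common llms.txt convention (an H1, a one-line summary, then sections of links with short descriptions); all URLs and summaries here are hypothetical placeholders.

```python
from pathlib import Path

# Hypothetical content following the llms.txt convention.
LLMS_TXT = """\
# Example Brand

> Example Brand is a project-management platform for mid-size B2B teams.

## Company
- [About](https://example-brand.com/about): who we are, founding story, leadership

## Product
- [Platform overview](https://example-brand.com/product): core features and plans
- [Integrations](https://example-brand.com/integrations): supported tools and APIs

## Resources
- [State of PM report](https://example-brand.com/blog/state-of-pm): flagship research article
"""

# Deploy so it is reachable at https://example-brand.com/llms.txt
Path("llms.txt").write_text(LLMS_TXT, encoding="utf-8")
```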

Step 5 — Earn three analyst-grade citations (weeks 4–10)

The most expensive but highest-impact tactic: securing three meaningful third-party citations. The team:

  • Pitched and landed an inclusion in a major industry analyst's quarterly market overview (week 8)
  • Worked with a top-tier B2B media outlet on a customer-feature article (week 9)
  • Earned a placement on a respected "best of" listicle from a high-authority source (week 10)

Total third-party costs: ~$4,800 (a portion of which was PR-agency fees rather than pay-for-placement).

Results at day 90

Claude mention rate climbed from 0 / 60 to 47 / 60 (78%). Average position improved to 2.4 (typically named in the second or third position when mentioned). Sentiment was neutral-to-positive in 100% of cases — meaning Claude described them factually rather than framing them as a weaker alternative to competitors.

Spillover effects: Perplexity citations rose from 18 to 41, ChatGPT from 33 to 51, Gemini from 27 to 38. The Wikipedia and schema improvements were the load-bearing structural fix; everything else amplified them.

How to track your Claude AI visibility

You cannot improve what you do not measure. The minimum viable measurement system has three parts:

  1. A fixed prompt set of 30–100 queries representing real category, comparison, and recommendation patterns your customers use
  2. Weekly execution of those prompts against Claude (manual or automated)
  3. A change log tracking what you changed each week so you can attribute movement to specific actions (a minimal automation sketch follows this list)
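
A minimal automation of that loop, assuming the anthropic Python SDK, a placeholder model name, and a prompts.txt file with one query per line, might look like this:

```python
import csv
import datetime
import pathlib

import anthropic

client = anthropic.Anthropic()  # ANTHROPIC_API_KEY assumed in the environment
BRAND = "Example Brand"                                           # placeholder
PROMPTS = pathlib.Path("prompts.txt").read_text().splitlines()    # fixed prompt set
LOG = pathlib.Path("claude_visibility_log.csv")

def run_week() -> None:
    """Run the fixed prompt set once and append mention data to the log CSV."""
    today = datetime.date.today().isoformat()
    rows = []
    for prompt in PROMPTS:
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=700,
            messages=[{"role": "user", "content": prompt}],
        )
        text = msg.content[0].text
        rows.append([today, prompt, BRAND.lower() in text.lower()])
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "prompt", "brand_mentioned"])
        writer.writerows(rows)
    rate = sum(r[2] for r in rows) / len(rows) if rows else 0.0
    print(f"{today}: mention rate {rate:.0%} across {len(rows)} prompts")

if __name__ == "__main__":
    run_week()
```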

For ongoing measurement, we built Sight precisely to remove this manual work — it runs hundreds of brand-relevant prompts daily across Claude, ChatGPT, Perplexity, Gemini and Copilot, tracks mention rate, position and sentiment over time, and tells you which AI model is moving and why. For the methodology behind brand-relative measurement, see our piece on tracking share of voice in AI search.

What to do this week

If your brand is currently invisible to Claude, the high-leverage moves in the first 30 days are: (1) audit your Wikipedia and Wikidata representation; (2) confirm your Organization schema is complete; (3) ship an llms.txt file; (4) identify the three highest-authority publications in your category and start a deliberate PR pursuit. Capturing a baseline before you start matters — without it you cannot demonstrate progress, internally or externally.

For the broader cross-model context, our deep dives on how ChatGPT recommends brands, how Gemini selects sources, and how to get cited by Perplexity cover the equivalent playbooks for the other major models. The seven-factor underlying framework is in our breakdown of the seven factors that determine AI visibility. To benchmark where your brand currently stands across all five major AI assistants, use Sight's AI visibility dashboard →