
How ChatGPT Decides Who to Recommend (And What You Can Do About It)

37% of consumers now start searches with AI. ChatGPT's citation algorithm changed 70 days before ads launched. Here's what our 5,600-query dataset reveals about how to get your brand recommended.

Paul Byrne March 2026

The Shift Already Happened. Most Brands Missed It.

Thirty-seven percent of consumers now start their searches with AI tools instead of Google. That number comes from Search Engine Land's January 2026 report. It is not a prediction. It is a measurement.

When someone asks ChatGPT "what's the best project management tool for remote teams" or "which travel company should I book a luxury safari with," the response is not a list of ten blue links. It is a recommendation. One, two, maybe three brands. Named. Described. Positioned against each other.

Your brand is either in that answer or it is not.

At SearchIntel, we have tested over 5,600 queries across five AI platforms — ChatGPT, Google AI Overviews, Claude, Gemini, and Perplexity. The brands that appear in AI answers are not necessarily the ones you would expect. Some household names score zero. Some mid-market challengers dominate.

The rules of visibility have changed. And most marketing teams are still optimising for a game that is rapidly shrinking.

How ChatGPT Actually Decides

ChatGPT does not rank pages. It does not crawl your website the way Google does. It draws from two separate systems, and understanding the difference matters.

Training data is everything the model absorbed before its knowledge cutoff. Every web page, every Reddit thread, every review, every press article, every academic paper it was trained on. This forms the model's baseline understanding of your brand — what you do, who you serve, how you compare to competitors. If your brand was rarely mentioned in the training corpus, ChatGPT may not know you exist.

Real-time search is the newer layer. When ChatGPT browses the web (via Bing), it supplements its training data with live results. This is where recent content, fresh reviews, and current press coverage can make a difference. But there is a catch: ChatGPT does not simply surface the top Bing result. It synthesises. It reads multiple sources, cross-references them, and constructs an answer that reflects the consensus view.

This means ChatGPT is not deciding based on one signal. It is building what I think of as an entity model — a composite picture of your brand assembled from thousands of data points across the internet. How consistently you are described. How often you are mentioned by others. How authoritative those others are.

Here is the part most people miss: what your brand says about itself carries far less weight than what others say about your brand. In our testing, third-party sources — Reddit, review sites, industry publications, comparison articles — dominate ChatGPT's citation patterns. Your own website is one input among hundreds.

The Algorithm Change Nobody Noticed

On 1 December 2025, something significant changed inside ChatGPT's search behaviour. Seer Interactive published analysis showing that citations per response jumped 81% — from an average of 5.7 sources to 10.4 sources per answer.

That was 70 days before ChatGPT Ads launched for free-tier US users on 9 February 2026.

Think about that timeline. OpenAI was restructuring how ChatGPT attributes sources and builds responses well before the advertising product went live. The model became dramatically more citation-heavy, referencing nearly twice as many sources per answer.

Why does this matter? Because more citations means more opportunities to appear — but also more competition for each answer. The bar for getting cited did not necessarily drop. The number of brands that could appear in a single response simply increased.

For brands that had strong third-party presence, this was a net positive. More citation slots, more chances to be included. For brands relying solely on their own website and a few backlinks, the change made them relatively less visible. The signal-to-noise ratio shifted.

We saw this in our own data. Brands with broad third-party coverage — mentioned on Reddit, reviewed on comparison sites, featured in industry press — saw their ChatGPT appearance rates increase in Q1 2026. Brands with thin third-party footprints saw theirs stay flat or decline, even when their Google rankings held steady.

What Our Data Shows

Over the past four months, we have run 5,600+ queries across ChatGPT, Google AI Overviews, Claude, Gemini, and Perplexity. Each query is run multiple times (we default to 10 runs per keyword) because AI responses are non-deterministic — ask the same question twice and you may get different brands recommended.
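Because a single run tells you little about a non-deterministic system, repeated runs have to be collapsed into an appearance rate per brand. A minimal sketch of that calculation (the query results here are illustrative, not drawn from our dataset):

```python
from collections import Counter

def appearance_rates(runs: list[list[str]]) -> dict[str, float]:
    """Fraction of runs in which each brand appeared at least once.

    `runs` holds one entry per repeated execution of the same query,
    each listing the brands named in that response.
    """
    counts = Counter()
    for brands in runs:
        counts.update(set(brands))  # count a brand once per run, not per mention
    return {brand: n / len(runs) for brand, n in counts.items()}

# Ten runs of the same query; brand lists are placeholders.
runs = [
    ["Asana", "Trello", "Monday"],
    ["Asana", "ClickUp"],
    ["Trello", "Asana"],
    ["Asana"],
    ["Monday", "Asana", "Trello"],
    ["ClickUp", "Asana"],
    ["Asana", "Trello"],
    ["Trello"],
    ["Asana", "Monday"],
    ["Asana", "Trello", "ClickUp"],
]
rates = appearance_rates(runs)
print(rates["Asana"])  # 0.9 (named in 9 of the 10 runs)
```

The per-run deduplication is the important detail: a brand mentioned three times in one answer should not outscore a brand mentioned once in each of three answers.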

Here are the patterns that stood out.

FAQ-format content is cited more than any other content type. When a brand publishes clear, direct answers to the questions their customers ask — and those answers are structured in a way that AI can parse — they appear in responses far more often. In our testing, queries framed as questions ("what is the best...", "how do I choose...", "which company should I use for...") pulled FAQ-style content at roughly three times the rate of any other content format.

Third-party sources dominate over brand websites. Across all five platforms, the vast majority of cited sources are not brand-owned. Reddit threads, Trustpilot and G2 reviews, industry comparison articles, and publisher roundups account for the bulk of what AI models reference. Your own website may appear as a secondary source, but the primary recommendation usually comes from what others say about you.

Each platform behaves differently. This was one of the more striking findings. A brand can score 0% on ChatGPT and 90% on Perplexity for the same set of queries. Perplexity, which dropped its advertising programme entirely in February 2026 and returned to organic-only results, often surfaces completely different brands than ChatGPT or Claude. Google AI Overviews tends to favour sources that already rank well in traditional search. Claude leans heavily on training data. Treating "AI search" as one monolithic channel is a mistake.

Brand search volume correlates with recommendation rate. Brands that people actively search for by name — indicating broader awareness — tend to be recommended more often by AI models. This makes sense: if millions of people search for a brand, there is a large volume of content about that brand across the internet, which feeds the training data and the real-time search layer.

Consistency of brand description matters. When a brand is described the same way across multiple sources — same positioning, same category language, same value proposition — AI models appear to have higher confidence in recommending it. Brands with inconsistent messaging across their website, reviews, directory listings, and press coverage are harder for AI to categorise, and harder to categorise means harder to recommend.

Why Google Rankings Do Not Translate

This is the single most important thing I can tell you: a brand can rank number one on Google for a target keyword and score 0% across every AI platform for the same query.

I have seen this repeatedly. Not as an edge case. As a pattern.

The reason is structural. Google ranks pages. It evaluates individual URLs based on relevance, authority, and technical signals, then orders them in a list. AI models do not rank pages. They synthesise answers from a broad corpus of information and recommend brands (not URLs) based on entity-level signals.

A page can be perfectly optimised for Google — strong title tag, fast load time, good internal linking, solid backlink profile — and still contribute nothing to AI visibility if it is the only place on the internet that makes a case for the brand.

Google measures how good your page is. AI measures how trusted your brand is.

This is not a theoretical distinction. In our data, we regularly see brands with strong organic positions that are completely absent from AI responses. And we see brands with modest Google rankings that appear consistently in ChatGPT, Claude, and Perplexity because they have strong third-party presence, active Reddit discussion, and consistent positioning across the web.

If your AI search strategy is "keep doing SEO," you are optimising for one platform while your customers are migrating to five.

The 5 Signals That Matter

Based on 5,600+ queries and four months of testing, these are the five signals that most reliably predict whether a brand appears in AI recommendations.

1. Third-Party Mentions

This is the biggest lever. The volume, quality, and recency of what others say about your brand across the internet.

Reddit threads where your brand is mentioned or recommended carry significant weight. So do review sites (Trustpilot, G2, Capterra), industry comparison articles ("best X for Y"), press coverage, and expert roundups.

What to do: Audit your third-party footprint. Search your brand name on Reddit, review sites, and industry publications. If you find gaps, build a citation strategy: encourage reviews, engage authentically on Reddit, pitch data-driven stories to industry press, and get listed on relevant comparison directories.

2. FAQ and Question-Answer Content

AI models are built to answer questions. Content that is structured as direct answers to specific questions is inherently easier for them to parse and cite.

This is not about stuffing FAQ schema onto your homepage. It is about publishing genuine, useful answers to the questions your customers actually ask — on your website, on third-party platforms, and in formats that AI can extract.

What to do: List the 20 questions your customers ask most often. Publish clear, direct answers on a dedicated FAQ page and within relevant blog content. Use FAQ schema markup. But also answer those questions on Quora, Reddit, and industry forums — the third-party layer matters more than the on-site layer.
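For reference, this is what FAQ structured data looks like in schema.org's JSON-LD format, which Google documents for FAQ rich results. The question and answer text below are placeholders; swap in your own.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does onboarding take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most customers are fully set up within five working days, including data migration and team training."
      }
    },
    {
      "@type": "Question",
      "name": "Do you offer a free trial?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Every plan includes a 14-day free trial with no card required."
      }
    }
  ]
}
```

Embed it in a script tag with type="application/ld+json" on the page that carries the visible FAQ content; the markup should mirror what the reader can actually see.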

3. Entity Consistency

When every source on the internet describes your brand the same way — same category, same positioning, same key differentiators — AI models can recommend you with confidence. When your LinkedIn says one thing, your website says another, and your directory listings say a third, the model has low confidence and may skip you entirely.

What to do: Audit how your brand is described across your website, Google Business Profile, LinkedIn, directory listings, review sites, and press coverage. Align the language. Use the same category terms, the same positioning statements, the same key phrases. This is not about keyword stuffing. It is about consistency.
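A toy illustration of that audit, not our production tooling: for each source, check which of your core positioning terms its description is missing. The source names and descriptions below are hypothetical.

```python
def missing_terms(descriptions: dict[str, str], terms: list[str]) -> dict[str, list[str]]:
    """For each source, list the key positioning terms its description omits.

    Every source should use the same core category language; anything
    listed here is a spot where your messaging has drifted.
    """
    report = {}
    for source, text in descriptions.items():
        lowered = text.lower()
        report[source] = [t for t in terms if t.lower() not in lowered]
    return report

# Hypothetical brand descriptions pulled from different listings.
descriptions = {
    "website":  "Acme is a project management platform for remote teams.",
    "linkedin": "Acme builds collaboration software.",
    "g2":       "Acme, a project management platform for remote teams.",
}
gaps = missing_terms(descriptions, ["project management", "remote teams"])
print(gaps["linkedin"])  # ['project management', 'remote teams']
```

In this sketch the LinkedIn description uses entirely different category language, which is exactly the kind of drift that makes a brand harder for a model to categorise.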

4. Brand Search Volume

This is the one signal you cannot fake quickly. When people search for your brand by name, it creates a feedback loop: more branded searches means more content about your brand, which means AI models have more data to draw from, which means higher confidence in recommending you.

What to do: Brand building matters. PR, partnerships, events, thought leadership, and advertising all contribute to branded search volume. This is the long game. But there are also tactical moves: co-marketing with established brands, contributing to high-visibility publications, and creating data or research that gets cited widely.

5. Content Freshness and Recency

ChatGPT's real-time search layer favours recent content. Google AI Overviews heavily weights freshness. A blog post from 2022 carries less weight than one published this month, particularly for queries where the landscape is changing.

What to do: Update your key content regularly. Refresh statistics, add new examples, and republish with current dates. Publish timely commentary on industry changes. AI models are increasingly incorporating recency as a ranking factor in their real-time search components.

What You Can Do This Week

Forget the 90-day roadmap for a moment. Here are five things you can do in the next five days that will start shifting your AI visibility.

Monday: Run the test. Open ChatGPT, Claude, and Perplexity. Ask the 10 questions your customers ask most often — the ones that should lead to your brand. Record which brands appear. Record which ones do not. You now have a baseline.

Tuesday: Audit your third-party presence. Search your brand name on Reddit. Check your Trustpilot, G2, or industry-specific review profiles. Look at comparison articles in your category. Count how many third-party sources mention your brand. Compare that number to your top competitor.

Wednesday: Publish five FAQ answers. Take the five most common customer questions and publish clear, direct answers on your website. Add FAQ schema markup. Keep each answer under 150 words. Make them factual, not promotional.

Thursday: Claim your directory listings. Identify the five most relevant directories or comparison sites in your industry. Ensure your brand is listed, described consistently, and up to date. If you are not listed, submit your profile.

Friday: Ask for three reviews. Reach out to three satisfied customers and ask them to leave reviews on the platforms that matter most in your category — Google, Trustpilot, G2, or industry-specific sites. Third-party reviews are one of the highest-signal inputs for AI models.

None of this requires a consultant. None of it requires new technology. It requires focus on the signals that AI actually uses, instead of the signals that Google used to reward.

The Bigger Picture

The advertising layer is coming. ChatGPT Ads launched in February 2026. Google AI Overviews already incorporates Shopping ads. The organic window — the period where AI recommendations are shaped purely by merit — is narrowing.

But organic recommendations are not going away. Even with ads, the core answer is still generated from the model's understanding of the category. Ads appear alongside recommendations, not instead of them. And Perplexity's decision to drop advertising entirely in February 2026 shows that not every platform is moving in the same direction.

The brands that invest in AI visibility now — while the playbook is still being written and most competitors are still focused exclusively on Google — will have a structural advantage that is very difficult to reverse-engineer later.

The question is not whether AI search matters. The data already settled that. The question is whether your brand appears when your customers ask.

Frequently Asked Questions

How does ChatGPT decide which brands to recommend?

ChatGPT draws from two sources: its training data (everything absorbed before the knowledge cutoff) and real-time web search via Bing. It builds an entity model of each brand using third-party mentions, review coverage, brand search volume, entity consistency across sources, and content that directly answers the question being asked. The model synthesises these signals into a recommendation — it does not rank individual web pages. In SearchIntel's testing of 5,600+ queries, third-party sources (Reddit, reviews, industry publications) were cited far more frequently than brand-owned websites.

Can you rank number one on Google but not appear in ChatGPT?

Yes. This is one of the most common patterns we see. Google ranks pages based on relevance, backlinks, and technical signals. ChatGPT recommends brands based on entity-level authority across the entire internet. A brand can have a perfectly optimised page that ranks first on Google and still score 0% in ChatGPT because the brand has weak third-party presence, few reviews, and limited mention across non-owned sources. Google measures how good your page is. AI measures how trusted your brand is. They are different systems with different inputs.

Do the same brands appear across all AI platforms?

No. Each AI platform behaves differently. In our testing, a brand can score 0% on ChatGPT but 90% on Perplexity for the same queries. Google AI Overviews favours sources that already rank in traditional search. Claude relies heavily on training data. Perplexity, which dropped its ad programme in February 2026, surfaces organic results that often differ significantly from ChatGPT. A proper AI visibility strategy must account for all platforms, not just one.

What type of content gets cited most by ChatGPT?

FAQ-format content — clear, direct answers to specific questions — is the content type we see cited most frequently in AI responses. In our data, question-based queries pulled FAQ-style content at roughly three times the rate of other formats. This makes structural sense: AI models are designed to answer questions, and content structured as answers is inherently easier for them to parse and cite. Publishing a dedicated FAQ section with schema markup is one of the fastest ways to improve your appearance rate.

How quickly can you improve your ChatGPT visibility?

Some changes produce results within days. Publishing FAQ content with schema markup, updating directory listings, and generating fresh reviews can affect the real-time search layer almost immediately. Training data improvements take longer — the model needs to be updated or retrained to reflect new information in its base knowledge. In our experience, brands that execute a structured AI visibility programme typically see measurable improvement within 30 to 90 days, depending on their starting position and the competitiveness of their category.

Paul Byrne is the founder of SearchIntel, the AI search agency that helps brands win visibility across ChatGPT, Claude, Gemini, and Google AI Overviews. He has spent 20 years in search strategy, including roles at Google, MediaCom (LEGO, Adidas, Shell), and TripAdvisor/Viator.

Find Out If ChatGPT Recommends Your Brand

We test your brand across ChatGPT, Gemini, Claude and Google AI Overviews. You get your visibility score, who appears instead of you, and a clear plan to fix it.

Book a call
