Why ChatGPT Doesn't Recommend
Your Brand — And How to Fix It
You asked ChatGPT for the best tool in your category. Your competitor appeared. You didn't. Here are the five reasons that happens, and the exact steps to fix each one.
ChatGPT ignores brands it doesn't know well. It builds that knowledge from five sources: entity records (Wikidata), third-party reviews (G2, Capterra), structured website data (schema), web content (blogs, Reddit), and crawler access (robots.txt). Fix all five and you go from invisible to recommended — typically within 4–8 weeks.
Why this matters more than traditional SEO
When a buyer types "best CRM for agencies" into ChatGPT, they don't scroll through ten blue links and choose. They get one answer — a ranked list of two or three tools. If your brand isn't in that list, you don't exist for that buyer at that moment. No second chance, no position 4 to fall back on.
ChatGPT is now used for B2B research by millions of buyers daily. Adobe reported that AI-driven traffic to business websites jumped 12x between 2024 and early 2025. Semrush data shows that visitors arriving from LLM referrals convert 4.4x better than visitors from traditional search. The buyers coming from ChatGPT are more ready to buy, and right now most of them are finding your competitors instead of you.
There are five specific reasons ChatGPT isn't recommending your brand. Each one is fixable.
Reason 1: ChatGPT doesn't know your brand exists
ChatGPT builds its understanding of companies from entity records — structured data sources that tell it who a company is, what they do, and how credible they are. The most important of these is Wikidata, the open knowledge graph that feeds into multiple AI systems. If your brand has no Wikidata entry, ChatGPT's entity understanding of you is built entirely from your own website (weak signal) and whatever mentions it found in training data.
Secondary entity signals include: Crunchbase profile, LinkedIn company page, press mentions in credible publications, G2/Capterra listings. Each of these is a corroborating source. The more sources that describe your brand consistently, the more confident ChatGPT becomes in its entity understanding — and the more likely it is to recommend you.
Create a Wikidata entry for your brand at wikidata.org. It's free, and the notability bar is far lower than Wikipedia's: an item needs only serious, publicly available references. Add the official website URL, founding date, industry, headquarters, and sameAs links to your LinkedIn and Twitter profiles. Then write one canonical brand description (2–3 sentences) and publish it identically on your homepage, About page, LinkedIn, G2, and Crunchbase. Identical wording across sources amplifies the entity signal.
Reason 2: Your website is blocking AI crawlers
ChatGPT's web retrieval feature uses a bot called GPTBot. If your robots.txt file blocks it — intentionally or accidentally — ChatGPT cannot access your content. Many companies have legacy robots.txt configurations that block all user agents not on an explicit allowlist. When GPTBot tries to crawl your site, it gets a Disallow and moves on.
Check your robots.txt right now at yourdomain.com/robots.txt. The most common blocking pattern is a blanket `User-agent: *` with `Disallow: /`; unless GPTBot, PerplexityBot, and ClaudeBot have their own explicit `Allow` groups, that blanket rule shuts them out too.
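To check programmatically, here is a minimal Python sketch using only the standard library that parses a robots.txt body and reports which AI crawlers would get through. The robots.txt content shown is hypothetical: a blanket disallow with a carve-out for GPTBot only.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blanket disallow, with an explicit carve-out for GPTBot.
robots_txt = """\
User-agent: *
Disallow: /

User-agent: GPTBot
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# An agent-specific group overrides the blanket rule, so only GPTBot gets in.
for bot in ("GPTBot", "PerplexityBot", "ClaudeBot"):
    status = "allowed" if rp.can_fetch(bot, "https://example.com/") else "blocked"
    print(f"{bot}: {status}")
```

Swap in the body fetched from yourdomain.com/robots.txt to audit your live file the same way.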
| AI Crawler | Engine | robots.txt directive needed |
|---|---|---|
| GPTBot | ChatGPT | User-agent: GPTBot — Allow: / |
| PerplexityBot | Perplexity | User-agent: PerplexityBot — Allow: / |
| ClaudeBot | Claude | User-agent: ClaudeBot — Allow: / |
| Google-Extended | Gemini | User-agent: Google-Extended — Allow: / |
Add explicit allow directives for all four AI crawlers to your robots.txt. This takes 10 minutes and is the highest-ROI fix available. Also create an llms.txt file at your domain root: a plain-text file, following an emerging convention, that gives AI crawlers your brand description, key pages, and preferred citation format. Think of it as the AI equivalent of a sitemap.
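Putting the directives together, a robots.txt that explicitly welcomes the AI crawlers might look like the sketch below. Google-Extended is the token Google documents for controlling Gemini's use of your content; adjust the list to match whichever engines matter to you.

```
# robots.txt — explicit allows for AI crawlers

User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```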
Reason 3: No third-party voices are vouching for you
ChatGPT doesn't just take your word for it. When deciding whether to recommend a brand, it heavily weights third-party corroboration: reviews on G2 and Capterra, Reddit discussions, press mentions, analyst coverage. A brand with 50 G2 reviews, active Reddit threads, and press mentions in industry publications is far more likely to be recommended than a brand with a polished website but no third-party presence.
This is why companies with modest websites but strong community and review presence often outrank well-funded brands with beautiful marketing sites. ChatGPT treats the web as a peer review system — it's looking for social proof, not design quality.
Email your top 10 satisfied customers personally and ask for a G2 review — include a direct link to your G2 profile. Aim for at least 15 reviews with specific use cases and outcomes mentioned. Then participate authentically in 3–5 subreddits relevant to your category: answer questions where your product is the genuine right answer. Do not spam. Build real presence.
Reason 4: Your content isn't structured for extraction
ChatGPT doesn't read your pages the way a human does. It looks for extractable answers — clear, standalone statements that directly answer the question being asked. Content written as flowing prose is harder for AI to cite than content structured with direct answers, comparison tables, numbered lists, and FAQ sections.
Schema markup is the machine-readable layer that tells ChatGPT exactly what your content means. FAQPage schema on your key pages directly feeds the question-and-answer format ChatGPT uses. Organization schema tells it who you are, what you do, and how to describe you. Without schema, ChatGPT has to infer all of this — and inference produces inconsistent, incomplete descriptions.
Add FAQPage JSON-LD to your homepage, pricing page, and top blog posts — minimum 6 Q&As per page covering your product, pricing, and use cases. Add Organization JSON-LD with your Wikidata URL in the sameAs array. Rewrite your key page headings as direct questions with 2-sentence answers immediately below each heading. Every paragraph should be independently citable.
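As a concrete sketch of the two markup types described above, here is what they might look like on a homepage. The company name, URLs, Wikidata ID, and Q&A are all placeholders, not a real implementation:

```html
<!-- Organization markup: who you are and where you're corroborated -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme CRM",
  "url": "https://www.example.com",
  "description": "Acme CRM is a customer relationship platform built for marketing agencies.",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/acme-crm",
    "https://www.g2.com/products/acme-crm"
  ]
}
</script>

<!-- FAQPage markup: one Q&A shown here; your pages should carry at least six -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Acme CRM?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Acme CRM is a customer relationship platform built for marketing agencies. It combines pipeline management with white-label client reporting."
    }
  }]
}
</script>
```

Note that the sameAs array is doing double duty: it strengthens the Organization entity and points AI systems at the same third-party profiles your Wikidata entry lists.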
Reason 5: Your competitors have a larger citation footprint
ChatGPT's recommendations aren't just about whether you qualify — they're about relative authority. If your competitor has 200 G2 reviews, Wikipedia presence, 50 blog posts, and mentions in 30 industry publications, and you have 5 reviews and 10 blog posts, they win. Not because you did anything wrong, but because they've accumulated more citation signal over a longer period.
The good news: most markets are not yet saturated at the AI citation level. The brands leading in AI recommendations are often the ones that started 6–12 months ago, not necessarily the ones with the highest domain authority. The window to catch up is still open, but it closes as more brands invest in answer engine optimization (AEO).
Run a competitor analysis: search ChatGPT for your top 10 category queries and record which brands appear and how often. Then audit each competitor's entity signals: G2 review count, Reddit presence, blog post count, schema coverage. Your gap analysis tells you exactly which signals to prioritize. Use Surfedo's free scan to get this analysis automatically.
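A lightweight way to turn those recorded answers into a gap analysis is to tally brand appearances per query. The queries and brand names below are made up for illustration:

```python
from collections import Counter

# Hypothetical log: for each category query, the brands ChatGPT listed (in order).
results = {
    "best CRM for agencies": ["CompetitorA", "CompetitorB", "YourBrand"],
    "top agency CRM tools": ["CompetitorA", "CompetitorC"],
    "CRM with white-label reporting": ["CompetitorB", "YourBrand"],
}

# Count how many queries mention each brand at all.
appearances = Counter(brand for brands in results.values() for brand in brands)
total = len(results)

for brand, count in appearances.most_common():
    print(f"{brand}: mentioned in {count}/{total} queries")
```

Brands that outscore you in this tally are the ones whose entity, review, and content signals are worth auditing first.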
The 30-day fix plan
Implementing all five fixes doesn't require a developer or a large budget — just focused effort across four weeks:
| Week | Action | Time required |
|---|---|---|
| Week 1 | Create Wikidata entry · Fix robots.txt · Create llms.txt file | 3–4 hours |
| Week 2 | Add Organization + FAQPage schema to homepage, pricing, top 3 blog posts | 4–6 hours |
| Week 3 | Email 10 customers for G2 reviews · Join and participate in 3 subreddits | 2 hours + ongoing |
| Week 4 | Publish 2 query-targeted blog posts · Run baseline visibility scan · Track positions | 6–8 hours |
For Perplexity, improvements can appear within days of publishing fresh content. For ChatGPT, entity and schema changes typically take 4–8 weeks to influence responses as retrieval indexes refresh and model updates roll out. Start tracking before you begin so you have a true baseline.
How to verify ChatGPT is recommending you
Manual verification is simple but doesn't scale: open ChatGPT, type the queries your buyers use, and record whether your brand appears and at what position. Do this for 20–30 queries. Then repeat monthly and track the delta.
For systematic tracking, tools like Surfedo run automated scans across ChatGPT, Perplexity, Gemini, and Claude — recording your exact position per query per engine and tracking how it changes week over week. This turns AEO from guesswork into a measurable, improvable metric.
Surfedo scans ChatGPT, Perplexity, Gemini, and Claude for your brand. Free scan — no card required.


