This V3 update of the AEO Periodic Table builds on the V2 study’s data, extends it through June 2025, and identifies the elements that decide whether your content makes it into ChatGPT, Gemini, Claude, Perplexity, Grok, or Google AI Mode answers.
To understand what drives AI visibility in 2025, we analyzed 2.2 million real user prompts across ChatGPT, Claude, Perplexity, Grok, Gemini, and Google AI Mode from January through June 2025. Our research identified 15 core factors that determine whether your content gets cited by these models, revealing significant shifts from traditional SEO approaches.
The study shows that while content quality and relevance remain paramount, models increasingly prioritize cross-source validation and authentic social proof over basic technical optimization. Our most significant discovery was the rise of co-occurrence as a new critical factor. LLMs now cross-reference multiple sources before citing content, making a consistent presence across authoritative domains essential for visibility.
Agent Experience (AX): Why Perplexity Values AX
Perplexity values content from which it can easily retrieve information and serve it directly to users when prompted. If it can’t crawl your site quickly and easily, your brand won’t exist in its answers. In our Perplexity tests, structured FAQ schema boosted direct-answer retrieval by 31%, and robots.txt and llms.txt files that tell Grok and Gemini exactly which XML sitemap to crawl shaved roughly 12 hours off index latency.
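As a concrete sketch of the FAQ markup above, here is a minimal FAQPage JSON-LD block; the question and answer text are placeholders, not content from the study:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Answer Engine Optimization (AEO)?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "AEO structures content so AI assistants like ChatGPT, Perplexity, and Gemini can retrieve and cite it directly in their answers."
    }
  }]
}
```

And a sketch of the crawler directives, assuming the published user-agent tokens (GPTBot for OpenAI, PerplexityBot, ClaudeBot for Anthropic, Google-Extended for Google’s AI products; verify current tokens against each vendor’s docs). The domain is a placeholder:

```
# robots.txt — welcome AI crawlers and point them at the canonical sitemap
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```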
Reviews & UGC: Why Grok Prioritizes User-Centered Information
Grok leverages live X threads as a relevance check. A surge in authentic, variant‑rich phrases about your brand increases its co‑occurrence confidence, especially for lifestyle products.
Verifiable Metrics: Why Claude Prioritizes Evidence & Verifiability
Claude penalizes claims that lack external data more heavily than any other model. Add peer-reviewed numbers or proper citations: we measured a 17% lift in Topical Authority for B2B SaaS brands after they launched cited case studies.
Co-Occurrence: Why LLMs Cite Multiple Sources
LLMs reference multiple sites before they cite one. Google AI Mode, for example, uses a query fan-out technique: it breaks a prompt into multiple subtopics, reviews the sources retrieved for each subtopic while looking for co-occurrences, and then uses Gemini to synthesize those sources into a personalized answer. Showing up consistently across multiple sources therefore increases your chance of visibility.
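To make fan-out and co-occurrence concrete, here is a minimal Python sketch of the mechanism as we understand it (an illustration, not Google’s actual pipeline); the subqueries and domains are hypothetical:

```python
from collections import Counter

def co_occurrence_scores(subquery_sources: dict[str, list[str]]) -> Counter:
    """Count how many fan-out subqueries each domain appears in.

    Domains that surface across many subqueries are the ones an answer
    engine can cross-validate, and therefore cite with confidence.
    """
    counts = Counter()
    for sources in subquery_sources.values():
        for domain in sorted(set(sources)):  # count each domain once per subquery
            counts[domain] += 1
    return counts

# Hypothetical retrieval results for the prompt "best CRM for startups"
fan_out = {
    "crm pricing comparison":    ["g2.com", "yourbrand.com", "forbes.com"],
    "crm features for startups": ["yourbrand.com", "techcrunch.com", "g2.com"],
    "crm user reviews":          ["g2.com", "reddit.com", "yourbrand.com"],
}

print(co_occurrence_scores(fan_out).most_common(3))
# [('g2.com', 3), ('yourbrand.com', 3), ('forbes.com', 1)]
```

The intuition matches the study’s finding: in this toy example, yourbrand.com earns a high co-occurrence score only because it appears consistently across every subtopic, not because any single page ranks well.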
AEO Periodic Table V3 confirms what many suspected and what brands and marketers are already seeing: classic SEO metrics are fading while semantic depth, structured clarity, and real‑time freshness are the new power trio.
Whether you’re a growth PM, technical SEO, or content lead, the hierarchy above tells you where to spend the next sprint’s cycles.
Footnote: How This Table Was Built
Goodie AI captured 2.2 million live prompts (Jan–Jun 2025) from GPT‑4o, Claude 3.7, Perplexity Pro, Grok 1.5, Gemini 1.5, and Google AI Mode. We extracted every outbound link, schema flag, Core Web Vitals (CWV) metric, and robots directive, then asked two independent AEO analysts to rate each answer against 14 predefined factors. Raw 0–100 subscores were normalized per model, then averaged.
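For readers who want to reproduce the final aggregation step, here is a minimal Python sketch; the study does not specify its normalization method, so min-max scaling is our assumption, and the subscores below are hypothetical:

```python
def normalize_and_average(scores_by_model: dict[str, list[float]]) -> list[float]:
    """Min-max normalize each model's raw 0-100 subscores, then average
    the normalized scores factor-by-factor across models."""
    normalized = []
    for raw in scores_by_model.values():
        lo, hi = min(raw), max(raw)
        span = (hi - lo) or 1.0  # guard against a flat score column
        normalized.append([(s - lo) / span for s in raw])
    # average the i-th factor's normalized score across all models
    return [sum(col) / len(col) for col in zip(*normalized)]

# Hypothetical subscores for three factors across two models
raw = {
    "gpt-4o":     [80.0, 60.0, 40.0],
    "claude-3.7": [90.0, 70.0, 50.0],
}
print(normalize_and_average(raw))  # [1.0, 0.5, 0.0]
```

Normalizing per model before averaging keeps a single lenient or harsh model from dominating the combined ranking.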