Glossary

The vocabulary of AI search optimization.

Defined terms across AEO, GEO, AI search, citation tracking and the metrics that quantify brand visibility inside ChatGPT, Claude, Gemini and Perplexity. Cite-friendly and updated as the discipline evolves.

A

AEO

also: Answer Engine Optimization

Optimizing for inclusion in AI-generated answers.

Answer Engine Optimization is the practice of making a brand visible inside the synthesized answers produced by AI assistants like ChatGPT, Claude, Gemini and Perplexity. Unlike traditional SEO, which targets ranked link positions on a search engine results page, AEO targets the answer itself — the named brands, the language used to describe them, and the sources cited.

AI Overviews

Google's generative summary that appears above traditional search results.

AI Overviews is Google's feature that uses Gemini to produce a synthesized answer above the standard search results page. It selects content from a small pool of indexed sources and rewrites it into a single answer. Brands cited inside AI Overviews capture significantly more attention than brands appearing only in the link list below.

Answer engine

A system that returns synthesized answers, not ranked links.

An answer engine is a system that responds to user queries with a single synthesized answer, optionally with citations. ChatGPT, Claude, Gemini, Perplexity and Google's AI Overviews are the dominant answer engines today. The term distinguishes them from traditional search engines, which return a ranked list of independent links.

Authority source

A high-trust source that models cite disproportionately.

An authority source is a piece of content (Wikipedia article, peer-reviewed publication, regulator listing, established trade publication) that an AI assistant treats as trustworthy enough to cite without further validation. Different categories have different authority pools — finance leans on regulators, healthcare on WHO/NIH, B2B SaaS on G2 and trade press.

B

Brand monitoring

Tracking mentions of your brand on the public web and social.

Brand monitoring is the discipline of listening for mentions of a brand across news, blogs, social media, forums and review sites. It tracks public conversation. AEO is the complement: it tracks what AI assistants say when asked about a brand, including conversations that never produce a public mention.

C

Citation

A source URL referenced by an AI assistant in its answer.

A citation is the URL or named source an AI assistant references when producing an answer. Citations are the new backlinks: winning a position inside the right cited source materially shifts how the model describes a brand across thousands of related prompts. Citation coverage measures the percentage of authority-source URLs where a brand has a meaningful presence.

Citation coverage

Share of category authority sources where your brand is named or referenced.

Citation coverage is a leading-indicator KPI for AEO programs. It measures, across the small set of source URLs the model actually cites for a category, the percentage where the brand has a meaningful presence (named in the article, profiled, listed in a comparison table). Higher citation coverage today predicts higher mention rate next quarter.

ClaudeBot

Anthropic's crawler for Claude's retrieval.

ClaudeBot is Anthropic's web crawler that populates the retrieval data Claude uses when answering questions. As with GPTBot, sites can permit or block it via robots.txt. Blocking ClaudeBot generally removes the site from Claude's citable surface.

Confidence score

How certain the parser is that a mention is correctly identified.

A confidence score is a 0–100 value Intendity attaches to every detected brand mention. Low-confidence mentions are usually ambiguous brand names (multiple companies share a word, or the brand alias overlaps with common English). High-confidence mentions are unambiguous. Filtering by confidence prevents low-quality matches from skewing visibility metrics.
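A minimal sketch of the filtering step described above. The field names and threshold are illustrative, not Intendity's actual schema:

```python
# Sketch: filter detected brand mentions by parser confidence before
# computing visibility metrics. Data shape is hypothetical.
mentions = [
    {"brand": "Acme", "confidence": 92, "prompt": "best crm tools"},
    {"brand": "Acme", "confidence": 34, "prompt": "acme river geography"},
    {"brand": "Acme", "confidence": 78, "prompt": "crm comparison"},
]

CONFIDENCE_THRESHOLD = 70  # mentions below this go to review, not metrics

def filter_mentions(mentions, threshold=CONFIDENCE_THRESHOLD):
    """Keep only mentions the parser is confident about."""
    return [m for m in mentions if m["confidence"] >= threshold]

reliable = filter_mentions(mentions)
# The ambiguous 34-confidence match is excluded from metrics.
```

The exact threshold is a tuning choice: too low and ambiguous matches inflate mention rate, too high and real mentions are discarded.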

Crawl

A search engine or AI bot fetching a page to index its content.

A crawl is the act of an automated bot retrieving a web page and adding its content to a search or AI retrieval index. AI assistants use specialized bots (GPTBot, ClaudeBot, OAI-SearchBot, Googlebot for Gemini) that respect robots.txt and llms.txt directives. Pages must be crawlable to become AI-citable.

D

DefinedTerm schema

Schema.org markup for glossary entries.

DefinedTerm is a schema.org type for vocabulary entries — a term, its definition, and optional metadata. Glossary pages with proper DefinedTerm + DefinedTermSet schema are disproportionately cited by AI assistants for "what is X" prompts because the structure removes ambiguity about which paragraph defines the term.
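An illustrative DefinedTermSet fragment for a glossary like this one (names and descriptions are examples, not required values):

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTermSet",
  "name": "AI Search Optimization Glossary",
  "hasDefinedTerm": [
    {
      "@type": "DefinedTerm",
      "name": "AEO",
      "description": "Answer Engine Optimization: the practice of making a brand visible inside AI-generated answers."
    }
  ]
}
```

Each entry on the page maps to one DefinedTerm, so a crawler can pair every term with exactly one defining description.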

F

FAQPage schema

Structured data marking up question-and-answer content.

FAQPage is a schema.org type that marks up question-and-answer content. AI assistants — especially Gemini and ChatGPT — pull verbatim from FAQPage-marked content into answers. Pages with FAQPage schema achieve higher citation density than the same content rendered without structured markup.

G

GEO

also: Generative Engine Optimization

Synonym for AEO, emphasizing generative output.

Generative Engine Optimization is the discipline of making a brand visible inside generative model outputs. In practice, GEO and AEO are used interchangeably for the same work: tracking and improving brand presence inside ChatGPT, Claude, Gemini and Perplexity answers.

Related: AEO

GPTBot

OpenAI's crawler for training and search retrieval.

GPTBot is OpenAI's web crawler. It populates training corpora and retrieval indices used by ChatGPT and ChatGPT Search. Sites can allow or block GPTBot via robots.txt; GPTBot must be allowed for a page to be retrievable in ChatGPT browsing mode and for its content to surface in citations.

H

Hallucination

A confident but factually incorrect statement from an AI model.

A hallucination is an output where the model presents incorrect information with apparent confidence — wrong pricing, deprecated features, fabricated customer counts. Hallucinations about a brand propagate across answers until the underlying source the model leans on is corrected. AEO programs treat hallucination correction as a source-level workflow, not a model-level argument.

J

JSON-LD

A JSON format for embedding schema.org structured data.

JSON-LD is the recommended way to embed schema.org structured data in HTML pages. It lives in a script tag in the head and is invisible to readers but parsed by search and AI crawlers. AEO-optimized pages use JSON-LD to declare Organization, Product, Article, FAQPage, BreadcrumbList, DefinedTerm and HowTo schemas.
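A minimal example of how JSON-LD is embedded in a page (organization details are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "sameAs": ["https://en.wikipedia.org/wiki/Example"]
}
</script>
```

The script tag carries no visible content; crawlers parse it as a machine-readable statement about the page's subject.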

L

llms.txt

A site-root file that points AI crawlers at curated content.

llms.txt is a proposed convention for a site-root markdown file that lists pages an AI assistant should consider when answering questions about the site. It complements robots.txt: while robots.txt controls access, llms.txt curates what is most relevant. Adoption is uneven but growing — Perplexity and ChatGPT search both reference it where present.
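A sketch of the proposed format — an H1 site name, a blockquote summary, and H2 sections of annotated links (all names and URLs here are placeholders):

```markdown
# Example Co

> Example Co builds customer-support software for mid-market teams.

## Product
- [Product overview](https://www.example.com/product): what the platform does
- [Pricing](https://www.example.com/pricing): current plans and limits

## Reference
- [Glossary](https://www.example.com/glossary): definitions of key terms
```

Because the file is plain markdown at a fixed path (/llms.txt), an assistant can fetch the curated list in one request instead of crawling the whole site.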

M

MCP

also: Model Context Protocol

A standard for AI clients to call tools running on third-party infrastructure.

Model Context Protocol is a protocol that lets AI clients (Claude Desktop, Cursor, Continue, custom agents) discover and call tools hosted by third parties. MCP servers expose typed actions (read brands, run a query, fetch visibility data) that the assistant can invoke during a conversation. Intendity ships an MCP server so AEO data is available as tools inside any MCP-compatible client.
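Under the hood, MCP messages are JSON-RPC 2.0. A tool invocation looks roughly like the following (the tool name and arguments are hypothetical, not Intendity's actual tool surface):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_visibility",
    "arguments": { "brand": "example-brand", "days": 30 }
  }
}
```

The client first discovers available tools via a tools/list request, then the assistant decides mid-conversation when to issue tools/call.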

Related: REST API

Mention rate

Percentage of (prompt × model) runs that name your brand.

Mention rate is the simplest scoreboard for AI visibility. It is the share of (prompt × model) executions in a given period where the brand is named in the answer. Tracked daily, it produces the visibility score; rolled up across competitors, it produces share of voice. Most B2B brands begin at 10–30% in their core category and target a sustained 50%+ over 2–4 quarters.
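The calculation itself is a simple ratio over run records; a sketch with an illustrative data shape:

```python
# Sketch: mention rate as the share of (prompt × model) runs that
# name the brand. Record fields are hypothetical.
runs = [
    {"prompt": "best crm tools", "model": "gpt-4o", "brand_mentioned": True},
    {"prompt": "best crm tools", "model": "claude", "brand_mentioned": False},
    {"prompt": "crm for smb",    "model": "gpt-4o", "brand_mentioned": True},
    {"prompt": "crm for smb",    "model": "claude", "brand_mentioned": True},
]

def mention_rate(runs):
    """Percentage of runs where the brand is named in the answer."""
    if not runs:
        return 0.0
    named = sum(1 for r in runs if r["brand_mentioned"])
    return 100 * named / len(runs)

print(mention_rate(runs))  # 75.0 — named in 3 of 4 runs
```

Computed over a 24-hour window, this same ratio is what the visibility score reports.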

P

Parser

The component that extracts structured signals from raw model answers.

A parser reads the raw text of a model's answer and extracts structured signals: was the brand mentioned, in what position, in what sentiment, alongside which competitors, with which sources cited. Intendity's parser is LLM-based and produces a confidence score per mention so low-confidence ambiguous matches can be filtered or reviewed.

Position

Where in an AI answer a brand is named.

Position is the rank of a brand mention inside the model's answer. The first brand named typically anchors the consideration set; later mentions are weighed less heavily by readers. Position is tracked separately from mention rate because a high mention rate in last position is materially worse than the same rate in first position.

Prompt

The buyer question asked of an AI assistant.

A prompt is the natural-language question an AI assistant is asked. AEO prompt sets approximate buyer-journey questions: comparison prompts ("X vs Y"), shortlist prompts ("best X for Y"), validation prompts ("is X any good"), problem-solving prompts ("how do I X"). The prompt set is the unit of AEO measurement.

R

REST API

HTTP endpoints for programmatic access to AEO data.

A REST API is a set of HTTP endpoints clients can call to read or write data. Intendity's REST API provides read access to brands, queries, runs, mentions, daily visibility scores, recommendations, competitor share of voice and page audits — typically used to push AEO metrics into BI tools or data warehouses alongside other marketing surfaces.
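A hedged sketch of pulling data from such an API into a pipeline. The base URL, path, and response shape are hypothetical placeholders — consult the actual API reference for real endpoints:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def build_request(path, token):
    """Build an authenticated GET request for an API endpoint."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={
            "Authorization": f"Bearer {token}",  # typical bearer-token auth
            "Accept": "application/json",
        },
    )

def fetch_json(path, token):
    """Execute the request and decode the JSON response body."""
    with urllib.request.urlopen(build_request(path, token)) as resp:
        return json.load(resp)

# Example (hypothetical endpoint): daily visibility for one brand.
req = build_request("/brands/acme/visibility", "TOKEN")
```

From here the decoded records would typically be loaded into a warehouse table alongside other marketing metrics.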

Related: MCP

robots.txt

Site-root file controlling which crawlers can access which paths.

robots.txt is a file at a site's root that declares which user-agents can access which paths. Blocking AI crawlers (GPTBot, ClaudeBot, Google-Extended) removes the site from the corresponding model's citable surface. Most AEO programs allow all major AI crawlers as a baseline.
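An illustrative baseline that allows the major AI crawlers while keeping a private path closed (the blocked path is a placeholder):

```
# Allow major AI crawlers (common AEO baseline)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

# Keep a private section closed to all crawlers
User-agent: *
Disallow: /internal/
```

Each user-agent block is evaluated independently, so allowing an AI crawler does not loosen the rules for other bots.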

S

schema.org

A shared vocabulary of structured-data types for the web.

schema.org is a structured-data vocabulary founded by Google, Microsoft, Yahoo and Yandex for marking up web pages. AEO-relevant types include Organization, Product, Article, BlogPosting, FAQPage, BreadcrumbList, DefinedTerm, HowTo and Person. Marked-up content is more aggressively pulled into AI answers and search rich results.

Sentiment

Whether a brand is described favorably, neutrally or negatively.

Sentiment captures the tone of a model's description of a brand inside an answer. A high mention rate with negative sentiment is worse than absence: the buyer reads a bad description and moves on. AEO programs track sentiment alongside mention rate and intervene at the source level when negative themes propagate.

SGE

also: Search Generative Experience

Earlier name for what became Google AI Overviews.

Search Generative Experience is the working name Google used during the experimental phase of the AI summary that appears above search results. Since general availability, most public references to SGE have been replaced by "AI Overviews". The retrieval logic remains closely related to Gemini.

Share of voice

Your brand's mentions as a share of all mentions across the competitor set.

Share of voice (SoV) compares brand mentions inside AI answers against the named competitor set. SoV reveals whether visibility gains come from category growth or from displacing specific competitors. It is the closest AEO analogue to traditional brand awareness metrics.
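A sketch of the ratio, using illustrative brand names and counts:

```python
from collections import Counter

# Sketch: share of voice from per-brand mention counts across
# a tracked competitor set. Names and counts are hypothetical.
mention_counts = Counter({
    "YourBrand": 120,
    "CompetitorA": 200,
    "CompetitorB": 80,
})

def share_of_voice(counts, brand):
    """One brand's mentions as a percentage of all mentions in the set."""
    total = sum(counts.values())
    return 100 * counts[brand] / total if total else 0.0

print(share_of_voice(mention_counts, "YourBrand"))  # 30.0 (120 of 400)
```

Comparing this ratio period over period shows whether a gain came from category growth (everyone's counts rise) or from displacing a specific competitor (their share falls as yours rises).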

Shortlist

The 3–5 brands an AI assistant names in response to a comparison prompt.

A shortlist is the set of brands an AI assistant returns when asked a comparison or evaluation question. For most categories the model names three to five brands. Being on the shortlist defines the consideration set for the buyer; being off it removes the brand from the funnel before any other marketing surface gets a chance.

Source pool

The small set of URLs an AI assistant cites for a given category.

A source pool is the predictable, small set of URLs that an AI assistant cites when answering questions about a particular category. For B2B SaaS this typically includes G2, Reddit, Wikipedia, listicles and 3–5 trade publications. Mapping the source pool for a category is the foundation of an AEO program.

V

Visibility score

0–100 daily aggregate of mention rate across all (prompt × model) runs.

Visibility score is Intendity's headline metric. It is the percentage of (prompt × model) executions in a 24-hour window where the brand is named in the answer. A score of 64 means the brand was mentioned in 64% of the day's runs. Tracked over time, the score reveals trend and the impact of AEO interventions.

W

Wikipedia (as AEO source)

A high-leverage citation surface across nearly every model.

Wikipedia is the single most-cited source across all major AI assistants. A correct, sourced sentence on the right Wikipedia article shapes how a brand is described in thousands of related prompts for years. AEO programs treat Wikipedia presence as a strategic priority requiring proper sourcing through trade press, not direct edits.

Missing a term?

The discipline moves fast and the vocabulary is unsettled. If we’re missing a term you came here to find, email [email protected] — we add new entries weekly.

Apply the vocabulary.

Run your first brand on Intendity and see mention rate, share of voice and citation coverage in the dashboard.