May 12, 2026 · 3 min read · aeo-research · citation-policy

Schema markup didn't move AI citations, and the Ahrefs test explains why

JSON-LD correlates with citations because good sites use it, not because models read it



Ahrefs ran a difference-in-differences study on 1,885 pages that added JSON-LD schema, matched them against controls that never added it, and watched what happened in AI Overviews, AI Mode, and ChatGPT for 30 days. Per Search Engine Journal, the result across all three surfaces was "no meaningful citation increase." AI Overviews actually moved -4.6% relative to controls, which Ahrefs won't call a real effect but won't dismiss either.
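The study design reduces to simple arithmetic: subtract the control group's change from the treated group's change, so any lift common to both groups cancels out. A sketch with invented numbers (not figures from the report):

```python
# Difference-in-differences: compare the change in the treated group
# (pages that added JSON-LD) against the change in matched controls.
# All citation counts below are invented for illustration.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Effect = (treated change) - (control change)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical average citation counts over a 30-day window:
effect = diff_in_diff(
    treated_before=120, treated_after=125,   # +5 for pages that added schema
    control_before=118, control_after=124,   # +6 for controls that didn't
)
print(effect)  # -1: treated pages gained slightly less than controls
```

This is why "citations went up after we added schema" proves nothing on its own: if the controls went up by the same amount, the schema effect is zero.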

That's the headline. The part marketing teams should care about is buried lower.

The correlation everyone was selling

For two years, the standard pitch from SEO vendors has been: pages cited by AI are about three times more likely to have JSON-LD, therefore add JSON-LD. Ahrefs analyzed 6 million URLs and confirmed the correlation. Then they isolated it, and the causal piece collapsed. Their reading is that schema is a marker of sites that also invest in better content and earn more links. The schema isn't doing the work. The site behind it is.

A separate searchVIU experiment cited in the report tested whether five AI systems actually read schema when fetching pages live. None did. They pulled visible HTML and ignored JSON-LD, Microdata, and RDFa entirely. That's a direct-fetch test, not a statement about training or indexing, so I'd hold this loosely: models could still use schema upstream in ways this experiment can't see. But combined with the citation data, the burden of proof has flipped.
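The mechanics are easy to see with a toy page: JSON-LD sits inside a script tag, so any fetcher that keeps only visible text drops it along with every other script. A standard-library sketch (the page contents are invented):

```python
# Why a "visible HTML" fetch never sees JSON-LD: the markup lives in a
# <script type="application/ld+json"> tag, and plain-text extraction
# discards script contents. Illustrative page, stdlib only.
from html.parser import HTMLParser

PAGE = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "headline": "Schema markup and AI citations"}
</script>
</head><body>
<h1>Schema markup and AI citations</h1>
<p>Ahrefs found no meaningful citation increase.</p>
</body></html>
"""

class VisibleText(HTMLParser):
    """Collects text the way a plain-text fetch would: skipping scripts."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

parser = VisibleText()
parser.feed(PAGE)
visible = " ".join(parser.chunks)
print(visible)
print("schema.org" in visible)  # False: the JSON-LD never reaches the text
```

A model that only ever sees `visible` has no way to know the schema exists, regardless of how well-formed it is.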

What the study can't tell you

Every page Ahrefs studied already had 100+ AI Overview citations before schema was added. So the question the report answers is narrow: does adding schema lift pages that are already in the consideration set? Answer: no. The question it doesn't answer: does schema help a page get into the consideration set in the first place? Possibly. Crawling and parsing are different problems from retrieval ranking, and a page that isn't being read at all has different needs than a page that's being read and ignored.

If you're already getting cited, schema isn't the lever. If you're invisible, this study doesn't apply to you, and the honest answer is nobody's published clean data yet.

What to do this week

Pull the list of pages where your team has scheduled JSON-LD work for Q4. For any page that already shows up in AI Overviews or ChatGPT answers for its target query (check manually, ten queries, twenty minutes), move the schema task to the bottom of the backlog and put the hours into rewriting the page body with more specific numbers, named sources, and dates. That's the variable Ahrefs' correlation is actually picking up.

For pages that don't appear in any AI answer yet, leave the schema work in place. It's cheap, it doesn't hurt, and the data genuinely can't rule out that it helps with crawling and parsing on pages models haven't decided to trust yet.
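The triage above is a one-liner once you have the spot-check results in hand. A sketch, where the page list and the cited_in_ai flag are hypothetical and the flag comes from the manual ten-query check:

```python
# Backlog triage sketch: pages already cited by AI get body rewrites;
# uncited pages keep their scheduled schema work. All URLs invented.

backlog = [
    {"url": "/blog/pricing-guide", "cited_in_ai": True},
    {"url": "/blog/new-feature",   "cited_in_ai": False},
    {"url": "/docs/setup",         "cited_in_ai": True},
]

# Cited pages: schema drops to the bottom; hours go to rewriting the body
# with specific numbers, named sources, and dates.
rewrite_body = [p["url"] for p in backlog if p["cited_in_ai"]]

# Uncited pages: cheap schema work stays in place.
keep_schema = [p["url"] for p in backlog if not p["cited_in_ai"]]

print(rewrite_body)  # ['/blog/pricing-guide', '/docs/setup']
print(keep_schema)   # ['/blog/new-feature']
```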

The harder question

This is the first reasonably designed study on schema and AI citations, and it points one direction. It's also one study, on one 30-day window, pooling all schema types together. Product schema and FAQ schema and Article schema might behave very differently and the data wouldn't show it. So the take here isn't "schema is dead." It's that the case for schema as an AI visibility tactic was always circumstantial, and now there's a reason to stop quoting the 3x number in pitch decks.


Track your own brand in AI search.

Five minutes from sign-up to your first visibility report. Free plan, no credit card.