Glossary

LLM Visibility

LLM Visibility is an umbrella term for how often and how favourably large language models surface a brand — vendors package it differently, but the core inputs are prompts, models, and dated snapshots.
  • Not a single mathematical standard — always ask which prompts and which models.

  • Closest Getllmspy equivalents: LLM-Score, Share of Voice, GPI.

Definition

“LLM Visibility” is industry shorthand for measuring whether ChatGPT-class assistants recommend, mention, or cite your brand when buyers ask category questions. Agencies may show dashboards, heatmaps, or share-of-voice bars. Because there is no universal formula, the useful question is always operational: Which prompt pack? Which geography? Which model endpoints? Without those, two vendors can report wildly different “visibility”.

How it's computed

Common ingredients: (1) a fixed or organic prompt set, (2) multi-model execution, (3) mention detection + sentiment + hallucination flags, (4) aggregation into a headline score or rank. Some tools emphasise citations, others emphasise raw mentions — clarify before comparing numbers.
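The four ingredients can be sketched end to end. This is a toy illustration, not any vendor's actual formula: the function name, the naive substring matching (standing in for real mention detection, sentiment, and hallucination flagging), and the sample data are all assumptions.

```python
from collections import defaultdict

def visibility_score(responses, brands):
    """Toy share-of-voice aggregation over (prompt, model, text) rows.

    responses: list of dicts like {"prompt": ..., "model": ..., "text": ...},
    i.e. the output of running (1) a prompt set through (2) several models.
    brands: brand names to detect.
    """
    mentions = defaultdict(int)
    for row in responses:
        text = row["text"].lower()
        for brand in brands:
            if brand.lower() in text:  # (3) mention detection, heavily simplified
                mentions[brand] += 1
    total = sum(mentions.values()) or 1
    # (4) aggregate into a headline share-of-voice percentage per brand
    return {b: round(100 * mentions[b] / total, 1) for b in brands}

# A two-row stand-in for a real prompt pack executed across two models
responses = [
    {"prompt": "best crm?", "model": "model-a", "text": "Try Acme or Globex."},
    {"prompt": "best crm?", "model": "model-b", "text": "Acme is popular."},
]
print(visibility_score(responses, ["Acme", "Globex"]))  # {'Acme': 66.7, 'Globex': 33.3}
```

Even this toy makes the comparison problem concrete: swap the prompt list or add a model, and the denominator shifts while the headline number looks continuous.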

How it works in practice

Translate marketing speak into checks

  • Ask for the exact prompt list or sampling method.
  • Ask whether competitors are in the same denominator (SoV style) or not.
  • Demand dated snapshots so you know if ChatGPT updated overnight.

How to read it

Treat “visibility up 20%” as suspicious unless the vendor discloses denominator changes, new models added to the bundle, or new prompts inserted into the pack.
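One way to operationalise that suspicion is to refuse to compare two dated snapshots unless their prompt pack and model bundle match. A minimal sketch, with hypothetical snapshot field names:

```python
def comparable(snap_a, snap_b):
    """Return True only if two dated snapshots share the same denominator:
    identical prompt pack version and identical model bundle.
    Field names here are illustrative, not any vendor's schema."""
    return (snap_a["prompt_pack"] == snap_b["prompt_pack"]
            and set(snap_a["models"]) == set(snap_b["models"]))

jan = {"date": "2025-01-01", "prompt_pack": "v3",
       "models": ["model-a", "model-b"], "score": 40}
feb = {"date": "2025-02-01", "prompt_pack": "v4",
       "models": ["model-a", "model-b"], "score": 48}

if comparable(jan, feb):
    print(f"visibility moved {feb['score'] - jan['score']} points")
else:
    print("denominator changed: prompt pack or model bundle differs")
```

Here the reported jump from 40 to 48 is rejected because the prompt pack changed between snapshots, which is exactly the disclosure to demand from a vendor.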

When to use

  • When evaluating a new monitoring vendor.
  • When PR asks for an external buzzword mapped to internal KPIs.
  • When you need a single headline number for a slide but will still document the underlying metrics.