Help
Frequently asked questions
We call current model endpoints through a unified API; providers may swap weights or training cutoffs behind the scenes without separate notice. Your report shows model families (ChatGPT, Claude, Gemini, etc.) as they were available when the check finished. For reproducibility, cite the completion timestamp shown in the report.
We don’t query a private knowledge base about you. We send category-style scenarios that never include your brand name in the prompt text; brand context is used only to score whether the simulated answer mentions you. Niche or low-signal brands may appear less often, which shows up in the metrics and highlights where content, SEO, and niche presence matter.
LLM-Score is a 0–100 snapshot for a single check. For each selected model we combine three signals: whether the brand is mentioned, its list position when the answer looks like a list of recommendations, and sentiment toward the brand. Models are weighted, so larger channels count more. Multiple prompts per model are merged into one row per model before scoring.
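The aggregation above can be sketched in a few lines. This is an illustrative approximation, not the product's actual formula: the specific weights, base scores, and scaling below are assumptions; only the three signals (mention, list position, sentiment) and the 0–100 range come from the description.

```python
from typing import Optional

def model_score(mentioned: bool, list_position: Optional[int], sentiment: float) -> float:
    """Score one model's merged answers on a 0-100 scale.

    sentiment is assumed to lie in [-1.0, 1.0].
    All numeric constants here are illustrative, not the real weights.
    """
    if not mentioned:
        return 0.0
    score = 50.0                       # base credit for being mentioned at all
    if list_position is not None:      # the answer looked like a ranked list
        score += max(0, 30 - 5 * (list_position - 1))  # earlier position = more credit
    score += 20.0 * max(0.0, sentiment)                # bonus for positive tone
    return min(score, 100.0)

def llm_score(per_model: dict, weights: dict) -> float:
    """Weighted average across selected models (larger channels weigh more)."""
    total_w = sum(weights[m] for m in per_model)
    return sum(weights[m] * model_score(*signals)
               for m, signals in per_model.items()) / total_w
```

For example, a model that mentions the brand first in a list with positive sentiment scores 100, while a model that never mentions it scores 0; the weighted average of the two rows becomes the check's LLM-Score.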
In the report, share of voice is the share of selected models that mentioned your brand in at least one answer for that check. If you picked five models and three mentioned the brand, SoV is 60%. It is not internet-wide audience reach; it covers only the models and scenarios in that run.
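The arithmetic above is a simple ratio. A minimal sketch, assuming per-model mention flags are collected into a dict (the dict shape is an illustrative assumption):

```python
def share_of_voice(mentions: dict) -> float:
    """Percentage of selected models that mentioned the brand at least once.

    mentions maps a model name to True if any answer in the check
    mentioned the brand, False otherwise.
    """
    if not mentions:
        return 0.0
    return 100.0 * sum(mentions.values()) / len(mentions)
```

With the five-model example from the text (three mentions), this returns 60.0.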
LLM outputs vary with wording, time, model version, tools, and chat history. We use fixed scenarios and the same check context across models, so comparisons inside a report are consistent. Your personal chat may differ, and both answers can be plausible; the report is a reproducible snapshot under our rules.
Yes. After you submit the form the job is queued on the server. You can close the page; open the report later from History after sign-in when the status is completed.
For a tracked brand, use Full audit (all scenarios in one run). Top 10 in niche ranks the names that models surface in the category, without tying the run to one brand.
These are scenarios where, across most selected models, your brand isn't mentioned while competitors show up more often. They point to intents worth covering with landing pages or content.
Results tied to your account are kept so you can reopen reports from History. Retention details are in our privacy policy (Russian legal text). Don’t submit secrets or third-party personal data without a valid reason.
Running a check may be available without login depending on product settings; saved history usually requires sign-in (including Yandex OAuth). Without sign-in you typically can’t browse past reports.
Models hallucinate. Treat prices, percentages, and factual claims about your brand as hints—verify against your site and official sources. The report flags excerpts that are especially worth double-checking.
Still stuck? Run a free check; many labels are explained directly on the live report.