How VerisAI works
You get an evidence-based, time-stamped snapshot of what a model may infer about your company from resolvable signals—and why.
AI Identity
AI Identity is the profile a model may infer about your company in a given run, at a given time: what you do, who you serve, where you operate, and how credible you appear. It is derived from signals the model can resolve—not from intent.
Example: if your service taxonomy is inconsistent, models may infer the wrong category or misattribute your offerings.
AI Identity Governance
Governance means keeping your company identity machine-verifiable and stable over time: consistent entity anchors, crawlable identity pages, canonical consistency across variants, and structured data that supports entity resolution.
Audit (root-cause signals)
When AI outputs drift from reality, the drift usually matches the observable signal environment: missing identity anchors, inconsistent canonicals, blocked crawling, thin or contradictory content, or broken structured data. The audit maps specific model claims to specific observable signals.
Crawl & indexability
robots.txt, sitemaps, indexability controls, canonical paths, and fetch consistency across URL variants—so crawlers see one stable source of truth.
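To make the gateway check concrete, here is a minimal Python sketch of a per-bot robots.txt test. The bot names are real AI crawler user-agent tokens mentioned in this document; the robots.txt body and the example URL are placeholders for illustration, not a real policy.

```python
# Sketch: report which AI crawlers a robots.txt policy allows to fetch a page.
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def allowed_bots(robots_txt: str, page_url: str) -> dict[str, bool]:
    """Parse a robots.txt body and report per-bot access to page_url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, page_url) for bot in AI_BOTS}

# Hypothetical policy that blocks GPTBot but allows everyone else.
robots = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(allowed_bots(robots, "https://example.com/about"))
# → {'GPTBot': False, 'ClaudeBot': True, 'PerplexityBot': True, 'Google-Extended': True}
```

A real audit would fetch the live robots.txt and repeat this across URL variants to confirm every crawler sees the same source of truth.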
Identity anchors
About/Contact/legal entity signals, locations, ownership, and other machine-resolvable anchors across key pages—kept consistent across variants and languages.
Structured data integrity
Organization / WebSite schema, contact points, identifiers, and validation of critical fields used for entity resolution—no conflicting IDs or ambiguous sameAs.
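As an illustration of a structured-data integrity check, the sketch below validates a handful of Organization JSON-LD fields. The required-field list and the duplicate-sameAs rule are assumptions chosen for the example, not VerisAI's actual rule set.

```python
# Sketch: minimal integrity check on an Organization JSON-LD blob.
import json

REQUIRED = ["@context", "@type", "name", "url"]  # illustrative minimum

def check_org_jsonld(raw: str) -> list[str]:
    """Return a list of problems found in an Organization JSON-LD string."""
    data = json.loads(raw)
    problems = [f"missing {key}" for key in REQUIRED if key not in data]
    if data.get("@type") != "Organization":
        problems.append("@type is not Organization")
    same_as = data.get("sameAs", [])
    if len(same_as) != len(set(same_as)):
        problems.append("duplicate sameAs entries")
    return problems

snippet = json.dumps({
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "sameAs": ["https://www.linkedin.com/company/example"],
})
print(check_org_jsonld(snippet))  # → []
```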
Content clarity
Service taxonomy and positioning language, contradictions, thin pages, and missing context that forces model inference—so models don’t ‘fill gaps’ with guesses.
AI Visibility Score
The audit produces a quantitative AI Visibility Score (0–100) across 8 layers:
- L1 — Gateway
- Whether AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, GrokBot) are allowed in robots.txt and can fetch the page. A blocked gateway immediately sets the score to 0.
- L2 — SSR Quality
- Whether the page delivers a complete, server-rendered HTML response: valid title, H1 tag, 500+ characters of visible content, and valid JSON-LD on initial load.
- L3 — Indexability
- Presence and accuracy of canonical tags, the lang attribute, and valid JSON-LD structure. Penalties reduce the content score, flagging ambiguous entity signals.
- L4 — Content Quality
- Type-specific semantic scoring (Organization, Article, Product, etc.) — checks entity clarity, schema completeness, contact info, social links, team signals, and description quality.
- L5 — Technical SEO
- Baseline web health: HTTPS, valid sitemap, responsive viewport, and asset optimization (CSS, JS, image counts).
- L6 — On-Page SEO
- Semantic markup quality: heading hierarchy, alt text coverage, internal link density, and OpenGraph/Twitter card completeness.
- L7 — Multi-LLM Citation Readiness
- Per-platform citation signal scoring for ChatGPT, Gemini, Claude, and Perplexity — based on bot access, FAQ presence, question-format headings, definition lists, author attribution, and structured data.
- L8 — SEO Activity
- Signals of active SEO management: GTM/GA4, sitemap scale (30+ URLs), a blog or content section, hreflang, advanced schema types, and third-party SEO tool presence.
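The eight layers above can be sketched as a gated aggregate. Note the assumptions: the layer keys mirror the list, and the equal weighting is a placeholder for illustration only, since the actual formulas live in the separate methodology document. The gateway hard-zero, however, is stated in L1 above.

```python
# Sketch: combine eight per-layer scores (each 0-100) into one 0-100 score.
# Equal weights are an assumption; the real methodology is documented separately.
LAYERS = ["gateway", "ssr", "indexability", "content",
          "tech_seo", "onpage_seo", "citation", "seo_activity"]

def visibility_score(layer_scores: dict[str, float]) -> float:
    """Average the layers, but a blocked gateway zeroes out the whole score."""
    if layer_scores.get("gateway", 0) == 0:
        return 0.0  # L1: blocked AI crawlers set the score to 0 immediately
    return round(sum(layer_scores[layer] for layer in LAYERS) / len(LAYERS), 1)

scores = {layer: 80.0 for layer in LAYERS}
print(visibility_score(scores))                    # → 80.0
print(visibility_score({**scores, "gateway": 0}))  # → 0.0
```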
For detailed scoring methodology, formulas, and documentation sources, see AI Visibility Score Methodology.
Deliverables
Outputs are snapshot-based and time-stamped so you can compare changes over time and verify whether AI interpretations converge after fixes.
- AI Identity baseline (what the model claims + uncertainty patterns)
- Evidence map (claim → observable signal sources and pages)
- Forensic crawlability and SEO-compatibility findings (with affected URLs)
- Identity drift findings (where interpretation diverges from ground truth in the snapshot)
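One way to picture an evidence-map entry (claim mapped to observable signal sources and pages) is the small data structure below. All field names and values are hypothetical, chosen only to illustrate the claim-to-signal shape of the deliverable.

```python
# Sketch: an illustrative evidence-map entry (claim → observable signals).
from dataclasses import dataclass, field

@dataclass
class EvidenceEntry:
    claim: str                                        # what the model asserted
    signals: list[str] = field(default_factory=list)  # observable signal sources
    pages: list[str] = field(default_factory=list)    # affected URLs

# Hypothetical example entry.
entry = EvidenceEntry(
    claim="Model categorizes the company as a staffing agency",
    signals=["inconsistent service taxonomy", "missing Organization schema"],
    pages=["https://example.com/services"],
)
print(entry.claim)
```

Because each entry is tied to a time-stamped snapshot, re-running the audit after fixes lets you check whether the same claim still maps to the same signals.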