# VerisAI.eu - Complete AI Visibility Governance & Audit Methodology

Version: L1 v2.0.0 | L7 v2.0.0 | Updated: 2026-02-25

## Company

- Legal name: Belvo s.r.o.
- Brand: VerisAI.eu
- Founded: 2025
- HQ: Praha 10, Czech Republic (Czechia)
- Employees: 1
- Website: https://verisai.eu
- Contact: support@verisai.eu

## What VerisAI does

VerisAI audits how AI systems (ChatGPT, Gemini, Claude, Perplexity) describe your company, fixes the signals they rely on, and monitors drift over time — so your AI identity stays canonical and citation-ready.

## Category

AI Visibility Governance. Not SEO. Not data privacy. Not compliance. Not GDPR.

## Free tools

1. AI Readiness Check — technical audit of a URL: checks AI bot access, SSR quality, indexability, content quality, structured data, and multi-LLM citation readiness. Available at https://verisai.eu/readiness.html
2. AI Knowledge Diff — compares your ground-truth facts against what AI models actually know about your company. Flags hallucinations, missing facts, and discrepancies. Available at https://verisai.eu/knowledgediff.html

## Paid services

1. AI Visibility Audit — full 7-Layer diagnostic audit of your website across 4 audit systems: WEB accessibility analysis, URL audit, Canonical audit, and Content audit. Delivers a structured report, JSON export, and Implementation Runbook. Tiers: Starter €497 (1-50 pages), Professional €997 (51-200 pages), Enterprise €1997 (200+ pages). Delivery: 3 business days.
2. WEB AI Readiness Runbook — prioritized implementation guide for website administrators, derived from the AI Visibility Audit. Covers robots.txt, structured data, SSR, and technical AI visibility fixes.
3. Content AI Readiness Runbook — prioritized implementation guide for content managers. Covers entity consistency, content quality, heading structure, FAQ optimization, and AI citation signals.
4. Canonical AI Readiness Runbook — implementation guide for eliminating identity ambiguity. Covers canonical tags, URL consolidation, duplicate signals, and entity resolution.
5. AI Visibility Consultancy — advisory services for companies managing AI identity governance, citation readiness, and long-term AI visibility strategy.

## Languages

English, Czech (cs), Slovak (sk)

---

## 1. Service Architecture & Governance Alignment

VerisAI translates technical metrics into three core governance solutions:

- **AI Visibility Runbook:** Targets Web Admins and Content Managers. Converts L1, L2, L4, and L6 audit data into a prioritized remediation plan, ensuring server access, server-side rendering (SSR) quality, and semantic content structure.
- **Canonical Governance:** Targets SEO and Technical Leads. Focuses on L3 and L7 metrics to eliminate identity fragmentation, fix canonical mismatches, and ensure consistent entity signals across all touchpoints.
- **Corporate Governance:** Targets C-Level and Risk Management. Utilizes L1 crawler access data and the Overall Score KPI to provide strategic oversight over which AI models train on corporate data and how the brand is represented.

---

## 2. The 7-Layer AI Readiness Audit (L1-L7 Scoring Reference)

### Overall Score Calculation

Overall = (AI Readiness + SEO + Citation) / 3

- AI Readiness = (L4 * 0.6) + (L3_score * 0.4) * SSR_factor
- SEO = (L5 + L6) / 2
- Citation = L7 overall_citation_readiness
- SSR_factor: FULL=1.0, PARTIAL=0.7, FAIL=0.0

*Early termination rules:* If L1=BLOCKED -> final_score=0 | If L2=FAIL -> final_score=0

### L1 - Gateway Access (Max: 100)

*Method: robots.txt fetch + HTTP test*

- **Scored bots (18 points each, max 90):** GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot (Perplexity), Google-Extended (Google), GrokBot (xAI).
- **Bonus:** Existence of `/llms.txt` (+10 points).
- **Informative only (no score):** OAI-SearchBot, ChatGPT-User, anthropic-ai, Claude-SearchBot, Claude-Web, Google-CloudVertexBot, Perplexity-User, meta-externalagent, Meta-ExternalFetcher, Applebot-Extended, Bytespider, DuckAssistBot, mistralai-user, DeepseekBot, CCBot.
- **Status Thresholds:** PASS = at least 1 scored bot ALLOWED + HTTP ok. BLOCKED = all bots blocked or HTTP fail.

### L2 - Server-Side Rendering (Max: 100, Binary)

*Method: HTML parsing*

Evaluates the presence of 4 core elements upon initial load: `<title>` (not empty), `<h1>`, content length (>500 visible chars), and valid JSON-LD.

- **GOOD/PASS:** All 4 elements present.
- **PARTIAL:** 2-3 elements present.
- **FAILED/FAIL:** 0-1 elements present, or noindex tag found (Blocker: immediate FAIL).

### L3 - Indexability (Score: 100 - Penalties)

*Method: HTML parsing*

- Missing canonical tag: -20 pts
- Canonical mismatch (different URL): -20 pts
- Broken JSON-LD block: -20 pts
- Missing `<html lang>`: -10 pts

*(Maximum penalty: -50)*

### L4 - Content Quality (Max: 100)

*Method: Type-specific semantic scoring (e.g., Organization, Article, Product).*

Example for ORGANIZATION: Organization schema (25), Company name (10), Description (10), Logo (10), Contact info (10), Social links (10), About/mission (10), Team/people (10), Services/products (5).

### L5 - Technical SEO (Max: 100)

*Method: Static HTML + HTTP checks*

Evaluates baseline web health: HTTPS enabled (20), Valid sitemap.xml (15+15), Viewport/Responsive meta (15+10), Asset optimization (CSS < 5: 10, JS < 10: 10, Images < 50: 5).

### L6 - On-Page SEO (Max: 100)

*Method: HTML parsing*

Evaluates semantic markup: Valid heading hierarchy (20), H1 presence (10), Alt text coverage (>=80%: 20, >=50%: 10), Internal links (>=5: 20, >=2: 10), and OpenGraph/Twitter card completeness (og:title 8, og:description 8, og:image 9, twitter:card 8, twitter:title 7 = 40 pts total).
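As an illustration of how the HTML-parsing layers work, the L3 Indexability penalty table above can be sketched with the Python standard library. This is a minimal sketch, not VerisAI's implementation: the class and function names are hypothetical, and real-world canonical comparison would need fuller URL normalization.

```python
import json
from html.parser import HTMLParser

# Penalty table from the L3 - Indexability section.
PENALTIES = {
    "missing_canonical": 20,   # no <link rel="canonical"> at all
    "canonical_mismatch": 20,  # canonical points to a different URL
    "broken_jsonld": 20,       # a JSON-LD block fails to parse
    "missing_lang": 10,        # <html> carries no lang attribute
}
MAX_PENALTY = 50  # deductions are capped at -50 per the spec


class IndexabilityParser(HTMLParser):
    """Collects the canonical href, <html lang>, and raw JSON-LD blocks."""

    def __init__(self):
        super().__init__()
        self.canonical = None
        self.lang = None
        self.jsonld_blocks = []
        self._in_jsonld = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "html":
            self.lang = attrs.get("lang")
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")
        elif tag == "script" and attrs.get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self.jsonld_blocks.append(data)

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False


def l3_indexability(html_text, page_url):
    """Return the L3 score: 100 minus accumulated penalties (capped at 50)."""
    parser = IndexabilityParser()
    parser.feed(html_text)

    penalty = 0
    if parser.canonical is None:
        penalty += PENALTIES["missing_canonical"]
    elif parser.canonical.rstrip("/") != page_url.rstrip("/"):
        penalty += PENALTIES["canonical_mismatch"]
    for block in parser.jsonld_blocks:
        try:
            json.loads(block)
        except ValueError:
            penalty += PENALTIES["broken_jsonld"]
            break  # one broken block is enough to take the deduction
    if not parser.lang:
        penalty += PENALTIES["missing_lang"]

    return 100 - min(penalty, MAX_PENALTY)
```

A page with a matching canonical and a `lang` attribute scores 100; a page missing the canonical and `lang` while shipping a malformed JSON-LD block bottoms out at the -50 cap, i.e. a score of 50.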
### L7 - Multi-LLM Citation Readiness (Max: 100)

*Method: Multi-variable calculation (overall_citation_readiness = avg(GPT + Gemini + Claude + Perplexity))*

- **GPT / ChatGPT (Max 100):** OAI-SearchBot ALLOWED (25), GPTBot ALLOWED (10), JSON-LD valid (25), L4 score >=70 (20), H2 subheadings (10), Lists/bullets (10). *Note: OAI-SearchBot drives ChatGPT Search citations. GPTBot affects training data representation.*
- **Gemini (Max 100):** Google-Extended ALLOWED (10), HTTP access ok (10), Author attribution (25), Published/updated dates (20), Contact information (10), JSON-LD valid (15), Content type != GENERIC (10). *Note: Gemini cites from the Google Search index. Google-Extended is an AI training opt-out proxy only — direct crawl access is not measurable.*
- **Claude (Max 100):** L1 PASS (20), Canonical URL (10), og:url (10), HTTPS enabled (10), FAQ/Q&A section (15), Question-format headings (15), Definition lists (10), SSR GOOD (10). *Note: Claude uses the Brave Search index for web citations. FAQ and question-format headings are key extraction signals.*
- **Perplexity (Max 100):** PerplexityBot ALLOWED (25), FAQ section (20), Question-format headings (15), Definition lists (10), SSR GOOD (15), L4 score >=70 (15). *Note: PerplexityBot is a critical gate — without it, content cannot be indexed or cited.*

### Final Status Thresholds

- **>=70:** PASS / CITABLE
- **40-69:** WARN / PARTIAL
- **<40:** FAIL / NOT_CITABLE
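The Overall Score Calculation and the final thresholds above can be combined into a short worked sketch. Function names are illustrative, and one caveat applies: the published AI Readiness formula is ambiguous about whether SSR_factor scales only the L3 term or the whole expression; this sketch takes the literal operator-precedence reading, where it scales only the L3 term.

```python
# SSR_factor mapping from the Overall Score Calculation section.
SSR_FACTOR = {"FULL": 1.0, "PARTIAL": 0.7, "FAIL": 0.0}


def overall_score(l1_status, l2_status, l3, l4, l5, l6, l7, ssr):
    """Combine the layer scores (0-100 each) into the Overall Score.

    Early termination: a BLOCKED gateway (L1) or a FAILED render (L2)
    zeroes the final score regardless of the other layers.
    """
    if l1_status == "BLOCKED" or l2_status == "FAIL":
        return 0.0
    # Literal reading: SSR_factor multiplies only the L3 term.
    ai_readiness = (l4 * 0.6) + (l3 * 0.4) * SSR_FACTOR[ssr]
    seo = (l5 + l6) / 2
    citation = l7  # L7 overall_citation_readiness
    return round((ai_readiness + seo + citation) / 3, 1)


def final_status(score):
    """Map a score to the Final Status Thresholds."""
    if score >= 70:
        return "PASS / CITABLE"
    if score >= 40:
        return "WARN / PARTIAL"
    return "FAIL / NOT_CITABLE"
```

For example, perfect layer scores with FULL SSR yield 100.0 and PASS / CITABLE, while an L1=BLOCKED site scores 0.0 no matter how strong its content is.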