Practical Lens
Each lens maps: symptom → likely signal category → what we verify. This is not a remediation guide.
Lens 01: Signal inconsistency
If two AI tools describe your company differently, treat it first as a signal inconsistency problem, not as "random AI."
Lens 02: Canonical as identity control
If your homepage and language variants disagree on what is "primary," AI identity resolution can drift because the machine lacks a stable authority anchor.
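One way to verify the authority anchor is to parse every `rel="canonical"` declaration a page emits. The sketch below uses only the standard library; the `example.com` URL in the usage note is a placeholder, and a result of zero or multiple canonicals is the symptom this lens describes.

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Collects the href of every <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if (a.get("rel") or "").lower() == "canonical" and a.get("href"):
                self.canonicals.append(a["href"])

def check_canonical(html: str) -> list:
    """Return all canonical URLs declared in the page.
    Zero entries, or more than one, means no stable authority anchor."""
    parser = CanonicalParser()
    parser.feed(html)
    return parser.canonicals
```

A clean page yields exactly one entry, e.g. `check_canonical('<link rel="canonical" href="https://example.com/">')` returns `["https://example.com/"]`; comparing that list across your language variants shows whether they agree on what is "primary."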
Lens 03: Structured data as identity contract
If Organization schema is missing or fragmented across pages, machines typically rely more on third-party references and heuristics, which reduces identity certainty.
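Verifying the identity contract reduces to checking whether any JSON-LD block on the page declares an `Organization`. A minimal standard-library sketch, assuming the schema is embedded as `<script type="application/ld+json">` (the usual pattern):

```python
import json
from html.parser import HTMLParser

class JsonLdParser(HTMLParser):
    """Collects the text of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(data)

def has_organization_schema(html: str) -> bool:
    """True if any JSON-LD block on the page declares @type Organization."""
    parser = JsonLdParser()
    parser.feed(html)
    for block in parser.blocks:
        try:
            data = json.loads(block)
        except ValueError:
            continue  # malformed JSON-LD is itself a finding
        items = data if isinstance(data, list) else [data]
        if any(isinstance(i, dict) and i.get("@type") == "Organization"
               for i in items):
            return True
    return False
```

Running this across your primary surfaces shows whether the contract is present everywhere or fragmented onto a single page.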
Lens 04: Crawl access is an identity prerequisite
If one AI tool "knows" your services and another does not, assume uneven access to your core pages—not different "intelligence."
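Uneven access is checkable directly from robots.txt. The sketch below uses the standard-library robots parser; the crawler names and paths in the usage note are illustrative placeholders, not a definitive list of AI user-agents.

```python
from urllib.robotparser import RobotFileParser

def crawler_access(robots_txt: str, agents: list, path: str) -> dict:
    """Map each crawler user-agent to whether robots.txt
    permits it to fetch `path`."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {agent: rp.can_fetch(agent, path) for agent in agents}
```

For example, a robots.txt that disallows `/services/` for `GPTBot` but not for other agents produces `{"GPTBot": False, "PerplexityBot": True}` for a services URL: two tools with genuinely different views of your core pages.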
Lens 05: Third-party references become your identity
If AI keeps citing old names, old offerings, or the wrong category, assume the machine is resolving you through stale third-party anchors.
Lens 06: Consistency beats persuasion
If you need to "explain" your company differently in different places, expect AI to mirror that inconsistency.
Lens 07: One entity, one "official" surface
If AI sometimes treats you like two different companies, assume your entity anchors are competing—not that the model is "confused."
Lens 08: Language variants can create identity forks
If the EN and local-language AI answers differ materially, assume your language variants describe different "truths" to machines.
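A common source of this fork is non-reciprocal hreflang annotations. As a verification sketch, assume you have already collected each variant's declared hreflang map ({language: URL}); the function then flags variants that do not link back, with `example.com` URLs as placeholders:

```python
def hreflang_reciprocal(alternates: dict) -> list:
    """`alternates` maps each audited page URL to its declared
    hreflang map ({lang: url}). Returns human-readable problems;
    a variant that does not link back is a fork machines may
    resolve inconsistently."""
    problems = []
    for page, langs in alternates.items():
        for lang, target in langs.items():
            back = alternates.get(target)
            if back is None:
                problems.append(f"{page} -> {target}: target not audited")
            elif page not in back.values():
                problems.append(f"{page} -> {target}: no return hreflang link")
    return problems
```

An empty result means every variant acknowledges every other; any entry identifies the pair of pages telling machines different "truths."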
Lens 09: Trust is a signal, not a statement
If AI adds hedging (“appears to,” “likely,” “may”), treat it as reduced signal certainty—your identity is not fully resolvable from what it can verify.
Lens 10: AI reads what it can repeat
If AI misses something you consider “obvious,” assume it is not repeated and anchored across your primary surfaces.
Lens 11: Your homepage is a machine identity primer
If AI misclassifies your company, treat your homepage as the first suspect—machines often anchor there.
Lens 12: Navigation is a crawl signal
If key identity pages aren’t clearly discoverable via internal links, machines may never treat them as core evidence—even if the pages exist.
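Discoverability is verifiable as simple graph reachability: crawl the internal link graph from the homepage and list pages nothing leads to. A minimal sketch, assuming the link graph has already been extracted into a dict (the paths in the usage note are hypothetical):

```python
from collections import deque

def undiscoverable(link_graph: dict, start: str) -> set:
    """Pages in the graph that a crawler following internal links
    from `start` (usually the homepage) can never reach."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in link_graph.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return set(link_graph) - seen
```

A page that exists and even links outward, but appears in no other page's link list, lands in the returned set: present on the site, invisible as core evidence.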
Lens 13: Soft-404 is a trust debt
When a page looks missing to an AI crawler but still returns an HTTP 200 (“OK”) status, it damages reliability signals and reduces confidence in your surfaces.
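The verification step is a status-versus-content comparison: a success status paired with not-found language in the body. A heuristic sketch; the marker phrases are illustrative and should be tuned to your own templates:

```python
# Illustrative phrases; extend with your site's own error-template wording.
NOT_FOUND_MARKERS = ("page not found", "404", "does not exist",
                     "no longer available")

def is_soft_404(status: int, body: str) -> bool:
    """A soft-404 answers with a success status while the content
    says the page is missing, so crawlers record a 'healthy' URL
    that carries no evidence."""
    if status != 200:
        return False  # a real 404/410 is an honest signal, not debt
    text = body.lower()
    return any(marker in text for marker in NOT_FOUND_MARKERS)
```

For example, `is_soft_404(200, "<h1>Page Not Found</h1>")` is the trust-debt case, while the same body with a genuine 404 status is not.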
Core AI Governance Terminology
- Practical Lens: A diagnostic framework mapping observable AI output symptoms directly to verifiable web signal categories.
- Identity Anchor: The primary authoritative surface (such as a canonical homepage or a JSON-LD contract) a machine uses to resolve entity reality.
- Signal Drift: The deterioration of AI confidence caused by fragmented, contradictory, or outdated third-party references overriding your primary signals.
Frequently Asked Questions
How do I use the Practical Lens library?
Use the appropriate lens to classify an observed AI symptom into a specific signal category, then use our case studies to review evidence snapshots and verification methods.
Why does AI misclassify my company's identity?
AI misclassification typically occurs when your homepage lacks a clear machine identity primer, or when external third-party anchors override your weak canonical signals.