Practical Lens 29: HTTP vs HTTPS can look like two “official” sites

If your website is reachable over both http and https (or redirects behave inconsistently), AI tools may not agree on which version is the official source.

What this lens means

If both HTTP and HTTPS versions remain reachable (or redirects differ by path), AI crawlers can treat them as competing official surfaces. That splits authority and makes citations and summaries less consistent.

Why this happens

  • Different crawlers discover different URL variants via internal links, sitemaps, and external references.
  • If redirects aren’t enforced consistently, both HTTP and HTTPS can remain valid candidates for the official source.
  • Once authority splits, tools can alternate between protocols in citations and cache different versions.

What this usually indicates

  • Protocol alternation: AI cites http in one place and https in another.
  • Duplicate discovery: both variants are crawlable and show similar content.
  • Mixed internal links: navigation links point to both http and https.
  • Canonical mismatch: canonical points to a URL that differs from the resolved protocol.
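The canonical-mismatch indicator above can be checked mechanically. A minimal sketch using Python's standard library, assuming a hypothetical page at `example.com`: it extracts the `<link rel="canonical">` href and compares its protocol, host, and path against the URL the page actually resolved to.

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class CanonicalParser(HTMLParser):
    """Collect href values of <link rel="canonical"> tags."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if a.get("rel", "").lower() == "canonical" and a.get("href"):
                self.canonicals.append(a["href"])

def canonical_mismatch(final_url, html):
    """Return the canonical URL if it disagrees with the resolved URL's
    scheme, host, or path; return None if it matches or is absent."""
    p = CanonicalParser()
    p.feed(html)
    if not p.canonicals:
        return None  # no canonical declared
    canon = p.canonicals[0]
    a, b = urlsplit(canon), urlsplit(final_url)
    if (a.scheme, a.netloc, a.path) != (b.scheme, b.netloc, b.path):
        return canon
    return None

# Example: page resolved over https but declares an http canonical
page = '<html><head><link rel="canonical" href="http://example.com/a"></head></html>'
print(canonical_mismatch("https://example.com/a", page))  # → http://example.com/a
```

A `None` result means the canonical agrees with the resolved protocol; any non-`None` value is exactly the mismatch this indicator describes.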

What to verify (evidence-only)

  • Does every http URL redirect cleanly to the https version in one step?
  • Do canonical tags always point to https (and match the final resolved URL)?
  • Do internal links and sitemap URLs use only https?
  • Do crawler user agents and normal requests resolve to the same final https URL?
  • Do external profiles/directories link to the https version consistently?
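The first check on the list (every http URL redirects cleanly to https in one step) can be scripted. A sketch using Python's `http.client`, which does not auto-follow redirects, so each hop in the chain stays visible; `example.com` is a stand-in, and the code assumes absolute `Location` headers for simplicity.

```python
import http.client
from urllib.parse import urlsplit

def redirect_chain(url, max_hops=5):
    """Follow redirects manually, returning the list of URLs visited."""
    chain = [url]
    for _ in range(max_hops):
        parts = urlsplit(chain[-1])
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parts.netloc, timeout=10)
        conn.request("GET", parts.path or "/")
        resp = conn.getresponse()
        loc = resp.getheader("Location")
        conn.close()
        if resp.status not in (301, 302, 307, 308) or not loc:
            break  # final destination reached
        chain.append(loc)  # assumes an absolute Location header
    return chain

def is_clean_upgrade(chain):
    """True if an http request reached https in exactly one redirect."""
    return (len(chain) == 2
            and chain[0].startswith("http://")
            and chain[1].startswith("https://"))

# Hypothetical usage against a live site:
# print(is_clean_upgrade(redirect_chain("http://example.com/")))
```

A chain longer than two entries (for example http → http://www → https://www) still ends at https but wastes a hop and gives crawlers an extra intermediate URL to discover.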

Frequently Asked Questions

Why does HTTP vs HTTPS matter for AI crawlers?

Two reachable versions look like two competing official sources. That splits authority and reduces consistency.

Is a single redirect enough?

Yes. What matters is that every http request ends up at one final https URL, consistently, without intermediate hops.

What’s the fastest check?

Request the same page over http and https and confirm you always end at https with one canonical URL.
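That fastest check can be run in a few lines. A sketch using `urllib.request`, which follows redirect chains automatically; `resp.geturl()` reports the final URL, and `example.com` is a hypothetical domain.

```python
from urllib.request import urlopen

def final_url(url):
    """urlopen follows redirects; geturl() is where the request ended up."""
    with urlopen(url, timeout=10) as resp:
        return resp.geturl()

def consistent_https(final_http, final_https):
    """Both entry points must land on the same final https URL."""
    return final_http == final_https and final_https.startswith("https://")

# Hypothetical usage against a live site:
# print(consistent_https(final_url("http://example.com/pricing"),
#                        final_url("https://example.com/pricing")))
```

If `consistent_https` returns False for any page, that page has two reachable official surfaces, which is exactly the split-authority condition this lens describes.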