AI Optimisation — Docs, API & Content for LLM Agents
In 2025–2026 your future customers will increasingly meet your product through AI agents, not Google. If ChatGPT cannot quote your docs and Perplexity cannot find your API spec — you are losing leads to competitors who optimised first.
⚠️ The Problem
Most SaaS sites are designed for humans browsing JavaScript-heavy SPAs. LLM crawlers such as GPTBot, ClaudeBot, and PerplexityBot generally do not execute JavaScript: they cannot follow your dynamic routes, parse your component-rendered tables, or trust your unstructured marketing copy. The result: when a founder asks Claude "what's the best tool for X?", you are not in the answer. Your competitor, with a proper llms.txt, JSON-LD, and clean per-page Markdown, is.
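The first fix is usually the cheapest: make sure your robots.txt explicitly admits the AI crawlers. A minimal sketch, using the published user-agent tokens for OpenAI, Anthropic, and Perplexity (the domain and sitemap URL are placeholders):

```txt
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else keeps the site default
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Many sites block these bots by accident via a blanket `Disallow: /` inherited from a staging config, so this is worth auditing even if you never wrote an AI policy.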
💡 The Solution
I run a 26-criterion AI-readiness audit across five categories:

- Discovery: llms.txt, robots.txt, sitemap
- Per-page artifacts: .md versions, JSON-LD, canonical URLs
- API spec: OpenAPI, examples, SDK
- Content: curl examples, error docs, glossary
- Hygiene: no-JS access, stable URLs, AI policy

You get a scored report with concrete fixes and an implementation roadmap to 75+/100.
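For the Discovery category, a minimal llms.txt following the llmstxt.org convention looks like the sketch below: an H1 with the product name, a one-line blockquote summary, then sections of Markdown links to the per-page .md artifacts. All names and URLs here are placeholders:

```txt
# ExampleApp

> ExampleApp is a billing API for SaaS teams. Plain-Markdown docs are linked below.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first request
- [API reference](https://example.com/docs/api.md): endpoints, auth, error codes

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The `## Optional` section is part of the convention: crawlers with a tight context budget may skip it, so put anything non-essential there.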
For the implementation tier, I execute: llms.txt + llms-full.txt generation, JSON-LD TechArticle schema across the docs site, .md mirrors for SPA-rendered pages, OpenAPI cleanup, content rewrite under the CITABLE framework (source authority, recency, relevance, citations), and tracking of AI Visibility Score over time.
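As a sketch of the per-page JSON-LD, a schema.org TechArticle block is embedded in each docs page inside a `<script type="application/ld+json">` tag. The headline, dates, organisation name, and URL below are placeholder values:

```json
{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "headline": "Authenticating API requests",
  "description": "How to create, use, and rotate API keys.",
  "datePublished": "2025-01-15",
  "dateModified": "2025-06-02",
  "author": { "@type": "Organization", "name": "ExampleApp" },
  "mainEntityOfPage": "https://example.com/docs/auth"
}
```

Keeping `dateModified` accurate matters: recency is one of the signals AI answer engines weight when choosing which source to cite.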
🎯 The Outcome
Your product becomes citable by AI agents. Concrete metrics that move: AI Visibility Score (typical lift from 18–40% baseline to 65%+), prominence in AI answers for category queries (0% → 50–70%), AI-attributed lead volume (3–5x in 90 days for typical B2B SaaS in our ICP). First-mover window in most niches is closing fast — competitors who move now own the AI answer surface for 12–18 months.