Quick start
Zero install required. Run it against any public URL to get an instant agent-readiness report.
npx botvisibility https://example.com
npx botvisibility stripe.com --json
npx botvisibility mysite.com --repo ./
npm install -g botvisibility
What it checks
BotVisibility runs 37 checks across four progressive levels of AI agent readiness.
L1 Discoverable — 14 checks
Can AI agents find your site and understand what it offers? Checks for llms.txt, agent cards, OpenAPI / Swagger specs, MCP server manifests, and AI-specific meta tags. This is the foundation — if agents cannot discover you, nothing else matters.
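For context, llms.txt is a plain-markdown file served at the site root that summarizes what your site offers and where its machine-readable resources live. A minimal sketch (the site name and links below are invented for illustration):

```markdown
# Example Corp

> Payments API for developers and AI agents.

## Docs

- [API reference](https://example.com/docs/api): REST endpoints and auth
- [OpenAPI spec](https://example.com/openapi.json): machine-readable schema
```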
L2 Usable — 9 checks
Once discovered, can agents actually work with your API? Validates that operations are well-defined, authentication is documented, error responses are structured, and idempotency keys are supported. Agents need predictable interfaces.
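Idempotency keys matter because agents retry aggressively after timeouts. A minimal server-side sketch, assuming an idempotency key is extracted from a request header (the store and function names here are illustrative, not botvisibility's spec):

```javascript
// Minimal in-memory idempotency store (illustrative only).
// A real service would use a shared store with TTLs.
const seen = new Map();

function handleWithIdempotency(key, operation) {
  // Replay the cached response for a repeated key instead of
  // re-running the side-effecting operation.
  if (seen.has(key)) return seen.get(key);
  const result = operation();
  seen.set(key, result);
  return result;
}

// Simulate an agent retrying the same charge after a timeout.
let charges = 0;
const charge = () => ({ id: ++charges, status: "created" });

const first = handleWithIdempotency("agent-req-42", charge);
const retry = handleWithIdempotency("agent-req-42", charge);
// Both calls return the same response; the charge ran once.
console.log(first.id === retry.id, charges); // true 1
```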
L3 Optimized — 7 checks
Is your API efficient for agent consumption? Looks for sparse field selection, cursor-based pagination, filtering and sorting parameters, bulk operations, cache headers, and MCP tool quality. These reduce the token cost of every interaction.
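Cursor-based pagination is one of the cheapest wins on this list: the cursor pins the page boundary to the last-seen record, so pages stay stable even as new items arrive. A rough sketch in Node (the cursor encoding and endpoint shape are assumptions for illustration):

```javascript
// Illustrative cursor-based pagination over an ordered collection.
// The cursor encodes the last-seen id as base64url.
const items = Array.from({ length: 10 }, (_, i) => ({ id: i + 1 }));

function encodeCursor(id) {
  return Buffer.from(String(id)).toString("base64url");
}
function decodeCursor(cursor) {
  return Number(Buffer.from(cursor, "base64url").toString());
}

function listItems({ cursor, limit = 3 } = {}) {
  const after = cursor ? decodeCursor(cursor) : 0;
  const page = items.filter((it) => it.id > after).slice(0, limit);
  return {
    data: page,
    // Only hand back a cursor when a full page suggests more data.
    nextCursor: page.length === limit ? encodeCursor(page[page.length - 1].id) : null,
  };
}

const page1 = listItems();
const page2 = listItems({ cursor: page1.nextCursor });
console.log(page1.data.map((i) => i.id)); // [ 1, 2, 3 ]
console.log(page2.data.map((i) => i.id)); // [ 4, 5, 6 ]
```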
L4 Agent-Native — 7 checks (--repo required)
Is your system built for AI agents? Checks for intent endpoints, agent session management, scoped tokens, and audit logging. Level 4 requires the --repo flag because these patterns live in source code, not public HTTP responses.
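To make the scoped-token and audit-logging ideas concrete, here is a toy authorization check; the scope names, token shape, and log format are invented for this sketch and do not reflect botvisibility's checks:

```javascript
// Illustrative scoped-token check with an audit trail.
// Every authorization decision, allowed or not, is recorded.
const auditLog = [];

function authorize(token, requiredScope) {
  const allowed = token.scopes.includes(requiredScope);
  auditLog.push({
    agent: token.agentId,
    scope: requiredScope,
    allowed,
    at: new Date().toISOString(),
  });
  return allowed;
}

// An agent token scoped to read-only order access.
const agentToken = { agentId: "agent-7", scopes: ["orders:read"] };

console.log(authorize(agentToken, "orders:read"));  // true
console.log(authorize(agentToken, "orders:write")); // false
console.log(auditLog.length); // 2
```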
The --repo flag
By default, BotVisibility scans public HTTP endpoints — fetching well-known URLs, parsing headers, and validating API responses. The --repo flag goes deeper by scanning your actual source code using pattern matching.
- Works with any language: JavaScript, TypeScript, Python, Go, Java, Ruby, PHP, and more.

- Unlocks Level 3 code checks + all Level 4 checks — patterns that web scanning simply cannot detect.
- Finds implementations that are present in code but not yet exposed in public HTTP responses (e.g., pagination logic, bulk endpoints, agent session handling).
# Point --repo at your project root
npx botvisibility mysite.com --repo ./

# Or a specific directory
npx botvisibility mysite.com --repo ./src
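As a rough illustration of what a source-code scan can catch that HTTP probing cannot, here is a toy grep-style matcher; the patterns below are invented for this sketch and are not botvisibility's actual rules:

```javascript
// Toy source scanner: regex patterns hinting at features that a
// public HTTP scan might never see. Patterns are illustrative only.
const patterns = {
  cursorPagination: /next_?cursor|cursor\s*[:=]/i,
  bulkOperations: /bulk(Create|Update|Delete)|\/batch\b/i,
  idempotencyKeys: /idempotency[-_]?key/i,
};

function scanSource(code) {
  const found = [];
  for (const [check, re] of Object.entries(patterns)) {
    if (re.test(code)) found.push(check);
  }
  return found;
}

// A code snippet with a batch route and idempotency header handling.
const sample = `
  router.post('/orders/batch', handler);
  const key = req.headers['idempotency-key'];
`;
console.log(scanSource(sample)); // [ 'bulkOperations', 'idempotencyKeys' ]
```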
Scoring
BotVisibility uses a weighted cross-level algorithm — not strict sequential gates. Strong performance at higher levels can compensate for gaps at lower ones.
- Level 1: 50%+ of L1 checks pass
- Level 2: 50%+ of L1 AND L2 checks pass — or 35% L1 with 75% L2
- Level 3: L2 achieved AND 50%+ L3 checks pass — or 35% L2 with 75% L3
This means a site with strong API design (L2/L3) but missing a few discovery files (L1) can still achieve a high score. The algorithm rewards depth, not just breadth.
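The thresholds above can be sketched as a small function. This encodes only the rules as stated (Levels 1 through 3, with pass rates as fractions); botvisibility's internal weighting may differ:

```javascript
// Highest level achieved from per-level pass rates (0 to 1),
// following the compensation rules stated above.
function achievedLevel({ l1, l2, l3 }) {
  const level1 = l1 >= 0.5;
  const level2 = (level1 && l2 >= 0.5) || (l1 >= 0.35 && l2 >= 0.75);
  const level3 = (level2 && l3 >= 0.5) || (l2 >= 0.35 && l3 >= 0.75);
  if (level3) return 3;
  if (level2) return 2;
  if (level1) return 1;
  return 0;
}

// Weak discovery (40% of L1) but strong API design (80% of L2)
// still reaches Level 2 via the 35%/75% compensation rule.
console.log(achievedLevel({ l1: 0.4, l2: 0.8, l3: 0.2 })); // 2
console.log(achievedLevel({ l1: 0.6, l2: 0.6, l3: 0.6 })); // 3
```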
CI/CD integration
Use --json output to integrate BotVisibility into your deployment pipeline. Fail builds when agent readiness drops below your threshold.
# .github/workflows/botvisibility.yml
name: BotVisibility Check
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check BotVisibility
        run: |
          LEVEL=$(npx botvisibility mysite.com --json | jq '.currentLevel')
          if [ "$LEVEL" -lt 1 ]; then
            echo "BotVisibility level below Level 1"
            exit 1
          fi
Resources
- npm package — install, version history, and docs
- GitHub repository — source code and issue tracker
- Online scanner — scan any URL from the browser
- Badge page — embed an agent-readiness badge on your site
- The Agent Tax whitepaper — why unoptimized sites cost AI agents 5-100x more tokens