BotVisibility CLI — agent readiness scanner on npm


Scan your site and codebase for AI agent readiness. Like Lighthouse, but for AI agents.

$ npx botvisibility stripe.com
npmjs.com/package/botvisibility

Quick start

Zero install required. Run it against any public URL to get an instant agent-readiness report.

# Basic URL scan
npx botvisibility https://example.com

# JSON output for CI/CD
npx botvisibility stripe.com --json

# Full scan with repo analysis
npx botvisibility mysite.com --repo ./

# Global install
npm install -g botvisibility
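
The --json report is plain JSON on stdout, so it pipes straight into jq. The only field relied on here is currentLevel, which the CI example later on this page also uses; the printed value is illustrative.

# Read the achieved level from the JSON report
npx botvisibility stripe.com --json | jq '.currentLevel'
# -> 2  (illustrative output)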

What it checks

BotVisibility runs 43 checks across four progressive levels of AI agent readiness.

L1 Discoverable — 18 checks

Can AI agents find your site and understand what it offers? Checks for llms.txt, agent cards, OpenAPI / Swagger specs, MCP server manifests, AI-specific meta tags, Link headers (RFC 8288), Content-Signals (contentsignals.org), an RFC 9727 api-catalog, Markdown content negotiation, and WebMCP. This is the foundation — if agents cannot discover you, nothing else matters.
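
For instance, one of the simplest discovery signals is an llms.txt file at the site root. A minimal sketch, assuming a static public/ directory; the filename and layout follow the llms.txt convention, and the content is invented:

# Publish a minimal llms.txt at the site root (invented example content)
cat > public/llms.txt <<'EOF'
# Example Co
> Hosted payments API for developers and AI agents.

## Docs
- [API reference](https://example.com/docs/api.md): REST endpoints and schemas
- [OpenAPI spec](https://example.com/openapi.json): machine-readable API contract
EOF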

L2 Usable — 11 checks

Once discovered, can agents actually work with your API? Validates that operations are well-defined, authentication is documented (including OpenID Connect and RFC 9728 OAuth Protected Resource metadata), error responses are structured, idempotency keys are supported, and optionally whether protected endpoints speak the x402 payment protocol. Agents need predictable interfaces.
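
As an illustration of the patterns this level rewards, here is a retry-safe write using an Idempotency-Key header, followed by a structured error body. The endpoint, field names, and error shape are invented; only the general conventions matter:

# Same Idempotency-Key on retry -> same result, no duplicate side effects (invented endpoint)
curl -X POST https://api.example.com/v1/orders \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Idempotency-Key: 4f9d2c1a-7b3e-4e8a-9c6d-2f1e0a5b8c7d" \
  -H "Content-Type: application/json" \
  -d '{"sku": "widget-1", "quantity": 2}'

# A structured error an agent can act on (invented shape)
# {"error": {"code": "invalid_request", "message": "quantity must be >= 1", "param": "quantity"}}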

L3 Optimized — 7 checks

Is your API efficient for agent consumption? Looks for sparse field selection, cursor-based pagination, filtering and sorting parameters, bulk operations, cache headers, and MCP tool quality. These reduce the token cost of every interaction.
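
To make that concrete, a single request can combine several of these patterns. The parameter names below are common conventions, not required spellings:

# Sparse fields, filtering, sorting, and cursor pagination in one call (invented API)
curl "https://api.example.com/v1/orders?fields=id,status,total&status=open&sort=-created&limit=50&cursor=eyJpZCI6IjEwMjQifQ"

# Cache headers let agents skip payloads that have not changed
curl -I "https://api.example.com/v1/orders" -H 'If-None-Match: "33a64df5"'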

L4 Agent-Native — 7 checks (--repo required)

Is your system built for AI agents? Checks for intent endpoints, agent session management, scoped tokens, and audit logging. Level 4 requires the --repo flag because these patterns live in source code, not public HTTP responses.
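
For a sense of what such a pattern can look like when it is exposed at all, here is a hypothetical intent endpoint: the agent states a goal with a scoped token instead of orchestrating low-level calls itself. Everything in this sketch is invented for illustration:

# Hypothetical intent endpoint: the agent states a goal, the service plans the steps
curl -X POST https://api.example.com/agent/intents \
  -H "Authorization: Bearer $SCOPED_AGENT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"intent": "refund_order", "order_id": "ord_123", "reason": "item damaged"}'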


The --repo flag

By default, BotVisibility scans public HTTP endpoints — fetching well-known URLs, parsing headers, and validating API responses. The --repo flag goes deeper by scanning your actual source code using pattern matching.

  • Works across any language — JavaScript, TypeScript, Python, Go, Java, Ruby, PHP, and more.
  • Unlocks Level 3 code checks + all Level 4 checks — patterns that web scanning simply cannot detect.
  • Finds implementations that are present in code but not yet exposed in public HTTP responses (e.g., pagination logic, bulk endpoints, agent session handling).
# Point --repo at your project root
npx botvisibility mysite.com --repo ./

# Or a specific directory
npx botvisibility mysite.com --repo ./src

Scoring

BotVisibility uses a weighted cross-level algorithm — not strict sequential gates. Strong performance at higher levels can compensate for gaps at lower ones.

  • Level 1: 50%+ of L1 checks pass
  • Level 2: 50%+ of L1 AND L2 checks pass — or 35% L1 with 75% L2
  • Level 3: L2 achieved AND 50%+ L3 checks pass — or 35% L2 with 75% L3

This means a site with strong API design (L2/L3) but missing a few discovery files (L1) can still achieve a high score: for example, passing 7 of 18 L1 checks (about 39%) and 9 of 11 L2 checks (about 82%) still reaches Level 2 via the 35%/75% fallback. The algorithm rewards depth, not just breadth.
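
A minimal sketch of those gates in shell, assuming pass rates are plain percentages; the real algorithm is weighted, so treat this as the shape of the logic, not the implementation:

# Assumed semantics of the level gates above (not the actual scoring code)
level() {
  l1=$1; l2=$2; l3=$3   # pass rates as integer percentages
  lvl=0
  [ "$l1" -ge 50 ] && lvl=1
  if { [ "$l1" -ge 50 ] && [ "$l2" -ge 50 ]; } || { [ "$l1" -ge 35 ] && [ "$l2" -ge 75 ]; }; then
    lvl=2
  fi
  if { [ "$lvl" -ge 2 ] && [ "$l3" -ge 50 ]; } || { [ "$l2" -ge 35 ] && [ "$l3" -ge 75 ]; }; then
    lvl=3
  fi
  echo "$lvl"
}

level 39 82 0   # -> 2: weak discovery (39% L1) offset by strong API design (82% L2)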


CI/CD integration

Use --json output to integrate BotVisibility into your deployment pipeline. Fail builds when agent readiness drops below your threshold.

# .github/workflows/botvisibility.yml
name: BotVisibility Check
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check BotVisibility
        run: |
          LEVEL=$(npx botvisibility mysite.com --json | jq '.currentLevel')
          if [ "$LEVEL" -lt 1 ]; then
            echo "Agent readiness below Level 1"
            exit 1
          fi
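
To enforce a stricter gate, raise the threshold (for example, -lt 2 to require Level 2), or save the full --json report as a build artifact to track readiness over time.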

Resources