Welcome to the Party, Cloudflare

Commentary

We scanned isitagentready.com with our own audit. It passed 8 of 22 checks. Here’s the honest gap — and the move that defines what comes next.

A BotVisibility Post — April 2026

Cloudflare just shipped isitagentready.com — a public scanner that grades sites across five categories of emerging agent standards. It’s a clean v1, it ships behind Cloudflare’s distribution, and it’s the clearest sign yet that “agent-ready” is a real market category with institutional gravity behind it. We are unambiguously better off with Cloudflare in the room.

So naturally, the first thing we did was run isitagentready.com through BotVisibility’s own 43-point audit.

8 pass. 2 warn. 12 fail. 14 N/A. Level 0, Getting Started.

That’s not a dunk. It’s a map of the exact gap the category hasn’t solved — and the one Cloudflare is uniquely positioned to close.


What isitagentready got right

Start with the 8 passes, because they’re a real floor and more than most sites publish today:

  • llms.txt published with valid content
  • robots.txt AI policy — crawlers allowed
  • Skill file published
  • Link headers advertising AI discovery
  • Content Signals declaring ai-train, search, and ai-input preferences
  • Markdown content negotiation — the homepage returns text/markdown when asked
  • API catalog discovered (with a Content-Type nit)
  • MCP server discoverable at a well-known path

Eight of the eighteen Level-1 discoverability checks, shipped at launch. That beats most Fortune 500 homepages today. Credit where it’s due.
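The markdown content-negotiation pass is the easiest of the eight to picture in code. A minimal server-side sketch, assuming a framework-agnostic handler and ignoring Accept q-values:

```python
def negotiate(accept_header: str) -> str:
    """Pick the homepage content type from an Accept header.

    Agents sending 'Accept: text/markdown' get the lightweight
    markdown rendering; everything else falls back to HTML.
    Note: q-values are ignored in this sketch.
    """
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    return "text/markdown" if "text/markdown" in accepted else "text/html"
```

A browser's usual `Accept: text/html,application/xhtml+xml,...` falls through to HTML; an agent asking for markdown gets the cheap representation.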


What was missing on a site that grades other people

The fails and warns, ranked by the size of the gap between “standard exists” and “standard shipped here”:

  • No agent-card.json
  • No OpenAPI spec, which is why 14 checks auto-N/A (the entire Usable layer and half the Optimized layer)
  • No JSON-LD with potentialAction
  • No CORS headers
  • No AI meta tags
  • No ai.json
  • No skills index
  • No WebMCP on the homepage
  • No RSS/Atom feed
  • No structured JSON errors from the API
  • No rate-limit headers
  • MCP server is discoverable — but our quality check reports “MCP server has no tools or resources”

That last one is the revealing one. Protocol Discovery is one of the five categories on isitagentready’s own rubric. The scanner publishes an MCP endpoint. Our audit says that endpoint doesn’t actually expose anything an agent can call. A crawler checking “is there an MCP server?” would answer yes. A deeper audit asking “does it work?” answers no.
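The gap between those two answers is mechanical. MCP speaks JSON-RPC, and a `tools/list` response carries its tools under `result.tools`; a minimal sketch of the deeper check (field names follow the MCP spec, but treat the exact response shape as an assumption):

```python
import json

def mcp_exposes_tools(raw_response: str) -> bool:
    """True if a JSON-RPC tools/list response lists at least one tool.

    An endpoint that answers but returns an empty array is
    discoverable without being usable -- the case flagged above.
    """
    payload = json.loads(raw_response)
    return len(payload.get("result", {}).get("tools", [])) > 0
```

A shallow scanner stops at "the endpoint responded"; this check is the one that turns a yes into a no.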

That’s the whole thesis of this post in one line.


The token-tax receipts

Our audit runs a token-efficiency check on every page it scans. Here are the numbers side by side.

isitagentready.com: 6,338 raw tokens, 509 useful (8.0% content), 12.5x token ratio. At 100K agent fetches a month, the customer footing the token bill pays roughly $22,817/yr to talk to this site. Ship an OpenAPI spec and put real tools on that MCP endpoint and our projection drops the bill to about $2,200/yr — a 90% cut.

botvisibility.com: 7,935 raw tokens, 48 useful, a 165.3x raw ratio that sounds catastrophic and would be, except the audit credits the mitigations. We publish llms.txt, OpenAPI, agent-card, skill file, and a working MCP server with 5 tools and 4 resources. Agents bypass the HTML entirely. Projected agent bill: $208.80/yr. Same traffic, 109x cheaper to automate against.

That delta is the whole point. Discoverability standards get an agent to your front door. Optimization and Agent-Native standards decide whether they spend 2,000 tokens or 40,000 figuring you out once they’re inside. Nobody’s scanner has measured the bill yet. That’s the vacancy.
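The dollar figures above back out to a per-token price of roughly $3 per million tokens (an inference from the published numbers, not a stated rate). The arithmetic behind the $22,817 headline:

```python
PRICE_PER_TOKEN = 3 / 1_000_000  # ~$3/M tokens, implied by the figures above

def annual_agent_bill(tokens_per_fetch: int, fetches_per_month: int) -> float:
    """Yearly cost of agents re-reading a page at a given rate."""
    return tokens_per_fetch * fetches_per_month * 12 * PRICE_PER_TOKEN

bill = annual_agent_bill(6_338, 100_000)  # isitagentready.com's raw page weight
```

That rounds to the $22,817/yr quoted above. Cut `tokens_per_fetch` to a few hundred via an OpenAPI spec and markdown negotiation, and the same formula lands near the ~$2,200 projection.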


Where a scanner hits its ceiling

A public scanner is a crawler. It can ask three questions:

  • Does the file exist?
  • Does it parse?
  • Is the content-type right?

It can’t authenticate. It can’t make test calls. It can’t poll a job ID. It can’t verify that /send-invoice is a single intent endpoint instead of a chain of four CRUD requests. It can’t tell whether your MCP server actually does anything once an agent connects. That’s not a criticism of Cloudflare’s tool — it’s a hard ceiling for any externally facing scanner.

It’s also exactly the set of checks that determine whether an agent succeeds cheaply with your product or burns 40,000 tokens reading your source code to figure you out.

BotVisibility’s extended audit runs those deeper checks via authenticated CLI: intent endpoints, scoped agent tokens, sandbox environments, audit logs, consequence labels on irreversible actions, native tool schemas, rate-limit and caching quality, pagination and filtering depth. That’s not a competitive feature. It’s the layer where token-spend savings actually live.


How Cloudflare should take it further

Here’s the part that matters. Cloudflare doesn’t need to outbuild anyone on checklists. They need to do the thing only they can do, which is convene the model providers.

Imagine a shared standard — call it Agent Fetch Profile — co-signed by Cloudflare and at least one of Anthropic, OpenAI, or Google. It does three things:

  • Declares how a model should prompt itself on a given domain. “Prefer Accept: text/markdown.” “Hit /v1/intents/* before considering CRUD.” “Batch reads at /bulk.” “Load tool schemas from /.well-known/tool-schemas.json.” “Cache by ETag for this TTL.”
  • Ships inside the agent runtime. Claude and GPT agents fetch the profile the moment they land on a compliant domain and automatically adopt the cheapest declared strategy. No prompt engineering on the developer side.
  • Reports the bill. Agents emit anonymized fetch-cost metrics to a standard endpoint so a site owner can watch the number drop from 18,000 tokens per task to 2,100 the week after shipping their profile.
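On disk, such a profile could be a single well-known JSON file. Everything below is hypothetical — the path, field names, and values are illustrative only, a sketch of the three behaviors just described:

```python
import json

# Hypothetical /.well-known/agent-fetch-profile.json -- nothing here is
# a shipped spec; the field names are illustrative only.
profile = {
    "version": "0.1",
    "accept": "text/markdown",
    "prefer_endpoints": ["/v1/intents/*"],
    "batch_reads": "/bulk",
    "tool_schemas": "/.well-known/tool-schemas.json",
    "cache": {"strategy": "etag", "ttl_seconds": 3600},
    "report_metrics_to": "/agent-metrics",
}

print(json.dumps(profile, indent=2))
```

One fetch of a file like this, and a compliant runtime knows the cheapest declared strategy before it spends a single token on HTML.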

That’s not a scanner feature. That’s a category-defining partnership that only a neutral infrastructure company with Cloudflare’s distribution could pull off. It would retire a huge chunk of the cost-of-doing-business for every serious agent platform on the planet — which is exactly the pitch that gets a model provider to co-sign instead of compete.

Right now the agent web is headed for an optimization mess. Every site invents its own discovery scheme, every agent invents its own fetch heuristic, and the token bill climbs for reasons nobody is tracking. A shared prompt-level standard, authored by the people who run the CDN and the people who run the models, is the only path that doesn’t end there.

Cloudflare’s tool, as it stands, measures readiness. What comes next should measure — and then reduce — the bill.


Where BotVisibility goes from here

A few things we’re shipping into this moment:

  • The extended CLI audit. The 14 items a crawler can’t reach — intent endpoints, scoped tokens, sandbox environments, audit logs, consequence labels, native tool schemas, rate-limit and caching headers, pagination and filtering quality.
  • WordPress plugin. Installs what a scanner only tells you is missing — generates llms.txt, agent-card.json, skills/index.json, and /.well-known/ai.json from your existing content, and keeps them current as you publish.
  • Claude Managed Agent. A resident auditor pointed at your property for continuous scan, remediate, re-check, and agent-traffic logging. A one-shot public scan is an entry point; a resident expert is how the work actually gets done.
  • Token-spend telemetry. The thing we most want to co-design with a model provider: an open spec for sites to emit and agents to report fetch-cost metrics, so the cost side of agent-readiness stops being invisible.
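Making the cost side visible only takes a handful of aggregates. A sketch of how an agent runtime might summarize per-task token counts for that reporting endpoint (the metric shape is hypothetical; no such spec exists yet):

```python
def fetch_cost_summary(per_task_tokens: list[int]) -> dict:
    """Aggregate per-task token counts into an anonymized report.

    This is the number a site owner would watch fall from ~18,000
    to ~2,100 after shipping a fetch profile.
    """
    ordered = sorted(per_task_tokens)
    return {
        "tasks": len(ordered),
        "tokens_p50": ordered[len(ordered) // 2],
        "tokens_total": sum(ordered),
    }
```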

The aim isn’t to out-feature Cloudflare. It’s to make sure the layers a crawler can’t reach don’t get quietly left behind while everyone celebrates the baseline.


The real takeaway

If you’re a site owner, run both. Cloudflare’s scanner gets you the baseline in an afternoon. BotVisibility’s extended audit tells you whether an agent can actually transact with your product without setting a pile of tokens on fire.

If you’re Cloudflare: the scanner is a great v1. Ship an OpenAPI spec, put real tools on that MCP endpoint, and you’ll clear our 22 applicable checks in a week. Then do the thing only you can do — go get Anthropic, OpenAI, or Google on a shared prompt standard that turns agent-readiness from a discoverability question into a token-economics question.

That’s the move none of the rest of us can pull off alone. It’s also the one that actually defines the category.

Welcome to the party. Let’s go build the second half.