WebMCP: The Web's New API for AI Agents — What Developers Need to Know
Published February 2026 · ~8 min read
AI agents — autonomous programs that browse, click, fill forms, and complete tasks on behalf of users — are no longer science fiction. They are shipping in production today: Google's Gemini, OpenAI's Operator, Anthropic's Computer Use, Microsoft's Copilot Actions. The browser is their operating system.
But there is a problem. Today's agents interact with websites the same way a screen reader does: by guessing. They parse raw HTML, try to identify buttons, and hope that "Submit" means what they think it means. It works. Mostly. Until it doesn't.
On February 10, 2026, Google published a proposal that changes this fundamentally. It is called WebMCP — the Model Context Protocol for the Web — and it ships in Chrome 146 beta.
This guide explains what WebMCP is, why it matters, and what you should do about it today.
The Problem: Agents Are Flying Blind
Consider a simple task: "Book me a table at Nobu for Friday at 7pm."
An AI agent today would:
- Navigate to the restaurant's website
- Find the reservation form (by scanning the DOM)
- Guess which input field is for the date, which is for the time, which is for the party size
- Fill them in and click submit
Every step involves interpretation. The agent does not know what each field does — it infers it from labels, placeholders, surrounding text, and positional context. This works for well-built sites. For sites with poor HTML, missing labels, or dynamic rendering, agents fail silently.
This is the equivalent of building a mobile app before REST APIs existed — scraping HTML and hoping the structure doesn't change.
WebMCP gives websites a way to tell agents exactly what they can do.
What WebMCP Is
WebMCP extends the Model Context Protocol (MCP) — originally designed by Anthropic for connecting AI models to tools and data sources — to the browser environment.
In practical terms, WebMCP lets a website declare: "Here are the actions a user can take on this page, here are the inputs each action requires, and here is how to invoke them."
Instead of an agent reverse-engineering a form by reading DOM elements, the website hands the agent a structured tool definition — complete with parameter names, types, descriptions, and validation rules.
Think of it as an API contract between your website and any AI agent that visits it.
Key Properties
- Runs in the browser — No server-side changes required. Everything happens on the client.
- Opt-in — Sites choose what to expose. You can expose your search form without exposing your admin panel.
- Standard schema — Tool definitions use JSON Schema, the same format used by OpenAI function calling, Anthropic tool use, and the original MCP specification.
- Two modes — Declarative (HTML-first) and Imperative (JavaScript-first).
The Two APIs
WebMCP provides two ways to expose tools to agents:
1. Declarative API (HTML-First)
The Declarative API lets you expose existing HTML forms as agent tools by adding structured attributes. No JavaScript required.
This is designed for the common case: you already have a search form, a contact form, a booking form. You just need to tell agents what it does.
Conceptual example:
<form data-agent-tool="search-products"
      data-agent-description="Search the product catalog by keyword">
  <label for="query">Search</label>
  <input id="query"
         name="query"
         type="text"
         data-agent-param-description="Product search keyword"
         required />
  <button type="submit">Search</button>
</form>
The form already works for human users. The data-agent-* attributes make it machine-readable for agents. The agent now knows:
- Tool name: search-products
- Description: "Search the product catalog by keyword"
- Parameters: one required text field called query
- How to invoke it: submit the form
No API endpoint needed. No backend changes. The form IS the API.
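To make that concrete, here is roughly what those attributes express as a tool definition, in the JSON Schema shape mentioned earlier. This is an illustration of the idea, not the exact object the browser derives:

// Illustrative only: roughly what the declarative form above expresses.
const searchTool = {
  name: 'search-products',
  description: 'Search the product catalog by keyword',
  parameters: {
    type: 'object',
    properties: {
      query: {
        type: 'string',                         // from <input type="text">
        description: 'Product search keyword'   // from data-agent-param-description
      }
    },
    required: ['query']                          // from the required attribute
  }
};

Invoking the tool is equivalent to filling in query and submitting the form.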
2. Imperative API (JavaScript-First)
The Imperative API uses navigator.modelContext to register tools programmatically. This is for dynamic, complex interactions that don't map cleanly to HTML forms.
Conceptual example:
if ('modelContext' in navigator) {
  navigator.modelContext.addTool({
    name: 'add-to-cart',
    description: 'Add a product to the shopping cart',
    parameters: {
      type: 'object',
      properties: {
        productId: {
          type: 'string',
          description: 'The product SKU'
        },
        quantity: {
          type: 'integer',
          description: 'Number of items to add',
          default: 1
        }
      },
      required: ['productId']
    },
    handler: async ({ productId, quantity = 1 }) => {
      // Your existing add-to-cart logic
      return await cartService.add(productId, quantity);
    }
  });
}
This registers a tool that any visiting agent can discover and invoke — with type-safe parameters and structured responses.
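One assumption worth making explicit: the example treats the handler's return value as the tool's result. If that holds, returning a small structured object, rather than whatever the cart service happens to return, gives agents something predictable to reason about. A sketch, with cartService still the hypothetical service from the example above:

handler: async ({ productId, quantity = 1 }) => {
  try {
    const result = await cartService.add(productId, quantity);
    // Structured success payload the agent can relay to the user.
    return { ok: true, productId, quantity, cart: result };
  } catch (err) {
    // Structured failure: lets the agent explain or adjust rather than
    // inferring what went wrong from an opaque exception.
    return { ok: false, productId, error: String(err) };
  }
}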
What This Means for Your Website
The SEO Parallel
In the early 2000s, "SEO" meant making your site readable by search engine crawlers. Sites that added meta descriptions, proper headings, and sitemaps ranked higher. Sites that didn't became invisible.
WebMCP introduces the same dynamic for AI agents. Call it AEO — Agent Engine Optimization.
Sites that expose structured tools will be preferred by agents. Users will say "book me a table at the best Italian place nearby," and the agent will favor restaurants whose sites offer a bookable tool over those that require DOM scraping.
Who Should Care
- E-commerce: Product search, add-to-cart, checkout flows
- SaaS: Onboarding wizards, settings panels, feature-specific actions
- Content sites: Search, filtering, newsletter signup
- Marketplaces: Listings, bookings, payments
- Internal tools: Any form-heavy workflow
If your site has forms, buttons, or multi-step flows, WebMCP makes them accessible to agents.
How to Prepare: The Agent-Readiness Checklist
You don't need to wait for WebMCP adoption to start. The foundations of agent readiness overlap heavily with existing best practices:
Level 1: Fundamentals (Do Now)
- Semantic HTML — Use <main>, <nav>, <header>, <footer>, <article>, <section>. Agents use these as navigation landmarks, just like screen readers.
- Heading hierarchy — One <h1> per page, sequential levels (h1 → h2 → h3). Agents use headings to understand page structure.
- Form labels — Every <input> needs a <label>. Without labels, agents cannot identify form fields.
- Descriptive link text — Replace "click here" with "View pricing plans." Agents follow links by understanding their text.
- Image alt text — Not just for accessibility. Agents use alt text to understand visual content.
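A compact illustration that combines several of these fundamentals: semantic landmarks, an ordered heading hierarchy, a labeled input, descriptive link text, and alt text. The page content itself is placeholder:

<header>
  <nav>
    <a href="/pricing">View pricing plans</a>  <!-- descriptive link text, not "click here" -->
  </nav>
</header>
<main>
  <h1>Product catalog</h1>                     <!-- one h1; h2s nest beneath it -->
  <section>
    <h2>Search</h2>
    <form action="/search" method="get">
      <label for="q">Search products</label>   <!-- every input has a label -->
      <input id="q" name="q" type="search" required />
      <button type="submit">Search</button>
    </form>
  </section>
  <article>
    <h2>Featured item</h2>
    <img src="/img/espresso-maker.jpg"
         alt="Stainless steel stovetop espresso maker" /> <!-- alt text describes the content -->
  </article>
</main>
<footer>Contact and legal links</footer>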
Level 2: Machine Readability (Do This Week)
- JSON-LD structured data — Add Schema.org markup. This is the richest machine-readable format. Start with Organization, WebSite, and page-specific types.
- OpenGraph meta tags — og:title, og:description, og:type. Agents use these the same way social platforms do.
- Meta description — A concise summary of what the page does. Agents read this first.
- Canonical URLs — Prevent agents from encountering duplicate content.
- robots.txt + sitemap.xml — Help agents discover your full page structure.
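A minimal sketch of what that metadata layer can look like in a page's <head>. The organization name and URLs are placeholders:

<!-- Meta description, OpenGraph, and canonical URL -->
<meta name="description" content="Search and order from our product catalog." />
<meta property="og:title" content="Example Store" />
<meta property="og:description" content="Search and order from our product catalog." />
<meta property="og:type" content="website" />
<link rel="canonical" href="https://www.example.com/" />

<!-- JSON-LD structured data (Schema.org) -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Store",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png"
}
</script>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Example Store",
  "url": "https://www.example.com/"
}
</script>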
Level 3: WebMCP (When Available)
- Add declarative attributes to your most important forms
- Register imperative tools for dynamic interactions
- Test with Chrome 146+ using the built-in agent developer tools
- Validate your tool schemas to ensure the JSON Schema definitions are correct
Measure Your Readiness
Use GlobalDex to scan your site and get a score across all five categories: structure, metadata, accessibility, discoverability, and WebMCP.
The Bigger Picture
WebMCP is not just a Chrome feature. It represents a shift in how we think about the web platform.
For twenty years, websites have been built for two audiences: humans (via browsers) and crawlers (via bots). WebMCP adds a third audience: agents — programs that don't just read your site but interact with it on behalf of a user.
This has implications beyond forms:
- Authentication: How do agents prove they are acting on behalf of a specific user? OAuth flows designed for humans don't work for autonomous agents.
- Permissions: Users might want agents to have partial access — "you can search but not buy."
- Trust: How does a site know an agent is legitimate, not a scraper pretending to be one?
- Economics: If agents handle transactions, who pays for the bandwidth? How do ad-supported sites monetize agent traffic?
These questions don't have answers yet. But WebMCP is the first serious infrastructure for addressing them.
What to Do Next
- Run an agent-readiness scan on your site today
- Fix the fundamentals — semantic HTML, labels, structured data
- Follow the WebMCP spec as it evolves through Chrome's Early Preview Program
- Experiment — try building a simple tool declaration on a test page
- Think about what actions your users take and which ones you'd want agents to handle
The sites that prepare now will have a meaningful advantage when agent traffic becomes significant. That timeline is measured in months, not years.
Built with GlobalDex — the AI agent-readiness index for the WebMCP era.