Every key term in AI Agent Preference scoring, defined. A reference for business owners, agencies, and anyone navigating the agent economy.
A composite rating from 0 to 100 that measures whether AI agents will choose a business over its competitors when executing transactions on behalf of consumers. The score is calculated across four dimensions: Agent Accessibility, Transaction Completeness, Data Reliability, and Competitive Position. Each score falls within one of six capability bands from Agent Preferred (90-100) to Agent Incompatible (0-9).
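To make the mechanics concrete, here is a minimal sketch of how a weighted composite across the four dimensions could be computed. The weights below are illustrative assumptions (GradeForAI's actual weights are not published); the only published constraint is that Transaction Completeness is the most heavily weighted dimension, and only the top and bottom band names and ranges appear here.

```python
# Hypothetical weights -- illustrative only; the real formula is not public.
WEIGHTS = {
    "agent_accessibility": 0.25,
    "transaction_completeness": 0.35,  # stated to be the most heavily weighted
    "data_reliability": 0.25,
    "competitive_position": 0.15,
}

def composite_score(dims: dict) -> float:
    """Weighted average of the four dimension scores (each 0-100)."""
    return sum(dims[k] * w for k, w in WEIGHTS.items())

def band(score: float) -> str:
    """Map a score to a capability band (only the top and bottom names are published)."""
    if score >= 90:
        return "Agent Preferred"
    if score < 10:
        return "Agent Incompatible"
    return "intermediate band"
```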
The AI Agent Preference Score is the headline metric produced by GradeForAI.
The first dimension of the AI Agent Preference Score. Measures whether AI agents can access, crawl, and navigate a business website. Inputs include robots.txt AI crawler configuration, page structure, CAPTCHA presence, JavaScript rendering requirements, and mobile operability.
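As an example of the robots.txt input, a configuration that admits the major AI crawlers might look like the following. The user-agent tokens shown (GPTBot, ClaudeBot, PerplexityBot) are the ones those vendors publish, but verify against each vendor's current documentation before deploying.

```text
# Allow major AI agent crawlers; verify tokens against vendor docs.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```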
The second and most heavily weighted dimension of the AI Agent Preference Score. Measures whether an AI agent can complete the full transaction path from discovery to booking to payment. Inputs include booking platform detection across 27+ platforms, transaction path stage coverage, contact mechanisms, pricing visibility, and online payment capability.
The third dimension of the AI Agent Preference Score. Measures whether the data an AI agent extracts will lead to a successful outcome. Inputs include entity coherence scoring against Google Places, NAP consistency across directories, structured data quality, and operational data accuracy.
The fourth dimension of the AI Agent Preference Score. Measures how a business ranks against every competitor in its metro and industry across all other dimensions. Powered by the GradeForAI database of 500,000+ scored businesses.
A sub-metric of Data Reliability that measures consistency of business identity (name, address, phone number) across the website, Google Business Profile, and other public directories. AI agents that encounter conflicting data either select the most trustworthy source or abandon the task.
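A NAP consistency check can be sketched as a simple normalize-and-compare pass over records pulled from each source. This is an illustrative simplification, not the GradeForAI implementation; a production check would also handle abbreviations, suite numbers, and fuzzy name matching.

```python
import re

def normalize_phone(raw: str) -> str:
    """Keep digits only so formatting differences don't count as conflicts."""
    return re.sub(r"\D", "", raw)[-10:]

def nap_consistent(records: list[dict]) -> bool:
    """True if name, address, and phone agree across all sources (case-insensitive)."""
    def key(r):
        return (r["name"].strip().lower(),
                r["address"].strip().lower(),
                normalize_phone(r["phone"]))
    return len({key(r) for r in records}) == 1
```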
A sub-metric of Transaction Completeness that identifies which scheduling or booking platform a business uses from 27+ detected platforms including ServiceTitan, Housecall Pro, Jobber, Calendly, Acuity, Square Appointments, and others.
The highest capability band in the AI Agent Preference Score. AI agents choose this business first, every time.
Capability band. AI agents can transact with this business reliably.
Capability band. AI agents can work with this business but with friction.
Capability band. AI agents struggle to complete tasks with this business.
Capability band. AI agents can find this business but cannot do business with it.
Capability band. AI agents skip this business entirely.
AI Agent Optimization (AAO) was the original framework for measuring how operable a business is for AI agents. The framework evolved into the AI Agent Preference Score in April 2026, shifting the focus from "can AI work with you" to "will AI choose you first."
AAO remains a useful category term describing the practice of optimizing for AI agent transactions.
Answer Engine Optimization. The practice of optimizing content so AI chat engines (ChatGPT, Perplexity, Gemini, Claude) cite and reference your business when answering consumer questions. AEO focuses on making your content quotable.
AEO gets you mentioned. AI Agent Preference gets you booked.
Search Engine Optimization. The practice of optimizing websites to rank in traditional search engines like Google. SEO focuses on keyword relevance, backlinks, and page authority.
SEO gets you found. It does not determine which business an AI agent selects when executing transactions.
The emerging commerce layer where autonomous AI agents execute transactions on behalf of consumers. Instead of humans searching, comparing, and clicking, agents do it: they book appointments, compare providers, fill carts, complete checkouts, and report back with outcomes.
Businesses that are Agent Preferred capture these transactions. Businesses that are not get skipped.
An autonomous software system that uses a large language model to understand a goal and take real-world actions to achieve it. AI agents navigate websites, extract data, fill forms, make purchases, schedule appointments, and perform other tasks without human intervention.
Examples include OpenAI Operator, Anthropic Claude computer use, Perplexity Shop, and Amazon Rufus.
A shared vocabulary for structured data on the web. By adding schema.org markup (usually in JSON-LD format), businesses tell search engines and AI agents what their content represents: a LocalBusiness, a Service, an Offer, an Event, and so on.
Schema.org markup is a core input to the Data Reliability dimension.
JSON for Linking Data. A W3C-standard format for embedding structured data in web pages. Most schema.org markup on modern websites is delivered as JSON-LD inside `<script type="application/ld+json">` tags. AI agents parse JSON-LD to understand business details like hours, location, pricing, and services.
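For example, a minimal LocalBusiness block in JSON-LD looks like this (business details are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Acme Plumbing",
  "telephone": "+1-555-123-4567",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "MO",
    "postalCode": "65801"
  },
  "openingHours": "Mo-Fr 08:00-17:00",
  "priceRange": "$$"
}
</script>
```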
Model Context Protocol. An open standard from Anthropic for connecting AI models to external tools, data sources, and business systems. MCP lets AI agents read from databases, call APIs, and interact with software in a standardized way.
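MCP messages are JSON-RPC 2.0. A tool invocation request has roughly the shape below; the tool name and arguments here are purely illustrative, not part of any real server.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "book_appointment",
    "arguments": { "service": "drain cleaning", "date": "2026-05-01" }
  }
}
```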
A proposed text file placed at a website's root (similar to robots.txt) that tells AI agents and language models how to interact with the site, what content is indexable, and what metadata is authoritative. Adoption is growing but still low across most verticals.
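A sketch of the commonly proposed Markdown shape for this file (business name and URLs are illustrative):

```text
# Acme Plumbing

> Residential plumbing and drain service in Springfield, MO. Book online 24/7.

## Services

- [Drain cleaning](https://example.com/services/drains): pricing and online booking
- [Water heaters](https://example.com/services/water-heaters): repair and replacement
```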
A proposed JSON file placed at a well-known path on a website that advertises agent capabilities: which transaction flows exist, what authentication is required, what data contracts are offered. Agents retrieve it to decide how to engage.
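No single agents.json schema has been standardized; the shape below is purely hypothetical, constructed only to match the description above (capabilities, authentication, data contracts), with illustrative URLs.

```json
{
  "version": "0.1",
  "capabilities": {
    "booking": { "flow": "https://example.com/book", "auth": "none" },
    "pricing": { "data": "https://example.com/pricing.json" }
  }
}
```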
See AI Agent Preference Score. The original GradeForAI Score has been superseded by the AI Agent Preference Score, a 0-100 metric across four dimensions.
Get your free AI Agent Preference Score across all 4 dimensions. 60 seconds. No credit card.
Get Your Free Score