The new robots.txt — but for machines that pay.
In the traditional web, you make your site discoverable by search engines with robots.txt, sitemaps, and meta tags. In the agent-native web, you need equivalent discovery mechanisms for AI agents and LLMs. That's what llms.txt and agents.txt are for.
llms.txt is a markdown file served at the root of your domain (e.g., https://example.com/llms.txt). It's designed to be consumed by large language models that are researching, summarising, or reasoning about your product. Think of it as a structured, machine-optimised version of your documentation.
The format is simple markdown with a specific structure: a title, a brief description, then sections covering your API's capabilities, endpoints, pricing, and authentication. The key principle is information density — LLMs have limited context windows, so every line should earn its place.
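As a rough sketch, a file following that structure might look like this (the product name, endpoints, and prices are invented for illustration):

```markdown
# Example Screenshot API

> Capture screenshots of any URL via a single HTTP endpoint.

## Capabilities
- Full-page and viewport screenshots (PNG, JPEG, WebP)

## Endpoints
- POST /v1/screenshot: render a URL, return an image

## Pricing
- $0.002 per screenshot, pay-per-call

## Authentication
- API key via Authorization header, or x402 per-request payment
```

Each section is a plain markdown heading with terse bullets underneath, which keeps the token cost low for the model reading it.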
As of early 2026, over 844,000 sites have published llms.txt files. It's become a de facto standard for LLM-readable site descriptions.
agents.txt lives at /.well-known/agents.txt and serves a different purpose. While llms.txt is about describing your product for understanding, agents.txt is about operational discovery — telling an agent exactly how to use your API, what it costs, and how to pay.
A typical agents.txt includes the API name, base URL, authentication methods (including x402 payment details), pricing, supported operations, and contact information. An agent discovering your API can read this file and immediately know how to make a request and pay for it.
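A rough sketch of such a file, covering the fields listed above (the key: value layout and the specific field names here are illustrative assumptions, not a formal spec):

```text
# /.well-known/agents.txt (illustrative)
name: Example Screenshot API
base_url: https://api.example.com/v1
auth: api-key, x402
payment: x402; network=base; asset=USDC
pricing: $0.002 per POST /v1/screenshot
operations: POST /v1/screenshot
contact: api@example.com
```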
They serve different consumers. An LLM answering "what screenshot APIs exist?" will read your llms.txt to understand your product. An autonomous agent that needs to take a screenshot will read your agents.txt to figure out how to call your endpoint and pay.
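To make the agent side concrete, here is a minimal sketch of how an agent might parse a simple key: value style agents.txt before making its first call. The field names (`name`, `base_url`, `payment`, `pricing`) are illustrative assumptions, not part of any fixed spec:

```python
def parse_agents_txt(text: str) -> dict[str, str]:
    """Parse simple 'key: value' lines, skipping comments and blanks."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comment and empty lines
        key, sep, value = line.partition(":")
        if sep:  # only the first colon splits, so URLs in values survive
            fields[key.strip().lower()] = value.strip()
    return fields


# Hypothetical file contents, matching the field names assumed above.
sample = """\
# agents.txt (illustrative)
name: Example Screenshot API
base_url: https://api.example.com/v1
payment: x402
pricing: $0.002 per screenshot
"""

info = parse_agents_txt(sample)
print(info["base_url"])  # https://api.example.com/v1
```

Once the agent has `base_url` and the payment details, it has everything it needs to construct and pay for a request, with no human in the loop.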
The analogy is website vs API docs. Your website (llms.txt) explains what you do. Your API reference (agents.txt) explains how to use you.
Both are just text files served as static assets. No dynamic generation needed, no framework required. Drop them in your public directory and they're live.
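For a typical static-site setup, publishing both files is just two writes into the build output (the `public/` path below is an assumption; adjust for your framework's static directory):

```shell
mkdir -p public/.well-known

# llms.txt lives at the site root
printf '# Example API\n\n> One-line description.\n' > public/llms.txt

# agents.txt lives under /.well-known/
printf 'name: Example API\nbase_url: https://api.example.com/v1\n' \
  > public/.well-known/agents.txt
```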
For a working example, see nightglass's implementations: llms.txt and agents.txt.
Together with traditional SEO (for human developers finding you via Google), these files form a three-layer discovery system: search engines for humans, llms.txt for LLMs researching your product, and agents.txt for autonomous agents calling your API.
APIs that invest in all three layers will capture traffic from humans, AI assistants, and autonomous agents. APIs that only do traditional SEO will miss the growing agent segment entirely.
For the broader context on why agents are becoming the dominant API consumer, see the rise of agentic commerce. For the payment side of agent access, the x402 protocol explainer covers the full flow. For a concrete example of an API implementing all three layers, see what is a screenshot API.