ClaudeBot
ClaudeBot is Anthropic's web crawler. It collects publicly available web content, primarily for model training, while sibling crawlers power Claude's search and on-demand browsing features.
ClaudeBot identifies itself with the user-agent string ClaudeBot and respects robots.txt. Anthropic operates several distinct crawlers:
- ClaudeBot: collects content for model training.
- Claude-User: fetches pages on demand when a user browses the web from Claude.ai or via the API.
- Claude-SearchBot: indexes content for Claude's web search features.
For site owners, the tradeoff is straightforward: allowing ClaudeBot increases the chance that Claude cites your content when users ask questions touching your expertise; blocking it keeps your content out of training data, but does not necessarily stop on-demand fetches by Claude-User.
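Because the crawlers use separate user-agent strings, robots.txt can treat each one differently. A minimal sketch of a per-crawler policy (the paths and the choice of which agents to allow are hypothetical, for illustration only):

```
# Keep content out of training data
User-agent: ClaudeBot
Disallow: /

# Stay indexed for Claude's web search
User-agent: Claude-SearchBot
Allow: /

# Permit on-demand fetches by Claude users
User-agent: Claude-User
Allow: /
```

Each User-agent block is matched independently, so blocking the training crawler does not affect the search or browsing crawlers.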
Example
A site allows ClaudeBot to train on its public blog but not on its gated customer portal. In robots.txt:

User-agent: ClaudeBot
Allow: /blog/
Disallow: /customer-portal/

This is transparent and specific: the rules name the crawler and scope access path by path.
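You can sanity-check rules like these with Python's standard library before deploying them. A minimal sketch, using the example's paths (the blog post and invoice URLs are hypothetical):

```python
# Verify how ClaudeBot's robots.txt rules apply, using the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: ClaudeBot",
    "Allow: /blog/",
    "Disallow: /customer-portal/",
]

rp = RobotFileParser()
rp.parse(rules)  # parse accepts an iterable of robots.txt lines

print(rp.can_fetch("ClaudeBot", "/blog/how-we-built-it"))      # blog is allowed
print(rp.can_fetch("ClaudeBot", "/customer-portal/invoices"))  # portal is blocked
```

The same check works against a live site by pointing `RobotFileParser` at its robots.txt URL and calling `read()` instead of `parse()`.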
Frequently asked questions
Claude-SearchBot vs ClaudeBot?
ClaudeBot crawls for model training. Claude-SearchBot indexes content for Claude's real-time web search feature. Both respect robots.txt, but because they use separate user-agent strings, they can be allowed or blocked independently.
How do I get cited in Claude?
Make sure ClaudeBot can access your site, publish content grounded in genuine expertise, structure it clearly (H2/H3 headings, short paragraphs), provide a /llms.txt file, and reinforce E-E-A-T signals.
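One of the tips above is a /llms.txt file. A minimal sketch following the llms.txt convention (an H1 site name, a blockquote summary, and annotated link sections); the site name, URLs, and descriptions here are hypothetical:

```
# Example Corp
> Example Corp builds analytics tooling. The most useful pages for language models are listed below.

## Guides
- [Getting started](https://example.com/docs/start): setup in five minutes

## Blog
- [Measuring GEO](https://example.com/blog/geo): methodology notes
```

The file lives at the site root (/llms.txt) and gives crawlers a curated, machine-readable map of your best content.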
Further reading
- Our service: GEO
- Blog: Measuring GEO in GA4: ChatGPT & Perplexity tracking