Operated by Common Crawl. Last seen today.

What is CCBot?


CCBot is a web crawler operated by Common Crawl to maintain an open source repository of web crawl data that is available for anyone to use. This repository has been used to train many LLMs (Large Language Models), including OpenAI's GPT models.

AI Data Scraper
Downloads web content to train AI models

Expected Behavior

It's generally unclear how AI data scrapers choose which websites to crawl and how often to crawl them. They might visit websites with a higher information density more frequently, depending on the type of AI models they're training. For example, an agent gathering training data for an LLM (Large Language Model) would plausibly favor sites with a lot of regularly updated text content.

Access Control

Using Robots.txt

User Agent Token    Description
CCBot               Should match instances of CCBot

You can block CCBot or limit its access by adding rules for its user agent token to your website's robots.txt.

# robots.txt
# This blocks CCBot

User-agent: CCBot
Disallow: /
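
If you want to confirm that a rule like the one above actually blocks CCBot, you can check it with Python's standard-library robots.txt parser. This is a minimal sketch; the rules string and the example.com URLs are just illustrations mirroring the snippet above.

```python
# Verify a robots.txt rule against the CCBot user agent token using
# Python's standard-library urllib.robotparser.
from urllib import robotparser

# The same rules as the robots.txt example above.
rules = """\
User-agent: CCBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# CCBot is disallowed everywhere; agents with no matching group
# fall back to the default, which is to allow crawling.
print(parser.can_fetch("CCBot", "https://example.com/any-page"))      # False
print(parser.can_fetch("Googlebot", "https://example.com/any-page"))  # True
```

Note that robots.txt is advisory: well-behaved crawlers such as CCBot honor it, but it is not an enforcement mechanism.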

Instead of maintaining this manually, you can generate your robots.txt automatically using the free API or WordPress plugin.

Other Websites

Some of the top websites are currently blocking CCBot in some way.
Updated today