Operated by Common Crawl. Last seen today.

What is CCBot?


CCBot is a web crawler used by Common Crawl to maintain an open source repository of web crawl data that is available for anyone to use. This repository has been used to train many LLMs (Large Language Models), including OpenAI's GPTs.




AI Data Scraper
Downloads web content to train AI models

Expected Behavior

It's generally unclear how AI data scrapers choose which websites to crawl and how often to crawl them. They might choose to visit websites with a higher information density more frequently, depending on the type of AI models they're training. For example, an agent gathering training data for an LLM (Large Language Model) would plausibly favor sites with a lot of regularly updated text content.
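One way to see how often crawlers like CCBot actually visit your site is to tally their hits in your server's access log. This is a minimal sketch, not a definitive tool: the sample log lines and the token list are assumptions, and real logs should be matched against the User-Agent field of whatever log format your server uses.

```python
from collections import Counter

# Tokens to look for in the User-Agent string. CCBot is the token Common
# Crawl uses; extend this list for other agents you care about (assumption:
# you maintain this list yourself).
CRAWLER_TOKENS = ["CCBot"]

def crawler_hits(log_lines):
    """Tally requests whose log line contains a known crawler token."""
    counts = Counter()
    for line in log_lines:
        for token in CRAWLER_TOKENS:
            if token in line:
                counts[token] += 1
    return counts

# Two illustrative combined-format log lines (fabricated for the example).
sample = [
    '1.2.3.4 - - [10/May/2024] "GET / HTTP/1.1" 200 "-" "CCBot/2.0 (https://commoncrawl.org/faq/)"',
    '5.6.7.8 - - [10/May/2024] "GET /about HTTP/1.1" 200 "-" "Mozilla/5.0"',
]
print(crawler_hits(sample))  # Counter({'CCBot': 1})
```

Running this over a day's log gives a rough per-agent request count, which you can compare day to day to see how frequently CCBot returns.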


Activity on Your Website

Half of your website's traffic probably comes from artificial agents, and they're becoming more intelligent every day.


Other Websites

Some of the top websites are currently blocking CCBot in some way.

Access Control

Should I Block CCBot?

It's up to you. AI data scrapers usually download publicly available internet content, which is freely accessible by default. However, you might want to block them if you're concerned about attribution or how your creative work could be used in the resulting AI model.

Using Robots.txt

User Agent Token: CCBot (should match instances of CCBot)

You can block CCBot or limit its access by setting user agent token rules in your website's robots.txt.

# robots.txt
# This blocks CCBot

User-agent: CCBot
Disallow: /

Instead of doing this manually, you can generate a robots.txt that stays up to date with the agent list automatically.
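A generator along these lines can be sketched in a few lines. Everything here is an assumption for illustration: the agent list is hardcoded, whereas an automatically updated version would source it from wherever you track new agents.

```python
# Hypothetical agent list -- in an automated setup this would be fetched
# from a maintained registry rather than hardcoded.
BLOCKED_AGENTS = ["CCBot"]

def build_robots_txt(agents, disallow="/"):
    """Emit a robots.txt body with one block per blocked agent."""
    lines = ["# robots.txt (generated)"]
    for agent in agents:
        lines += [f"User-agent: {agent}", f"Disallow: {disallow}", ""]
    return "\n".join(lines)

print(build_robots_txt(BLOCKED_AGENTS))
```

Regenerating and redeploying this file whenever the agent list changes keeps your rules current without hand-editing.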
