CCBot
What is CCBot?
About
CCBot is a web crawler used by Common Crawl to maintain an open source repository of web crawl data that is available for anyone to use. This repository has been used to train many LLMs (Large Language Models), including OpenAI's GPT models.
You can set up agent analytics to see when CCBot visits your website.
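If you want a quick look before setting up full analytics, you can spot CCBot visits by scanning your server's access logs for its user-agent string (CCBot identifies itself with a token like `CCBot/2.0`). A minimal sketch, assuming one request per log line; the log path in the usage comment is hypothetical:

```python
import re

# CCBot identifies itself in the User-Agent header, e.g.:
#   CCBot/2.0 (https://commoncrawl.org/faq/)
CCBOT_PATTERN = re.compile(r"CCBot/[\d.]+")

def ccbot_hits(log_lines):
    """Yield access-log lines whose user-agent field mentions CCBot."""
    for line in log_lines:
        if CCBOT_PATTERN.search(line):
            yield line

# Usage (log path is hypothetical):
# with open("/var/log/nginx/access.log") as f:
#     for hit in ccbot_hits(f):
#         print(hit.strip())
```

This only tells you what the agent claims to be; a dedicated analytics setup can also track visit frequency over time.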
Detail
Operator | Common Crawl |
Documentation | https://commoncrawl.org/faq |
Type | AI Data Scraper |
Expected Behavior
It's generally unclear how AI data scrapers choose which websites to crawl and how often to crawl them. They might visit websites with higher information density more frequently, depending on the type of AI models being trained. For example, it would make sense for an agent gathering LLM (Large Language Model) training data to favor sites with a lot of regularly updated text content.
Insights
CCBot Visiting Your Website
A significant share of your traffic likely comes from automated agents, and there are more of them every day. Track their activity with the API or WordPress plugin.
Set Up Agent Analytics
Access Control
Should I Block CCBot?
It's up to you. AI data scrapers usually download publicly available internet content, which is freely accessible by default. However, you might want to block them if you're concerned about attribution or how your creative work could be used in the resulting AI model.
Using Robots.txt
You can block CCBot or limit its access by setting user agent token rules in your website's robots.txt. We recommend setting up agent analytics to check whether it's actually following them.
User Agent Token | Description |
---|---|
CCBot | Should match instances of CCBot |
# robots.txt
# This should block CCBot
User-agent: CCBot
Disallow: /
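If you want to sanity-check what your rules actually permit before deploying them, Python's standard-library `urllib.robotparser` can evaluate a robots.txt policy for a given user agent. A minimal sketch, parsing the rules from a string so no network fetch is needed:

```python
from urllib.robotparser import RobotFileParser

# The same rules as the robots.txt example above.
robots_txt = """\
User-agent: CCBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# CCBot is disallowed everywhere; agents without a matching
# rule fall back to being allowed.
print(rp.can_fetch("CCBot", "/articles/example"))        # False
print(rp.can_fetch("SomeOtherBot", "/articles/example"))  # True
```

Note that robots.txt is advisory: this check tells you what a well-behaved crawler should do, not what any given crawler will actually do.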
Instead of doing this manually, you can use the API or WordPress plugin to keep your robots.txt updated with the latest known AI scrapers, crawlers, and assistants automatically.
Set Up Automatic Robots.txt